===== Creating a Spark Cluster on the SCC =====
<WRAP center round important 60%>
We assume that you have access to the HPC system already and are logged in to one of the frontend nodes ''
</WRAP>
</code>
We’re now ready to deploy a Spark cluster. Since the resources of the HPC system are managed by [[en:

<code>
#SBATCH --partition fat
#SBATCH --time=0-02:00:00
#SBATCH --qos=short
#SBATCH --nodes=4
#SBATCH --job-name=Spark
#SBATCH --output=scc_spark_job-%j.out
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=24
</code>

If you would like to override these default values, you can do so by passing the Slurm parameters to the script:

<code>
Submitted batch job 872699
</code>
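As a sketch of such an override: Slurm parameters given on the ''sbatch'' command line take precedence over the ''#SBATCH'' defaults baked into a script. The script name ''spark-on-scc.sh'' below is a placeholder, not the actual script name:

<code>
# Request 2 nodes for 1 hour instead of the defaults (4 nodes, 2 hours).
# "spark-on-scc.sh" is a hypothetical name for the deployment script.
sbatch --nodes=2 --time=0-01:00:00 spark-on-scc.sh
</code>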
+ | |||
+ | Especially, if you do not want to share the nodes resources, you need to add '' | ||
In this case, the '' | In this case, the '' |
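In Slurm, exclusive access to a node’s resources is commonly requested with the ''--exclusive'' flag (a sketch; whether this matches the option intended above is an assumption), either on the command line or as an additional directive in the job script:

<code>
#SBATCH --exclusive
</code>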