====== Number of Nodes, Tasks and Cores ======
Choosing the right number of nodes, tasks and cores can be a bit confusing at first, so we try to give some guidelines here.
There are two ways to get more cores, and therefore more compute power, for your job: either you request many tasks with one core each, or you request one task with many cores.((Anything in between works as well: multiple tasks having multiple cores. This is called a hybrid job.)) These options are controlled with the ''-n'' (number of tasks) and ''-c'' (cores per task) options of ''sbatch'' and ''srun''.

=== MPI Jobs ===

Tasks are mainly a tool for MPI jobs. If you allocate many tasks, Slurm expects you to start your program many times in parallel, usually with ''srun'' or ''mpirun''. MPI then handles the communication between these program instances, which may be distributed across multiple nodes.

TL;DR: Use ''-n <number of tasks>'' for MPI jobs and start your program with ''srun'' or ''mpirun''.
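As a sketch, a batch script for a 48-rank MPI job could look like this. The program name ''my_mpi_app'', the partition name and the time limit are placeholders; adjust them to your own setup:

```shell
#!/bin/bash
#SBATCH --partition=medium   # partition name, adjust to your cluster
#SBATCH -n 48                # 48 MPI tasks, one core each
#SBATCH -t 02:00:00          # wall-clock time limit

# srun starts one instance of the program per task (48 in total)
srun ./my_mpi_app
```

Submit the script with ''sbatch''; Slurm places the 48 tasks on as many nodes as needed.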
=== SMP Jobs ===
Many programs, however, do not use MPI. These use shared memory parallelization (SMP), can only run on a single node and only need to be started once. If, for example, you use Python and the multiprocessing library to parallelize your calculations, you should request a single task with multiple cores via ''-c''. All cores of one task are allocated on the same node.

TL;DR: Use ''-c <number of cores>'' for SMP jobs and start your program once.
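A minimal batch script for an SMP job might look like the following; ''my_smp_app'' and the partition name are again placeholders:

```shell
#!/bin/bash
#SBATCH --partition=medium   # partition name, adjust to your cluster
#SBATCH -n 1                 # a single task...
#SBATCH -c 24                # ...with 24 cores, all on one node
#SBATCH -t 02:00:00          # wall-clock time limit

# the program is started only once and spawns its own threads/processes
./my_smp_app
```

Note that the program is launched directly, not via ''srun'': it creates its own worker threads or processes on the allocated cores.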
=== MPI Jobs and the Number of Nodes ===
While MPI tasks work fine when communicating via the network between nodes, communication between tasks on the same node is faster, since it can go through shared memory. It is therefore a good idea to spread your tasks over as few nodes as possible.

Our smallest medium nodes have 24 cores. This means that up to 24 tasks, your job will always fit on a single node, 48 tasks will fit on two nodes, and so on. You should request the correct number of nodes using the ''-N <number of nodes>'' option, so that your tasks are not spread over more nodes than necessary.

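For example, to pack 48 tasks onto exactly two 24-core nodes, the request could be sketched as (''my_mpi_app'' is a placeholder):

```shell
#!/bin/bash
#SBATCH -N 2                 # exactly two nodes
#SBATCH -n 48                # 48 tasks, i.e. 24 per node
#SBATCH -t 02:00:00          # wall-clock time limit

srun ./my_mpi_app
```

Without ''-N 2'', Slurm may scatter the 48 tasks over more than two nodes, which slows down communication.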
~~NOTOC~~