Many programs, however, do not use MPI. They use shared-memory parallelization (SMP) instead, can only run on a single node, and need to be started only once. If, for example, you use Python's multiprocessing library to parallelize your calculations, you want your program to be started once while still having access to multiple cores. In this case you want a single task with access to multiple cores.
  
TL;DR: Use ''-c <cores>''
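
A minimal sketch of such a job script, assuming a partition named ''medium'' and an illustrative core count of eight; ''my_script.py'' stands in for your own multiprocessing-based program:

<code bash>
#!/bin/bash
#SBATCH -p medium   # placeholder partition name; use one available on your cluster
#SBATCH -n 1        # a single task: the program is started exactly once
#SBATCH -c 8        # eight cores for that one task
#SBATCH -t 01:00:00

# my_script.py is a hypothetical Python program using the multiprocessing
# library; SLURM_CPUS_PER_TASK holds the number of cores actually granted.
python my_script.py
</code>

Inside the script, ''$SLURM_CPUS_PER_TASK'' is a convenient value to pass to ''multiprocessing.Pool'' so that the number of worker processes matches the allocation.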
=== MPI Jobs and the Number of Nodes ===
While MPI tasks work fine when communicating via the network/interconnect((That's what MPI is designed for!)), nothing beats the speed of shared-memory communication. MPI therefore defaults to using shared memory where possible and only uses the interconnect for communication between nodes. That means you should pack your tasks as tightly onto nodes as possible. This not only speeds up your program, but also reduces the load on our network.
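
As a sketch under assumed numbers (two nodes with 24 cores each; ''my_mpi_program'' and the partition name are placeholders), the following packs 48 MPI ranks tightly onto two nodes instead of letting them scatter across many:

<code bash>
#!/bin/bash
#SBATCH -p medium               # placeholder partition name
#SBATCH -N 2                    # exactly two nodes ...
#SBATCH --ntasks-per-node=24    # ... with 24 MPI ranks packed onto each
#SBATCH -t 01:00:00

# srun starts one copy of the program per task (48 ranks in total);
# ranks sharing a node communicate via shared memory, ranks on
# different nodes via the interconnect.
srun ./my_mpi_program
</code>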