Parallel execution of Jupyter notebooks on the cluster
If the resources of the interactive queue are sufficient for the Jupyter notebook itself, the easiest way to use IPython Parallel with Jupyter on the SCC is to run it via JupyterHub on the SCC, where you can spawn a parallel notebook alongside a normal one.
This documentation is based on the IPython example used in our Singularity on the SCC documentation page.
When submitting a job with that image, add Singularity's -B option to mount the /opt folder:
srun --pty -p int singularity shell -B /opt sjupyter.sif
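Inside the container you can quickly check that the host's /opt directory is now visible, e.g.:
ls /opt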
To be able to run Slurm commands from within the container, some additional libraries and directories have to be bound into it:
singularity shell -B /var/run/munge,/run/munge,/usr/lib64/libmunge.so.2,/usr/lib64/libmunge.so.2.0.0,/etc/profile.d/slurm.sh ...
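Putting both together, a full interactive start of the container could look like the following sketch (assuming the image is called sjupyter.sif and the bind paths match your system):
srun --pty -p int singularity shell \
    -B /opt,/var/run/munge,/run/munge,/usr/lib64/libmunge.so.2,/usr/lib64/libmunge.so.2.0.0,/etc/profile.d/slurm.sh \
    sjupyter.sif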
You also need to add the slurmadmin user to the container when building the image, with the following commands:
echo "slurmadmin:x:300:300::/opt/slurm/slurm:/bin/false" >> /etc/passwd echo "slurmadmin:x:300:" >> /etc/group
To use IPython Parallel with the cluster, it has to be configured first. These steps are required only once; everything is kept in your $HOME/.ipython directory, even if you destroy the container.
To create a new profile and configure it for the compute cluster, run the following command:
ipython profile create --parallel --profile=myslurm
This will create the profile at $HOME/.ipython/profile_myslurm. Now you need to configure it for Slurm. Add the following config lines to the file $HOME/.ipython/profile_myslurm/ipcluster_config.py:
c.IPClusterEngines.engine_launcher_class = 'SlurmEngineSetLauncher'
c.IPClusterStart.controller_launcher_class = 'SlurmControllerLauncher'
c.SlurmControllerLauncher.batch_template_file = 'slurm.controller.template'
c.SlurmEngineSetLauncher.batch_template_file = 'slurm.engine.template'
and comment out the following parameters:
#c.SlurmControllerLauncher.batch_template = "..."
#c.SlurmEngineSetLauncher.batch_template = "..."
Add the following line to $HOME/.ipython/profile_myslurm/ipcontroller_config.py:
c.HubFactory.ip = '*'
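To double-check the edits, you can grep for the changed settings (a quick sanity check, assuming the paths above):
grep -n 'Slurm' $HOME/.ipython/profile_myslurm/ipcluster_config.py
grep -n 'HubFactory.ip' $HOME/.ipython/profile_myslurm/ipcontroller_config.py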
IPython Parallel is almost ready to use. To submit Slurm jobs to a specific queue and with additional parameters, create batch job templates in the directory from which you want to start the container, using the names specified in the configuration file, i.e. slurm.controller.template and slurm.engine.template.
slurm.controller.template:
#!/bin/bash
#SBATCH -p medium
#SBATCH -J ipcontroller
#SBATCH -o jupyterhub-gwdg/current.ipcontroller.log
#SBATCH -n 1
#SBATCH -t 1:00:00
export PATH=$PATH:/usr/bin:/usr/local/bin
export PATH=$PATH:/cm/shared/apps/singularity/3.2.0/bin/
singularity exec sjupyter.sif ipcontroller --profile-dir={profile_dir} --location=$HOSTNAME
slurm.engine.template:
#!/bin/bash
#SBATCH -p medium
#SBATCH -J ipengine
#SBATCH -n {n}
#SBATCH -o jupyterhub-gwdg/current.ipengine.log
#SBATCH -t 1:00:00
export PATH=$PATH:/usr/bin:/usr/local/bin
export PATH=$PATH:/cm/shared/apps/singularity/3.2.0/bin/
srun singularity exec sjupyter.sif ipengine --profile-dir={profile_dir}
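Both templates write their Slurm logs into a jupyterhub-gwdg subdirectory. Slurm does not create missing output directories, so create it (or adjust the -o paths) in the directory you start the container from:
mkdir -p jupyterhub-gwdg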
Now you can launch a Jupyter instance:
jupyter notebook --port <port> --ip 0.0.0.0 --no-browser
For <port>, choose a random unrestricted port number, for example 8769. Tunnel the port from the node to your local PC:
ssh -L 0.0.0.0:<port>:0.0.0.0:<port> yourlogin@login.gwdg.de
ssh -L 0.0.0.0:<port>:<host>:<port> gwdu101 -N
For <host>, insert the node where the container is running. Open the link from the Jupyter output in your browser.
To start the cluster, open the IPython Clusters tab in the Jupyter interface, select the myslurm profile and the number of engines, and click Start. You can see the controller and engine jobs running with the squeue -u $USER command.
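Alternatively, the cluster can be started from a shell inside the container; for example, to start four engines with the myslurm profile (the engine count is just an example):
ipcluster start --profile=myslurm -n 4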
To test whether it is working, simply run the following script in a Jupyter notebook:
import ipyparallel as ipp

# connect to the running myslurm cluster and list the available engines
c = ipp.Client(profile="myslurm")
c.ids

# run a function on every engine
c[:].apply_sync(lambda: "Hello, World")