====== Singularity on the SCC ======
[[https:// |Singularity]] is available on the SCC. To use it, load the corresponding module:

<code>
module load singularity
</code>

Once the module is loaded you are ready to pull and run containers. In contrast to Docker, you can also provide your own container images.
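For example, a public image can be pulled from Docker Hub and a command executed inside it (the image here is chosen purely for illustration; run this on a node with the module loaded):

```
singularity pull docker://ubuntu:16.04
singularity exec ubuntu_16.04.sif cat /etc/os-release
```

''pull'' converts the Docker image into a local ''.sif'' file, which ''exec'' then runs a single command in.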
+ | |||
+ | For building you can use Docker images or Singularity bootstrap files. You can find the documentation for a building process at | ||
+ | https:// | ||
+ | ====== Examples ====== | ||
+ | Several examples of Singularity usecases will be shown below. | ||
+ | ===== Jupyter and IPython Parallel with Singularity ===== | ||
+ | As an example we will pull and deploy the Singularity image containing Jupyter and IPython Parallel. | ||
+ | |||
+ | First create a new folder in your '' | ||
+ | |||
+ | For pulling the image run the following command: | ||
+ | |||
+ | < | ||
+ | singularity pull --name sjupyter.sif shub:// | ||
+ | </ | ||
+ | |||
+ | Now the sjupyter.sif image is ready to be containerized. To submit the corresponding job, run the command: | ||
+ | |||
+ | < | ||
+ | srun --pty -p int singularity shell sjupyter.sif | ||
+ | </ | ||
+ | Here we are requesting a shell to the container in the interactive partition. | ||
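The same container can also be used non-interactively. A minimal batch script might look like the following sketch; the partition, time limit, and the command run inside the container are placeholders to adapt to your setup:

```
#!/bin/bash
#SBATCH -p medium
#SBATCH -t 01:00:00

module load singularity
singularity exec sjupyter.sif jupyter notebook --no-browser
```

Submit it with ''sbatch'' as usual; ''singularity exec'' runs the given command in the container instead of opening a shell.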
+ | |||
+ | ===== GPU access within the container ===== | ||
+ | GPU devices are visible within the container by default. Only driver and necessary libraries should be installed or binded to the container. | ||
+ | You can install Nvidia drivers yourself or bind it to the container. To bind it automatically you need to run the container with '' | ||
+ | < | ||
+ | export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/ | ||
+ | </ | ||
+ | When running a conitainer the corresponding path should be binded to it with '' | ||
+ | < | ||
+ | singularity shell -B / | ||
+ | </ | ||
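Putting the two steps together, a sketch might look like this. ''/usr/local/nvidia'' stands in for wherever the driver libraries actually live on your host, and the ''SINGULARITYENV_'' prefix is Singularity's mechanism for passing an environment variable into the container:

```shell
# Hypothetical host location of the Nvidia driver libraries.
HOST_NVIDIA=/usr/local/nvidia
# Variables prefixed with SINGULARITYENV_ are exported into the
# container's environment (here as LD_LIBRARY_PATH).
export SINGULARITYENV_LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${HOST_NVIDIA}/lib64
echo "$SINGULARITYENV_LD_LIBRARY_PATH"
# Then bind the same path into the container, e.g.:
#   singularity shell -B ${HOST_NVIDIA} sjupyter.sif
```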
+ | |||
+ | The libraries like CUDA and CuDNN should be mentioned in '' | ||
+ | < | ||
+ | export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/ | ||
+ | </ | ||
+ | Here we have CUDA v9.0 installed within the container at / | ||
+ | |||
+ | If you want to use '' | ||
+ | < | ||
+ | export PATH=${PATH}:/ | ||
+ | </ | ||
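For example, assuming the driver binaries were bound under ''/usr/local/nvidia/bin'' (a placeholder path), ''nvidia-smi'' becomes reachable once that directory is on the ''PATH'':

```shell
# Placeholder path; use the directory the driver was actually bound to.
export PATH=${PATH}:/usr/local/nvidia/bin
echo "$PATH"
```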
+ | |||
+ | The example below is Singularity container bootstrap file which can be used for building the container based on Nvidia Docker image with preinstalled CUDA v9.0 and CuDNN v7 on Ubuntu 16.04 (more images of Nvidia can be found on [[https:// | ||
+ | < | ||
+ | Bootstrap: docker | ||
+ | From: nvidia/ | ||
+ | |||
+ | %post | ||
+ | |||
+ | apt-get -y update | ||
+ | apt-get -y install python3-pip | ||
+ | |||
+ | pip3 install --upgrade pip | ||
+ | pip3 install tensorflow-gpu | ||
+ | |||
+ | %environment | ||
+ | |||
+ | PATH=${PATH}: | ||
+ | LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/ | ||
+ | CUDA_PATH=/ | ||
+ | CUDA_ROOT=/ | ||
+ | </ | ||
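Assuming the file above is saved as ''container.def'' (a name chosen for illustration), the container could be built with a command along these lines; building from a bootstrap file normally requires root, so this is typically done on your own machine:

```
sudo singularity build cuda9.sif container.def
```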
+ | |||
+ | You can shell into the container with: | ||
+ | < | ||
+ | singularity shell -B / | ||
+ | </ | ||
+ | |||
+ | ===== Distributed PyTorch on GPU ===== | ||
+ | In case if you are using PyTorch for ML, you may want to try out to run it in the container on our GPU nodes using its distributed package. Here is the link ([[https:// |
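A hypothetical launch might look like the following sketch; the partition name, GPU counts, image name, and training script are all placeholders, and the training script itself would initialize ''torch.distributed'' (e.g. with the NCCL backend) from the environment the launcher provides:

```
#!/bin/bash
#SBATCH -p gpu
#SBATCH -N 2
#SBATCH --gpus-per-node=1

module load singularity
srun singularity exec --nv pytorch.sif python train.py
```

''srun'' starts one task per node, each inside its own container instance with GPU access via ''--nv''.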