en:services:application_services:jupyter:hpc [2018/11/19 09:47]
akhuziy [How to use Jupyter-Hub on the SCC]
en:services:application_services:jupyter:hpc [2020/04/03 11:10]
akhuziy [Installing additional Python modules]
In order to make use of IPython Parallel, Jupyter should be started with the ''GWDG HPC with IPython Parallel'' spawner.

After the Jupyter notebook is launched, you can run engines via the "IPython Clusters" tab of the web interface. There, under the **slurm** profile, select the number of engines to run and click the start button.

**Note** that the workers start as normal jobs in the ''medium'' partition, which can take some time. The GUI does not show the state of the workers, so please wait until the engines are spawned. You can always check the current state of the jobs with the ''squeue -u $USER'' command in a terminal.
After the engines are up, the spawned cluster of workers can be checked with the following script:
<code python>
import ipyparallel as ipp
c = ipp.Client(profile="slurm")
c.ids
c[:].apply_sync(lambda: "Hello, World")
</code>
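Once the engines respond, they can be used for actual parallel work. The following is only a sketch, assuming the engines of the ''slurm'' profile from above are running; the ''try''/''except'' guard merely keeps it from failing when no cluster is reachable:

<code python>
try:
    import ipyparallel as ipp

    # Connect to the engines of the "slurm" profile (assumes they are up)
    rc = ipp.Client(profile="slurm")
    dview = rc[:]  # a DirectView over all engines

    # Distribute a small computation across the engines
    print(dview.map_sync(lambda x: x * x, range(8)))
except Exception as err:
    # No engines running (or ipyparallel missing): nothing to compute on
    print("IPython Parallel cluster not reachable:", err)
</code>

''map_sync'' splits the input range over the engines and blocks until all partial results are collected, so the printed list is the same as a local ''map'' would produce.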
  
Workers are currently configured to run for a maximum of **1 hour**. If you want to change that, you can edit the submission scripts of the workers in ''~/.ipython/profile_slurm/ipcluster_config.py''.
  
==== Installing additional Python modules ====
Additional Python modules can be installed via the terminal using the Python package manager "pip". To do this, open a terminal via the menu "New" -> "Terminal". Afterwards
<code bash>python3 -m pip install --user <module></code>
installs a new module in the home directory.
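To confirm that a module installed this way is visible to the notebook's Python, you can query its version with the standard library's ''importlib.metadata''. A small sketch; since ''<module>'' above is only a placeholder, it queries ''pip'' itself as a stand-in:

<code python>
from importlib import metadata

# "pip" is used here as a stand-in for the placeholder <module>;
# metadata.version() raises PackageNotFoundError if it is not installed
print(metadata.version("pip"))
</code>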
The installation of large Python modules like "tensorflow" may fail with the message "No space left on device". This happens because the temporary space under "/tmp" is too small for pip to unpack the downloaded packages. The following steps use a temporary directory in the much larger user home directory for this one installation:

<code bash>
mkdir -v ~/.user-temp
TMPDIR=~/.user-temp python3 -m pip install --user <module>
</code>
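The effect of setting ''TMPDIR'' can be verified from Python: ''tempfile.gettempdir()'' reports the directory that pip (and other tools) will use for temporary files, and it honours the ''TMPDIR'' environment variable. A small sketch using the same ''~/.user-temp'' directory as above:

<code python>
import os
import tempfile

# Point TMPDIR at a directory on the (large) home file system
user_temp = os.path.expanduser("~/.user-temp")
os.makedirs(user_temp, exist_ok=True)
os.environ["TMPDIR"] = user_temp

# Discard the cached default so the TMPDIR variable is re-read
tempfile.tempdir = None
print(tempfile.gettempdir())  # now resolves to ~/.user-temp
</code>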
You can also use self-defined kernels and install conda environments on the non-parallel notebook. Please refer to [[en:services:application_services:jupyter:start#installation_of_additional_packages_and_environments_via_conda|Installing additional environments via conda]].