====== High Performance Computing ======

To use our compute cluster, you need a GWDG account. By default, this account is not activated for the compute resources. To have it activated, please send an informal email to <hpc@gwdg.de>.

===== Access =====
Once you have been granted access, you can log in to the frontend nodes gwdu101.gwdg.de, gwdu102.gwdg.de and gwdu103.gwdg.de. These nodes are reachable via ssh from the GÖNET. If you come from the internet, first log in to login.gwdg.de; from there you can reach the frontends.
The frontends are meant for editing, compiling, and interacting with the batch system. Please do not use them for tests running longer than a few minutes: all users share the frontends' resources and would be impaired in their daily work if you overuse them. gwdu101 is an AMD-based system, while gwdu102 and gwdu103 are Intel-based. If your software takes advantage of special CPU-dependent features, it is recommended to compile on the same CPU architecture that your jobs will run on.
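
For example, a typical login from outside the GÖNET could look like the following sketch (''your_account'' is a placeholder for your GWDG user name; any of the frontends listed above works):

<code bash>
# From the internet: first hop to the general login server ...
ssh your_account@login.gwdg.de
# ... and from there continue to one of the frontends
ssh your_account@gwdu102.gwdg.de

# Alternatively, in a single command via a jump host (recent OpenSSH clients)
ssh -J your_account@login.gwdg.de your_account@gwdu102.gwdg.de
</code>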

=====  Hardware Overview  =====

This documentation applies to the following hardware:

^ Nodes                        ^ #    ^ CPU                             ^ GPU              ^ Cores  ^ Frequency  ^ Memory  ^ IB    ^ Queue     ^ Launched  ^
| gwdd[001-168]                | 168  | Ivy-Bridge \\ Intel E5-2670 v2  | none             | 2✕10   | 2.5 GHz    | 64 GB   | QDR   | mpi       | 2013-11   |
| gwda[023-048]                | 25   | Abu-Dhabi \\ AMD Opteron 6378   | none             | 4✕16   | 2.4 GHz    | 256 GB  | QDR   | fat       | 2013-04   |
| sa[001-032]*                 | 32   | Haswell \\ Intel E5-2680 v3     | none             | 2✕12   | 2.5 GHz    | 256 GB  | QDR   | mpi       | 2015-03   |
| em[001-032]*\\ hh[001-040]*  | 72   | Haswell \\ Intel E5-2640 v3     | none             | 2✕8    | 2.6 GHz    | 128 GB  | QDR   | mpi       | 2015-03   |
| gwde001                      | 1    | Haswell \\ Intel E7-4809 v3     | none             | 4✕8    | 2.0 GHz    | 2 TB    | QDR   | fat+      | 2016-01   |
| dfa[001-015]                 | 15   | Broadwell \\ Intel E5-2650 v4   | none             | 2✕12   | 2.2 GHz    | 512 GB  | FDR   | fat/fat+  | 2016-08   |
| dmp[011-076]                 | 76   | Broadwell \\ Intel E5-2650 v4   | none             | 2✕12   | 2.2 GHz    | 128 GB  | FDR   | mpi       | 2016-08   |
| dsu[001-005]                 | 5    | Haswell \\ Intel E5-4620 v3     | none             | 4✕10   | 2.0 GHz    | 1.5 TB  | FDR   | fat+      | 2016-08   |
| gwdo[161-180]*               | 20   | Ivy-Bridge \\ Intel E3-1270 v2  | NVIDIA GTX 770   | 1✕4    | 3.5 GHz    | 16 GB   | none  | gpu       | 2014-01   |
| dge[001-007]                 | 7    | Broadwell \\ Intel E5-2650 v4   | NVIDIA GTX 1080  | 2✕12   | 2.2 GHz    | 128 GB  | FDR   | gpu       | 2016-08   |
| dge[008-015]                 | 8    | Broadwell \\ Intel E5-2650 v4   | NVIDIA GTX 980   | 2✕12   | 2.2 GHz    | 128 GB  | FDR   | gpu       | 2016-08   |
| dge[016-045]*                | 30   | Broadwell \\ Intel E5-2630 v4   | NVIDIA GTX 1070  | 2✕10   | 2.2 GHz    | 64 GB   | none  | gpu       | 2017-06   |
| dte[001-010]                 | 10   | Broadwell \\ Intel E5-2650 v4   | NVIDIA K40       | 2✕12   | 2.2 GHz    | 128 GB  | FDR   | gpu       | 2016-08   |

//Explanation://
Systems marked with an asterisk (*) are only available to research groups participating in the corresponding hosting agreement.
**GB** = Gigabyte,
**TB** = Terabyte,
**Gb/s** = Gigabit per second,
**GHz** = Gigahertz,
**GT/s** = Gigatransfers per second,
**IB** = InfiniBand,
**QDR** = Quad Data Rate,
**FDR** = Fourteen Data Rate.

For a complete overview of the hardware located in Göttingen, see [[http://hpc.gwdg.de/systems.html]].
=====  Preparing Binaries  =====

Most of the third-party software installed on the cluster is not located in the default path. To use it, the corresponding "module" must be loaded. Furthermore, the module system lets you set up the environment your compiler needs to use special libraries. The big advantage of this system is the (relative) simplicity with which one can coordinate environment settings such as PATH, MANPATH, LD_LIBRARY_PATH and other relevant variables, depending on the requirements of the use case. You can find a list of installed modules, sorted by category, by entering ''module avail'' on one of the frontends. The command ''module list'' shows the currently loaded modules.

To use a module, you can explicitly load the version you want with ''module load software/version''. If you leave out the version, the default version is used. Logging off and back in unloads all modules, as does ''module purge''. You can unload single modules by entering ''module unload software''.
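
A short example session (the module names below are the ones mentioned on this page; run ''module avail'' to see what is actually installed):

<code bash>
module avail                  # list all installed modules, sorted by category
module load intel/compiler    # load the default version of the Intel compiler
module load intel/mkl         # load the Intel MKL
module list                   # show the currently loaded modules
module unload intel/mkl       # unload a single module
module purge                  # unload all modules at once
</code>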

The recommended compiler module for C, C++, and Fortran code is the default Intel compiler, ''intel/compiler''. We also provide GNU and Open64 compilers; the PGI compiler suite will follow. Open64 is often recommended for AMD CPUs, but we have no experience with it. For math (BLAS and fftw3), the Intel MKL (''intel/mkl'') is a good default choice, with ACML being an alternative for AMD processors. It is usually not necessary to load fftw3 modules alongside the MKL, as the latter provides fftw support as well. Please note that the module ''python/scipy/mkl/0.12.0'' provides Python's numpy and scipy libraries compiled against the Intel MKL, thus offering good math performance in a scripting language.
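
As an illustration, compiling a serial C program with the Intel compiler and the sequential MKL could look roughly like the sketch below (''example.c'' is a placeholder; the exact link flags depend on the MKL version and threading model you need):

<code bash>
module load intel/compiler intel/mkl
# Compile and link against the sequential Intel MKL
icc -O2 -o example example.c -mkl=sequential
</code>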

''intel/mpi'' and the various OpenMPI flavors are recommended for MPI, mainly because the mvapich and mvapich2 libraries are less thoroughly tested.
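
A minimal sketch of building and test-running an MPI program with ''intel/mpi'' (''hello.c'' and the process count are placeholders; production runs should go through the batch system rather than the frontends):

<code bash>
module load intel/compiler intel/mpi
# Compile with the Intel MPI wrapper for the Intel C compiler
mpiicc -O2 -o hello hello.c
# Short functional test with 4 processes; keep such tests brief on the frontends
mpirun -np 4 ./hello
</code>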
=====  Running Jobs  =====

  * [[Running Jobs]]
  * [[Running Jobs (for experienced users)]]

===== Latest nodes =====
You can find all important information about the newest nodes [[en:services:application_services:high_performance_computing:new_nodes|here]].

=====  Applications  =====

  * [[Gaussian09]]
  * [[IPython Parallel]]
  * [[Jupyter]]
  * [[Molpro]]
  * [[Orca]]
  * [[PSI4]]
  * [[Turbomole]]
  * [[Singularity]]

=====  User-provided application documentation  =====

[[https://info.gwdg.de/wiki/doku.php?id=wiki:hpc:start]]

=====  Transfer Data  =====

  * [[Transfer Data]]

=====  Environment Setup  =====

[[.bashrc]]

=====  Courses for High Performance Scientific Computing  =====

[[Courses]]

=====  Downloads  =====

{{:en:services:scientific_compute_cluster:parallelkurs.pdf|}}

{{:en:services:scientific_compute_cluster:script.sh.gz|}}

[[Kategorie: Scientific Computing]]