High Performance Computing

Welcome to the official documentation of the Scientific Compute Cluster (SCC). It is the high performance computing system operated by the GWDG for both the Max Planck Society and the University of Göttingen.

This documentation will give you the necessary information to get access to the system, find the right software or compile your own, and run calculations.

Latest News

An archive of all news items can be found at the HPC-announce mailing list.

Accessing the system

To use the compute cluster, you need a full GWDG account. Most employees of the University of Göttingen and the Max Planck Institutes already have such an account. This account is not activated for the use of the compute resources by default. More information on how to get an account, or how to get your existing account activated, can be found here.

Once your account is activated, you can log in to login-mdc.hpc.gwdg.de and login-fas.hpc.gwdg.de. These nodes are only accessible via SSH from the GÖNET. If you connect from the internet, you need to either use a VPN or go through our login server. You can find detailed instructions here.
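For example, connecting from within the GÖNET might look like this (the user name jdoe is a placeholder for your own GWDG account name):

```shell
# Log in to a frontend at the MDC site via SSH.
# Replace "jdoe" with your GWDG user name.
ssh jdoe@login-mdc.hpc.gwdg.de
```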

Submitting jobs

Our compute cluster is divided into frontends and compute nodes. The frontends are meant for editing, compiling, and interacting with the batch system. Please do not use them for intensive testing, i.e. calculations longer than a few minutes. All users share resources on the frontends and will be impaired in their daily work if you overuse them.

To run a program on one (or more) of the compute nodes, you need to interact with our batch system, or scheduler, Slurm. You can do this with several different commands, such as srun, sbatch, and squeue 1). A very simple example of such an interaction would be this:

$ srun hostname
dmp023

This runs the program hostname 2) on one of our compute nodes. However, the program only gets access to a single core and very little memory. That is not a problem for hostname, but if you want to calculate something more serious, you will need access to more resources. You can find out how to request them in our Slurm documentation.
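As a sketch of how such a request could look, here is a minimal batch script; the partition name and resource values are illustrative and should be adapted to your job (see the Slurm documentation for the full set of options):

```shell
#!/bin/bash
#SBATCH --partition=medium     # partition to run in
#SBATCH --ntasks=1             # number of tasks (processes)
#SBATCH --cpus-per-task=4      # cores per task
#SBATCH --mem=8G               # memory for the whole job
#SBATCH --time=01:00:00        # wall-clock limit (hh:mm:ss)

hostname
```

Saved as, e.g., job.sh, it is submitted with sbatch job.sh, and squeue shows the job's state while it is queued or running.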

Software

We provide a growing number of programs, libraries, and software on our system. These are available as modules. You can find a list with the module avail command and load them via module load. For example, if you want to run GROMACS, you simply use module load gromacs to get the most recent version. Additionally, we use a package management tool called Spack to install software. A guide on how to use modules and Spack is available here.
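A typical module session might look like this (the versions actually listed depend on the current installation):

```shell
module avail gromacs   # list the installed GROMACS versions
module load gromacs    # load the most recent version
module list            # show which modules are currently loaded
```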

We provide different compilers and libraries if you want to compile your software on your own. As with the rest of the software, these are available as modules. These include gcc, intel, and nvhpc as compilers, openmpi, intel-mpi as MPI libraries, and others such as fftw and hdf5. You can find more specific instructions on code compilation on our dedicated page.
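For example, building and running an MPI program with GCC and Open MPI might look like the following sketch; hello.c is a placeholder for your own source file:

```shell
module load gcc openmpi       # load compiler and MPI library as modules
mpicc -O2 -o hello hello.c    # compile with the MPI compiler wrapper
srun --ntasks=4 ./hello       # run with 4 MPI ranks via Slurm
```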

A short note on naming

The frontends and transfer nodes also have descriptive names of the form $func-$site.hpc.gwdg.de based on their primary function and site, where $func is either login or transfer while $site is either mdc (mobile data center, access to scratch) or fas (GWDG at Faßberg, access to scratch2). For example, to reach any login node at the MDC site, you would connect to login-mdc.hpc.gwdg.de.
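Following this scheme, the transfer nodes are addressed the same way, e.g. when copying data (the file name and user name here are placeholders):

```shell
# Copy a file to your home directory via the transfer node
# at the Faßberg site.
scp results.tar.gz jdoe@transfer-fas.hpc.gwdg.de:~
```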

Hardware Overview

The following documentation is valid for this list of hardware:

| Nodes                        | #  | CPU                              | GPU                     | Cores | Frequency | Memory | IB   | Partition | Launched |
|------------------------------|----|----------------------------------|-------------------------|-------|-----------|--------|------|-----------|----------|
| gwdd[169-176]                | 8  | Ivy Bridge Intel E5-2670 v2      | none                    | 2✕10  | 2.5 GHz   | 64 GB  | none | medium    | 2013-11  |
| gwde001                      | 1  | Haswell Intel E7-4809 v3         | none                    | 4✕8   | 2.0 GHz   | 2 TB   | none | fat+      | 2016-01  |
| sa[001-032]*                 | 32 | Haswell Intel E5-2680 v3         | none                    | 2✕12  | 2.5 GHz   | 256 GB | QDR  | sa        | 2015-03  |
| em[001-032]* hh[001-040]*    | 72 | Haswell Intel E5-2640 v3         | none                    | 2✕8   | 2.6 GHz   | 128 GB | QDR  | em / hh   | 2015-03  |
| dfa[001-015]                 | 15 | Broadwell Intel E5-2650 v4       | none                    | 2✕12  | 2.2 GHz   | 512 GB | FDR  | fat/fat+  | 2016-08  |
| dmp[011-076]                 | 76 | Broadwell Intel E5-2650 v4       | none                    | 2✕12  | 2.2 GHz   | 128 GB | FDR  | medium    | 2016-08  |
| dsu[001-005]                 | 5  | Haswell Intel E5-4620 v3         | none                    | 4✕10  | 2.0 GHz   | 1.5 TB | FDR  | fat+      | 2016-08  |
| gwdo[161-180]*               | 20 | Ivy Bridge Intel E3-1270 v2      | NVidia GTX 770          | 1✕4   | 3.5 GHz   | 16 GB  | none | gpu-hub   | 2014-01  |
| dge[001-007]                 | 7  | Broadwell Intel E5-2650 v4       | NVidia GTX 1080         | 2✕12  | 2.2 GHz   | 128 GB | FDR  | gpu       | 2016-08  |
| dge[008-015]                 | 8  | Broadwell Intel E5-2650 v4       | NVidia GTX 980          | 2✕12  | 2.2 GHz   | 128 GB | FDR  | gpu       | 2016-08  |
| dge[016-045]*                | 30 | Broadwell Intel E5-2630 v4       | NVidia GTX 1070         | 2✕10  | 2.2 GHz   | 64 GB  | none | gpu-hub   | 2017-06  |
| dte[001-010]                 | 10 | Broadwell Intel E5-2650 v4       | NVidia K40              | 2✕12  | 2.2 GHz   | 128 GB | FDR  | gpu       | 2016-08  |
| amp[001-092]                 | 92 | Cascade Lake Intel Platinum 9242 | none                    | 2✕48  | 2.3 GHz   | 384 GB | OPA  | medium    | 2020-11  |
| agq[001-012]                 | 12 | Cascade Lake Intel Gold 6242     | NVidia Quadro RTX5000   | 2✕16  | 2.8 GHz   | 192 GB | OPA  | gpu       | 2020-11  |
| agt[001-002]                 | 2  | Cascade Lake Intel Gold 6252     | NVidia Tesla V100 / 32G | 2✕24  | 2.1 GHz   | 384 GB | OPA  | gpu       | 2020-11  |

Explanation: Systems marked with an asterisk (*) are only available for research groups participating in the corresponding hosting agreement. GB = Gigabyte, TB = Terabyte, Gb/s = Gigabit per second, GHz = Gigahertz, GT/s = Giga transfer per second, IB = InfiniBand, QDR = Quad Data Rate, FDR = Fourteen Data Rate, OPA = Omni-Path Architecture.

For a complete overview of the hardware located in Göttingen, see https://www.gwdg.de/web/guest/hpc-on-campus/scc

1) As you may have noticed, they all start with an s.
2) a program that just prints the name of the host