These courses teach how to use GWDG’s computer systems. The training begins with an introduction to the user environment provided by GWDG: its computer hardware architecture, its operating system, its compilers, and the software that allocates computing resources. The introductory course is aimed at anyone who wants to use GWDG’s computer systems for the first time, as well as at users who want to switch from an older computer system.
The other courses teach how to exploit hardware for parallel processing. The large computing power of modern computer systems generally comes from networked clusters of multicore systems, often complemented by computational accelerators based on massively parallel special-purpose hardware such as GPUs. To control this kind of parallel computer, special programming languages and libraries make it possible to describe the distribution of the computational load across a large number of compute nodes. All of GWDG’s computing clusters use the well-established Message Passing Interface (MPI). Users who want to write a program for parallel computers for the first time, who want to port sequential programs to parallel computers, or who want to modify existing MPI programs will benefit from the introductions to the principles of parallel processing and to MPI.
In addition to MPI, the established standard for programming parallel computers, there are other languages for parallel processing. These special languages either target the efficient use of specific hardware parallelism or aim at an easier integration of parallel programming concepts into sequential programming languages. The first group includes, for example, the Open Multi-Processing (OpenMP) API, which can be used to program shared-memory multiprocessors, and CUDA, a hardware-centred environment for developing parallel applications for general-purpose GPUs. The second group includes, for example, Unified Parallel C (UPC), Co-array Fortran (CAF), and the relatively new high-level language Julia. Courses on these topics are intended for users who want to apply one of these special languages, either to achieve greater efficiency or to save programming effort.
All courses for programming parallel computers will be given in English or in German, depending on the audience.
The introductory courses on the use of compute clusters, on programming with MPI, on CUDA, and on programming with Julia are offered regularly, because they address the entire user base of GWDG’s compute clusters. The demand for courses on more specialised topics cannot be predicted; these courses are therefore held as needed, with time, place, and thematic priorities coordinated with the interested participants. GWDG will try to gauge the demand for these courses; unsolicited requests from users are very welcome and will be considered in the course planning.
Detailed information about the courses:
Title | Using the GWDG Scientific Compute Cluster – An Introduction |
---|---|
Description | For first-time users, the Linux operating system on the compute cluster presents a substantial initial hurdle, as does preparing their programs and using the batch system. This course is intended to provide a smooth entry into the topic. We will start with connecting to the cluster and an overview of the most important Linux commands. After that, the compilation and installation of software will be covered. Finally, an overview of using the compute resources efficiently with the batch system is given. |
Contents | * Prerequisites for cluster access * Connecting to the frontend via ssh (OpenSSH, or PuTTY on Windows) * The most important Linux commands * Preparing the compilation environment with “modules” * Compiling software * Efficiently submitting jobs to the cluster |
Prerequisites | * GWDG user ID * Your own notebook |
Location | GWDG “Vortragsraum”, external venues possible by agreement |
Time | Twice a year, duration 0.5 days, always before “Parallel Programming with MPI”. For the current schedule, see here. Additional dates by agreement |
Registration | See here |
Course Instructor | Dr. Christian Boehme, Dr. Tim Ehlers |
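The batch-system workflow covered in this course can be sketched as a small job script. The sketch below assumes the scheduler is Slurm; the partition name, resource limits, and module name are placeholders, not GWDG-specific values.

```shell
#!/bin/bash
#SBATCH --partition=medium        # hypothetical partition name
#SBATCH --ntasks=1                # number of tasks requested
#SBATCH --time=00:10:00           # wall-clock limit (hh:mm:ss)
#SBATCH --output=job-%J.out       # file for stdout/stderr, %J = job ID

# Prepare the environment with the "modules" system covered in the course
# (uncomment on the cluster; the module name is a placeholder):
# module load gcc

# The actual work; on the cluster a parallel program would be started with srun
echo "Job started on $(hostname)"
```

On the cluster such a script would be submitted with `sbatch jobscript.sh`; the scheduler writes the program’s output to the file named in the `--output` directive.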
Title | Parallel Programming with MPI |
---|---|
Description | The efficient use of modern parallel computers is based on the exploitation of parallelism at all levels: hardware, programming and algorithms. After a brief overview of basic concepts for parallel processing, the course presents in detail the specific concepts and language features of the Message Passing Interface (MPI) for programming parallel applications. The most important parallelization constructs of MPI are explained and applied in hands-on exercises. The parallelization of algorithms is demonstrated in simple examples, and their implementation as MPI programs is studied in practical exercises. |
Contents | Fundamentals of parallel processing (computer architectures and programming models) Introduction to the Message Passing Interface (MPI) The main language constructs of MPI-1 and MPI-2: * Point-to-point communication * Collective communication incl. synchronization * Parallel operations * Data structures * Parallel I/O * Process management Demonstration and practical exercises with Fortran, C and Python source codes for all topics Practice for the parallelization of sample programs Analysis and optimization of parallel efficiency |
Prerequisites | Participation in the course “Using the GWDG Scientific Compute Cluster - An Introduction”, or equivalent knowledge Practical experience with Fortran, C or Python For the practical exercises: user ID for GWDG’s compute cluster, your own notebook |
Location | GWDG “Vortragsraum”, external venues possible by agreement |
Time | Twice a year, duration 2 days, always after the course “Using the GWDG Scientific Compute Cluster – An Introduction”. For the current schedule, see here. Additional dates by agreement |
Registration | See here |
Course Instructor | Dr. Oswald Haan |
Title | GPU Programming with CUDA - An Introduction |
---|---|
Description | Graphics processors (GPUs) are increasingly used as computational accelerators for highly parallel applications. This course introduces hardware and parallelization concepts for GPUs and the CUDA programming environment for C and Fortran, including the language elements for controlling the processor parallelism and for accessing the various levels of memory. |
Prerequisites | Participation in the course “Using the GWDG Scientific Compute Cluster - An Introduction”, or equivalent knowledge Practical experience with C For the practical exercises: user ID for GWDG's compute cluster (preferable) or course user ID (available upon request), your own notebook |
Location | GWDG “Vortragsraum” |
Time | Duration 1 day |
Registration | See here |
Course Instructor | Prof. Dr. Oswald Haan |
Title | Parallel Programming using OpenMP |
---|---|
Description | Shared-memory parallel processors, in particular the multicore systems in GWDG’s compute clusters, can communicate via shared memory without explicit message exchange. This simplifies the coding and increases the efficiency of parallel processing. The course briefly presents the principles of shared-memory architecture and programming. Then OpenMP, the established standard for programming parallel processors with shared memory, is presented in detail. The main parallelization constructs of OpenMP are explained and applied in practical exercises. The parallelization of algorithms is demonstrated in simple examples, and their implementation as OpenMP programs is explained in practical exercises. |
Contents | Architecture and programming models for shared-memory parallel processors Introduction to OpenMP: execution model and language concept The main language constructs of OpenMP * Compiler directives for the distribution of computations * Compiler directives for data sharing * Intrinsic functions for parallel operations Hybrid parallelization with OpenMP and MPI Demonstration and practical exercises with Fortran or C codes on all topics Practice for the parallelization of sample algorithms Analysis and optimization of parallel efficiency |
Prerequisites | Participation in the course “Using the GWDG Scientific Compute Cluster - An Introduction”, or equivalent knowledge Practical experience with Fortran or C For the practical exercises: user ID for GWDG’s compute cluster, your own notebook |
Location | GWDG “Vortragsraum”, external venues possible by agreement |
Time | Duration 2-3 days, dates by agreement |
Registration | Contact the service-hotline at support@gwdg.de |
Course Instructor | Dr. Oswald Haan |
Title | GPU Programming with CUDA |
---|---|
Description | Graphics processors (GPUs), with their massive parallelism, are increasingly used as computational accelerators for highly parallel applications. CUDA is a widely used programming environment for GPUs. The course explains hardware and parallelization concepts for GPUs. The CUDA programming environment is described in detail: the language elements for controlling the processor parallelism are explained, and access to the various levels of memory is illustrated. All topics are demonstrated by means of examples in practical exercises. Tools that support the programming and analysis of programs are introduced and can be tested in exercises. |
Contents | Determined by external experts |
Prerequisites | Participation in the course “Using the GWDG Scientific Compute Cluster - An Introduction”, or equivalent knowledge Practical experience with C For the practical exercises: user ID for GWDG’s compute cluster, your own notebook |
Location | GWDG “Vortragsraum”, external venues possible by agreement |
Time | Duration 2-3 days, dates by agreement |
Registration | Contact the service-hotline at support@gwdg.de |
Course Instructor | GWDG in cooperation with external experts |
Title | Introduction to Unified Parallel C (UPC) and Co-array Fortran (CAF) |
---|---|
Description | PGAS (Partitioned Global Address Space) is a programming model in which a global address space is logically distributed across different processes, with direct access from every process to all parts of the global address space. The advantage of PGAS is that it simplifies parallel programming: the data layout is controlled explicitly, while remote data is accessed transparently and synchronization is implicit. PGAS is implemented as language extensions of C (Unified Parallel C, UPC) and of Fortran (Co-array Fortran, CAF, included in the Fortran 2008 standard). The course describes the language constructs of these two implementations of PGAS and explains their use in practical exercises. |
Contents | Introduction to the PGAS programming model The language elements of UPC The language elements of CAF Demonstration and practical exercises with Fortran or C examples Practice for the parallelization of sample programs |
Prerequisites | Participation in the course “Using the GWDG Scientific Compute Cluster - An Introduction”, or equivalent knowledge Practical experience with Fortran or C For the practical exercises: user ID for GWDG’s compute cluster, your own notebook |
Location | GWDG “Vortragsraum”, external venues possible by agreement |
Time | Duration 2 days, dates by agreement |
Registration | Contact the service-hotline at support@gwdg.de |
Course Instructor | Dr. Oswald Haan |