
Parallel Programming

Module IN2147

This module is offered by the TUM Department of Informatics.

This module handbook describes the content, learning outcomes, teaching methods, and examination type, and links to the current dates for courses and the module examination in the respective sections.

Basic Information

IN2147 is a semester module taught in English at Bachelor's and Master's level, offered in the summer semester.

This module is included in the following catalogues within the physics study programs.

  • Catalogue of non-physics elective courses
Total workload: 150 h
Contact hours: 60 h
Credits (ECTS): 5 CP

Content, Learning Outcome and Preconditions

Content

The course starts with a motivation for parallel programming and a classification of parallel architectures. It focuses first on parallelization for distributed-memory architectures with MPI. It introduces the major concepts, e.g., point-to-point communication, collective operations, communicators, virtual topologies, non-blocking communication, single-sided communication, and parallel I/O. In addition, it covers the overall parallelization approach based on four phases, i.e., decomposition, assignment, orchestration, and mapping. The next section presents dependence analysis as the major theoretical basis for parallelization. It introduces program transformations and discusses their profitability and safety based on data dependence analysis. The second major programming interface in the course is OpenMP for shared-memory systems. This section covers most of the language concepts as well as proposed extensions. In the last part, the lecture presents novel programming interfaces, such as PGAS languages, Threading Building Blocks, CUDA, OpenCL, and OpenACC.
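As an illustration of the MPI concepts listed above, the following minimal C sketch (constructed for this description, not taken from the course materials) combines a point-to-point message with a collective reduction:

    /* Minimal MPI sketch: a point-to-point message between two ranks,
     * followed by a collective reduction across all ranks.
     * Build and run, assuming an MPI installation:
     *   mpicc mpi_sketch.c -o mpi_sketch && mpirun -np 4 ./mpi_sketch */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Point-to-point: rank 0 sends one integer to rank 1. */
        if (size >= 2) {
            int msg = 42;
            if (rank == 0) {
                MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("rank 1 received %d\n", msg);
            }
        }

        /* Collective operation: sum all rank ids onto rank 0. */
        int local = rank, sum = 0;
        MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum of ranks 0..%d = %d\n", size - 1, sum);

        MPI_Finalize();
        return 0;
    }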

Learning Outcome

At the end of the module, students are able to create parallel programs in MPI and OpenMP. They understand the performance aspects of different parallelization strategies and can evaluate these strategies in the context of applications. They are able to apply data dependence analysis and program transformations, and they can analyze and tune the performance of parallel applications.
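As a shared-memory counterpart to the MPI sketch above, a minimal OpenMP example in C (again illustrative, not from the course materials) might look like this:

    /* Minimal OpenMP sketch: a parallel loop with a reduction,
     * approximating pi by the midpoint rule.
     * Build: gcc -fopenmp omp_sketch.c -o omp_sketch */
    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        const int n = 100000000;
        const double step = 1.0 / n;
        double sum = 0.0;

        /* Each thread accumulates a private partial sum; the
         * reduction clause combines them when the loop ends. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++) {
            double x = (i + 0.5) * step;
            sum += 4.0 / (1.0 + x * x);
        }

        printf("pi ~= %.10f (max %d threads)\n",
               sum * step, omp_get_max_threads());
        return 0;
    }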

Preconditions

IN2076 Advanced Computer Architecture

Courses, Learning and Teaching Methods and Literature

Courses and Schedule

VO (2 SWS): Parallel Programming (IN2147)
  Lecturer: Schulz, M.
  Assistants: Elis, B., Herr, D., Huber, D.
  Dates: Mon, 10:15–11:45, MI HS1
  Links: eLearning, documents

UE (2 SWS): Exercise - Parallel Programming (IN2147)
  Lecturers: Elis, B., Herr, D.
  Responsible/Coordination: Schulz, M.
  Dates: Tue, 12:00–14:00, Interims I 101
  Links: documents

Learning and Teaching Methods

The different parallel programming models and parallelization techniques are introduced in the lecture. Voluntary short student presentations demonstrate the techniques in application areas. In a central exercise session, assignments are presented and discussed. The students solve the assignments and submit their solutions, which are checked for correctness. In the assignments, the students apply the learned concepts to larger example programs.

Media

Slides

Literature

- MPI: A Message-Passing Interface Standard (Language Standard)
- OpenMP: Open Application Program Interface (Language Standard)
- R. Allen, K. Kennedy: Optimizing Compilers for Modern Architectures, Morgan Kaufmann
- David E. Culler et al.: Parallel Computer Architecture: A Hardware/Software Approach, Morgan Kaufmann

Module Exam

Description of exams and course work

The exam takes the form of a 90-minute written test. Questions assess familiarity with the concepts of parallel programming models, languages, and tools. Code snippets of sequential and parallel programs are given; students apply their knowledge of dependence analysis and code transformations to these codes. Based on code snippets, students also apply the parallel models covered in the lecture to demonstrate their ability to evaluate different parallelization strategies, to parallelize code, and to tune applications.
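To illustrate the kind of dependence-analysis reasoning described above, consider the following hypothetical C snippets (constructed for illustration, not actual exam material); the node-splitting transformation shown is a standard technique from the Allen/Kennedy text listed under Literature:

    /* (a) Loop-carried flow dependence: iteration i reads a[i-1],
     * which iteration i-1 writes, so the loop must stay sequential
     * as written. */
    void running_sum(double *a, const double *b, int n) {
        for (int i = 1; i < n; i++)
            a[i] = a[i - 1] + b[i];
    }

    /* (b) Only an anti-dependence: iteration i reads a[i+1], which
     * iteration i+1 overwrites. Copying the read operands into a
     * scratch array first (node splitting) removes the dependence,
     * so the second loop is safe to parallelize with OpenMP.
     * Build with: gcc -fopenmp -c snippets.c */
    void shift_add(double *a, const double *b, double *scratch, int n) {
        for (int i = 0; i < n - 1; i++)
            scratch[i] = a[i + 1];        /* save old values */
        #pragma omp parallel for
        for (int i = 0; i < n - 1; i++)
            a[i] = scratch[i] + b[i];     /* iterations now independent */
    }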

Exam Repetition

The exam can be retaken in the following semester.
