M.Sc. Seminar: Future Trends in High Performance Computing (IN218305, IN2107)
Course 0000002603 in the 2020 summer semester (SS 2020)
Basic data

| Course type | Seminar |
|---|---|
| Scope | 2 SWS (semester hours per week) |
| Supervising organization | Informatics 5 - Chair of Scientific Computing (Prof. Bungartz) |
| Lecturers | Philipp Samfaß; lead/coordination: Michael Georg Bader |
| Dates | |
Assignment to modules
- IN2183: CSE Seminar Scientific Computing / CSE Seminar Course Scientific Computing
This module is included in the following catalogs:
- further modules from other subject areas
Further information
Alongside examinations, courses are the building blocks of modules. Please note that information on course content, and in particular on examination and coursework requirements, is generally available only at the module level (see the section "Assignment to modules" above).
Supplementary notes

Over the last ten years, the era of vast increases in processing power achieved mostly by raising processor clock frequencies has come to an end. Instead, computer architectures are becoming more complex in order to accommodate the growing demand for processing power. Modern CPUs typically offer a wide range of SIMD instructions for fine-grained data parallelism and can execute several threads on each of their cores (see the first sketch below). Memory accesses pass through multiple cache levels to hide memory access latencies. In addition, hardware specialized in massively parallel computation is becoming more and more popular; examples are GPUs and accelerators such as the Xeon Phi. In the HPC context, several nodes, each with its own CPU(s) and GPU(s), may be joined into a cluster.

Regular programming techniques and paradigms are no longer sufficient to fully utilize this hardware. Frameworks such as OpenCL take the structure and heterogeneity of the underlying hardware into account and provide a programming environment that exposes all available resources, such as GPUs and accelerators. The behavior of the hardware at runtime also needs to be considered: modern cluster architectures are not necessarily capable of running at peak utilization 100% of the time. To avoid overheating of the hardware and the resulting degradation of the silicon, the clock frequency of the CPU may be drastically reduced, or single nodes may even be shut down completely for a time.

In this seminar, we will explore runtime systems that try to mitigate the impact of these measures through automatic distribution of tasks, automatic tuning to the target platform, or automatic offloading of work to accelerator devices (the Kokkos sketch below gives a flavor of this programming style). Each topic will cover a specific runtime system. Participants are expected to implement a shallow water proxy application using the runtime system they are assigned to, and to leverage that runtime system's specific benefits (a minimal serial sketch of such a proxy application follows below).

Preliminary list of runtime systems:
- OmpSs
- StarPU
- RAJA
- UPC++
- Kokkos
- Chapel
- Charm++/AMPI
- DASH
- Legion/Regent
- Chameleon
- Legate NumPy
- Julia
- GPI
- HPX
- DaCe
- PaRSEC
- MPI ULFM
- OpenCC/OpenACC
- ... your suggestions?

NOTE REGARDING REGISTRATION: Registration is via the matching system. If you are interested in the seminar, please write us an email with some information about yourself:
- What are you interested in?
- Why are you interested in the seminar?
- What knowledge do you already have in this area?

This is not so much a pre-selection as a way for us to see that you are genuinely interested in the seminar.
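To illustrate the fine-grained data parallelism and multithreading mentioned above, here is a minimal C++ sketch (an illustrative addition, not course material) that combines OpenMP threading with SIMD vectorization in a single loop:

```cpp
// Minimal sketch: coarse-grained threading plus fine-grained SIMD
// vectorization via OpenMP, as offered by modern multi-core CPUs.
// Compile e.g. with: g++ -O2 -fopenmp saxpy.cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// y <- a*x + y: iterations are distributed across threads, and each
// thread's chunk is additionally vectorized with SIMD instructions.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    #pragma omp parallel for simd
    for (std::size_t i = 0; i < x.size(); ++i) {
        y[i] = a * x[i] + y[i];
    }
}

int main() {
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
    saxpy(3.0f, x, y);
    std::printf("y[0] = %f\n", y[0]);  // expected: 5.0
}
```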
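To give a flavor of how such runtime systems abstract over heterogeneous hardware, the following sketch uses Kokkos, one of the systems listed above. It is an illustrative example assuming a standard Kokkos installation, not seminar code; the same source compiles for whichever backend Kokkos was built with (serial, OpenMP threads, CUDA, ...):

```cpp
#include <Kokkos_Core.hpp>
#include <cstdio>

// Kept outside main(): some backends (e.g. CUDA) do not allow device
// lambdas to be defined inside main().
void dot_example() {
    const int n = 1 << 20;
    // Views allocate in the default execution space's memory, e.g. in GPU
    // device memory when Kokkos is built with the CUDA backend.
    Kokkos::View<double*> x("x", n), y("y", n);

    Kokkos::parallel_for("init", n, KOKKOS_LAMBDA(const int i) {
        x(i) = 1.0;
        y(i) = 2.0;
    });

    double sum = 0.0;
    // Parallel reduction over the dot product, run on the active backend.
    Kokkos::parallel_reduce(
        "dot", n,
        KOKKOS_LAMBDA(const int i, double& partial) { partial += x(i) * y(i); },
        sum);

    std::printf("dot = %.1f (expected %.1f)\n", sum, 2.0 * n);
}

int main(int argc, char* argv[]) {
    Kokkos::initialize(argc, argv);
    dot_example();  // Views must be destroyed before finalize()
    Kokkos::finalize();
}
```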
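Finally, a hypothetical serial sketch of the kind of shallow water proxy application participants would port to their assigned runtime system: a 1D finite-volume scheme with a Lax-Friedrichs flux. All names and parameters here are illustrative assumptions, not the seminar's actual code base.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 400;                       // number of grid cells
    const double g = 9.81;                   // gravitational acceleration
    const double dx = 1.0 / n, dt = 0.0003;  // dt chosen to satisfy the CFL condition

    std::vector<double> h(n, 1.0), hu(n, 0.0);    // water height and momentum
    for (int i = 0; i < n / 10; ++i) h[i] = 2.0;  // dam-break initial condition

    // Physical fluxes of the 1D shallow water equations.
    auto fh  = [](double, double huc) { return huc; };
    auto fhu = [g](double hc, double huc) { return huc * huc / hc + 0.5 * g * hc * hc; };

    for (int step = 0; step < 500; ++step) {
        std::vector<double> h2 = h, hu2 = hu;  // boundary cells stay fixed
        // The cell-update loop is where the parallelism lives; a runtime
        // system would distribute it over cores, nodes, or accelerators.
        for (int i = 1; i < n - 1; ++i) {
            h2[i]  = 0.5 * (h[i-1] + h[i+1])
                   - 0.5 * dt / dx * (fh(h[i+1], hu[i+1]) - fh(h[i-1], hu[i-1]));
            hu2[i] = 0.5 * (hu[i-1] + hu[i+1])
                   - 0.5 * dt / dx * (fhu(h[i+1], hu[i+1]) - fhu(h[i-1], hu[i-1]));
        }
        h.swap(h2);
        hu.swap(hu2);
    }
    std::printf("water height at domain center after 500 steps: %f\n", h[n / 2]);
}
```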
Links
- E-learning course (e.g. Moodle)
- TUMonline entry