Seminar: Future Trends in Computing (IN2183,IN2107,IN0014)
Course 0000002603 in the 2017 summer semester
Basic data

Course type | Seminar |
---|---|
Scope | 2 SWS |
Supervising organization | Informatik 5 - Lehrstuhl für Scientific Computing (Prof. Bungartz) |
Lecturers | Direction/coordination: Michael Georg Bader |
Dates | |
Assignment to modules
-
IN2183: CSE Seminar Scientific Computing / CSE Seminar Course Scientific Computing
This module is included in the following catalogs:
- further modules from other fields of study
Further information
Alongside examinations, courses are the building blocks of modules. Note that information on course content, and in particular on examinations and coursework, is usually available only at the module level (see the section "Assignment to modules" above).
Additional notes

Over the last ten years, the era of large gains in processing power achieved mostly by raising processor clock frequencies has come to an end. Instead, computer architectures are becoming more complex in order to meet the growing demand for processing power. Modern CPUs typically offer a wide range of SIMD instructions for fine-grained data parallelism and can execute several threads on each of their several cores. Memory accesses pass through multiple cache levels to hide memory access latencies. In addition, hardware specialized in massively parallel computation is becoming increasingly popular; examples are GPUs and accelerators such as the Xeon Phi. In the HPC context, several nodes, each with its own CPU(s) and GPU(s), may be joined into a cluster. Conventional programming techniques and paradigms are no longer sufficient to fully utilize this hardware. Frameworks such as OpenCL take the structure and heterogeneity of the underlying hardware into account and provide a programming environment that exposes all available resources, such as GPUs and accelerators.

The behavior of the hardware at runtime also needs to be considered. Modern cluster architectures are not necessarily capable of running at peak utilization 100% of the time: to avoid overheating of the hardware and the resulting degradation of the silicon, the clock frequency of the CPU may be drastically reduced, or single nodes may even be shut down completely for a time. In this seminar, we will explore these issues, along with software solutions that try to mitigate the impact of these measures through automatic distribution of tasks, automatic tuning to the target platform, or automatic offloading of work to accelerator devices.
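As a minimal, hypothetical sketch (not part of the seminar material) of the task-distribution pattern mentioned above, the following standard-library Python snippet farms independent tasks out to a pool of workers. Real HPC codes would use MPI processes, SIMD kernels, or accelerator offloading for compute-bound work; the thread pool here only illustrates the scheduling idea, and the function names (`kernel`, `distribute`) are invented for this example.

```python
# Hypothetical illustration of distributing independent tasks to workers.
from concurrent.futures import ThreadPoolExecutor

def kernel(n):
    # Stand-in for a compute-intensive kernel: sum of squares below n.
    return sum(i * i for i in range(n))

def distribute(workloads, workers=4):
    # Map independent tasks onto a pool of workers; the runtime decides
    # which worker executes which task, much like the automatic task
    # distribution discussed in the seminar description.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(kernel, workloads))

if __name__ == "__main__":
    print(distribute([10, 100]))
```

Because the tasks are independent, the same pattern scales from a thread pool on one CPU to nodes of a cluster or to accelerator devices.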
Links

Additional information | TUMonline entry |