
Supercomputing and Big Data

given by Prof. Dr.-Ing. Morris Riedel

Prof. Dr.-Ing. Morris Riedel is an Adjunct Associate Professor at the School of Engineering and Natural Sciences of the University of Iceland. He received his PhD from the Karlsruhe Institute of Technology (KIT) and has worked on parallel and distributed systems for 15 years. He has held various positions at the Juelich Supercomputing Centre of Forschungszentrum Juelich in Germany, where he currently heads a research group on ‘High Productivity Data Processing’ and a cross-sectional team on ‘Deep Learning’. His research focuses on parallel and scalable machine learning algorithms and deep learning networks that leverage cutting-edge High Performance Computing (HPC) technologies. Beyond university lectures such as Statistical Data Mining, High Performance Computing (HPC), and Cloud Computing and Big Data, he has given many tutorials on machine learning and deep learning, including invited lectures at Ghent University that are available on YouTube.

Fast training of traditional machine learning models and more innovative deep learning networks on ever-growing quantities of scientific and engineering data (aka ‘Big Data‘) requires high performance computing (HPC) on modern supercomputers. HPC technologies such as those developed within the European DEEP-EST project provide innovative approaches with respect to processing, memory, and modular supercomputer usage during training, testing, and validation. This workshop therefore focuses on parallel and scalable machine learning driven by HPC and will pave the way for participants to use parallel processing on supercomputers as a key enabler for a wide variety of machine learning and deep learning algorithms in use today.

Examples include scientific and engineering applications that leverage traditional machine learning techniques such as scalable feature engineering, density-based spatial clustering of applications with noise (DBSCAN), and support vector machines (SVMs) with kernel methods. These applications of traditional machine learning are then compared with innovative deep learning models built with Keras and TensorFlow, taking advantage of convolutional neural networks (CNNs) for image datasets as well as long short-term memory (LSTM) networks for sequence data. While working through these concrete models, participants will also learn the required aspects of statistical learning theory and how to avoid overfitting using various regularization and cross-validation techniques. The agenda is as follows:

10:00 – 11:30 HPC Introduction & Parallel and Scalable Clustering using DBSCAN

11:30 – 12:00 Coffee break

12:00 – 13:30 Parallel and Scalable Classification using SVMs with Applications

13:30 – 14:30 Lunch

14:30 – 16:00 Deep Learning using CNNs driven by HPC & GPUs

16:00 – 16:30 Coffee break

16:30 – 17:30 Deep Learning using LSTMs driven by HPC & GPUs
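To give a flavour of the morning sessions, the two classical techniques can be sketched on toy data with scikit-learn. This is a minimal single-node illustration only: the workshop itself uses HPC-parallel implementations of DBSCAN and SVMs, and the `eps`, `min_samples`, `C`, and `gamma` settings below are illustrative choices for the two-moons dataset, not values from the workshop material.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Toy 2-D dataset with two interleaving half-moon shapes.
X, y = make_moons(n_samples=200, noise=0.05, random_state=42)

# Density-based clustering: eps is the neighbourhood radius,
# min_samples the density threshold; the label -1 marks noise points.
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
n_clusters = len(set(labels) - {-1})
print("DBSCAN clusters found:", n_clusters)

# Kernel SVM classification, evaluated with 5-fold cross-validation
# to guard against overfitting, as discussed in the workshop.
svm = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(svm, X, y, cv=5)
print("SVM mean CV accuracy: %.3f" % scores.mean())
```

The cross-validation step is the point of contact with statistical learning theory: model quality is estimated on held-out folds rather than on the training data itself.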
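The afternoon deep learning sessions rest on two core layer operations, which can be sketched in plain NumPy: the cross-correlation that CNN layers compute (commonly called "convolution"), and a single LSTM cell step. This is a sketch of the underlying math only; the gate ordering and weight shapes below are illustrative assumptions and are not tied to the Keras/TensorFlow implementations used in the workshop.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 2-D cross-correlation with 'valid' padding,
    as applied by a CNN convolutional layer to one feature map."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with the local image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM cell step for input x (d,), hidden state h_prev (h,),
    cell state c_prev (h,). W: (4h, d), U: (4h, h), b: (4h,);
    gate order assumed here: input, forget, cell candidate, output."""
    h = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:h])          # input gate
    f = sigmoid(z[h:2 * h])      # forget gate
    g = np.tanh(z[2 * h:3 * h])  # candidate cell state
    o = sigmoid(z[3 * h:4 * h])  # output gate
    c = f * c_prev + i * g       # new cell state
    h_new = o * np.tanh(c)       # new hidden state
    return h_new, c
```

Processing a sequence means applying `lstm_step` once per time step, carrying `(h, c)` forward; this recurrent dependency is exactly what makes LSTMs suited to the sequence data mentioned above.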
