
Jenia Jitsev
Jenia Jitsev aims to substantially improve the training of foundation models, the fundamental building blocks of modern AI, by investigating scaling laws.
Research topics
Open foundation models and open datasets
Scaling laws for model performance, evaluation, and reliability of AI methods
Open source in machine learning
Use and optimization of supercomputers for large-scale model training
Jenia Jitsev works at Forschungszentrum Jülich, where he heads the Scalable Learning & Multi-Purpose AI Lab at the Jülich Supercomputing Centre.
Jitsev conducts research into AI systems with strong generalization and transfer capabilities, known as foundation models. He studies the datasets used to train these models and how their correct functioning can be verified, so that they are suitable for a wide variety of tasks and applications. The goal of his research is to develop open foundation models that scale well, are reproducible, and are transparent about which datasets they use and how they are trained and tested.
Jitsev has received several awards for his work. Open models developed by him and his team at LAION have been downloaded more than 100 million times and serve as the basis for well-known text-to-image generators such as Stable Diffusion.
Jitsev studied computer science and psychology at the University of Bonn, specializing in neuroscience and machine learning, and earned his doctorate at Goethe University Frankfurt. In 2013, he joined Forschungszentrum Jülich, where he initially conducted research in neuroscience and medicine before moving to the Jülich Supercomputing Centre in 2017. Jitsev is also co-founder and Scientific Director of the non-profit research organization LAION, which is committed to democratizing AI: it provides open AI datasets and models and develops its own AI systems following open-source principles. In 2023, LAION received the Falling Walls Scientific Breakthrough Award.