Deep-learning algorithms don’t work…

… and yet they do. 🤖✨

Deep learning relies on highly complex loss functions. Algorithms are supposed to optimize them, but we know they can’t. So why are the results still so impressive?

Our new paper offers a mathematical explanation. 📘🧠 We rigorously prove that deep-learning algorithms don’t actually need to find the true optimum. Being close to a local optimum is already enough. In nerdier terms: We show that every reasonable stationary point of certain neural networks — and all points nearby — generalize essentially as well as the global optimum. 🔍📈
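The phenomenon can be illustrated with a toy experiment. The following is our own illustrative sketch, not the paper’s setup: plain gradient descent on a small two-layer network’s nonconvex loss typically halts near a stationary point (small gradient norm) rather than at the global optimum, and that is exactly the regime the paper studies.

```python
import numpy as np

# Illustrative toy experiment (not the paper's setup): gradient descent on
# the nonconvex loss of a tiny two-layer tanh network. It settles near a
# stationary point -- the gradient norm shrinks -- without any guarantee of
# reaching the global optimum.

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 3))
y = np.sin(X @ np.array([1.0, -2.0, 0.5]))  # synthetic regression target

W1 = rng.standard_normal((3, 8)) * 0.5  # hidden-layer weights
w2 = rng.standard_normal(8) * 0.5       # output weights

def loss_and_grads(W1, w2):
    """Mean-squared-error loss and its gradients for the two-layer network."""
    H = np.tanh(X @ W1)                  # hidden activations
    r = H @ w2 - y                       # residuals
    loss = 0.5 * np.mean(r ** 2)
    gw2 = H.T @ r / len(y)               # gradient w.r.t. output weights
    gH = np.outer(r, w2) * (1 - H ** 2)  # backprop through tanh
    gW1 = X.T @ gH / len(y)              # gradient w.r.t. hidden weights
    return loss, gW1, gw2

loss0, gW1, gw2 = loss_and_grads(W1, w2)
grad_norm0 = np.sqrt(np.sum(gW1 ** 2) + np.sum(gw2 ** 2))

for step in range(5000):                 # plain gradient descent
    loss, gW1, gw2 = loss_and_grads(W1, w2)
    W1 -= 0.1 * gW1
    w2 -= 0.1 * gw2

grad_norm = np.sqrt(np.sum(gW1 ** 2) + np.sum(gw2 ** 2))
print(f"loss {loss0:.4f} -> {loss:.4f}, gradient norm {grad_norm0:.2e} -> {grad_norm:.2e}")
```

The run ends at a point where the gradient is nearly zero but nothing certifies global optimality; the paper’s message is that, for certain networks, such points already generalize essentially as well as the global optimum.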

The paper has been accepted at TMLR! 🎉 Find it here.

Huge congratulations to Mahsa and Fang—two rising stars in machine learning. 🌟🌟

🚀 Additional Funding for High-Dimensional Time Series Research

Time-series analysis is one of the core pillars of statistics. However, the high dimensionality and sheer size of today’s datasets pose new statistical and algorithmic challenges.

Our project tackles these challenges while also addressing classical questions like stability and stationarity. More broadly, we aim to contribute to the modernization and expansion of the theoretical and applied foundations of time-series analysis.

Why does this matter?
Time series are everywhere:
📈 Stock markets
🛒 Sales forecasting
🌦 Weather prediction
🤖 Even text data (especially relevant in the era of ChatGPT)

A deeper understanding and more efficient, reliable models for high-dimensional time series can lead to significant advances across industries and research domains.

We’re grateful for the support from:
• Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for the funding
• University of Hamburg for continuous support
• Rainer von Sachs (Belgium) for the collaboration

Excited to get started — let’s get to work! 💪

Back to the roots

Two decades ago, I began studying physics at ETH Zürich, driven by the desire to understand the world around us. Along the way, I drifted into the depths of mathematics and the excitement of AI — and somewhere in that journey, physics slipped a bit into the background. Recently, though, my group and I have reconnected with the field through new collaborations, at UHH and beyond, and it has been genuinely rewarding to put my skills to use again. The most recent highlight is our astrophysics paper, which just appeared in Astronomy & Astrophysics. The astrophysicists on the team certainly carried the project to the stars, but it was a joy to be part of it.

🎲 Can AI Help Win the Spiel des Jahres? 🎲

Game designers create worlds, invent rules, and bring imagination to life. But what does everyday life behind this dream job really look like?

Reiner Knizia, one of the world’s most successful game designers, shares exclusive insights:

💡What role does AI already play in game development today?
💡Does AI understand fun?
💡And how does a vague idea turn into a finished game that delights millions?

He also reveals which of his untold game ideas are still waiting to be realized, and by which standards he personally measures success. 🏆

As always, you can find the episode here.

Many thanks to the great team around Nico Räcker and Jonathan Welle, and to the University of Hamburg for the support!

A New Type of Sparsity for More Efficient Matrix-Matrix Multiplications

We all love sparsity: it makes computations faster, guarantees tighter, and interpretations easier. In our paper, which will appear in TMLR, we introduce a new type of sparsity, which we term “cardinality sparsity”. We show that cardinality sparsity has all the usual perks; more importantly, we demonstrate that it is also a very powerful concept for matrix-matrix multiplications. Indeed, cardinality sparsity can speed up such computations and reduce memory usage dramatically. Well done, Ali! 👍👍👍
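To convey the general flavor, here is a minimal sketch, assuming cardinality sparsity roughly means that each row of a matrix contains only a few distinct values. The function name `matmul_low_cardinality` and the grouping strategy are our own illustration, not the algorithm from the paper: identical coefficients within a row let us sum the corresponding rows of the second factor once and scale the result, replacing many multiplications with one.

```python
import numpy as np

# Hypothetical sketch of exploiting low cardinality in A's rows when
# computing A @ B. Not the paper's algorithm -- purely illustrative.

def matmul_low_cardinality(A, B):
    """Compute A @ B by grouping identical entries within each row of A.

    Row i of A @ B equals sum_j A[i, j] * B[j]. Grouping the indices j by
    their shared value v gives sum over distinct v of v * (sum of B[j] with
    A[i, j] == v): one multiplication per distinct value instead of one per
    column.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(n):
        # Distinct values in row i and, for each column, which value it holds.
        values, inverse = np.unique(A[i], return_inverse=True)
        for v_idx, v in enumerate(values):
            if v == 0:
                continue  # classical sparsity: zero entries cost nothing
            # Sum the rows of B sharing this coefficient, then scale once.
            C[i] += v * B[inverse == v_idx].sum(axis=0)
    return C
```

The fewer distinct values per row (and the more zeros), the fewer scalar multiplications this needs; with all-distinct rows it degenerates to an ordinary product.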

Can AI Expose Counterfeit Food?

Fake honey, adulterated olive oil, diluted fruit juice: food fraud is a billion-dollar business. But could your smartphone soon expose such fakes?

In our latest podcast episode, food chemist Stephan Seifert talks with us about:
🍯 Why honey in particular is counterfeited so often
🕵️‍♂️ The sophisticated tricks of food fraudsters
🧠 Which analytical methods already exist today
🚀 What to watch out for when shopping so you don’t get ripped off

Information about the podcast is available here and wherever you get your podcasts. And follow us on Insta @datasciencetalks_podcast so you never miss an episode!

Models for Leptonic Radiation From Galaxies

In a recent study, we explored the use of various machine learning techniques to model the radiation emitted by galactic objects called blazars. Blazars produce significant amounts of radiation, which makes them a fascinating target for scientific exploration. This project serves as a perfect example of how data science and applied sciences can work hand-in-hand to tackle complex problems. While the methods were crucial, the true credit belongs to the incredible astrophysicists Anastasiia, Anna, and Frederike, who really did all of the heavy lifting! 💪💪💪 Here is the paper, and here is the wonderful astrophysics group!

Challenges and Opportunities for Statistics in the Era of Data Science

Our opinion on the state of statistics and its future has now appeared here in Harvard Data Science Review. Three conclusions are:
📈 Statistics is still very much alive!
📈 Statistics can contribute to modern data science in many ways, through formal modeling, inference, mathematical guarantees, and much more.
📈 However, statistics also needs to ensure that it stays relevant, joining forces with other data-related fields and participating in the education of future data scientists.

The paper is also featured in the journal’s editorial, which I found a very good read more generally.

Organizing the workshop and the paper together with Claudia was a rewarding journey.

Sincere thanks to:
🙏 VolkswagenStiftung, for hosting our workshop, where this paper originated. Everything was perfect: efficient and friendly organization, delicious food, wonderful scenery, practical seminar rooms, …
🙏 Soumendra Lahiri, especially for supporting Claudia and me in the publishing process.
🙏 And all of the co-authors, for the inspiring discussions in Hannover and skillful contributions to the paper:
Harald Binder, Werner Brannath, Ivor Cribben, Holger Dette, Philipp Doebler, Oliver Feng, Axel Gandy, Sonja Greven, Barbara Hammer, Stefan Harmeling, Thomas Hotz, Göran Kauermann, Joscha Krause, Georg Krempl, Alicia Nieto-Reyes, Ostap Okhrin, Hernando Ombao, Florian Pein, Michal Pešta, Dimitris Politis, Li-Xuan Qin, Tom Rainforth, Holger Rauhut, Henry Reeve, David Salinas, Johannes Schmidt-Hieber, Clayton Scott, Johan Segers, Myra Spiliopoulou, Adalbert Wilhelm, Ines Wilms, and Yi Yu.

Team at GPSD

Our team has been showcasing our work at various workshops and conferences these months. Most recently, Ali, Francesco, Gitte and Mahsa gave talks and presented posters at the 17th German Probability and Statistics Days in Dresden. For example, Ali gave a presentation about geometry-inspired insights into deep-learning architectures, and Mahsa talked about stationary points in deep learning. Well done everyone—let’s go, Hamburg data science! 🧑‍🏫️