Andrea Agazzi's home page

Andrea Agazzi
Professor of Applied Stochastics
Institute for Mathematical Statistics and Actuarial Sciences
University of Bern
Alpengasse 22,
3012 Bern, Switzerland

Bio and Research Interests

Before moving to Bern, I was a Tenure-Track Assistant Professor (RTD/b) in the Mathematics Department at the University of Pisa. Prior to that, I was a Griffiths Research Assistant Professor in the Mathematics Department at Duke University, where I worked with Jonathan Mattingly and Jianfeng Lu. I obtained my PhD in Theoretical Physics at the University of Geneva, under the supervision of Jean-Pierre Eckmann, after graduating in physics from Imperial College London and ETH Zurich.

I am interested in applied probability theory, more specifically in interacting particle systems for real-world applications. I have worked on scaling limits for models of chemical reaction networks, focusing on the relations between their dynamics and their structure. More recently, I have worked on scaling limits of machine learning algorithms viewed as interacting particle systems, and on the dynamics of fluid models.


Publications and Preprints (see also vitae or Google Scholar Page)

  1. Emergence of clustering in mean-field transformer models, with G. Bruno and F. Pasqualotto, oral at International Conference on Learning Representations (2025)
  2. Scalable Bayesian inference for the generalized linear mixed model, with S. Berchuk, F. Medeiros, and S. Mukherjee, arXiv:2403.03007
  3. Fair Artificial Currency Incentives in Repeated Weighted Congestion Games: Equity vs. Equality, with L. Pedroso, M. Heemels, and M. Salazar, IEEE Conference on Decision and Control (2024)
  4. Random Splitting of Point Vortex Flows, with F. Grotto and J. Mattingly, Electronic Communications in Probability 29 (2024)
  5. Global optimality of Elman-type recurrent neural networks in the mean-field regime, with J. Lu and S. Mukherjee, International Conference on Machine Learning (2023)
  6. Random Splitting of Fluid Models: Positive Lyapunov Exponents, with J. Mattingly and O. Melikechi, arXiv:2210.02958
  7. Random Splitting of Fluid Models: Ergodicity and Convergence, with J. Mattingly and O. Melikechi, Communications in Mathematical Physics (2023)
  8. A homotopic approach to policy gradients for linear quadratic regulators with nonlinear controls, with C. Chen, IEEE Conference on Decision and Control (2022)
  9. Large deviations for Markov jump processes with uniformly diminishing rates, with L. Andreis, M. Renger, and R. Patterson, Stochastic Processes and Their Applications (2022)
  10. Global optimality of softmax policy gradient with single hidden layer neural networks in the mean-field regime, with J. Lu, International Conference on Learning Representations (2021)
  11. Temporal Difference Learning with nonlinear function approximation in the lazy training regime, with J. Lu, Proceedings of Machine Learning Research, Mathematical and Scientific Machine Learning (2021)
  12. Seemingly stable chemical kinetics can be stable, marginally stable, or unstable, with J. Mattingly, Comm. Math. Sci. 18 (6), 1605-1642 (2020)
  13. Large Deviations Theory for Markov Jump Models of Chemical Reaction Networks, with A. Dembo and J.-P. Eckmann, Ann. Appl. Prob. 28 (3), 1821-1855 (2018)
  14. On the Geometry of Chemical Network Theory: Lyapunov Function and Large Deviations Theory, with A. Dembo and J.-P. Eckmann, J. Stat. Phys. 172 (2), 321-352 (2018)
  15. The Colored Hofstadter Butterfly for the Honeycomb Lattice, with G. M. Graf and J.-P. Eckmann, J. Stat. Phys. 156 (3), 417-426 (2014)

Teaching at UNIPI

  1. (Deep) Learning Theory (PhD course in mathematics, 2023/24)
  2. Statistica I (Ingegneria Gestionale, a.a. 2023/24)
  3. Statistica Matematica (Matematica, a.a. 2023/24)

Teaching at Duke University

  1. Statistical Learning Theory (STATS 303, Duke)
  2. Stochastic Calculus (MATH 545, Duke)
  3. Introduction to Probability and Statistics (STATS 210, Duke)
  4. Probability theory (MATH 230, Duke)