Andrea Agazzi
Professor of Applied Stochastics
Institute for Mathematical Statistics and Actuarial Sciences, University of Bern
Alpengasse 22,
3012 Bern, Switzerland

Research Interests

I am interested in applied probability theory, more specifically in interacting particle systems arising in real-world applications. I have worked on scaling limits for models of chemical reaction networks, focusing on the relationship between their dynamics and their structure. More recently, I have studied scaling limits of machine learning algorithms viewed as interacting particle systems, and the dynamics of fluid models.


Publications and Preprints (see also my curriculum vitae or Google Scholar page)

  1. Scalable Bayesian inference for the generalized linear mixed model, with S. Berchuk, F. Medeiros, and S. Mukherjee, arXiv:2403.03007
  2. Fair Artificial Currency Incentives in Repeated Weighted Congestion Games: Equity vs. Equality, with L. Pedroso, M. Heemels, and M. Salazar, arXiv:2403.03999
  3. Random Splitting of Point Vortex Flows, with F. Grotto and J. Mattingly, arXiv:2311.15680
  4. Global optimality of Elman-type recurrent neural networks in the mean-field regime, with J. Lu and S. Mukherjee, International Conference on Machine Learning (2023)
  5. Random Splitting of Fluid Models: Positive Lyapunov Exponents, with J. Mattingly and O. Melikechi, arXiv:2210.02958
  6. Random Splitting of Fluid Models: Ergodicity and Convergence, with J. Mattingly and O. Melikechi, Communications in Mathematical Physics (2023)
  7. A homotopic approach to policy gradients for linear quadratic regulators with nonlinear controls, with C. Chen, Proceedings of the IEEE Conference on Decision and Control (2022)
  8. Large deviations for Markov jump processes with uniformly diminishing rates, with L. Andreis, M. Renger, and R. Patterson, Stochastic Processes and Their Applications (2022)
  9. Global optimality of softmax policy gradient with single hidden layer neural networks in the mean-field regime, with J. Lu, International Conference on Learning Representations (2021)
  10. Temporal Difference Learning with nonlinear function approximation in the lazy training regime, with J. Lu, Proceedings of Machine Learning Research, Mathematical and Scientific Machine Learning (2021)
  11. Seemingly stable chemical kinetics can be stable, marginally stable or unstable, with J. Mattingly, Comm. Math. Sci. 18 (6), 1605-1642 (2020)
  12. Large Deviations Theory for Markov Jump Models of Chemical Reaction Networks, with A. Dembo and J.-P. Eckmann, Ann. Appl. Prob. 28 (3), 1821-1855 (2018)
  13. On the Geometry of Chemical Network Theory: Lyapunov Function and Large Deviations Theory, with A. Dembo and J.-P. Eckmann, J. Stat. Phys. 172 (2), 321-352 (2018)
  14. The Colored Hofstadter Butterfly for the Honeycomb Lattice, with G. M. Graf and J.-P. Eckmann, J. Stat. Phys. 156 (3), 417-426 (2014)

Teaching at the University of Pisa (UNIPI)

  1. (Deep) Learning Theory (PhD course in mathematics, 2023/24)
  2. Statistics I (Management Engineering, academic year 2023/24)
  3. Mathematical Statistics (Mathematics, academic year 2023/24)

Teaching at Duke University

  1. Statistical Learning Theory (STATS 303, Duke)
  2. Stochastic Calculus (MATH 545, Duke)
  3. Introduction to Probability and Statistics (STATS 210, Duke)
  4. Probability theory (MATH 230, Duke)