# Hessian-based toolbox for reliable and interpretable machine learning in physics

```bibtex
@inproceedings{Dawid2021HessianbasedTF,
  title  = {Hessian-based toolbox for reliable and interpretable machine learning in physics},
  author = {Anna Dawid and Patrick Huembeli and Micha{\l} Tomza and Maciej Lewenstein and Alexandre Dauphin},
  year   = {2021}
}
```

Anna Dawid, Patrick Huembeli, Michał Tomza, Maciej Lewenstein, and Alexandre Dauphin
Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland
ICFO – Institut de Ciències Fotòniques, The Barcelona Institute of Science and Technology, Av. Carl Friedrich Gauss 3, 08860 Castelldefels (Barcelona), Spain
Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
ICREA, Pg. Lluís Companys 23, 08010 Barcelona, Spain
(Dated: August 5, 2021)

#### References

Showing 1–10 of 85 references

Empirical Analysis of the Hessian of Over-Parametrized Neural Networks

- Computer Science, Mathematics
- ICLR
- 2018

This work links two observations: small-batch and large-batch gradient descent appear to converge to different basins of attraction but are in fact connected through a flat region and so belong to the same basin.

An Investigation into Neural Net Optimization via Hessian Eigenvalue Density

- Computer Science, Mathematics
- ICML
- 2019

To understand the dynamics of optimization in deep neural networks, we develop a tool to study the evolution of the entire Hessian spectrum throughout the optimization process. Using this, we study a…
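Studying the Hessian spectrum of a loss surface, as this reference does, can be illustrated on a toy model. The sketch below (not the paper's implementation, which scales to full networks via spectral density estimation) builds a finite-difference Hessian of a tiny regression loss and diagonalizes it; the model, data, and step size are all illustrative choices.

```python
import numpy as np

def loss(w, X, y):
    # toy one-layer model: mean squared error of tanh(X @ w) against targets y
    pred = np.tanh(X @ w)
    return np.mean((pred - y) ** 2)

def hessian(f, w, eps=1e-4):
    # finite-difference Hessian: H[i, j] ~= d^2 f / (dw_i dw_j)
    n = w.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            H[i, j] = (f(w + e_i + e_j) - f(w + e_i)
                       - f(w + e_j) + f(w)) / eps**2
    return H

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.tanh(X @ np.array([1.0, -0.5, 0.3]))
w = rng.normal(size=3)

H = hessian(lambda p: loss(p, X, y), w)
# symmetrize before diagonalizing; eigvalsh returns eigenvalues in ascending order
eigvals = np.linalg.eigvalsh((H + H.T) / 2)
```

For realistic networks the Hessian is never formed explicitly; Hessian-vector products plus Lanczos-type iterations recover the spectrum, which is the approach this reference develops.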

Machine learning quantum phases of matter beyond the fermion sign problem

- Computer Science, Physics
- Scientific Reports
- 2017

It is demonstrated that convolutional neural networks (CNNs) can be optimized for quantum many-fermion systems such that they correctly identify and locate quantum phase transitions in such systems.

Kernel methods for interpretable machine learning of order parameters

- Computer Science, Physics
- 2017

Support vector machines (SVMs) are explored, which are a class of supervised kernel methods that provide interpretable decision functions and can learn the mathematical form of physical discriminators for a set of two-dimensional spin models: the ferromagnetic Ising model, a conserved-order-parameter Ising model, and the Ising gauge theory.

Unsupervised machine learning account of magnetic transitions in the Hubbard model

- Physics, Medicine
- Physical Review E
- 2018

We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of…

Identifying quantum phase transitions using artificial neural networks on experimental data

- Computer Science, Physics
- Nature Physics
- 2019

This work employs an artificial neural network and deep-learning techniques to identify quantum phase transitions from single-shot experimental momentum-space density images of ultracold quantum gases and obtains results that were not feasible with conventional methods.

Identification of emergent constraints and hidden order in frustrated magnets using tensorial kernel methods of machine learning

- Computer Science, Physics
- Physical Review B
- 2019

It is demonstrated how a machine-learning approach can automatically learn the intricate phase diagram of a classical frustrated spin model, paving the way for the search for new orders and spin liquids in generic frustrated magnets.

Automated discovery of characteristic features of phase transitions in many-body localization

- Mathematics, Physics
- Physical Review B
- 2019

We identify a new "order parameter" for the disorder-driven many-body localization (MBL) transition by leveraging artificial intelligence. This allows us to pin down the transition, as the point at…

Unsupervised learning of phase transitions: from principal component analysis to variational autoencoders

- Mathematics, Computer Science
- Physical review. E
- 2017

Unsupervised machine learning techniques are examined for learning the features that best describe configurations of the two-dimensional Ising model and the three-dimensional XY model; the most promising algorithms are found to be principal component analysis and variational autoencoders.

Understanding Black-box Predictions via Influence Functions

- Mathematics, Computer Science
- ICML
- 2017

This paper uses influence functions — a classic technique from robust statistics — to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction.
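The influence-function estimate can be checked exactly on a small convex model. The sketch below (my illustration, not the paper's code) approximates how removing one training point changes a ridge-regression test loss via g_testᵀ H⁻¹ g_k / n, and compares it against actual leave-one-out retraining; the data sizes and ridge strength are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 40, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)
lam = 1e-2  # ridge strength

def fit(X, y):
    # closed-form minimizer of mean squared error + lam * ||w||^2
    n, d = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

w = fit(X, y)
H = 2 * (X.T @ X / n + lam * np.eye(d))   # Hessian of the full objective
x_t, y_t = rng.normal(size=d), 0.0        # an arbitrary test point
g_test = 2 * (x_t @ w - y_t) * x_t        # gradient of the test loss at w

pred, actual = [], []
for k in range(n):
    g_k = 2 * (X[k] @ w - y[k]) * X[k]    # gradient of the loss at training point k
    # influence approximation to the test-loss change from removing point k
    pred.append(g_test @ np.linalg.solve(H, g_k) / n)
    mask = np.arange(n) != k
    w_k = fit(X[mask], y[mask])           # exact leave-one-out retraining
    actual.append((x_t @ w_k - y_t) ** 2 - (x_t @ w - y_t) ** 2)

corr = np.corrcoef(pred, actual)[0, 1]    # predicted vs. actual changes
```

For deep models the same formula is used with implicit Hessian-vector products, since H cannot be inverted explicitly.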