Description
Machine learning and deep learning have significantly advanced computational physics, particularly in simulating complex systems. Equivariance is critical for modeling physical systems, as it introduces a strong inductive bias into the probability distribution learned by the machine learning model. However, imposing symmetry often results in reduced acceptance rates in self-learning Monte Carlo (SLMC).
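The acceptance step that the abstract refers to can be made concrete. In SLMC, moves are proposed with a cheap learned effective energy and corrected with a Metropolis test against the original energy; only the mismatch between the two energies enters the acceptance probability, so a better-fitting model raises the acceptance rate. A minimal sketch (the function name and argument layout are illustrative, not from the talk):

```python
import math

def slmc_acceptance(e_orig_old, e_eff_old, e_orig_new, e_eff_new, beta=1.0):
    """Acceptance probability for a self-learning Monte Carlo update.

    The proposal was generated by simulating the effective (learned)
    energy e_eff, so detailed balance only requires correcting for the
    mismatch between the original energy e_orig and e_eff.
    """
    log_ratio = -beta * ((e_orig_new - e_eff_new) - (e_orig_old - e_eff_old))
    # Clamp at 0 before exponentiating: min(1, exp(log_ratio)).
    return min(1.0, math.exp(min(0.0, log_ratio)))
```

If the effective model reproduces the original energy differences exactly, `log_ratio` is zero and every proposal is accepted; the acceptance rate is therefore a direct measure of model quality.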
We propose a symmetry-equivariant attention mechanism for SLMC, designed so that model accuracy can be improved systematically. The proposed architecture is evaluated on the spin-fermion model (double-exchange model) on a two-dimensional lattice [1]. Our results demonstrate that the method overcomes the limitations of linear models, achieving improved acceptance rates. Furthermore, the model quality follows a scaling law akin to that of large language models, improving consistently with model depth.
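One standard way to obtain an equivariant layer of the kind described here is group averaging: apply the base map to every symmetry-transformed copy of the input and average the back-transformed outputs. The sketch below (a toy stand-in, not the talk's architecture) symmetrizes a plain dot-product self-attention over lattice sites with respect to the C4 rotations of a square lattice; the positional bias is included only to make the base map non-equivariant, so the symmetrization visibly matters:

```python
import numpy as np

def attention(x):
    # Toy dot-product self-attention over lattice sites, with a fixed
    # positional bias that deliberately breaks rotation symmetry.
    L, _, d = x.shape
    pos = np.linspace(0.0, 1.0, L * L)[:, None]
    q = k = v = x.reshape(L * L, d) + pos
    w = np.exp(q @ k.T / np.sqrt(d))
    w /= w.sum(axis=1, keepdims=True)
    return (w @ v).reshape(x.shape)

def c4_equivariant(f, x):
    # Group averaging: f_sym(x) = (1/4) * sum_g g^{-1} f(g x) over the
    # four lattice rotations g, which makes f_sym exactly C4-equivariant.
    out = np.zeros_like(x)
    for k in range(4):
        out += np.rot90(f(np.rot90(x, k, axes=(0, 1))), -k, axes=(0, 1))
    return out / 4.0
```

Equivariance means rotating the input and then applying the symmetrized layer gives the same result as applying the layer first and rotating the output; this holds for the averaged map regardless of what `f` does.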
Our work paves the way for more accurate and efficient machine-learning-assisted Monte Carlo algorithms for simulating complex physical systems.