Lattice Field Theory and Machine Learning

Auditorium, 6th Floor, Astro-Math. Building (NTU Campus)

No. 1, Sec. 4, Roosevelt Rd., Da'an Dist., Taipei City 106319, Taiwan
Miranda Cheng (Institute of Mathematics, Academia Sinica)
Description

Lattice Field Theory and Machine Learning

December 5–6, 2024

Auditorium, 6th Floor, Astro-Math. Building (NTU Campus)

Proving the confinement phenomenon in Yang-Mills theory is a fundamental problem and, in particular, one of the seven Millennium Prize Problems in mathematics. Lattice field theory constitutes the only widely accepted ab initio approach to this problem; however, its steep computational cost has been a bottleneck to achieving the desired accuracy. Recent rapid advances in machine learning can potentially provide transformative new tools for lattice field theory computations. This small workshop aims to bring together interested international experts in lattice field theory and machine learning to expedite progress in the AI-for-lattice effort.


Speakers (* = participating online)

Gert Aarts (Swansea University)
Mathis Gerdes (Universiteit van Amsterdam)
Gurtej Kanwar (The University of Edinburgh)
Yuki Nagai (Information Technology Center, The University of Tokyo)
Kim Nicoli (University of Bonn)
Fernando Romero-López (University of Bern)*
Akio Tomiya (Tokyo Woman's Christian University)
Yi-Zhuang You (University of California, San Diego)

ORGANIZERS

Miranda Cheng (Academia Sinica)
C.-J. David Lin (National Yang Ming Chiao Tung University)
Ying-Jer Kao (National Taiwan University)


CONTACT

For administrative support, please contact:

Ariel Wang (Ms.)
Email: conference@gmail.math.sinica.edu.tw
Phone: (+886) 2-2368-5999 ext. 341

Lattice Field Theory and Machine Learning
  • Thursday, December 5
    • 1
      Self-learning Monte Carlo with equivariant Transformer

      Machine learning and deep learning have significantly advanced computational physics, particularly in simulating complex systems. Equivariance is critical for modeling physical systems, as it introduces a strong inductive bias into the probability distribution learned by the machine learning model. However, imposing symmetry often results in reduced acceptance rates in self-learning Monte Carlo (SLMC).
      We propose a symmetry-equivariant attention mechanism for SLMC, designed for systematic improvement. The proposed architecture is evaluated on the spin-fermion model (double exchange model) on a two-dimensional lattice [1]. Our results demonstrate that the method overcomes the limitations of linear models, achieving improved acceptance rates. Furthermore, the model quality follows a scaling law akin to large language models, showing consistent improvement with increased model depth.
      Our work paves the way for the development of more accurate and efficient Monte Carlo algorithms with machine learning for simulating complex physical systems.

      Speaker: Yuki Nagai (Information Technology Center, The University of Tokyo)
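      For background, the Metropolis acceptance step that underlies SLMC is compact enough to sketch. Below is a minimal, illustrative Python version assuming an independence proposal drawn from the learned effective model; the callables S, S_eff, and propose_from_model are hypothetical placeholders, not code from the talk.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      def slmc_step(x, propose_from_model, S, S_eff):
          """One SLMC Metropolis step with a proposal from the learned model.

          S      -- true action of the target theory
          S_eff  -- effective action learned by the (equivariant) model
          Because the proposal is drawn from exp(-S_eff), the effective
          action enters the acceptance ratio with the opposite sign to
          the true action.
          """
          x_new = propose_from_model()
          log_ratio = (S(x) - S(x_new)) + (S_eff(x_new) - S_eff(x))
          if np.log(rng.uniform()) < log_ratio:
              return x_new, True   # accepted
          return x, False          # rejected
      ```

      The acceptance rate of this step is exactly what the equivariant model is trained to improve: the better S_eff matches S, the closer log_ratio stays to zero.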
    • 11:30 AM
      Tea Break
    • 2
      Gauge covariant Transformer
      Speaker: Akio Tomiya (Tokyo Woman's Christian University)
    • 1:00 PM
      Lunch Break
    • 3
      Renormalization Group and Flow-Based Generative Models

      Generative modeling in machine learning can be understood as the inverse process of renormalization, establishing a profound connection between lattice field theory and modern AI techniques. This perspective allows the development of unsupervised learning algorithms that inherently learn to perform renormalization from the field theory action. In this talk, I will explore how flow-based generative models, inspired by AdS/CFT duality, can accelerate the sampling of conformal field theories (CFTs) using network architectures defined on the AdS bulk geometry. Furthermore, I will discuss how these concepts extend to image generation, enabling more efficient error correction through holographic principles.

      Speaker: Yi-Zhuang You (University of California, San Diego)
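      As background (standard flow-model material, not specific to this talk): a flow-based generative model transports a simple prior density p_z through a learned invertible map f_theta, with log-density given by the change-of-variables formula

      \[
        \log p_\theta(x) \;=\; \log p_z\!\big(f_\theta^{-1}(x)\big) \;+\; \log\Big|\det \frac{\partial f_\theta^{-1}(x)}{\partial x}\Big| .
      \]

      In the renormalization-group reading sketched in the abstract, the inverse map f_theta^{-1} acts like a coarse-graining step, carrying field configurations to progressively simpler latent variables.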
    • 3:30 PM
      Tea Break
    • 4
      Hands-on NeuLat: a toolbox for generative neural samplers in lattice field theory
      Speaker: Kim Nicoli (University of Bonn)
  • Friday, December 6
    • 5
      What is the flow-based sampling endgame?

      Flow-based sampling has emerged as an effective way to parameterize sampling from many known and unknown distributions. Within the context of lattice quantum field theory, there are several ongoing efforts to incorporate these and related methods into practical calculations. I will first discuss our recent work in this area and then turn to the question of what the "endgame" of this research direction might be, along with some proposed answers.

      Speaker: Gurtej Kanwar (The University of Edinburgh)
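      For readers new to this program: flow-based samplers for lattice theories are typically trained self-supervised by minimizing the reverse Kullback-Leibler divergence to the Boltzmann weight exp(-S), which needs no pre-existing samples. A minimal PyTorch-style sketch follows; the flow object and its sample method are hypothetical stand-ins for whatever architecture is actually used.

      ```python
      import torch

      def reverse_kl_loss(flow, action, batch_size=1024):
          """Reverse-KL training loss for flow-based lattice sampling.

          Assumes flow.sample(n) returns configurations phi together with
          their model log-probabilities log q(phi), and that action(phi)
          evaluates the lattice action S(phi).  Minimizing
              E_q[ log q(phi) + S(phi) ]
          equals KL(q || p) for p ~ exp(-S), up to the constant log Z.
          """
          phi, logq = flow.sample(batch_size)
          return (logq + action(phi)).mean()
      ```

      At sampling time, exactness is restored on top of the trained flow by reweighting or by a Metropolis independence step, so imperfect training costs efficiency rather than correctness.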
    • 11:30 AM
      Tea Break
    • 6
      Continuous normalizing flows for gauge theories

      We define an expressive continuous normalizing flow architecture for matrix Lie groups that is equivariant under group transformations.
      We apply this to lattice gauge theories in two dimensions as a proof-of-principle and demonstrate competitive performance, showing its potential as a tool for future lattice sampling tasks.

      Speaker: Mathis Gerdes (Universiteit van Amsterdam)
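      As a rough illustration of the "continuous" part: a continuous normalizing flow integrates a learned vector field v(x, t), and the log-density evolves by the instantaneous change of variables d(log q)/dt = -tr(dv/dx). The sketch below uses a flat R^d field and plain Euler steps for brevity; the gauge-theory version in the talk would instead act on group-valued link variables with an equivariant v.

      ```python
      import torch

      def cnf_integrate(v, x, logq, steps=100):
          """Euler integration of a continuous normalizing flow (one sample).

          v(x, t) maps R^d x [0, 1] -> R^d.  The Jacobian trace is computed
          exactly with autograd, which is affordable only for small d;
          large systems typically use a stochastic trace estimator.
          """
          dt = 1.0 / steps
          for k in range(steps):
              t = k * dt
              x = x.detach().requires_grad_(True)
              vx = v(x, t)
              # exact trace of dv/dx: sum_i d v_i / d x_i
              trace = sum(
                  torch.autograd.grad(vx[i], x, retain_graph=True)[0][i]
                  for i in range(x.numel())
              )
              x = (x + dt * vx).detach()
              logq = logq - dt * trace
          return x, logq
      ```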
    • 1:00 PM
      Lunch Break
    • 7
      Diffusion models: cumulants and lattice field theory
      Speaker: Gert Aarts (Swansea University)
    • 3:30 PM
      Tea Break
    • 8
      Applications of flow models to the generation of correlated lattice QCD ensembles (online)
      Speaker: Fernando Romero-López (University of Bern)