Quark mass models and reinforcement learning
Abstract:
In this paper, we apply reinforcement learning to the problem of constructing models in particle physics. As an example environment, we use the space of Froggatt-Nielsen type models for quark masses. Using a basic policy-based algorithm we show that neural networks can be successfully trained to construct Froggatt-Nielsen models which are consistent with the observed quark masses and mixing. The trained policy networks lead from random to phenomenologically acceptable models for over 90% of episodes and after an average episode length of about 20 steps. We also show that the networks are capable of finding models proposed in the literature when starting at nearby configurations.

Topological formulae for the zeroth cohomology of line bundles on del Pezzo and Hirzebruch surfaces
Abstract:
We show that the zeroth cohomology of effective line bundles on del Pezzo and Hirzebruch surfaces can always be computed in terms of a topological index.

Swampland Conjectures and Infinite Flop Chains
Machine learning Calabi-Yau four-folds
Abstract:
Hodge numbers of Calabi-Yau manifolds depend non-trivially on the underlying manifold data and they present an interesting challenge for machine learning. In this letter we consider the data set of complete intersection Calabi-Yau four-folds, a set of about 900,000 topological types, and study supervised learning of the Hodge numbers h1,1 and h3,1 for these manifolds. We find that h1,1 can be successfully learned (to 96% precision) by fully connected classifier and regressor networks. While both types of networks fail for h3,1, we show that a more complicated two-branch network, combined with feature enhancement, can act as an efficient regressor (to 98% precision) for h3,1, at least for a subset of the data. This hints at the existence of an as yet unknown formula for the Hodge numbers.
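To illustrate the kind of architecture the last abstract refers to, the following is a minimal forward-pass sketch of a two-branch network with feature enhancement: one branch receives the raw configuration matrix and the other receives hand-crafted features before the two are merged into a single regression head. Everything here is an illustrative assumption, not the paper's actual setup: the layer widths, the choice of row/column sums as "enhanced" features, and the use of untrained random weights are placeholders.

```python
import numpy as np

# Hypothetical sketch only: random weights, no training loop.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def dense(x, w, b):
    # Fully connected layer: affine map of the input vector.
    return x @ w + b

def enhance(config):
    # Illustrative "feature enhancement": append row and column sums
    # of the configuration matrix (a stand-in for engineered features).
    return np.concatenate([config.sum(axis=1), config.sum(axis=0)])

def two_branch_forward(config, params):
    raw = config.flatten()          # branch 1: raw matrix entries
    feat = enhance(config)          # branch 2: enhanced features
    h1 = relu(dense(raw, params["w1"], params["b1"]))
    h2 = relu(dense(feat, params["w2"], params["b2"]))
    merged = np.concatenate([h1, h2])
    # Single regression output, standing in for a Hodge-number estimate.
    return dense(merged, params["w3"], params["b3"])

# Toy 4x4 integer "configuration matrix" and random parameters.
config = rng.integers(0, 5, size=(4, 4)).astype(float)
params = {
    "w1": rng.normal(size=(16, 8)), "b1": np.zeros(8),
    "w2": rng.normal(size=(8, 8)),  "b2": np.zeros(8),
    "w3": rng.normal(size=(16, 1)), "b3": np.zeros(1),
}
prediction = two_branch_forward(config, params)
print(prediction.shape)  # (1,)
```

In an actual regressor the two branches would be trained jointly, with the merged layer letting the network combine raw data with features that are hard for a plain fully connected network to discover on its own.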