My name is Sebastian Liem. I'm a data scientist at ViriCiti, where I work to make it easier for public transportation companies to transition to and operate electric vehicles. I transitioned into industry from astroparticle physics research at the GRAPPA Institute, where I worked on Beyond the Standard Model phenomenology, trying to figure out the particle nature of dark matter. My focus was on global statistical scans of high-dimensional models, and on applying machine learning methods to accelerate those studies.
The easiest way to reach me is via e-mail or Twitter. I also have a GitHub account with sundry projects.
Articles and equivalent works I've produced.
One of the most promising strategies to identify the nature of dark matter consists in the search for new particles at accelerators and with so-called direct detection experiments. Working within the framework of simplified models, and making use of machine learning tools to speed up statistical inference, we address the question of what we can learn about dark matter from a detection at the LHC and a forthcoming direct detection experiment. We show that with a combination of accelerator and direct detection data, it is possible to identify newly discovered particles as dark matter, by reconstructing their relic density assuming they are weakly interacting massive particles (WIMPs) thermally produced in the early Universe, and demonstrating that it is consistent with the measured dark matter abundance. An inconsistency between these two quantities would instead point either towards additional physics in the dark sector, or towards a non-standard cosmology, with a thermal history substantially different from that of the standard cosmological model.
The interpretation of Large Hadron Collider (LHC) data in the framework of Beyond the Standard Model (BSM) theories is hampered by the need to run computationally expensive event generators and detector simulators. Performing statistically convergent scans of high-dimensional BSM theories is consequently challenging, and in practice unfeasible for very high-dimensional BSM theories. We present here a new machine learning method that accelerates the interpretation of LHC data, by learning the relationship between BSM theory parameters and data. As a proof-of-concept, we demonstrate that this technique accurately predicts natural SUSY signal events in two signal regions at the High Luminosity LHC, up to four orders of magnitude faster than standard techniques. The new approach makes it possible to rapidly and accurately reconstruct the theory parameters of complex BSM theories, should an excess in the data be discovered at the LHC.
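The core idea of the abstract above can be sketched as a surrogate model: train a regressor once on (theory parameters, simulated signal yields) pairs, then query it instead of rerunning the expensive event generator. The sketch below is illustrative only; the parameter names, the toy "simulator", and the choice of a random forest are assumptions for demonstration, not the method or physics used in the paper.

```python
# Illustrative sketch (not the paper's actual pipeline): learn the mapping
# from BSM theory parameters to expected signal-region event counts, so the
# expensive event generator can be bypassed during a parameter scan.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: each row is a theory parameter point
# (e.g. masses and couplings); values and ranges are made up.
X_train = rng.uniform(100.0, 2000.0, size=(500, 4))

# Stand-in for the expensive simulation chain: expected events in two
# signal regions (a toy function, not real physics).
def slow_simulator(theta):
    return np.column_stack([
        1e4 / theta[:, 0],                   # toy yield in signal region A
        1e4 / (theta[:, 0] + theta[:, 1]),   # toy yield in signal region B
    ])

y_train = slow_simulator(X_train)

# Train the surrogate once on the simulated points...
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

# ...then predict yields for new parameter points far faster than
# rerunning the simulator for each one.
theta_new = rng.uniform(100.0, 2000.0, size=(10, 4))
predicted_yields = surrogate.predict(theta_new)
```

In a real scan the training points would come from the full generator-plus-detector-simulation chain, and the trained surrogate would be queried inside the likelihood evaluation of the statistical scan.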
Barrett is a Python package for processing and visualising statistical inferences made using the nested sampling algorithm MultiNest. Its main distinguishing feature is full out-of-core processing, allowing Barrett to handle arbitrarily large datasets. This is achieved by using the HDF5 data format.
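The out-of-core idea can be illustrated with a short sketch: keep the samples on disk in an HDF5 file and stream them in chunks, so memory use stays bounded no matter how large the dataset grows. This is not Barrett's actual API; the file layout, dataset name, and chunk size below are assumptions for demonstration.

```python
# Illustrative sketch of out-of-core processing with HDF5 (not Barrett's
# actual API): compute a statistic by streaming the dataset in chunks
# instead of loading it all into memory.
import numpy as np
import h5py

# Create a toy HDF5 file of "posterior samples" for the demonstration.
rng = np.random.default_rng(1)
with h5py.File("samples.h5", "w") as f:
    f.create_dataset("mass", data=rng.normal(500.0, 50.0, 100_000))

def chunked_mean(path, dataset, chunk_size=10_000):
    """Mean of an HDF5 dataset, reading only chunk_size elements at a time."""
    total, count = 0.0, 0
    with h5py.File(path, "r") as f:
        ds = f[dataset]
        for start in range(0, ds.shape[0], chunk_size):
            chunk = ds[start:start + chunk_size]  # only this slice is in RAM
            total += chunk.sum()
            count += chunk.size
    return total / count

mean_mass = chunked_mean("samples.h5", "mass")
```

The same chunked pattern extends to histograms and credible-interval estimates, which is what makes arbitrarily large MultiNest output tractable on a laptop.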
A selection of searches by the ATLAS experiment at the LHC for the electroweak production of SUSY particles are used to study their impact on the constraints on dark matter candidates. The searches use 20 fb^−1 of proton-proton collision data at √s = 8 TeV. A likelihood-driven scan of a five-dimensional effective model focusing on the gaugino--higgsino and Higgs sector of the phenomenological minimal supersymmetric Standard Model is performed. This scan uses data from direct dark matter detection experiments, the relic dark matter density and precision flavour physics results. Further constraints from the ATLAS Higgs mass measurement and SUSY searches at LEP are also applied. A subset of models selected from this scan is used to assess the impact of the selected ATLAS searches in this five-dimensional parameter space. These ATLAS searches substantially impact those models for which the mass m(χ̃^0_1) of the lightest neutralino is less than 65 GeV, excluding 86% of such models. The searches have limited impact on models with larger m(χ̃^0_1) due to either heavy electroweakinos or compressed mass spectra, where the mass splittings between the produced particles and the lightest supersymmetric particle are small.
We present global fits of an effective field theory description of real and complex scalar dark matter candidates. We simultaneously take into account all possible dimension 6 operators consisting of dark matter bilinears and gauge invariant combinations of quark and gluon fields. We derive constraints on the free model parameters for both the real (five parameters) and complex (seven parameters) scalar dark matter models, obtained by combining Planck data on the cosmic microwave background, direct detection limits from LUX, and indirect detection limits from the Fermi Large Area Telescope. We find that for real scalars indirect dark matter searches disfavour a dark matter particle mass below 100 GeV. For the complex scalar dark matter particle, current data have a limited impact due to the presence of operators that lead to p-wave annihilation and do not contribute to the spin-independent scattering cross-section. Although current data are not informative enough to strongly constrain the theory parameter space, we demonstrate the power of our formalism to reconstruct the theoretical parameters compatible with an actual dark matter detection, by assuming that the excess of gamma rays observed by the Fermi Large Area Telescope towards the Galactic centre is entirely due to dark matter annihilations. Please note that the excess can very well be due to astrophysical sources such as millisecond pulsars. We find that scalar dark matter interacting via effective field theory operators can in principle explain the Galactic centre excess, but that such an interpretation is in strong tension with the non-detection of gamma rays from dwarf galaxies in the real scalar case. In the complex scalar case there is enough freedom to relieve the tension.