Event Classification With Masked Transformer Autoencoders

Description

One of the key tasks in particle physics analyses is the proper classification of particle collision events according to the parent particles and the process that produced them. To handle this task, we are developing a flexible machine learning pipeline that can be applied to a broad range of classification problems. The pipeline will combine established and recent techniques for transformer models, such as masking, pretraining with autoencoder architectures, and cross-attention through task-specific attention heads.
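
To make the idea concrete, below is a minimal PyTorch sketch of this kind of architecture: per-particle features are embedded as tokens, a random subset of tokens is masked and reconstructed during autoencoder-style pretraining, and a task-specific cross-attention head pools the encoded event for classification. All class names, feature counts, and hyperparameters are illustrative assumptions, not the project's actual design.

```python
import torch
import torch.nn as nn

class MaskedEventAutoencoder(nn.Module):
    """Sketch of a masked transformer autoencoder for event classification."""

    def __init__(self, n_features=4, d_model=64, n_heads=4, n_layers=3,
                 n_classes=2, mask_ratio=0.3):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(n_features, d_model)           # per-particle token embedding
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learned replacement for masked tokens
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=4 * d_model,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.reconstruct = nn.Linear(d_model, n_features)     # decoder head for pretraining
        # Task-specific head: a learned query cross-attends to the encoded tokens.
        self.cls_query = nn.Parameter(torch.randn(1, 1, d_model))
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x, pretrain=False):
        # x: (batch, n_particles, n_features)
        tokens = self.embed(x)
        if pretrain:
            # Randomly mask a fraction of particle tokens and reconstruct their features.
            mask = torch.rand(tokens.shape[:2], device=tokens.device) < self.mask_ratio
            tokens = torch.where(mask.unsqueeze(-1),
                                 self.mask_token.expand_as(tokens), tokens)
            encoded = self.encoder(tokens)
            recon = self.reconstruct(encoded)
            # Reconstruction (MSE) loss only on the masked positions.
            return ((recon - x) ** 2)[mask].mean()
        encoded = self.encoder(tokens)
        query = self.cls_query.expand(x.size(0), -1, -1)
        pooled, _ = self.cross_attn(query, encoded, encoded)  # (batch, 1, d_model)
        return self.classifier(pooled.squeeze(1))              # class logits


# Usage sketch: pretrain on masked reconstruction, then fine-tune for classification.
model = MaskedEventAutoencoder()
events = torch.randn(8, 20, 4)            # 8 events, 20 particles, 4 features each (dummy data)
pretrain_loss = model(events, pretrain=True)
logits = model(events)                    # (8, 2) class logits
```

In a full pipeline, the encoder weights learned during pretraining would typically be reused when fine-tuning the classification head, and additional task-specific query heads could be attached to the same encoder for other classification tasks.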

Duration

Total project length: 175 or 350 hours.

Task ideas

Test

Please use this link to access the test for this project.

Requirements

Significant experience in Python and machine learning with PyTorch. Preferably some experience with transformers and multi-GPU parallelization, or with the ROOT library developed at CERN.

Difficulty Level

Advanced

Mentors

Please DO NOT contact mentors directly by email. Instead, please email ml4-sci@cern.ch with the project title in the subject line and include your CV and test results. The mentors will then get in touch with you.

Corresponding Project

Participating Organizations