
RE: Offline

in #blog · 8 years ago

In collisions at the LHC, we get final-state products that consist of many particles. We then cluster them in some way (a procedure that involves free parameters and a choice of method) and study the output of this clustering. From this output, we can study several observables, properties, etc...
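To make the clustering step concrete, here is a rough toy sketch of anti-kt-style sequential recombination in Python. The radius R is one of the free parameters I mean; in practice this is done with dedicated tools such as FastJet, and this unoptimised version (with helper names of my own) is only for illustration:

```python
import math

def _to_obj(px, py, pz, e):
    """Build a pseudo-particle with the kinematic quantities the distance measure needs.
    Assumes every particle has nonzero transverse momentum."""
    pt = math.hypot(px, py)
    eta = math.asinh(pz / pt)          # pseudorapidity
    phi = math.atan2(py, px)
    return {"px": px, "py": py, "pz": pz, "e": e, "pt": pt, "eta": eta, "phi": phi}

def _dR2(a, b):
    """Squared angular distance in the (eta, phi) plane, with phi wrap-around."""
    dphi = abs(a["phi"] - b["phi"])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    deta = a["eta"] - b["eta"]
    return deta * deta + dphi * dphi

def antikt_cluster(particles, R=0.4):
    """Toy anti-kt sequential recombination (distance exponent p = -1).
    `particles` is a list of (px, py, pz, E) tuples; returns the clustered jets.
    O(n^3), only meant to show where R enters as a free parameter."""
    objs = [_to_obj(*p) for p in particles]
    jets = []
    while objs:
        best = None                                    # (distance, kind, i, j)
        for i, a in enumerate(objs):
            d_beam = 1.0 / a["pt"] ** 2                # beam distance d_iB = pt_i^(-2)
            if best is None or d_beam < best[0]:
                best = (d_beam, "beam", i, None)
            for j in range(i + 1, len(objs)):
                b = objs[j]
                d_ij = min(1.0 / a["pt"] ** 2, 1.0 / b["pt"] ** 2) * _dR2(a, b) / R ** 2
                if d_ij < best[0]:
                    best = (d_ij, "pair", i, j)
        _, kind, i, j = best
        if kind == "beam":                             # promote this object to a final jet
            jets.append(objs.pop(i))
        else:                                          # recombine the pair (add four-momenta)
            a, b = objs[i], objs[j]
            objs.pop(j); objs.pop(i)                   # j > i, so pop j first
            objs.append(_to_obj(a["px"] + b["px"], a["py"] + b["py"],
                                a["pz"] + b["pz"], a["e"] + b["e"]))
    return jets
```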

Now, in terms of searches for new phenomena, we have the signal (the new phenomenon) and the background (the Standard Model expectation). We want to know what to check (how to select our collision events) to maximise the signal and reduce the background.
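As a very simplified picture of what "maximise the signal and reduce the background" means in practice: one can scan a one-sided cut on a single observable and keep the threshold with the best approximate significance S/sqrt(S+B). A toy sketch, with made-up distributions and a function name of my own:

```python
import numpy as np

def best_cut(signal_vals, background_vals, n_steps=200):
    """Scan a one-sided cut on an observable and return the threshold that
    maximises the approximate significance S / sqrt(S + B).
    Inputs are arrays of the observable for simulated signal and background events."""
    lo = min(signal_vals.min(), background_vals.min())
    hi = max(signal_vals.max(), background_vals.max())
    best_thr, best_sig = None, -1.0
    for thr in np.linspace(lo, hi, n_steps):
        s = np.sum(signal_vals > thr)        # signal events passing the cut
        b = np.sum(background_vals > thr)    # background events passing the cut
        if s + b == 0:
            continue
        sig = s / np.sqrt(s + b)
        if sig > best_sig:
            best_thr, best_sig = thr, sig
    return best_thr, best_sig

# toy usage with invented distributions
rng = np.random.default_rng(0)
sig = rng.normal(1.0, 0.3, 10_000)   # pretend signal observable
bkg = rng.exponential(0.5, 100_000)  # pretend background observable
print(best_cut(sig, bkg))
```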

I would like to design something that is, first, capable of choosing the reconstruction method automatically with the aim of improving the signal-to-noise ratio, and second, capable of choosing which observable to focus on to distinguish the signal from the background.
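A very naive sketch of that idea: loop over candidate reconstruction settings and candidate observables, and keep the combination that separates signal from background best (here measured with the ROC AUC from scikit-learn). The `toy_observables` helper is a stand-in I made up for "reconstruct the event, then compute observables":

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def toy_observables(is_signal, R):
    """Placeholder for "reconstruct with radius R, then compute observables".
    The reconstruction is faked: R only changes how smeared the observables are,
    which is enough to illustrate the selection loop."""
    n = 5_000
    smear = 0.2 + abs(R - 0.6)          # pretend R = 0.6 gives the sharpest reconstruction
    mass = rng.normal(1.0 if is_signal else 0.0, 0.5 + smear, n)
    nsub = rng.normal(0.5 if is_signal else 0.0, 1.0 + smear, n)
    return {"jet_mass": mass, "n_subjettiness": nsub}

best = None
for R in (0.4, 0.6, 0.8, 1.0):                     # candidate reconstruction settings
    sig = toy_observables(True, R)
    bkg = toy_observables(False, R)
    for name in sig:                               # candidate observables
        y_true = np.concatenate([np.ones_like(sig[name]), np.zeros_like(bkg[name])])
        y_score = np.concatenate([sig[name], bkg[name]])
        auc = roc_auc_score(y_true, y_score)       # separation figure of merit
        if best is None or auc > best[0]:
            best = (auc, R, name)

print(f"best separation: AUC={best[0]:.3f} with R={best[1]} and observable '{best[2]}'")
```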

Dunno whether that is clear enough (I tried to keep it concise).

Thanks for sharing the other post. That dates from before my Steemit time ^^ What I want to do is very different from that. It is more for the beauty of science than for making anyone rich :)


Btw, referencing your earlier reply:

> To tell the truth, I would like to use this to develop new techniques for looking for new phenomena at particle colliders.

So which part of this process will machine learning be applied to? Pattern recognition for collider configurations, or the output (both "live" and archived data)?

> In collisions at the LHC, we get final-state products that consist of many particles. We then cluster them in some way (a procedure that involves free parameters and a choice of method) and study the output of this clustering. From this output, we can study several observables, properties, etc...

I think this will need a lecture of its own - I'll go do some research on it. Hopefully there's something decent online :)

Mmmh, I will have a look. I have never heard of them (I am also a newbie in ML).