Not directly related to this post, but do you use distributed computing in analysing the data? If so, do you use general purpose frameworks or you have dedicated one?
Thanks for the question; it is a very good one. I assume you are referring to what we do today. Distributed computing is definitely in order here, at least for specific experiments.
Concerning the LHC, most analyses rely on the Worldwide LHC Computing Grid, which allows us to store, distribute and analyse the dozens of petabytes of data available (this consists of what has actually been recorded; most collisions are discarded because the electronics cannot keep up with the much higher collision rate; see here or there for more information). In practice, we are dealing with hundreds of thousands of computers spread across more than 100 computing centres all over the world.
On my side, as a theorist, I do not need such huge computing power, and I can make my life easy enough with O(100) CPU cores for most of my research work.
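To give a rough idea of what this kind of workload can look like (this is only an illustrative sketch, not my actual setup): theory calculations of this sort are often embarrassingly parallel, e.g. evaluating many independent parameter points, so spreading them over O(100) cores is straightforward with standard tools. A minimal Python example, where `analyse_point` is a hypothetical per-point computation:

```python
from multiprocessing import Pool

def analyse_point(parameter_point):
    # Hypothetical stand-in for an independent computation, e.g.
    # evaluating a cross section or likelihood at one parameter point.
    return sum(x * x for x in parameter_point)

if __name__ == "__main__":
    # A hypothetical scan over independent parameter points; each task
    # runs on its own core, so ~100 cores process ~100 points at a time.
    parameter_points = [(i, i + 1, i + 2) for i in range(1000)]
    with Pool(processes=100) as pool:
        results = pool.map(analyse_point, parameter_points)
    print(len(results), "points processed")
```

In practice the per-point function would call whatever physics code is relevant, but the parallelisation pattern stays the same.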