Competitive Computing
The experience of using a computer today is nothing like it was a few decades ago. Raw computing speed has roughly doubled every couple of years; transistors, once as big as a pencil eraser, have become so small that billions can fit on a fingernail. The average central processing unit (CPU) inside a modern laptop can perform roughly 21 billion instructions per second, orders of magnitude more than even the most sophisticated computers of the 1970s.
But as computing power has grown, so too has the need to perform increasingly complex computations. We are collecting more and more data, all of which needs to be processed. New scientific fields, such as advanced weather forecasting, nuclear test simulations, cell modeling at the molecular level, and even simulating the human brain, have also become more complicated, warranting the need for even faster, more powerful supercomputers.
Where there is innovation, there is also competition. Organizations race to build machines that outdo one another in how many operations they can perform per second, measured in floating-point operations per second (FLOPS). In the process, engineers tune and swap out components so the machines can compete like Formula 1 cars. Many of these components will be familiar from a standard desktop computer:
Transistors: Electronic circuits require the fast and precise movement of electrical signals. Transistors amplify or switch those signals to carry out operations of varying complexity. The more transistors packed onto an integrated circuit, the more operations it can perform, and the greater its processing power.
CPUs: A central processing unit is, as the name suggests, the heart of a computer’s operations. It executes the instructions in a computer program by running through a set list of operations at a specific speed (its clock rate). Early supercomputers used a small number of CPUs working in parallel; modern supercomputers have taken this idea to groundbreaking levels, often linking tens of thousands of consumer-grade processors into massive arrays (a minimal sketch of this divide-and-combine idea follows this list).
Cooling: Supercomputers draw a lot of energy; the Tianhe-2 consumes 24 MW of power, enough to supply roughly 24,000 average U.S. homes. Nearly all of that energy ends up as heat, so supercomputers must be kept cool enough for their components to work efficiently. Once one of the biggest operational challenges for early, pre-silicon-transistor supercomputers, overheating is now a minor concern thanks to sophisticated liquid cooling, low-power processors, and industrial air conditioning.
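The parallelism mentioned above is the core trick of every modern supercomputer: split one big job into many independent chunks and run them simultaneously. Here is a minimal, illustrative Python sketch of that idea; the chunk function and worker count are invented for the example, not taken from any real system.

```python
# Minimal illustration of supercomputer-style parallelism:
# split one large job into chunks and process them concurrently.
from multiprocessing import Pool

def simulate_chunk(chunk_id: int) -> float:
    """Stand-in for one node's share of a larger computation."""
    start = chunk_id * 1_000_000
    return float(sum(i * i for i in range(start, start + 1_000_000)))

if __name__ == "__main__":
    # 8 local workers here; a supercomputer spreads the same pattern
    # across tens of thousands of processors joined by fast interconnects.
    with Pool(processes=8) as pool:
        partial_results = pool.map(simulate_chunk, range(64))
    print(f"Combined result: {sum(partial_results):.3e}")
```

The same divide, compute, and recombine pattern scales from a laptop to a machine room; what separates a supercomputer is how quickly the pieces can talk to one another.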
These technologies have advanced enormously in a very short period. Until the early 2000s, China did not have a single supercomputer in the TOP500, the definitive ranking of the world’s most powerful supercomputers; as of 2017, it holds almost a third of the TOP500 spots. The following list of the eight most powerful supercomputers in the world is based on the most up-to-date TOP500 ranking.
(Note: a computer’s real performance often falls short of its theoretical peak performance, which is the upper limit implied by its hardware: every processor executing its maximum number of operations on every clock cycle. The TOP500 instead ranks machines by measured performance on the Linpack benchmark, a standardized set of arithmetic speed tests, which typically reaches only a fraction of the theoretical peak. Electricity, not hardware, is often the dominant lifetime cost, so real machines rarely run every component flat out at once.)
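To make the peak-versus-Linpack distinction concrete, here is a back-of-the-envelope sketch of how a theoretical peak is derived, using widely reported Tianhe-2 figures (the node count, core counts, clock rates, and FLOPs-per-cycle values come from public TOP500 reporting and should be treated as approximate):

```python
# Rough derivation of theoretical peak (Rpeak):
# nodes x (cores x clock x FLOPs-per-cycle), summed over chip types.
PFLOPS = 1e15

nodes = 16_000
# Per node: 2 Intel Ivy Bridge Xeons (12 cores, 2.2 GHz, 8 FLOPs/cycle)
xeon_per_node = 2 * 12 * 2.2e9 * 8
# Per node: 3 Intel Xeon Phi coprocessors (57 cores, 1.1 GHz, 16 FLOPs/cycle)
phi_per_node = 3 * 57 * 1.1e9 * 16

rpeak = nodes * (xeon_per_node + phi_per_node)  # theoretical upper limit
rmax = 33.86 * PFLOPS                           # measured Linpack result

print(f"Theoretical peak: {rpeak / PFLOPS:.1f} petaFLOPS")  # ~54.9
print(f"Linpack efficiency: {rmax / rpeak:.0%}")            # ~62%
```

Roughly 62 percent efficiency is typical for a hybrid design like this; no real workload keeps every core busy on every cycle.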
8. Fujitsu’s K
Image Credit: Fujitsu
Fujitsu’s K computer was the first supercomputer ever to break the ten-petaFLOPS barrier, in November 2011. The K in its name refers to the Japanese word “kei,” meaning 10 quadrillion, a reference to its number of FLOPS. To compute at this level, the K combines the power of 80,000 separate CPUs through specialized interconnects designed to transmit data at high speeds, while a water-cooling system keeps individual CPU cores from overheating.
7. Oakforest-PACS
Image Credit: JCAHPC
The result of a collaboration between the University of Tokyo, the University of Tsukuba, and Fujitsu Limited, the supercomputer dubbed Oakforest-PACS broke the 25-petaFLOPS barrier thanks to Intel’s latest generation of Xeon Phi processors, making it the fastest supercomputer in Japan. The system comprises 8,208 compute nodes and is used to further computational science research and to teach young researchers how to conduct high-performance computing.
6. Cori (NERSC)
Image Credit: NERSC
The National Energy Research Scientific Computing Center near Oakland, California named its newest supercomputer “Cori,” after Gerty Cori, the first American woman to win a Nobel Prize. The system is a Cray XC40, manufactured by the company responsible for major breakthroughs in supercomputer performance during the 1970s. Cori can theoretically reach 29.1 petaFLOPS, a figure it owes to its Haswell-generation Intel Xeon and Xeon Phi processors.
5. Sequoia
Image Credit: Bob Hirschfeld/LLNL
Sequoia is a supercomputer built for advanced weapons-science calculations that assess the risks of nuclear warfare. It is owned by the Lawrence Livermore National Laboratory in California. With 98,304 nodes, it ranks as the fifth most powerful supercomputer on the planet, with a Linpack-benchmark speed of 17.2 petaFLOPS.
4. Titan
Image Credit: Oak Ridge National Laboratory
Perhaps one of the best-known supercomputers in the Western world, Titan at Tennessee’s Oak Ridge National Laboratory was the fastest supercomputer on the planet until the Tianhe-2 (below) jockeyed it out of first place in 2013. Titan was the first supercomputer to combine AMD Opteron CPUs and NVIDIA Tesla GPUs at this scale, bringing its theoretical peak output to 27 petaFLOPS (Linpack puts its measured output at 17.6). This kind of power lets researchers perform the complex simulations needed in climate science, astrophysics, and molecular physics.
3. Tianhe-2
Image Credit: National University of Defense Technology
The Tianhe-2, also known as MILKYWAY-2, is a supercomputer developed by China’s National University of Defense Technology. It became the world’s fastest supercomputer in June 2013 with a Linpack performance of 33.86 petaFLOPS (its theoretical peak is higher still, at roughly 54.9), though it has slid down to third place in the years since. Its 16,000 compute nodes, built from Intel Ivy Bridge and Xeon Phi processors, run simulations for government security applications, and the system also serves as an open research platform for scientists in southern China.
2. Piz Daint
Image Credit: hpc-ch
In late 2016, the Piz Daint supercomputer in Lugano, Switzerland received a major hardware upgrade. The new hardware tripled its computing performance, lifting its measured Linpack result to 19.6 petaFLOPS (with a theoretical peak of 25.3) and making it the fastest supercomputer outside Asia. Named after a mountain in the Swiss Alps, Piz Daint also produces advanced visualizations and high-resolution imaging simulations. It will soon lend processing power to the Large Hadron Collider at CERN, helping analyze its huge volumes of data.
1. Sunway TaihuLight
Image Credit: NSCW
Currently ranked as the fastest supercomputer in the world, the Sunway TaihuLight measures in at 125 petaFLOPS of theoretical peak performance, roughly five times that of the machine in second place. Housed in the National Supercomputing Center in Wuxi, it comprises 10.6 million cores and is used for climate research, earth-systems modeling, and data analytics. On top of being the fastest supercomputer in the world, the Sunway TaihuLight is also ranked as the fourth most energy-efficient, delivering substantially more FLOPS per watt than most of its rivals.
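Energy efficiency for supercomputers is conventionally quoted as sustained FLOPS per watt rather than watts per FLOPS. A quick sketch using commonly cited Linpack and power figures (both approximate) shows why TaihuLight stands out:

```python
# Energy efficiency = sustained FLOPS per watt.
# Figures below are commonly cited approximations:
#   Sunway TaihuLight: ~93 PFLOPS Linpack at ~15.4 MW
#   Tianhe-2:          ~33.9 PFLOPS Linpack at ~17.8 MW

def gflops_per_watt(petaflops: float, megawatts: float) -> float:
    """Convert a Linpack result and power draw to gigaFLOPS per watt."""
    return (petaflops * 1e15) / (megawatts * 1e6) / 1e9

print(f"Sunway TaihuLight: {gflops_per_watt(93.0, 15.4):.1f} GFLOPS/W")  # ~6.0
print(f"Tianhe-2:          {gflops_per_watt(33.9, 17.8):.1f} GFLOPS/W")  # ~1.9
```

By this measure, TaihuLight extracts roughly three times as much computation from each watt as its predecessor at the top of the list.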