Computer performance by orders of magnitude
This list compares various amounts of computing power, measured in instructions per second or floating-point operations per second (FLOPS), organized by order of magnitude.
Deciscale computing (10⁻¹)
- 5×10⁻¹ Speed of the average human mental calculation for multiplication using pen and paper
Scale computing (10⁰)
- 1 OP/S the speed of the average human addition calculation using pen and paper
- 1 OP/S the speed of the Zuse Z1
- 5 OP/S world record set for addition
Decascale computing (10¹)
- 5×10¹ Upper end of serialized human perception computation (light bulbs do not flicker to the human observer)
Hectoscale computing (10²)
Kiloscale computing (10³)
- 92×10³ Intel 4004, the first commercially available full-function CPU on a chip, released in 1971
- 500×10³ Colossus computer, vacuum-tube supercomputer, 1943
Megascale computing (10⁶)
- 1×10⁶ Motorola 68000, commercial computing, 1979
- 1.2×10⁶ IBM 7030 "Stretch", transistorized supercomputer, 1961
Gigascale computing (10⁹)
- 1×10⁹ ILLIAC IV supercomputer, 1972; ran the first computational fluid dynamics problems
- 1.354×10⁹ Intel Pentium III, commercial computing, 1999
- 147.6×10⁹ Intel Core i7-980X Extreme Edition, commercial computing, 2010[2]
Terascale computing (10¹²)
- 1.34×10¹² Intel ASCI Red supercomputer, 1997
- 1.344×10¹² GeForce GTX 480 from Nvidia, 2010, at peak performance
- 4.64×10¹² Radeon HD 5970 from AMD (under ATI branding), 2009, at peak performance
- 5.152×10¹² S2050/S2070 1U GPU Computing System from Nvidia
- 11.3×10¹² GeForce GTX 1080 Ti, 2017
- 13.7×10¹² Radeon RX Vega 64, 2017
- 15.0×10¹² Nvidia Titan V, 2017
- 80×10¹² IBM Watson[3]
- 170×10¹² Nvidia DGX-1; the initial Pascal-based DGX-1 delivered 170 teraflops of half-precision processing[4]
- 478.2×10¹² IBM Blue Gene/L supercomputer, 2007
- 960×10¹² Nvidia DGX-1; the Volta-based upgrade raised the DGX-1's calculation power to 960 teraflops[5]
Petascale computing (10¹⁵)
- 1.026×10¹⁵ IBM Roadrunner supercomputer, 2008
- 2×10¹⁵ Nvidia DGX-2, a 2-petaflop machine-learning system (the newer DGX A100 offers 5-petaflop performance)
- 11.5×10¹⁵ Google TPU pod containing 64 second-generation TPUs, May 2017[6]
- 17.17×10¹⁵ IBM Sequoia's LINPACK performance, June 2013[7]
- 20×10¹⁵ Roughly the hardware equivalent of the human brain according to Kurzweil, published in his 1999 book The Age of Spiritual Machines: When Computers Exceed Human Intelligence[8]
- 33.86×10¹⁵ Tianhe-2's LINPACK performance, June 2013[7]
- 36.8×10¹⁵ Estimated computational power required to simulate a human brain in real time[9]
- 93.01×10¹⁵ Sunway TaihuLight's LINPACK performance, June 2016[10]
- 143.5×10¹⁵ Summit's LINPACK performance, November 2018[11]
Exascale computing (10¹⁸)
- 1×10¹⁸ The U.S. Department of Energy and NSA estimated in 2008 that they would need exascale computing by around 2018[12]
- 1×10¹⁸ Fugaku supercomputer, 2020, in single-precision mode[13]
- 1.88×10¹⁸ U.S. Summit achieved a peak throughput of this many operations per second while analysing genomic data using a mixture of numerical precisions[14]
- 2.43×10¹⁸ Folding@home distributed computing system during the COVID-19 pandemic response[15]
Zettascale computing (10²¹)
- 1×10²¹ Accurate global weather estimation on the scale of approximately two weeks.[16] Assuming Moore's law remains constant, such systems may be feasible around 2030.
A zettascale computer system could generate more single-precision floating-point data in one second than was stored by any digital means on Earth in the first quarter of 2011, as the arithmetic sketch below illustrates.
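A back-of-the-envelope check of this claim (a sketch under stated assumptions, not a sourced calculation: one 4-byte single-precision value per operation, and a rough ~300-exabyte figure for global digital storage in early 2011):

```python
FLOPS = 1e21                 # zettascale: 10^21 operations per second
BYTES_PER_VALUE = 4          # IEEE 754 single precision
output_per_second = FLOPS * BYTES_PER_VALUE  # 4e21 bytes, ~4 zettabytes

storage_2011 = 3e20          # ~300 exabytes, order-of-magnitude estimate
print(f"one second of output: {output_per_second:.0e} bytes")
print(f"ratio to 2011 storage: ~{output_per_second / storage_2011:.0f}x")
```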
Beyond (>10²¹)
- 4×10⁴⁸ Estimated computational power of a Matrioshka brain whose power source is the Sun, whose outermost layer operates at 10 kelvins, and whose constituent parts operate at or near the Landauer limit, drawing power at the efficiency of a Carnot engine. Approximate maximum computational power for a Type II civilization on the Kardashev scale (see the sketch after this list).
- 5×10⁵⁸ Estimated power of a galaxy equivalent in luminosity to the Milky Way converted into Matrioshka brains. Approximate maximum computational power for a Type III civilization on the Kardashev scale.
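The 4×10⁴⁸ entry can be sanity-checked from the Landauer limit: erasing one bit at temperature T costs at least k_B·T·ln 2 joules, so dividing the Sun's luminosity by that energy bounds the bit-operation rate. A minimal sketch, assuming one bit erased per operation and a Carnot factor close to 1 (hot side ~5800 K, cold side 10 K):

```python
import math

k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 10.0                     # outermost-layer temperature, K
L_SUN = 3.828e26             # solar luminosity, W

energy_per_bit = k_B * T * math.log(2)  # Landauer limit, ~9.6e-23 J
ops = L_SUN / energy_per_bit            # ~4e48 operations per second
print(f"{ops:.1e} ops/s from one Sun")

# Scaling by a Milky-Way-like luminosity (~1.25e10 Suns) lands near the
# 5e58 figure quoted above for a Type III civilization.
print(f"{ops * 1.25e10:.1e} ops/s from a galaxy of Matrioshka brains")
```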
See also
- Futures studies – study of possible, probable, and preferable futures, including making projections of future technological advances
- History of computing hardware (1960s–present)
- List of emerging technologies – new fields of technology, typically on the cutting edge. Examples include genetics, robotics, and nanotechnology (GNR).
- Artificial intelligence – computer mental abilities, especially those that previously belonged only to humans, such as speech recognition, natural language generation, etc.
- History of artificial intelligence (AI)
- Strong AI – hypothetical AI as smart as a human. Such an entity would likely be recursive, that is, capable of improving its own design, which could lead to the rapid development of a superintelligence.
- Quantum computing
- Moore's law – observation (not actually a law) that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper.[17]
- Supercomputer
- Superintelligence
- Timeline of computing
- Technological singularity – hypothetical point in the future when computer capacity rivals that of a human brain, enabling the development of strong AI — artificial intelligence at least as smart as a human.
- The Singularity Is Near – book by Raymond Kurzweil dealing with the progression and projections of development of computer capabilities, including beyond human levels of performance.
- TOP500 – list of the 500 most powerful (non-distributed) computer systems in the world
References
- "How many frames per second can the human eye see?". 2004-05-19. Retrieved 2013-02-19.
- Overclock3D - Sandra CPU
- Tony Pearson, IBM Watson - How to build your own "Watson Jr." in your basement, Inside System Storage
- "DGX-1 deep learning system" (PDF).
NVIDIA DGX-1 Delivers 75X Faster Training...Note: Caffe benchmark with AlexNet, training 1.28M images with 90 epochs
- "DGX Server". DGX Server. Nvidia. Retrieved 7 September 2017.
- Google Cloud blog, May 2017: https://blog.google/topics/google-cloud/google-cloud-offer-tpus-machine-learning/
- Top500 list, June 2013: http://top500.org/list/2013/06/
- Kurzweil, Ray (1999). The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York, NY: Penguin. ISBN 9780140282023.
- H+ Magazine, 2009-04-07: http://hplusmagazine.com/2009/04/07/brain-chip/
- Top500 list, June 2016: http://top500.org/list/2016/06/
- "November 2018 | TOP500 Supercomputer Sites". www.top500.org. Retrieved 2018-11-30.
- "'Exaflop' Supercomputer Planning Begins". 2008-02-02. Archived from the original on 2008-10-01. Retrieved 2010-01-04.
Through the IAA, scientists plan to conduct the basic research required to create a computer capable of performing a million trillion calculations per second, otherwise known as an exaflop.
- Top500 list, June 2020: https://www.top500.org/lists/top500/2020/06/
- "Genomics Code Exceeds Exaops on Summit Supercomputer". Oak Ridge Leadership Computing Facility. Retrieved 2018-11-30.
- Pande lab. "Client Statistics by OS". Archive.is. Archived from the original on 2020-04-12. Retrieved 2020-04-12.
- DeBenedictis, Erik P. (2005). "Reversible logic for supercomputing". Proceedings of the 2nd conference on Computing frontiers. pp. 391–402. ISBN 1-59593-019-1.
- Moore, Gordon E. (1965). "Cramming more components onto integrated circuits" (PDF). Electronics Magazine. p. 4. Retrieved 2006-11-11.