Computer performance by orders of magnitude

This list compares various amounts of computing power, in instructions per second, organized by order of magnitude in FLOPS.


Deciscale computing (10⁻¹)

  • 5×10⁻¹ Speed of the average human mental calculation for multiplication using pen and paper

Scale computing (10⁰)

  • 1 OP/S the speed of the average human addition calculation using pen and paper
  • 1 OP/S the speed of the Zuse Z1
  • 5 OP/S world record for addition

Decascale computing (10¹)

  • 6×10¹ Upper end of serialized human perception computation (light bulbs in the US do not flicker to the human observer)

Hectoscale computing (10²)

  • 2.2×10² Upper end of serialized human throughput. This is roughly expressed by the lower limit of accurate event placement on small scales of time (the swing of a conductor's arm, the reaction time to lights on a drag strip, etc.)[1] (see the arithmetic sketch after this list)
  • 2×10² IBM 602, a 1946 computer.
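
A minimal Python sketch of the arithmetic behind the 6×10¹ and 2.2×10² figures above (the variable names are illustrative, not from the source): the reciprocal of each rate gives the finest time interval a serialized process at that rate can resolve.

    # Reciprocal of an events-per-second rate gives the per-event time window.
    perception_rate = 60    # events/s: upper end of serialized human perception
    throughput_rate = 220   # events/s: upper end of serialized human throughput

    print(1 / perception_rate)   # ~0.0167 s, the period of a 60 Hz flicker
    print(1 / throughput_rate)   # ~0.0045 s, finest accurate event placement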

Kiloscale computing (10³)

Megascale computing (10⁶)

Gigascale computing (10⁹)

Terascale computing (10¹²)

Petascale computing (10¹⁵)

  • 1.026×10¹⁵ IBM Roadrunner's LINPACK performance, June 2008
  • 2×10¹⁵ Nvidia DGX-2, a 2-petaflop machine-learning system
  • 11.5×10¹⁵ Google TPU pod containing 64 second-generation TPUs, May 2017[6]
  • 17.17×10¹⁵ IBM Sequoia's LINPACK performance, June 2013[7]
  • 33.86×10¹⁵ Tianhe-2's LINPACK performance, June 2013[7]
  • 36.8×10¹⁵ Estimated computational power required to simulate a human brain in real time.[8]
  • 93.01×10¹⁵ Sunway TaihuLight's LINPACK performance, June 2016[9]
  • 135×10¹⁵ Speed of the Folding@home distributed computing system, the fastest computing system as of 2018

Exascale computing (10¹⁸)

  • 1×10¹⁸ The U.S. Department of Energy and NSA estimated in 2008 that they would need exascale computing around 2018[10]
  • 24×10¹⁸ The Bitcoin network hash rate reached 24 exahashes per second in early 2018 (hashes per second, not directly comparable to FLOPS)[11]

Zettascale computing (10²¹)

  • 1×10²¹ Accurate global weather estimation on the scale of approximately two weeks.[12] Assuming Moore's law[13] holds, such systems may be feasible around 2030.

A zettascale computer system could generate more single-precision floating-point data in one second than was stored by all digital means on Earth in the first quarter of 2011, as the arithmetic sketch below suggests.
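
A rough Python check of that claim. The 4-byte single-precision word size is standard; the roughly 295 exabytes of total global storage in early 2011 is the widely cited Hilbert and López estimate, used here as an assumption:

    # Hedged order-of-magnitude check: bytes generated per second at zettascale
    # versus an estimate of all digital storage on Earth in early 2011.
    zettascale_rate = 1e21        # single-precision results per second
    bytes_per_value = 4           # bytes in an IEEE 754 single-precision float
    world_storage_2011 = 295e18   # bytes (~295 EB, Hilbert/Lopez estimate)

    data_per_second = zettascale_rate * bytes_per_value   # 4e21 bytes
    print(data_per_second / world_storage_2011)           # ~13.6x world storage

On these assumptions, such a machine would out-produce the entire 2011 storage base more than tenfold every second.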

Yottascale computing (10²⁴)

  • 257.6×10²⁴ Estimated computational power required to simulate 7 billion human brains in real time[8] (a quick arithmetic check follows this list)
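
This figure is simply the 36.8×10¹⁵ per-brain estimate from the petascale section scaled linearly to 7 billion brains; a minimal Python check of the arithmetic:

    # Linear scaling of the per-brain estimate to the approximate world population.
    per_brain = 36.8e15   # FLOPS to simulate one human brain in real time [8]
    brains = 7e9          # approximate world population
    print(f"{per_brain * brains:.4g} FLOPS")   # 2.576e+26, i.e. 257.6×10^24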

Beyond (>10²⁴)

  • 4×10⁴⁸ Estimated computational power of a Matrioshka brain powered by the Sun, where the outermost layer operates at 10 kelvins and the constituent parts operate at or near the Landauer limit, drawing power at the efficiency of a Carnot engine. Approximate maximum computational power for a Kardashev Type II civilization (a back-of-the-envelope reconstruction follows this list).
  • 5×10⁵⁸ Estimated computational power of a galaxy with luminosity equivalent to the Milky Way, fully converted into Matrioshka brains. Approximate maximum computational power for a Kardashev Type III civilization.
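
A hedged Python reconstruction of the 4×10⁴⁸ figure, assuming the Landauer limit E = kT ln 2 per bit operation at T = 10 K and the Sun's full luminosity as the power budget (the Carnot efficiency between the Sun's ~5800 K surface and a 10 K radiator is close to 1, so it is omitted here):

    import math

    k = 1.380649e-23   # Boltzmann constant, J/K
    T = 10.0           # outermost-layer operating temperature, K
    L_sun = 3.828e26   # solar luminosity, W

    # Landauer limit: minimum energy to erase one bit at temperature T.
    energy_per_op = k * T * math.log(2)          # ~9.57e-23 J per bit operation
    print(f"{L_sun / energy_per_op:.1g} ops/s")  # ~4e+48, matching the entry

Dividing the 5×10⁵⁸ galactic figure by this per-Sun result gives about 1.3×10¹⁰, consistent with a galaxy whose total luminosity is of order 10¹⁰ times the Sun's.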

References

  1. "How many frames per second can the human eye see?". 2004-05-19. Retrieved 2013-02-19.
  2. Overclock3D - Sandra CPU
  3. Tony Pearson, IBM Watson - How to build your own "Watson Jr." in your basement, Inside System Storage
  4. "DGX-1 deep learning system" (PDF). NVIDIA DGX-1 Delivers 75X Faster Training...Note: Caffe benchmark with AlexNet, training 1.28M images with 90 epochs
  5. "DGX Server". DGX Server. Nvidia. Retrieved 7 September 2017.
  6. https://blog.google/topics/google-cloud/google-cloud-offer-tpus-machine-learning/
  7. Top500 list, June 2013. http://top500.org/list/2013/06/
  8. http://hplusmagazine.com/2009/04/07/brain-chip/
  9. Top500 list, June 2016. http://top500.org/list/2016/06/
  10. "'Exaflop' Supercomputer Planning Begins". 2008-02-02. Archived from the original on 2008-10-01. Retrieved 2010-01-04. Through the IAA, scientists plan to conduct the basic research required to create a computer capable of performing a million trillion calculations per second, otherwise known as an exaflop.
  11. Bitcoin hash rate chart
  12. DeBenedictis, Erik P. (2005). "Reversible logic for supercomputing". Proceedings of the 2nd conference on Computing frontiers. pp. 391–402. ISBN 1-59593-019-1.
  13. Moore, Gordon E. (1965). "Cramming more components onto integrated circuits" (PDF). Electronics Magazine. p. 4. Retrieved 2006-11-11.