AI accelerator

An AI accelerator is a class of microprocessor[1] or computer system[2] designed to provide hardware acceleration for artificial intelligence applications, especially artificial neural networks, machine vision and machine learning. Typical applications include algorithms for robotics, the internet of things and other data-intensive or sensor-driven tasks.[3] AI accelerators are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability.[4] A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.

History of AI acceleration

Computer systems have frequently complemented the CPU with special purpose accelerators for specialized tasks, known as coprocessors. Notable application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors. As deep learning and artificial intelligence workloads rose in prominence in the 2010s, specialized hardware units were developed or adapted from existing products to accelerate these tasks.

Early attempts

As early as 1993, digital signal processors were used as neural network accelerators, for example to accelerate optical character recognition software.[5] In the 1990s, there were also attempts to create parallel high-throughput systems for workstations aimed at various applications, including neural network simulations.[6][7][8] FPGA-based accelerators were also first explored in the 1990s for both inference[9] and training.[10] ANNA was a neural net CMOS accelerator developed by Yann LeCun.[11]

Heterogeneous computing

Heterogeneous computing refers to incorporating a number of specialized processors in a single system, or even a single chip, each optimized for a specific type of task. Architectures such as the Cell microprocessor[12] have features significantly overlapping with AI accelerators, including support for packed low-precision arithmetic, dataflow architecture, and prioritizing throughput over latency. The Cell microprocessor was subsequently applied to a number of tasks,[13][14][15] including AI.[16][17][18]

In the 2000s, CPUs also gained increasingly wide SIMD units, driven by video and gaming workloads, as well as support for packed low-precision data types.[19]
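As a rough illustration of packed low-precision arithmetic, the following C sketch uses x86 AVX2 intrinsics to compute a dot product over 8-bit operands, processing 32 pairs per instruction. This is a minimal sketch, assuming an AVX2-capable CPU and a length that is a multiple of 32; the function name dot_u8s8 is illustrative, not from any library.

    #include <immintrin.h>
    #include <stdint.h>

    /* Dot product of n unsigned 8-bit values with n signed 8-bit values
       using AVX2. Assumes n is a multiple of 32. Note: the adjacent-pair
       sums inside _mm256_maddubs_epi16 saturate at 16 bits for extreme
       inputs, a typical trade-off of packed low-precision arithmetic. */
    int32_t dot_u8s8(const uint8_t *a, const int8_t *b, int n)
    {
        __m256i acc = _mm256_setzero_si256();
        for (int i = 0; i < n; i += 32) {
            __m256i va = _mm256_loadu_si256((const __m256i *)(a + i));
            __m256i vb = _mm256_loadu_si256((const __m256i *)(b + i));
            /* 32 unsigned-by-signed 8-bit multiplies; adjacent pairs summed to 16-bit */
            __m256i prod = _mm256_maddubs_epi16(va, vb);
            /* widen the 16-bit pairs to 32-bit lanes and accumulate */
            acc = _mm256_add_epi32(acc,
                                   _mm256_madd_epi16(prod, _mm256_set1_epi16(1)));
        }
        /* horizontal sum of the eight 32-bit lanes */
        int32_t lanes[8];
        _mm256_storeu_si256((__m256i *)lanes, acc);
        return lanes[0] + lanes[1] + lanes[2] + lanes[3]
             + lanes[4] + lanes[5] + lanes[6] + lanes[7];
    }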

Use of GPU

Graphics processing units or GPUs are specialized hardware for the manipulation of images and the calculation of local image properties. The mathematical bases of neural networks and image manipulation are similar, embarrassingly parallel tasks involving matrices, leading GPUs to become increasingly used for machine learning tasks.[20][21][22] As of 2016, GPUs are popular for AI work, and they continue to evolve in a direction facilitating deep learning, both for training[23] and for inference in devices such as self-driving cars.[24] GPU developers such as Nvidia are adding connective capability, for example NVLink, for the kind of dataflow workloads AI benefits from.[25] As GPUs have been increasingly applied to AI acceleration, GPU manufacturers have incorporated neural network-specific hardware to further accelerate these tasks.[26] Tensor cores are intended to speed up the training of neural networks.[26]
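The appeal of GPUs for these workloads comes down to data parallelism: in a matrix multiplication, every output element is an independent dot product, so a GPU can in principle assign one thread per element. The plain C sketch below, an illustration of the dependency structure rather than any vendor's API, marks where that parallelism lives.

    /* Naive matrix multiply: C = A * B for n x n row-major matrices.
       Each C[i*n + j] depends only on row i of A and column j of B,
       so all n*n outputs can be computed in parallel (on a GPU,
       typically one thread per output element). */
    void matmul(const float *A, const float *B, float *C, int n)
    {
        for (int i = 0; i < n; i++) {        /* parallelizable on a GPU */
            for (int j = 0; j < n; j++) {    /* parallelizable on a GPU */
                float sum = 0.0f;
                for (int k = 0; k < n; k++)
                    sum += A[i * n + k] * B[k * n + j];
                C[i * n + j] = sum;          /* no cross-element dependency */
            }
        }
    }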

Use of FPGAs

Deep learning frameworks are still evolving, making it hard to design custom hardware. Reconfigurable devices such as field-programmable gate arrays (FPGA) make it easier to evolve hardware, frameworks and software alongside each other.[9][10][27]

Microsoft has used FPGA chips to accelerate inference.[28][29] The application of FPGAs to AI acceleration motivated Intel to acquire Altera with the aim of integrating FPGAs in server CPUs, which would be capable of accelerating AI as well as general purpose tasks.[30]

Emergence of dedicated AI accelerator ASICs

While GPUs and FPGAs perform far better than CPUs for these AI-related tasks, a factor of up to 10 in efficiency[31][32] may be gained with a more specific design, via an application-specific integrated circuit (ASIC). These accelerators employ strategies such as optimized memory use and lower-precision arithmetic to accelerate calculation and increase throughput of computation.[33][34] Low-precision floating-point formats adopted for AI acceleration include half-precision and the bfloat16 format.[35][36][37][38][39][40][41]
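bfloat16 keeps the sign bit and 8-bit exponent of an IEEE 754 single-precision float but only 7 mantissa bits, preserving float32's dynamic range in half the storage. A minimal C sketch of the conversion is shown below; it uses round-to-nearest-even, omits special NaN handling, and hardware implementations may differ in rounding details.

    #include <stdint.h>
    #include <string.h>

    /* Truncate an IEEE 754 float32 to bfloat16 (its top 16 bits),
       rounding to nearest even. NaN payloads are not treated specially. */
    uint16_t float_to_bfloat16(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);                 /* type-pun safely */
        uint32_t round = 0x7FFFu + ((bits >> 16) & 1);  /* nearest-even bias */
        return (uint16_t)((bits + round) >> 16);
    }

    /* Expand bfloat16 back to float32 by zero-filling the low mantissa bits. */
    float bfloat16_to_float(uint16_t b)
    {
        uint32_t bits = (uint32_t)b << 16;
        float f;
        memcpy(&f, &bits, sizeof f);
        return f;
    }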

In-memory computing architectures

In June 2017, IBM researchers announced an architecture, in contrast to the von Neumann architecture, based on in-memory computing with phase-change memory arrays applied to temporal correlation detection, with the intention of generalizing the approach to heterogeneous computing and massively parallel systems.[42] In October 2018, IBM researchers announced an architecture based on in-memory processing and modeled on the human brain's synaptic network to accelerate deep neural networks.[43] The system is based on phase-change memory arrays.[44]
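The principle behind such in-memory designs is that a memory array can compute a matrix-vector product in place: weights are stored as device conductances, an input voltage is applied per row, and each column's output current sums the conductance-times-voltage contributions by Kirchhoff's current law. The C model below is only a software illustration of that dataflow under ideal-device assumptions; the physical arrays compute it in analog.

    #include <stddef.h>

    /* Idealized crossbar model: G[i*cols + j] is the conductance of the
       memory cell at row i, column j. Applying voltages V[] to the rows
       yields column currents I[j] = sum_i G[i][j] * V[i], i.e. a
       matrix-vector multiply performed inside the memory array. */
    void crossbar_mvm(const float *G, const float *V, float *I,
                      size_t rows, size_t cols)
    {
        for (size_t j = 0; j < cols; j++) {
            float current = 0.0f;
            for (size_t i = 0; i < rows; i++)
                current += G[i * cols + j] * V[i];  /* Ohm's law per cell */
            I[j] = current;                         /* Kirchhoff summation */
        }
    }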

Nomenclature

As of 2016, the field is still in flux and vendors are pushing their own marketing terms for what amounts to an "AI accelerator", in the hope that their designs and APIs will come to dominate. There is no consensus on the boundary between these devices, nor on the exact form they will take; however, several examples clearly aim to fill this new space, with a fair amount of overlap in capabilities.

When consumer graphics accelerators emerged, the industry eventually adopted Nvidia's self-assigned term, "GPU",[45] as the collective noun for "graphics accelerators", which had taken many forms before settling on an overall pipeline implementing the model presented by Direct3D.

Examples

Stand-alone products

GPU based products

  • Nvidia Tesla is Nvidia's line of GPU-derived products marketed for GPGPU and AI tasks.
    • Nvidia Volta is a microarchitecture which augments the graphics processing unit with additional 'tensor units' targeted specifically at accelerating calculations for neural networks.[50]
    • Nvidia GeForce 20 series is the first GPU series based on the Turing microarchitecture, featuring built-in "Tensor Cores".[51]
    • Nvidia DGX-1 is an Nvidia workstation/server product which incorporates Nvidia-brand GPUs for GPGPU tasks including machine learning.[52]
  • Radeon Instinct is AMD's line of GPU-derived products for AI acceleration.[53]

AI accelerating co-processors

  • The processor in Qualcomm's Snapdragon 845 mobile platform contains a Hexagon 685 DSP core for AI processing in camera, voice, XR and gaming applications.
  • PowerVR 2NX NNA (Neural Net Accelerator) is an IP core from Imagination Technologies licensed for integration into chips.[54]
  • Apple's Neural Engine is an AI accelerator core within the Apple A11 Bionic SoC[55] and Apple A12 Bionic SoC.
  • Cadence Tensilica Vision C5 is a neural network-optimized digital signal processor IP core.[56]
  • The Neural Processing Unit is a neural network accelerator within the HiSilicon Kirin 970.[57]
  • In January 2018, CEVA, Inc. launched a family of four AI processors called NeuPro, each containing one programmable vector DSP and one hardwired implementation of 8-bit or 16-bit neural network layers, with performance ranging from 2 TOPS to 12.5 TOPS.[58]
  • Universal Multifunction Accelerator (UMA) by Manjeera Digital Systems in Hyderabad is an accelerator in a proprietary architecture based on Middle Stratum Operations.[59][60][61]

Research and unreleased products

  • In December 2017, Tesla Motors confirmed a rumour that it was developing an AI chip for autonomous driving; Jim Keller had been working on the project since at least early 2016.[62]
  • MIT Eyeriss is an accelerator design aimed explicitly at convolutional neural networks, using a scratchpad memory and network-on-chip architecture.[63]
  • Nullhop is an accelerator designed at the Institute of Neuroinformatics of ETH Zürich and University of Zürich based on sparse representation of feature maps. The second generation of the architecture is commercialized by the university spin-off Synthara Technologies.[64][65]
  • Kalray is an accelerator for convolutional neural nets.[66]
  • SpiNNaker is a many-core design specialized for simulating a large neural network.
  • Graphcore IPU is a graph-based AI accelerator.[67]
  • DPU, by Wave Computing, is a dataflow architecture.[68]
  • At the start of 2017, STMicroelectronics presented a demonstrator SoC manufactured in a 28 nm process containing a deep CNN accelerator.[69]
  • TrueNorth is a manycore design based on spiking neurons rather than traditional arithmetic.[70][71]
  • Intel Loihi is an experimental neuromorphic chip.[72]
  • In September 2017, BrainChip introduced a commercial PCI Express card with a Xilinx Kintex UltraScale FPGA running neuromorphic neural cores, applying pattern recognition on 600 video images per second using 16 watts of power.[73]
  • IIT Madras is designing a spiking neuron accelerator for big-data analytics.[74]
  • Several memristor-based AI accelerators have been proposed which leverage the in-memory computing capability of memristors.[4]
  • AlphaICs is designing an agent-based co-processor called Real AI Processor (RAP) to enable perception and decision making in a chip.[75]

Potential applications

  • Autonomous vehicles: Nvidia targets its Drive PX-series boards at self-driving cars.[76]
  • Agricultural robots, for example machine vision-based weed control.[77]
  • Mobile devices: Qualcomm aims to bring server-class machine learning to everyday data devices.[78]
  • Unmanned aerial vehicles: Movidius chips power image processing in intelligent drones.[79]

References

  1. "Intel unveils Movidius Compute Stick USB AI Accelerator".
  2. "Inspurs unveils GX4 AI Accelerator".
  3. "Google Developing AI Processors". Google using its own AI accelerators.
  4. 1 2 "A Survey of ReRAM-based Architectures for Processing-in-memory and Neural Networks", S. Mittal, Machine Learning and Knowledge Extraction, 2018
  5. "convolutional neural network demo from 1993 featuring DSP32 accelerator".
  6. "design of a connectionist network supercomputer".
  7. "The end of general purpose computers (not)". This presentation covers a past attempt at neural net accelerators, notes the similarity to the modern SLI GPGPU processor setup, and argues that general purpose vector accelerators are the way forward (in relation to RISC-V hwacha project. Argues that NN's are just dense and sparse matrices, one of several recurring algorithms)
  8. "SYNAPSE-1: a high-speed general purpose parallel neurocomputer system".
  9. 1 2 "Space Efficient Neural Net Implementation" (PDF).
  10. 1 2 "A Generic Building Block for Hopfield Neural Networks with On-Chip Learning" (PDF).
  11. Application of the ANNA Neural Network Chip to High-Speed Character Recognition
  12. "Synergistic Processing in Cell's Multicore Architecture".
  13. "Performance of Cell processor for biomolecular simulations" (PDF).
  14. "Video Processing and Retrieval on Cell architecture".
  15. "Ray Tracing on the Cell Processor".
  16. "Development of an artificial neural network on a heterogeneous multicore architecture to predict a successful weight loss in obese individuals" (PDF).
  17. "Parallelization of the Scale-Invariant Keypoint Detection Algorithm for Cell Broadband Engine Architecture".
  18. "Data Mining Algorithms on the Cell Broadband Engine".
  19. "Improving the performance of video with AVX".
  20. "microsoft research/pixel shaders/MNIST".
  21. "how the gpu came to be used for general computation".
  22. "imagenet classification with deep convolutional neural networks" (PDF).
  23. "nvidia driving the development of deep learning".
  24. "nvidia introduces supercomputer for self driving cars".
  25. "how nvlink will enable faster easier multi GPU computing".
  26. Harris, Mark (May 11, 2017). "CUDA 9 Features Revealed: Volta, Cooperative Groups and More". Retrieved August 12, 2017.
  27. "FPGA Based Deep Learning Accelerators Take on ASICs". The Next Platform. 2016-08-23. Retrieved 2016-09-07.
  28. "microsoft extends fpga reach from bing to deep learning".
  29. "Accelerating Deep Convolutional Neural Networks Using Specialized Hardware" (PDF).
  30. "A Survey of FPGA-based Accelerators for Convolutional Neural Networks", Mittal et al., NCAA, 2018
  31. "Google boosts machine learning with its Tensor Processing Unit". 2016-05-19. Retrieved 2016-09-13.
  32. "Chip could bring deep learning to mobile devices". www.sciencedaily.com. 2016-02-03. Retrieved 2016-09-13.
  33. "Deep Learning with Limited Numerical Precision" (PDF).
  34. Rastegari, Mohammad; Ordonez, Vicente; Redmon, Joseph; Farhadi, Ali (2016). "XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks". arXiv:1603.05279 [cs.CV].
  35. Khari Johnson (2018-05-23). "Intel unveils Nervana Neural Net L-1000 for accelerated AI training". VentureBeat. Retrieved 2018-05-23. ...Intel will be extending bfloat16 support across our AI product lines, including Intel Xeon processors and Intel FPGAs.
  36. Michael Feldman (2018-05-23). "Intel Lays Out New Roadmap for AI Portfolio". TOP500 Supercomputer Sites. Retrieved 2018-05-23. Intel plans to support this format across all their AI products, including the Xeon and FPGA lines
  37. Lucian Armasu (2018-05-23). "Intel To Launch Spring Crest, Its First Neural Network Processor, In 2019". Tom's Hardware. Retrieved 2018-05-23. Intel said that the NNP-L1000 would also support bfloat16, a numerical format that’s being adopted by all the ML industry players for neural networks. The company will also support bfloat16 in its FPGAs, Xeons, and other ML products. The Nervana NNP-L1000 is scheduled for release in 2019.
  38. "Available TensorFlow Ops | Cloud TPU | Google Cloud". Google Cloud. Retrieved 2018-05-23. This page lists the TensorFlow Python APIs and graph operators available on Cloud TPU.
  39. Elmar Haußmann (2018-04-26). "Comparing Google's TPUv2 against Nvidia's V100 on ResNet-50". RiseML Blog. Retrieved 2018-05-23. For the Cloud TPU, Google recommended we use the bfloat16 implementation from the official TPU repository with TensorFlow 1.7.0. Both the TPU and GPU implementations make use of mixed-precision computation on the respective architecture and store most tensors with half-precision.
  40. Tensorflow Authors (2018-02-28). "ResNet-50 using BFloat16 on TPU". Google. Retrieved 2018-05-23.
  41. Joshua V. Dillon, Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, Rif A. Saurous (2017-11-28). TensorFlow Distributions (Report). arXiv:1711.10604. Bibcode:2017arXiv171110604D. Accessed 2018-05-23. All operations in TensorFlow Distributions are numerically stable across half, single, and double floating-point precisions (as TensorFlow dtypes: tf.bfloat16 (truncated floating point), tf.float16, tf.float32, tf.float64). Class constructors have a validate_args flag for numerical asserts
  42. Abu Sebastian, Tomas Tuma, Nikolaos Papandreou, Manuel Le Gallo, Lukas Kull, Thomas Parnell, Evangelos Eleftheriou. "Temporal correlation detection using computational phase-change memory". arXiv:1706.00511 [cs.ET].
  43. "A new brain-inspired architecture could improve how computers handle data and advance AI". American Institute of Physics. 2018-10-03. Retrieved 2018-10-05.
  44. Carlos Ríos, Nathan Youngblood, Zengguang Cheng, Manuel Le Gallo, Wolfram H.P. Pernice, C David Wright, Abu Sebastian, Harish Bhaskaran. "In-memory computing on a photonic platform". arXiv:1801.06228 [cs.ET].
  45. "NVIDIA launches the World's First Graphics Processing Unit, the GeForce 256,".
  46. Kampman, Jeff (17 October 2017). "Intel unveils purpose-built Neural Network Processor for deep learning". Tech Report. Retrieved 18 October 2017.
  47. "Intel Nervana Neural Network Processors (NNP) Redefine AI Silicon". Retrieved 20 October 2017.
  48. "The Evolution of EyeQ".
  49. "NM500, Neuromorphic chip with 576 neurons".
  50. "Nvidia goes beyond the GPU for AI with Volta".
  51. "The NVIDIA Turing GPU Architecture Deep Dive: Prelude to GeForce RTX". AnandTech.
  52. "nvidia dgx-1" (PDF).
  53. Smith, Ryan (12 December 2016). "AMD Announces Radeon Instinct: GPU Accelerators for Deep Learning, Coming in 2017". Anandtech. Retrieved 12 December 2016.
  54. "The highest performance neural network inference accelerator".
  55. "The iPhone X's new neural engine exemplifies Apple's approach to AI". The Verge. Retrieved 2017-09-23.
  56. "Cadence Unveils Industry's First Neural Network DSP IP for Automotive, Surveillance, Drone and Mobile Markets".
  57. "HUAWEI Reveals the Future of Mobile AI at IFA 2017".
  58. "A Family of AI Processors for Deep Learning at the Edge".
  59. Manjeera Digital System, UMA. "Universal Multifunction Accelerator". Manjeera Digital Systems. Retrieved 28 June 2018.
  60. Manjeera Digital Systems, Universal Multifunction Accelerator. "Revolutionise Processing". Indian Express. Retrieved 28 June 2018.
  61. AI Chip, UMA (10 May 2018). "AI Chip from Hyderabad" (News Paper). Telangana Today. Retrieved 28 June 2018.
  62. Lambert, Fred (December 8, 2017). "Elon Musk confirms that Tesla is working on its own new AI chip led by Jim Keller".
  63. Chen, Yu-Hsin; Krishna, Tushar; Emer, Joel; Sze, Vivienne (2016). "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks". IEEE International Solid-State Circuits Conference, ISSCC 2016, Digest of Technical Papers. pp. 262–263.
  64. Aimar, Alessandro; et al. "NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps" (PDF).
  65. "Synthara Technologies".
  66. "kalray MPPA" (PDF).
  67. "Graphcore Technology".
  68. "Wave Computing's DPU architecture".
  69. "A 2.9 TOPS/W Deep Convolutional Neural Network SoC in FD-SOI 28nm for Intelligent Embedded Systems" (PDF).
  70. "yann lecun on IBM truenorth". argues that spiking neurons have never produced leading quality results, and that 8-16 bit precision is optimal, pushes the competing 'neuflow' design
  71. "IBM cracks open new era of neuromorphic computing". TrueNorth is incredibly efficient: The chip consumes just 72 milliwatts at max load, which equates to around 400 billion synaptic operations per second per watt — or about 176,000 times more efficient than a modern CPU running the same brain-like workload, or 769 times more efficient than other state-of-the-art neuromorphic approaches
  72. "Intel's New Self-Learning Chip Promises to Accelerate Artificial Intelligence".
  73. "BrainChip Accelerator".
  74. "India preps RISC-V Processors - Shakti targets servers, IoT, analytics". The Shakti project now includes plans for at least six microprocessor designs as well as associated fabrics and an accelerator chip
  75. "AlphaICs".
  76. "drive px".
  77. "design of a machine vision system for weed control" (PDF).
  78. "qualcomm research brings server class machine learning to every data devices".
  79. "movidius powers worlds most intelligent drone".