Deep learning processor

A deep learning processor (DLP), or deep learning accelerator, is a circuit specially designed and optimized for deep learning algorithms, usually with separate data memory and a dedicated instruction set architecture. Deep learning processors form a part of a wide range of today's commercial infrastructure, from mobile devices (e.g., the neural processing unit, or NPU, in Huawei cellphones[1]) to cloud servers (e.g., the tensor processing unit, or TPU, in Google Cloud[2]).

The goal of DLPs is to provide higher efficiency and performance than existing processing devices, i.e., general-purpose CPUs (central processing units) and GPUs (graphics processing units), when processing deep learning algorithms. Just as GPUs are designed for graphics processing, DLPs apply domain-specific (deep learning) knowledge to the design of architectures for deep learning processing.[3] Most DLPs employ a large number of computing components to exploit high data-level parallelism, a relatively large on-chip buffer/memory to exploit data reuse patterns, and limited-data-width operators to exploit the error resilience of deep learning.[4]
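
The following is a minimal sketch, not taken from any particular DLP, of why limited-data-width operators are viable: quantizing the weights of a fully connected layer to INT8 changes its output only slightly, an error that deep networks typically tolerate. The layer sizes, random data, and per-tensor scaling scheme are illustrative assumptions.

```python
# Minimal sketch of deep learning's error resilience under reduced precision.
# All sizes and the per-tensor symmetric quantization scheme are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256).astype(np.float32)          # layer input
W = rng.standard_normal((128, 256)).astype(np.float32)   # layer weights

# Full-precision reference output of a fully connected layer.
y_fp32 = W @ x

# Symmetric per-tensor INT8 quantization of the weights.
scale = np.abs(W).max() / 127.0
W_int8 = np.clip(np.round(W / scale), -127, 127).astype(np.int8)

# Compute with quantized weights, then rescale back to the original range.
y_int8 = (W_int8.astype(np.float32) @ x) * scale

# The relative error is around one percent, which most networks tolerate.
rel_err = np.linalg.norm(y_fp32 - y_int8) / np.linalg.norm(y_fp32)
print(f"relative output error with INT8 weights: {rel_err:.4%}")
```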

History

The use of CPUs/GPUs

Initially, general-purpose CPUs were used to run deep learning algorithms. Later, GPUs were introduced to the domain of deep learning. For example, in 2012, Alex Krizhevsky used two GPUs to train AlexNet, a deep neural network,[5] which won the ILSVRC-2012 competition. As interest in deep learning algorithms and DLPs kept growing, GPU manufacturers started adding deep-learning-related features in both hardware (e.g., INT8 operators) and software (e.g., the cuDNN library). For example, Nvidia released the Turing Tensor Core, a DLP, to accelerate deep learning processing.

The first DLP

To achieve higher efficiency in performance and energy, domain-specific designs started drawing a great deal of attention. In 2014, Chen et al. proposed the first DLP, DianNao (Chinese for "electric brain"),[6] designed specifically to accelerate deep neural networks. DianNao delivers 452 Gop/s of peak performance (on key operations in deep neural networks) in a footprint of only 3.02 mm2 and 485 mW. Its successors (DaDianNao,[7] ShiDianNao,[8] PuDianNao[9]) were later proposed by the same group, forming the DianNao family.[10]

The blooming of DLPs

Inspired by the pioneering work of the DianNao family, many DLPs have been proposed in both academia and industry, with designs optimized to exploit the features of deep neural networks for high efficiency. In recent years, such efforts include Eyeriss[11] (MIT), EIE[12] (Stanford), Minerva[13] (Harvard), and Stripes[14] (University of Toronto) in academia, and the TPU[15] (Google) and MLU[16] (Cambricon) in industry. Table 1 lists several representative works.

Table 1. Typical DLPs
| Year | DLPs | Institution | Type | Computation | Memory Hierarchy | Control | Peak Performance |
| 2014 | DianNao[6] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 452 Gops (16-bit) |
| 2014 | DaDianNao[7] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 5.58 Tops (16-bit) |
| 2015 | ShiDianNao[8] | ICT, CAS | digital | scalar MACs | scratchpad | VLIW | 194 Gops (16-bit) |
| 2015 | PuDianNao[9] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 1,056 Gops (16-bit) |
| 2016 | EIE[12] | Stanford | digital | scalar MACs | scratchpad | - | 102 Gops (16-bit) |
| 2016 | Eyeriss[11] | MIT | digital | scalar MACs | scratchpad | - | 67.2 Gops (16-bit) |
| 2016 | Prime[17] | UCSB | hybrid | Process-in-Memory | ReRAM | - | - |
| 2017 | TPU[15] | Google | digital | scalar MACs | scratchpad | CISC | 92 Tops (8-bit) |
| 2017 | FlexFlow | ICT, CAS | digital | scalar MACs | scratchpad | - | 420 Gops () |
| 2018 | MAERI | Georgia Tech | digital | scalar MACs | scratchpad | - | - |
| 2018 | PermDNN | City University of New York | digital | vector MACs | scratchpad | - | 614.4 Gops (16-bit) |
| 2019 | FPSA | Tsinghua | hybrid | Process-in-Memory | ReRAM | - | - |
| 2019 | Cambricon-F | ICT, CAS | digital | vector MACs | scratchpad | FISA | 14.9 Tops (F1, 16-bit), 956 Tops (F100, 16-bit) |

DLP architecture

With the rapid evolution of deep learning algorithms and DLPs, many architectures have been explored. Roughly, DLPs can be classified into three categories based on their implementation: digital circuits, analog circuits, and hybrid circuits. Because purely analog DLPs are rarely seen, the following covers digital DLPs and hybrid DLPs.

Digital DLPs

The major components of a DLP architecture usually include a computation component, an on-chip memory hierarchy, and control logic that manages data communication and computing flows.

Regarding the computation component, because most operations in deep learning can be aggregated into vector operations, the most common way to build computation components in digital DLPs is a MAC-based (multiply-accumulate) organization, using either vector MACs[6][7][9] or scalar MACs.[15][8][11] Rather than the SIMD or SIMT of general-purpose processing devices, the domain-specific parallelism of deep learning is better exploited with these MAC-based organizations.

Regarding the memory hierarchy, because deep learning algorithms demand high bandwidth to keep the computation component supplied with data, DLPs usually employ a relatively large on-chip buffer (tens of kilobytes to several megabytes) together with dedicated on-chip data reuse and data exchange strategies to relieve the pressure on memory bandwidth. For example, DianNao's 16 16-input vector MACs require 16 × 16 × 2 = 512 16-bit values per cycle, i.e., almost 1024 GB/s of bandwidth between the computation components and the buffers; with on-chip reuse, this bandwidth requirement drops drastically.[6] Instead of the caches widely used in general-purpose processing devices, DLPs typically use scratchpad memory, which provides greater data reuse opportunities by exploiting the relatively regular data access patterns of deep learning algorithms.

Regarding the control logic, because deep learning algorithms keep evolving at a dramatic pace, DLPs have started to adopt dedicated ISAs (instruction set architectures) to support the deep learning domain flexibly. DianNao initially used a VLIW-style instruction set in which each instruction could finish a layer of a DNN. Cambricon[18] introduced the first deep learning domain-specific ISA, which can support more than ten different deep learning algorithms. The TPU paper also highlights five key instructions of its CISC-style ISA.
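
The bandwidth figure above can be reproduced with a back-of-the-envelope calculation. The sketch below assumes a clock rate of roughly 1 GHz and an illustrative reuse factor of 16; neither number is taken from the cited paper.

```python
# Back-of-the-envelope bandwidth estimate for a 16x16 vector-MAC array such as
# DianNao's; the ~1 GHz clock and the reuse factor are illustrative assumptions.
NUM_MACS = 16          # vector MAC units (one per output neuron)
INPUTS_PER_MAC = 16    # inputs consumed by each vector MAC per cycle
OPERANDS = 2           # each multiply needs one input value and one weight
BYTES_PER_VALUE = 2    # 16-bit fixed-point operands
CLOCK_HZ = 1e9         # assumed clock rate of roughly 1 GHz

values_per_cycle = NUM_MACS * INPUTS_PER_MAC * OPERANDS   # 512 values
bytes_per_cycle = values_per_cycle * BYTES_PER_VALUE      # 1024 bytes
bandwidth_gb_s = bytes_per_cycle * CLOCK_HZ / 1e9         # ~1024 GB/s
print(f"raw bandwidth demand: {bandwidth_gb_s:.0f} GB/s")

# With an on-chip scratchpad that reuses fetched data (e.g., each input value is
# broadcast to all 16 output neurons), traffic falls well below this raw demand;
# the factor of 16 below is purely illustrative.
REUSE_FACTOR = 16
print(f"traffic with on-chip reuse (illustrative): {bandwidth_gb_s / REUSE_FACTOR:.0f} GB/s")
```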

Hybrid DLPs

Hybrid DLPs emerged for DNN inference and training acceleration because of their high efficiency. Processing-in-memory (PIM) architectures are one of the most important types of hybrid DLP. The key design concept of PIM is to bridge the gap between computing and memory in two ways: 1) moving computation components into memory cells, controllers, or memory chips to alleviate the memory-wall issue;[19][20][21] such architectures significantly shorten data paths and exploit much higher internal bandwidth, resulting in attractive performance improvements; 2) building highly efficient DNN engines by adopting computational memory devices. In 2013, HP Lab demonstrated the astonishing capability of using a ReRAM crossbar structure for computing.[22] Inspired by this work, a tremendous amount of work has been proposed to explore new architectures and system designs based on ReRAM,[17][23][24][19] phase change memory,[20][25][26] etc.
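
As an illustration of the crossbar idea, the following is a minimal, idealized sketch in which weights are stored as cell conductances, inputs are applied as word-line voltages, and the bit-line currents realize a matrix-vector product inside the memory array. The array size and value ranges are assumptions for illustration, and device non-idealities such as noise, wire resistance, and ADC quantization are ignored.

```python
# Idealized model of a ReRAM crossbar performing a matrix-vector product in
# memory: weights are stored as conductances, inputs are applied as voltages,
# and each bit-line current is the dot product of a conductance row with the
# voltage vector.
import numpy as np

rng = np.random.default_rng(1)
conductances = rng.uniform(0.0, 1.0, size=(4, 8))   # 4x8 crossbar of ReRAM cells
voltages = rng.uniform(0.0, 0.2, size=8)             # input activations as volts

# Kirchhoff's current law sums cell currents on each bit line, so the whole
# crossbar computes the matrix-vector product in one analog step.
currents = conductances @ voltages

# A digital MAC array computes the same result explicitly, one dot product per row.
reference = np.array([np.dot(row, voltages) for row in conductances])
assert np.allclose(currents, reference)
print(currents)
```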

GPUs and FPGAs

Despite the emergence of DLPs, GPUs and FPGAs are also used as accelerators to speed up the execution of deep learning algorithms. For example, Summit, a supercomputer built by IBM for Oak Ridge National Laboratory,[27] contains 27,648 Nvidia Tesla V100 cards, which can be used to accelerate deep learning algorithms. Microsoft built its deep learning platform on large numbers of FPGAs in Azure to support real-time deep learning services.[28] Table 2 compares DLPs with GPUs and FPGAs in terms of target, performance, energy efficiency, and flexibility.

Table 2. DLPs vs. GPUs vs. FPGAs
|  | Target | Performance | Energy Efficiency | Flexibility |
| DLPs | deep learning | high | high | domain-specific |
| FPGAs | all | low | moderate | general |
| GPUs | matrix computation | moderate | low | matrix applications |

Atomically thin semiconductors for deep learning

Atomically thin semiconductors are considered promising for energy-efficient deep learning hardware in which the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. published experiments with a large-area active-channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs).[29] They used two-dimensional materials such as semiconducting molybdenum disulphide to precisely tune FGFETs as building blocks in which logic operations can be performed with the memory elements.[29]

Benchmarks

Benchmarking has long served as the foundation for designing new hardware architectures, allowing both architects and practitioners to compare various architectures, identify their bottlenecks, and conduct the corresponding system/architectural optimizations. Table 3 lists several typical benchmarks for DLPs in chronological order, dating from 2012.

Table 3. Benchmarks.
| Year | NN Benchmark | Affiliations | # of microbenchmarks | # of component benchmarks | # of application benchmarks |
| 2012 | BenchNN | ICT, CAS | N/A | 12 | N/A |
| 2016 | Fathom | Harvard | N/A | 8 | N/A |
| 2017 | BenchIP | ICT, CAS | 12 | 11 | N/A |
| 2017 | DAWNBench | Stanford | 8 | N/A | N/A |
| 2017 | DeepBench | Baidu | 4 | N/A | N/A |
| 2018 | MLPerf | Harvard, Intel, and Google, etc. | N/A | 7 | N/A |
| 2019 | AIBench | ICT, CAS and Alibaba, etc. | 12 | 16 | 2 |
| 2019 | NNBench-X | UCSB | N/A | 10 | N/A |

References

  1. "HUAWEI Reveals the Future of Mobile AI at IFA".
  2. Jouppi, Norman P.; Young, Cliff; Patil, Nishant; Patterson, David; Agrawal, Gaurav; Bajwa, Raminder; Bates, Sarah; Bhatia, Suresh; Boden, Nan; Borchers, Al; Boyle, Rick (2017-06-24). "In-Datacenter Performance Analysis of a Tensor Processing Unit". ACM SIGARCH Computer Architecture News. 45 (2): 1–12. doi:10.1145/3140659.3080246.
  3. "A Survey of Accelerator Architectures for 3D Convolution Neural Networks", Mittal et al., JSA, 2021
  4. "A Survey on Hardware Accelerators and Optimization Techniques for RNNs", JSA, 2020
  5. Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E (2017-05-24). "ImageNet classification with deep convolutional neural networks". Communications of the ACM. 60 (6): 84–90. doi:10.1145/3065386.
  6. Chen, Tianshi; Du, Zidong; Sun, Ninghui; Wang, Jia; Wu, Chengyong; Chen, Yunji; Temam, Olivier (2014-04-05). "DianNao". ACM SIGARCH Computer Architecture News. 42 (1): 269–284. doi:10.1145/2654822.2541967. ISSN 0163-5964.
  7. Chen, Yunji; Luo, Tao; Liu, Shaoli; Zhang, Shijin; He, Liqiang; Wang, Jia; Li, Ling; Chen, Tianshi; Xu, Zhiwei; Sun, Ninghui; Temam, Olivier (December 2014). "DaDianNao: A Machine-Learning Supercomputer". 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture. IEEE: 609–622. doi:10.1109/micro.2014.58. ISBN 978-1-4799-6998-2. S2CID 6838992.
  8. Du, Zidong; Fasthuber, Robert; Chen, Tianshi; Ienne, Paolo; Li, Ling; Luo, Tao; Feng, Xiaobing; Chen, Yunji; Temam, Olivier (2016-01-04). "ShiDianNao". ACM SIGARCH Computer Architecture News. 43 (3S): 92–104. doi:10.1145/2872887.2750389. ISSN 0163-5964.
  9. Liu, Daofu; Chen, Tianshi; Liu, Shaoli; Zhou, Jinhong; Zhou, Shengyuan; Temam, Olivier; Feng, Xiaobing; Zhou, Xuehai; Chen, Yunji (2015-05-29). "PuDianNao". ACM SIGARCH Computer Architecture News. 43 (1): 369–381. doi:10.1145/2786763.2694358. ISSN 0163-5964.
  10. Chen, Yunji; Chen, Tianshi; Xu, Zhiwei; Sun, Ninghui; Temam, Olivier (2016-10-28). "DianNao family". Communications of the ACM. 59 (11): 105–112. doi:10.1145/2996864. ISSN 0001-0782. S2CID 207243998.
  11. Chen, Yu-Hsin; Emer, Joel; Sze, Vivienne (2017). "Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks". IEEE Micro: 1. doi:10.1109/mm.2017.265085944. hdl:1721.1/102369. ISSN 0272-1732.
  12. Han, Song; Liu, Xingyu; Mao, Huizi; Pu, Jing; Pedram, Ardavan; Horowitz, Mark A.; Dally, William J. (2016-02-03). EIE: Efficient Inference Engine on Compressed Deep Neural Network. OCLC 1106232247.
  13. Reagen, Brandon; Whatmough, Paul; Adolf, Robert; Rama, Saketh; Lee, Hyunkwang; Lee, Sae Kyu; Hernandez-Lobato, Jose Miguel; Wei, Gu-Yeon; Brooks, David (June 2016). "Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). Seoul: IEEE: 267–278. doi:10.1109/ISCA.2016.32. ISBN 978-1-4673-8947-1.
  14. Judd, Patrick; Albericio, Jorge; Moshovos, Andreas (2017-01-01). "Stripes: Bit-Serial Deep Neural Network Computing". IEEE Computer Architecture Letters. 16 (1): 80–83. doi:10.1109/lca.2016.2597140. ISSN 1556-6056. S2CID 3784424.
  15. "In-Datacenter Performance Analysis of a Tensor Processing Unit | Proceedings of the 44th Annual International Symposium on Computer Architecture". doi:10.1145/3079856.3080246. S2CID 4202768. Cite journal requires |journal= (help)
  16. "MLU 100 intelligence accelerator card".
  17. Chi, Ping; Li, Shuangchen; Xu, Cong; Zhang, Tao; Zhao, Jishen; Liu, Yongpan; Wang, Yu; Xie, Yuan (June 2016). "PRIME: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). IEEE: 27–39. doi:10.1109/isca.2016.13. ISBN 978-1-4673-8947-1.
  18. Liu, Shaoli; Du, Zidong; Tao, Jinhua; Han, Dong; Luo, Tao; Xie, Yuan; Chen, Yunji; Chen, Tianshi (June 2016). "Cambricon: An Instruction Set Architecture for Neural Networks". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). IEEE: 393–405. doi:10.1109/isca.2016.42. ISBN 978-1-4673-8947-1.
  19. Song, Linghao; Qian, Xuehai; Li, Hai; Chen, Yiran (February 2017). "PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning". 2017 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE: 541–552. doi:10.1109/hpca.2017.55. ISBN 978-1-5090-4985-1. S2CID 15281419.
  20. Ambrogio, Stefano; Narayanan, Pritish; Tsai, Hsinyu; Shelby, Robert M.; Boybat, Irem; di Nolfo, Carmelo; Sidler, Severin; Giordano, Massimo; Bodini, Martina; Farinha, Nathan C. P.; Killeen, Benjamin (June 2018). "Equivalent-accuracy accelerated neural-network training using analogue memory". Nature. 558 (7708): 60–67. doi:10.1038/s41586-018-0180-5. ISSN 0028-0836. PMID 29875487. S2CID 46956938.
  21. Chen, Wei-Hao; Lin, Wen-Jang; Lai, Li-Ya; Li, Shuangchen; Hsu, Chien-Hua; Lin, Huan-Ting; Lee, Heng-Yuan; Su, Jian-Wei; Xie, Yuan; Sheu, Shyh-Shyuan; Chang, Meng-Fan (December 2017). "A 16Mb dual-mode ReRAM macro with sub-14ns computing-in-memory and memory functions enabled by self-write termination scheme". 2017 IEEE International Electron Devices Meeting (IEDM). IEEE: 28.2.1–28.2.4. doi:10.1109/iedm.2017.8268468. ISBN 978-1-5386-3559-9. S2CID 19556846.
  22. Yang, J. Joshua; Strukov, Dmitri B.; Stewart, Duncan R. (January 2013). "Memristive devices for computing". Nature Nanotechnology. 8 (1): 13–24. doi:10.1038/nnano.2012.240. ISSN 1748-3395. PMID 23269430.
  23. Shafiee, Ali; Nag, Anirban; Muralimanohar, Naveen; Balasubramonian, Rajeev; Strachan, John Paul; Hu, Miao; Williams, R. Stanley; Srikumar, Vivek (2016-10-12). "ISAAC". ACM SIGARCH Computer Architecture News. 44 (3): 14–26. doi:10.1145/3007787.3001139. ISSN 0163-5964. S2CID 6329628.
  24. Ji, Yu; Zhang, Youyang; Xie, Xinfeng; Li, Shuangchen; Wang, Peiqi; Hu, Xing; Zhang, Youhui; Xie, Yuan (2019-01-27). FPSA: A Full System Stack Solution for Reconfigurable ReRAM-based NN Accelerator Architecture. OCLC 1106329050.
  25. Nandakumar, S. R.; Boybat, Irem; Joshi, Vinay; Piveteau, Christophe; Le Gallo, Manuel; Rajendran, Bipin; Sebastian, Abu; Eleftheriou, Evangelos (November 2019). "Phase-Change Memory Models for Deep Learning Training and Inference". 2019 26th IEEE International Conference on Electronics, Circuits and Systems (ICECS). IEEE: 727–730. doi:10.1109/icecs46596.2019.8964852. ISBN 978-1-7281-0996-1. S2CID 210930121.
  26. Joshi, Vinay; Le Gallo, Manuel; Haefeli, Simon; Boybat, Irem; Nandakumar, S. R.; Piveteau, Christophe; Dazzi, Martino; Rajendran, Bipin; Sebastian, Abu; Eleftheriou, Evangelos (2020-05-18). "Accurate deep neural network inference using computational phase-change memory". Nature Communications. 11 (1): 2473. doi:10.1038/s41467-020-16108-9. ISSN 2041-1723. PMC 7235046. PMID 32424184.
  27. "Summit: Oak Ridge National Laboratory's 200 petaflop supercomputer".
  28. "Microsoft unveils Project Brainwave for real-time AI".
  29. Marega, Guilherme Migliato; Zhao, Yanfei; Avsar, Ahmet; Wang, Zhenyu; Tripathi, Mukesh; Radenovic, Aleksandra; Kis, Andras (2020). "Logic-in-memory based on an atomically thin semiconductor". Nature. 587 (2): 72–77. doi:10.1038/s41586-020-2861-0.