SqueezeNet

Original author(s): Forrest Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, Bill Dally, Kurt Keutzer
Initial release: 22 February 2016
Stable release: v1.1
Repository: github.com/DeepScale/SqueezeNet
Type: Deep neural network
License: BSD license

SqueezeNet is a deep neural network for image classification, released in 2016. It was developed by researchers at DeepScale, the University of California, Berkeley, and Stanford University. In designing SqueezeNet, the authors' goal was a smaller network with far fewer parameters, one that fits more easily into computer memory and can be transmitted more easily over a computer network.[1]

Framework support for SqueezeNet

SqueezeNet was originally released on February 22, 2016.[2] This original version of SqueezeNet was implemented on top of the Caffe deep learning software framework. Shortly thereafter, the open-source research community ported SqueezeNet to a number of other deep learning frameworks. On February 26, 2016, Eddie Bell released a port of SqueezeNet for the Chainer deep learning framework.[3] On March 2, 2016, Guo Haria released a port of SqueezeNet for the Apache MXNet framework.[4] On June 3, 2016, Tammy Yang released a port of SqueezeNet for the Keras framework.[5] In 2017, companies including Baidu, Xilinx, Imagination Technologies, and Synopsys demonstrated SqueezeNet running on low-power processing platforms such as smartphones, FPGAs, and custom processors.[6][7][8][9]

As of 2018, SqueezeNet ships "natively" as part of the source code of a number of deep learning frameworks such as PyTorch, Apache MXNet, and Apple CoreML.[10][11][12] In addition, third-party developers have created implementations of SqueezeNet that are compatible with frameworks such as TensorFlow.[13] The table below summarizes framework support for SqueezeNet.

Framework                    | SqueezeNet support | Reference
Apache MXNet                 | Native             | [11]
Apple CoreML                 | Native             | [12]
Caffe2                       | Native             | [14]
Keras                        | Third party        | [5]
MATLAB Deep Learning Toolbox | Native             | [15]
ONNX                         | Native             | [16]
PyTorch                      | Native             | [10]
TensorFlow                   | Third party        | [13]
Wolfram Mathematica          | Native             | [17]

Relationship to AlexNet

SqueezeNet was originally described in a paper entitled "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size."[18] AlexNet's parameters occupy about 240MB, while SqueezeNet's occupy just 5MB. However, SqueezeNet is not a "squeezed version of AlexNet"; rather, it is an entirely different DNN architecture from AlexNet.[19] What the two networks have in common is that both achieve approximately the same accuracy when evaluated on the ImageNet image classification validation dataset.
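The sizes quoted above can be sanity-checked with back-of-the-envelope arithmetic. Assuming 32-bit (4-byte) floating-point weights, AlexNet's roughly 60 million parameters and SqueezeNet's roughly 1.25 million parameters (approximate figures, not taken from this article) give the stated file sizes and a ~48x reduction, matching the "50x fewer parameters" in the paper's title:

```python
# Rough size check, assuming each parameter is stored as a 32-bit float.
BYTES_PER_PARAM = 4

alexnet_params = 60e6       # AlexNet: ~60 million parameters (approximate)
squeezenet_params = 1.25e6  # SqueezeNet: ~1.25 million parameters (approximate)

alexnet_mb = alexnet_params * BYTES_PER_PARAM / 1e6      # ≈ 240 MB
squeezenet_mb = squeezenet_params * BYTES_PER_PARAM / 1e6  # ≈ 5 MB
ratio = alexnet_params / squeezenet_params               # ≈ 48x fewer parameters

print(alexnet_mb, squeezenet_mb, ratio)
```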

Relationship to Deep Compression

Model compression (e.g. quantization and pruning of model parameters) can be applied to a deep neural network after it has been trained.[20] In the SqueezeNet paper, the authors demonstrated that a model compression technique called Deep Compression can be applied to SqueezeNet to further reduce the size of the parameter file from 5MB to 500KB.[18] Deep Compression has also been applied to other DNNs such as AlexNet and VGG.[21]
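As a toy illustration of pruning and quantization (not the actual Deep Compression pipeline, which combines magnitude pruning with k-means weight sharing and Huffman coding), the sketch below zeroes out the smallest-magnitude 90% of a random weight matrix, then buckets the surviving weights into 16 shared values so each can be stored as a 4-bit index:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)  # a toy weight matrix

# Pruning: zero out the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(w), 0.9)
pruned = np.where(np.abs(w) >= threshold, w, 0.0)

# Quantization: map each surviving weight to one of 2**4 = 16 shared values.
# (Deep Compression learns a k-means codebook; uniform bins keep this sketch simple.)
nonzero = pruned[pruned != 0]
edges = np.linspace(nonzero.min(), nonzero.max(), 16)
codes = np.digitize(nonzero, edges) - 1  # 4-bit index per surviving weight

sparsity = 1.0 - np.count_nonzero(pruned) / w.size
print(f"sparsity after pruning: {sparsity:.2f}")
print(f"bits per stored weight index: 4")
```

Storing only the nonzero indices and a 16-entry codebook, instead of a dense fp32 matrix, is what shrinks the parameter file by an order of magnitude.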

References

  1. Ganesh, Abhinav. "Deep Learning Reading Group: SqueezeNet". KDnuggets. Retrieved 2018-04-07.
  2. "SqueezeNet". GitHub. 2016-02-22. Retrieved 2018-05-12.
  3. Bell, Eddie (2016-02-26). "An implementation of SqueezeNet in Chainer". GitHub. Retrieved 2018-05-12.
  4. Haria, Guo (2016-03-02). "SqueezeNet for MXNet". GitHub. Retrieved 2018-05-12.
  5. Yang, Tammy (2016-06-03). "SqueezeNet Keras Implementation". GitHub. Retrieved 2018-05-12.
  6. Chirgwin, Richard (2017-09-26). "Baidu puts open source deep learning into smartphones". The Register. Retrieved 2018-04-07.
  7. Bush, Steve (2018-01-25). "Neural network SDK for PowerVR GPUs". Electronics Weekly. Retrieved 2018-04-07.
  8. Yoshida, Junko (2017-03-13). "Xilinx AI Engine Steers New Course". EE Times. Retrieved 2018-05-13.
  9. Boughton, Paul (2017-08-28). "Deep learning computer vision algorithms ported to processor IP". Engineer Live. Retrieved 2018-04-07.
  10. "squeezenet.py". GitHub: PyTorch. Retrieved 2018-05-12.
  11. "squeezenet.py". GitHub: Apache MXNet. Retrieved 2018-04-07.
  12. "CoreML". Apple. Retrieved 2018-04-10.
  13. Poster, Domenick. "Tensorflow implementation of SqueezeNet". GitHub. Retrieved 2018-05-12.
  14. Inkawhich, Nathan. "SqueezeNet Model Quickload Tutorial". GitHub: Caffe2. Retrieved 2018-04-07.
  15. "SqueezeNet for MATLAB Deep Learning Toolbox". Mathworks. Retrieved 2018-10-03.
  16. Fang, Lu. "SqueezeNet for ONNX". Open Neural Network eXchange.
  17. "SqueezeNet V1.1 Trained on ImageNet Competition Data". Wolfram Neural Net Repository. Retrieved 2018-05-12.
  18. Iandola, Forrest N; Han, Song; Moskewicz, Matthew W; Ashraf, Khalid; Dally, William J; Keutzer, Kurt (2016). "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size". arXiv:1602.07360 [cs.CV].
  19. "SqueezeNet". Short Science. Retrieved 2018-05-13.
  20. Gude, Alex (2016-08-09). "Lab41 Reading Group: Deep Compression". Retrieved 2018-05-08.
  21. Han, Song (2016-11-06). "Compressing and regularizing deep neural networks". O'Reilly. Retrieved 2018-05-08.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.