Alex Graves (computer scientist)

Alex Graves is a research scientist at DeepMind. He received a BSc in Theoretical Physics from the University of Edinburgh and completed a PhD in artificial intelligence under Jürgen Schmidhuber at IDSIA.[1] He then held postdoctoral positions at TU Munich and, under Geoffrey Hinton,[2] at the University of Toronto.

At IDSIA, Graves trained long short-term memory (LSTM) neural networks with a novel method called connectionist temporal classification (CTC).[3] CTC-trained networks outperformed traditional speech recognition models in certain applications.[4] In 2009, his CTC-trained LSTM became the first recurrent neural network to win pattern recognition contests, taking several prizes in connected handwriting recognition.[5][6] The method has since been widely adopted; Google, for example, uses CTC-trained LSTM networks for speech recognition on smartphones.[7][8]

Graves is also the creator of neural Turing machines[9] and the closely related differentiable neural computer.[10][11]

References

  1. "Alex Graves - Research Scientist @ Google DeepMind". Retrieved May 17, 2016.
  2. "Marginally Interesting: What is going on with DeepMind and Google?". Blog.mikiobraun.de. Retrieved May 17, 2016.
  3. Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber (2006). Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. Proceedings of ICML 2006, pp. 369–376.
  4. Santiago Fernández, Alex Graves, and Jürgen Schmidhuber (2007). An application of recurrent neural networks to discriminative keyword spotting. Proceedings of ICANN 2007 (2), pp. 220–229.
  5. Alex Graves and Jürgen Schmidhuber (2009). Offline handwriting recognition with multidimensional recurrent neural networks. In Yoshua Bengio, Dale Schuurmans, John Lafferty, Chris K. I. Williams, and Aron Culotta (eds.), Advances in Neural Information Processing Systems 22 (NIPS), December 7–10, 2009, Vancouver, BC: Neural Information Processing Systems Foundation, pp. 545–552.
  6. Alex Graves, Marcus Liwicki, Santiago Fernández, Roman Bertolami, Horst Bunke, and Jürgen Schmidhuber (2009). A novel connectionist system for improved unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5).
  7. Françoise Beaufays (August 11, 2015). "The neural networks behind Google Voice transcription". Google Research Blog. http://googleresearch.blogspot.co.at/2015/08/the-neural-networks-behind-google-voice.html
  8. Haşim Sak, Andrew Senior, Kanishka Rao, Françoise Beaufays, and Johan Schalkwyk – Google Speech Team (September 24, 2015). "Google voice search: faster and more accurate". Google Research Blog. http://googleresearch.blogspot.co.uk/2015/09/google-voice-search-faster-and-more.html
  9. "Google's Secretive DeepMind Startup Unveils a "Neural Turing Machine"". Retrieved May 17, 2016.
  10. Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago (2016-10-12). "Hybrid computing using a neural network with dynamic external memory". Nature. 538 (7626): 471–476. Bibcode:2016Natur.538..471G. doi:10.1038/nature20101. ISSN 1476-4687. PMID 27732574.
  11. "Differentiable neural computers | DeepMind". DeepMind. Retrieved 2016-10-19.