Transfer learning

Transfer learning (TL) is a research problem in machine learning (ML) that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem.[1] For example, knowledge gained while learning to recognize cars could be applied when trying to recognize trucks. This area of research bears some relation to the long history of psychological literature on transfer of learning, although formal ties between the two fields are limited. From a practical standpoint, reusing or transferring information from previously learned tasks for the learning of new tasks has the potential to significantly improve the sample efficiency of a reinforcement learning agent.[2]
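
In practice, a common way to realize this idea is to reuse a network trained on the source task as the starting point for the target task. The following is a minimal sketch assuming a PyTorch/torchvision workflow; the pretrained backbone and the two-class car/truck target task are illustrative, not prescribed by the sources above.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on a large source task (ImageNet classification).
backbone = models.resnet18(pretrained=True)

# Freeze the transferred feature extractor so its stored knowledge is kept intact.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for the target task
# (here a hypothetical 2-class problem, e.g. "car" vs. "truck").
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Only the new head is trained on the (typically much smaller) target dataset.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```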

In his NIPS 2016 tutorial,[3][4] Andrew Ng highlighted the importance of TL, saying that it would be the next driver of ML commercial success after supervised learning.

History

In 1993, Lorien Pratt published a paper on transfer in machine learning, formulating the discriminability-based transfer (DBT) algorithm.[5]

In 1997, the journal Machine Learning published a special issue devoted to transfer learning,[6] and by 1998, the field had advanced to include multi-task learning,[7] along with a more formal analysis of its theoretical foundations.[8] Learning to Learn,[9] edited by Pratt and Sebastian Thrun, is a 1998 review of the subject.

Transfer learning has also been applied in cognitive science, with the journal Connection Science publishing a special issue on reuse of neural networks through transfer in 1996.[10]

Definition

The definition of transfer learning is given in terms of domains and tasks. A domain $\mathcal{D}$ consists of a feature space $\mathcal{X}$ and a marginal probability distribution $P(X)$, where $X = \{x_1, \dots, x_n\} \in \mathcal{X}$. Given a specific domain $\mathcal{D} = \{\mathcal{X}, P(X)\}$, a task consists of two components: a label space $\mathcal{Y}$ and an objective predictive function $f \colon \mathcal{X} \to \mathcal{Y}$ (denoted by $\mathcal{T} = \{\mathcal{Y}, f\}$), which is learned from training data consisting of pairs $\{x_i, y_i\}$, where $x_i \in \mathcal{X}$ and $y_i \in \mathcal{Y}$. The function $f$ can be used to predict the corresponding label, $f(x)$, of a new instance $x$.[11]

Given a source domain $\mathcal{D}_S$ and learning task $\mathcal{T}_S$, a target domain $\mathcal{D}_T$ and learning task $\mathcal{T}_T$, transfer learning aims to help improve the learning of the target predictive function $f_T(\cdot)$ in $\mathcal{D}_T$ using the knowledge in $\mathcal{D}_S$ and $\mathcal{T}_S$, where $\mathcal{D}_S \neq \mathcal{D}_T$ or $\mathcal{T}_S \neq \mathcal{T}_T$.[11]
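
Schematically, this setting can be rendered in code as follows. This is a hedged illustration using scikit-learn's warm_start option as a stand-in for "using the knowledge in $\mathcal{D}_S$ and $\mathcal{T}_S$"; the synthetic data and the particular adaptation mechanism are assumptions for the sketch, not part of the cited definition.[11]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Source domain D_S: a feature space with samples drawn from P(X_S),
# and a source task T_S with plentiful labels.
X_source = rng.normal(size=(1000, 16))
y_source = (X_source.sum(axis=1) > 0).astype(int)

# Target domain D_T: a related but shifted distribution (D_S != D_T),
# with only a handful of labeled examples -- the case TL targets.
X_target = rng.normal(loc=0.5, size=(20, 16))
y_target = (X_target.sum(axis=1) > 8.0).astype(int)

# Learn f_S on the source task, then reuse its coefficients as the
# starting point when fitting f_T (warm_start keeps the learned weights).
clf = LogisticRegression(warm_start=True, max_iter=500)
clf.fit(X_source, y_source)   # source predictive function f_S
clf.fit(X_target, y_target)   # adapted target predictive function f_T
```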

Applications

Algorithms are available for transfer learning in Markov logic networks[12] and Bayesian networks.[13] Transfer learning has also been applied to cancer subtype discovery,[14] building utilization,[15][16] general game playing,[17] text classification,[18][19] digit recognition[20] and spam filtering.[21]

In 2020, it was discovered that, due to their similar physical natures, transfer learning is possible between electromyographic (EMG) signals from the muscles and electroencephalographic (EEG) brainwaves, transferring classifiers from the gesture recognition domain to the mental state recognition domain. It was also noted that this relationship worked in reverse, showing that EEG can likewise be used to classify EMG.[22] The experiments noted that the accuracy of neural networks and convolutional neural networks was improved through transfer learning, both at the first epoch (prior to any learning, i.e., compared to a standard random weight initialization) and at the asymptote (the end of the learning process). That is, the algorithms were improved by exposure to another domain.
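
The comparison described above can be sketched as follows: a network initialized with weights transferred from the other signal domain is evaluated against an identically shaped, randomly initialized network, both at epoch zero and after training to convergence. The layer sizes, the shared input dimensionality, and the omitted training loop are hypothetical simplifications, not the cited paper's code.[22]

```python
import copy
import torch.nn as nn

def make_mlp(n_in: int, n_out: int) -> nn.Sequential:
    # A small MLP classifier; the architecture is illustrative.
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out))

# Assume both domains are mapped to the same feature dimensionality,
# so that weights can be copied directly between networks.
source_net = make_mlp(n_in=128, n_out=4)
# ... train source_net on the source domain (e.g. EMG gestures) here ...

transfer_net = copy.deepcopy(source_net)    # initialized from source weights
baseline_net = make_mlp(n_in=128, n_out=4)  # standard random initialization

# Evaluating transfer_net vs. baseline_net on the target domain before any
# training measures the first-epoch benefit; evaluating after full training
# measures the benefit at the asymptote.
```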

References

  1. West, Jeremy; Ventura, Dan; Warnick, Sean (2007). "Spring Research Presentation: A Theoretical Foundation for Inductive Transfer". Brigham Young University, College of Physical and Mathematical Sciences. Archived from the original on 2007-08-01. Retrieved 2007-08-05.
  2. George Karimpanal, Thommen; Bouffanais, Roland (2019). "Self-organizing maps for storage and transfer of knowledge in reinforcement learning". Adaptive Behavior. 27 (2): 111–126. arXiv:1811.08318. doi:10.1177/1059712318818568. ISSN 1059-7123.
  3. NIPS 2016 tutorial: "Nuts and bolts of building AI applications using Deep Learning" by Andrew Ng, retrieved 2019-12-28
  4. "NIPS 2016 Schedule". nips.cc. Retrieved 2019-12-28.
  5. Pratt, L. Y. (1993). "Discriminability-based transfer between neural networks" (PDF). NIPS Conference: Advances in Neural Information Processing Systems 5. Morgan Kaufmann Publishers. pp. 204–211.
  6. Pratt, L. Y.; Thrun, Sebastian (July 1997). "Machine Learning - Special Issue on Inductive Transfer". link.springer.com. Springer. Retrieved 2017-08-10.
  7. Caruana, R., "Multitask Learning", pp. 95–134 in Pratt & Thrun 1998
  8. Baxter, J., "Theoretical Models of Learning to Learn", pp. 71–95 in Pratt & Thrun 1998
  9. Thrun & Pratt 2012.
  10. Pratt, L. (1996). "Special Issue: Reuse of Neural Networks through Transfer". Connection Science. 8 (2). Retrieved 2017-08-10.
  11. Lin, Yuan-Pin; Jung, Tzyy-Ping (27 June 2017). "Improving EEG-Based Emotion Classification Using Conditional Transfer Learning". Frontiers in Human Neuroscience. 11: 334. doi:10.3389/fnhum.2017.00334. PMC 5486154. PMID 28701938. Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
  12. Mihalkova, Lilyana; Huynh, Tuyen; Mooney, Raymond J. (July 2007), "Mapping and Revising Markov Logic Networks for Transfer Learning" (PDF), Proceedings of the 22nd AAAI Conference on Artificial Intelligence (AAAI-2007), Vancouver, BC, pp. 608–614, retrieved 2007-08-05
  13. Niculescu-Mizil, Alexandru; Caruana, Rich (March 21–24, 2007), "Inductive Transfer for Bayesian Network Structure Learning" (PDF), Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS 2007), retrieved 2007-08-05
  14. Hajiramezanali, E.; Dadaneh, S. Z.; Karbalayghareh, A.; Zhou, Z.; Qian, X. (2018). "Bayesian multi-domain learning for cancer subtype discovery from next-generation sequencing count data". 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. arXiv:1810.09433
  15. Arief-Ang, I.B.; Salim, F.D.; Hamilton, M. (2017-11-08). DA-HOC: semi-supervised domain adaptation for room occupancy prediction using CO2 sensor data. 4th ACM International Conference on Systems for Energy-Efficient Built Environments (BuildSys). Delft, Netherlands. pp. 1–10. doi:10.1145/3137133.3137146. ISBN 978-1-4503-5544-5.
  16. Arief-Ang, I.B.; Hamilton, M.; Salim, F.D. (2018-12-01). "A Scalable Room Occupancy Prediction with Transferable Time Series Decomposition of CO2 Sensor Data". ACM Transactions on Sensor Networks. 14 (3–4): 21:1–21:28. doi:10.1145/3217214.
  17. Banerjee, Bikramjit; Stone, Peter (2007). "General Game Learning Using Knowledge Transfer". IJCAI 2007.
  18. Do, Chuong B.; Ng, Andrew Y. (2005). "Transfer learning for text classification". Neural Information Processing Systems Foundation, NIPS*2005 (PDF). Retrieved 2007-08-05.
  19. Rajat, Raina; Ng, Andrew Y.; Koller, Daphne (2006). "Constructing Informative Priors using Transfer Learning". Twenty-third International Conference on Machine Learning (PDF). Retrieved 2007-08-05.
  20. Maitra, D. S.; Bhattacharya, U.; Parui, S. K. (August 2015). "CNN based common approach to handwritten character recognition of multiple scripts". 2015 13th International Conference on Document Analysis and Recognition (ICDAR): 1021–1025. doi:10.1109/ICDAR.2015.7333916. ISBN 978-1-4799-1805-8.
  21. Bickel, Steffen (2006). "ECML-PKDD Discovery Challenge 2006 Overview". ECML-PKDD Discovery Challenge Workshop (PDF). Retrieved 2007-08-05.
  22. Bird, Jordan J.; Kobylarz, Jhonatan; Faria, Diego R.; Ekart, Aniko; Ribeiro, Eduardo P. (2020). "Cross-Domain MLP and CNN Transfer Learning for Biological Signal Processing: EEG and EMG". IEEE Access. Institute of Electrical and Electronics Engineers (IEEE). 8: 54789–54801. doi:10.1109/access.2020.2979074. ISSN 2169-3536.

Sources

  • Thrun, Sebastian; Pratt, Lorien (6 December 2012). Learning to Learn. Springer Science & Business Media. ISBN 978-1-4615-5529-2.