Multitask optimization

Multi-task optimization is a paradigm in the optimization literature that focuses on solving multiple self-contained tasks simultaneously.[1][2] The paradigm has been inspired by the well-established concepts of transfer learning[3] and multi-task learning[4] in predictive analytics.

The key motivation behind multi-task optimization is that, if optimization tasks are related to each other in terms of their optimal solutions or the general characteristics of their function landscapes,[5] useful search progress on one task can be transferred to substantially accelerate the search on the others.

The success of the paradigm is not necessarily limited to one-way knowledge transfers from simpler to more complex tasks. In practice, one may deliberately attempt to solve a harder task, and in doing so incidentally solve several smaller problems.[6]

Methods

There are two common approaches for multi-task optimization: Bayesian optimization and evolutionary computation.[1]

Multi-task Bayesian optimization

Multi-task Bayesian optimization is a modern model-based approach that leverages the concept of knowledge transfer to speed up the automatic hyperparameter optimization process of machine learning algorithms.[7] The method builds a multi-task Gaussian process model on the data originating from different searches progressing in tandem.[8] The captured inter-task dependencies are thereafter utilized to better inform the subsequent sampling of candidate solutions in respective search spaces.
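As a concrete illustration, the following sketch builds a toy multi-task Gaussian process of the intrinsic-coregionalization form used by Bonilla et al.,[8] in which the covariance between two observations factorizes into an inter-task similarity matrix and an ordinary kernel over the shared input space. The two objectives, the kernel length-scale, and the task-similarity values are illustrative assumptions, not part of any published method.

```python
import numpy as np

# Minimal multi-task GP sketch: the covariance between observations
# factorizes as K((x, s), (x', t)) = B[s, t] * k(x, x'), where B encodes
# inter-task similarity and k is a kernel over the shared input space.
# All functions and parameter values below are illustrative assumptions.

def rbf(X1, X2, length_scale=0.3):
    """Squared-exponential kernel over the shared input space."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def mtgp_posterior_mean(X, y, tasks, B, X_star, task_star, noise=1e-3):
    """Posterior mean at X_star for one task, conditioned on the data
    pooled across all searches progressing in tandem."""
    K = B[np.ix_(tasks, tasks)] * rbf(X, X) + noise * np.eye(len(X))
    k_star = B[np.ix_([task_star] * len(X_star), tasks)] * rbf(X_star, X)
    return k_star @ np.linalg.solve(K, y)

# Two synthetic, related 1-D objectives (assumed example).
f = [lambda x: np.sin(3 * x), lambda x: np.sin(3 * x + 0.2)]
rng = np.random.default_rng(0)

X = rng.uniform(0, 1, size=(8, 1))          # pooled observations
tasks = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # task each point came from
y = np.array([f[t](x[0]) for x, t in zip(X, tasks)])

B = np.array([[1.0, 0.9],                   # assumed high inter-task
              [0.9, 1.0]])                  # correlation

# Candidate points for task 1: the posterior mean is informed by the
# observations of task 0 as well, which is the knowledge transfer.
X_star = np.linspace(0, 1, 5).reshape(-1, 1)
print(mtgp_posterior_mean(X, y, tasks, B, X_star, task_star=1))
```

In a full Bayesian optimization loop, such posterior predictions would feed an acquisition function that proposes the next candidate solution for each task.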

Evolutionary multi-tasking

Evolutionary multi-tasking has been explored as a means of exploiting the implicit parallelism of population-based search algorithms to simultaneously progress multiple distinct optimization tasks. By mapping all tasks to a unified search space, the evolving population of candidate solutions can harness the hidden relationships between them through continuous genetic transfer, which is induced when solutions associated with different tasks undergo crossover.[2][9] More recently, modes of knowledge transfer other than direct solution crossover have been explored.[10]
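The sketch below gives a minimal illustration in the spirit of the multifactorial evolutionary algorithm of Gupta et al.:[2] all individuals live in one unified space, each carries a skill factor naming the task it is evaluated on, and crossover between individuals with different skill factors occurs with a random mating probability, which is the channel for genetic transfer. The two toy objectives, rates, and population sizes are assumed for illustration only.

```python
import numpy as np

# Minimal evolutionary multi-tasking sketch: two related tasks share one
# unified representation in [0, 1]^D; inter-task crossover induces the
# implicit genetic transfer described above. Objectives and constants
# are illustrative assumptions.

rng = np.random.default_rng(1)
D, POP, GENS, RMP = 10, 40, 2000, 0.3  # dims, pop size, generations,
                                       # random mating probability

def task0(x):                          # sphere function, optimum at 0.5
    return ((x - 0.5) ** 2).sum()

def task1(x):                          # shifted sphere: related landscape
    return ((x - 0.45) ** 2).sum()

tasks = [task0, task1]

pop = rng.uniform(0, 1, size=(POP, D))  # unified representation
skill = np.arange(POP) % 2              # each individual's assigned task
fit = np.array([tasks[s](x) for x, s in zip(pop, skill)])

for _ in range(GENS):
    i, j = rng.integers(0, POP, size=2)
    # Assortative mating: inter-task crossover only with probability RMP.
    if skill[i] == skill[j] or rng.random() < RMP:
        alpha = rng.uniform(0, 1, size=D)             # blend crossover
        child = alpha * pop[i] + (1 - alpha) * pop[j]
    else:
        child = pop[i] + rng.normal(0, 0.05, size=D)  # mutation only
    child = np.clip(child, 0, 1)
    s = skill[rng.choice([i, j])]       # inherit a parent's skill factor
    f = tasks[s](child)                 # evaluate on that task only
    worst = np.argmax(np.where(skill == s, fit, -np.inf))
    if f < fit[worst]:                  # replace that task's worst member
        pop[worst], fit[worst] = child, f

for t in range(2):
    print(f"task {t} best:", fit[skill == t].min())
```

Because the two landscapes are closely related, crossover between individuals skilled at different tasks tends to produce useful offspring for both, which is the intended effect of the unified encoding.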

Applications

Algorithms for multi-task optimization span a wide array of real-world applications. Recent studies highlight the potential for speed-ups in the optimization of engineering design parameters by conducting related designs jointly in a multi-task manner.[9] In machine learning, the transfer of optimized features across related data sets can enhance the efficiency of the training process as well as improve the generalization capability of learned models.[11][12] In addition, the concept of multi-tasking has led to advances in automatic hyperparameter optimization of machine learning models and ensemble learning.[13][14]

Applications have also been reported in cloud computing,[15] with future developments geared towards cloud-based on-demand optimization services that can cater to multiple customers simultaneously.[2][16]

References

  1. Gupta, A., Ong, Y. S., & Feng, L. (2018). Insights on transfer optimization: Because experience is the best teacher. IEEE Transactions on Emerging Topics in Computational Intelligence, 2(1), 51-64.
  2. Gupta, A., Ong, Y. S., & Feng, L. (2016). Multifactorial evolution: toward evolutionary multitasking. IEEE Transactions on Evolutionary Computation, 20(3), 343-357.
  3. Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359.
  4. Caruana, R. (1998). Multitask learning. In Pratt, L., & Thrun, S. (Eds.), Learning to Learn (pp. 95-134).
  5. Cheng, M. Y., Gupta, A., Ong, Y. S., & Ni, Z. W. (2017). Coevolutionary multitasking for concurrent global optimization: With case studies in complex engineering design. Engineering Applications of Artificial Intelligence, 64, 13-24.
  6. Cabi, S., Colmenarejo, S. G., Hoffman, M. W., Denil, M., Wang, Z., & De Freitas, N. (2017). The intentional unintentional agent: Learning to solve many continuous control tasks simultaneously. arXiv preprint arXiv:1707.03300.
  7. Swersky, K., Snoek, J., & Adams, R. P. (2013). Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems (pp. 2004-2012).
  8. Bonilla, E. V., Chai, K. M., & Williams, C. (2008). Multi-task Gaussian process prediction. In Advances in Neural Information Processing Systems (pp. 153-160).
  9. Ong, Y. S., & Gupta, A. (2016). Evolutionary multitasking: a computer science view of cognitive multitasking. Cognitive Computation, 8(2), 125-142.
  10. Feng, L., Zhou, L., Zhong, J., Gupta, A., Ong, Y. S., Tan, K. C., & Qin, A. K. (2018). Evolutionary multitasking via explicit autoencoding. IEEE Transactions on Cybernetics, (99).
  11. Chandra, R., Gupta, A., Ong, Y. S., & Goh, C. K. (2016, October). Evolutionary multi-task learning for modular training of feedforward neural networks. In International Conference on Neural Information Processing (pp. 37-46). Springer, Cham.
  12. Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems (pp. 3320-3328).
  13. Wen, Y. W., & Ting, C. K. (2016, July). Learning ensemble of decision trees through multifactorial genetic programming. In 2016 IEEE Congress on Evolutionary Computation (CEC) (pp. 5293-5300). IEEE.
  14. Zhang, B., Qin, A. K., & Sellis, T. (2018, July). Evolutionary feature subspaces generation for ensemble classification. In Proceedings of the Genetic and Evolutionary Computation Conference (pp. 577-584). ACM.
  15. Bao, L., Qi, Y., Shen, M., Bu, X., Yu, J., Li, Q., & Chen, P. (2018, June). An Evolutionary Multitasking Algorithm for Cloud Computing Service Composition. In World Congress on Services (pp. 130-144). Springer, Cham.
  16. Tang, J., Chen, Y., Deng, Z., Xiang, Y., & Joy, C. P. (2018). A Group-based Approach to Improve Multifactorial Evolutionary Algorithm. In IJCAI (pp. 3870-3876).