Multitask optimization

Multitask optimization is a paradigm in the optimization literature that focuses on solving multiple self-contained tasks simultaneously.[1][2] Inspired by the well-established concepts of transfer learning[3] and multi-task learning[4] in predictive analytics, the key motivation behind multitask optimization is that if optimization tasks are related to each other (in terms of their optimal solutions, or the general characteristics of their function landscapes)[5], then the search progress on one task can be transferred to substantially speed up the search on the others. Notably, the success of the paradigm is not necessarily limited to one-way knowledge transfers from simpler to more complex tasks. In fact, in an attempt to intentionally solve a harder task, several simpler ones may be solved unintentionally.[6]

Methods

In the existing literature, two common approaches to multitask optimization are Bayesian optimization and evolutionary computation.[1]

Multitask Bayesian optimization is a recent model-based approach that leverages the concept of knowledge transfer to speed up the automatic hyperparameter optimization process of machine learning algorithms.[7] The method builds a multitask Gaussian process model on the data originating from different searches progressing in tandem.[8] The captured inter-task dependencies are thereafter utilized to better inform the subsequent sampling of candidate solutions in the respective search spaces.
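As a rough illustration of this idea (a minimal sketch, not a reproduction of any cited method), the code below couples two one-dimensional minimization tasks through an intrinsic coregionalization model kernel, in which an assumed inter-task covariance matrix B scales a shared spatial kernel. The toy objectives, the value of B, and the lower-confidence-bound acquisition rule are all illustrative assumptions:

```python
# Minimal sketch of multitask Bayesian optimization with an intrinsic
# coregionalization model (ICM) kernel, implemented directly with NumPy.
# The toy objectives, the inter-task matrix B, and all hyperparameters
# below are illustrative assumptions.
import numpy as np

def rbf(x1, x2, lengthscale=0.3):
    """Squared-exponential kernel over the shared input space."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def icm_kernel(X1, T1, X2, T2, B, lengthscale=0.3):
    """K((x,t),(x',t')) = B[t,t'] * k(x,x'): the inter-task covariance B
    scales a common spatial kernel, coupling the searches."""
    return B[np.ix_(T1, T2)] * rbf(X1, X2, lengthscale)

def gp_posterior(Xq, Tq, X, T, y, B, noise=1e-4):
    """Exact GP posterior mean and variance at query points (Xq, Tq)."""
    K = icm_kernel(X, T, X, T, B) + noise * np.eye(len(X))
    Ks = icm_kernel(Xq, Tq, X, T, B)
    Kss = icm_kernel(Xq, Tq, Xq, Tq, B)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)
    return mu, np.maximum(var, 1e-12)

# Two related toy objectives (assumed): quadratics with shifted minima.
objectives = [lambda x: (x - 0.3) ** 2, lambda x: (x - 0.4) ** 2]
B = np.array([[1.0, 0.8], [0.8, 1.0]])   # assumed inter-task correlation

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 4)                  # initial design points
T = np.array([0, 0, 1, 1])                # task label of each observation
y = np.array([objectives[t](x) for x, t in zip(X, T)])

grid = np.linspace(0, 1, 201)
for step in range(10):
    t = step % 2                          # alternate between the two tasks
    mu, var = gp_posterior(grid, np.full(len(grid), t), X, T, y, B)
    # Lower-confidence-bound acquisition for minimization.
    x_next = grid[np.argmin(mu - 1.96 * np.sqrt(var))]
    X = np.append(X, x_next)
    T = np.append(T, t)
    y = np.append(y, objectives[t](x_next))

for t in range(2):
    print(f"task {t}: best observed value {y[T == t].min():.5f}")
```

Because B couples the tasks, observations gathered on one task tighten the posterior, and hence sharpen the acquisition function, on the other; setting the off-diagonal entries of B to zero would reduce the sketch to two independent single-task optimizations.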

Evolutionary multitasking has been explored as a means of exploiting the implicit parallelism of population-based search algorithms to simultaneously progress multiple distinct optimization tasks. By mapping all tasks to a unified search space, the evolving population of candidate solutions can harness the hidden relationships between them through continuous genetic transfer, which is induced when solutions associated with different tasks cross over with each other.[2][9] More recently, modes of knowledge transfer that are different from direct solution crossover have been explored.[10]
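The following sketch conveys the gist of the multifactorial evolutionary algorithm (MFEA) idea that underpins much of this literature: every individual lives in a unified [0,1]^D space and carries a skill factor naming the task on which it is evaluated, and individuals skilled at different tasks may cross over with some random mating probability (rmp). The toy tasks, population size, mutation scale, and rmp value are illustrative assumptions:

```python
# Minimal sketch of the MFEA-style evolutionary multitasking loop: all
# tasks share a unified [0,1]^D space, and crossover between individuals
# "skilled" at different tasks induces genetic transfer. Toy tasks,
# population size, and the RMP value are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
D = 10                                   # dimensionality of unified space
POP, GENS, RMP = 40, 100, 0.3            # assumed algorithm parameters

# Two related toy tasks on the unified space (assumed): shifted spheres.
tasks = [lambda x: np.sum((x - 0.4) ** 2),
         lambda x: np.sum((x - 0.6) ** 2)]

pop = rng.uniform(0, 1, (POP, D))
skill = np.array([0, 1] * (POP // 2))    # balanced skill factors
fit = np.array([tasks[s](x) for x, s in zip(pop, skill)])

for gen in range(GENS):
    children, child_skill = [], []
    for _ in range(POP // 2):
        i, j = rng.integers(0, POP, 2)
        if skill[i] == skill[j] or rng.random() < RMP:
            # Assortative mating: crossover across tasks happens with
            # probability RMP, and is what transfers genetic material.
            a = rng.random(D)
            c1 = a * pop[i] + (1 - a) * pop[j]
            c2 = a * pop[j] + (1 - a) * pop[i]
            # Each child inherits a parent's skill factor at random
            # (vertical cultural transmission in MFEA terms).
            s1, s2 = rng.choice([skill[i], skill[j]], 2)
        else:
            # Otherwise mutate each parent within its own task.
            c1 = np.clip(pop[i] + rng.normal(0, 0.05, D), 0, 1)
            c2 = np.clip(pop[j] + rng.normal(0, 0.05, D), 0, 1)
            s1, s2 = skill[i], skill[j]
        children += [c1, c2]
        child_skill += [s1, s2]

    # Selective evaluation: each child is scored on its own task only.
    children = np.array(children)
    child_skill = np.array(child_skill)
    child_fit = np.array([tasks[s](x) for x, s in zip(children, child_skill)])

    # Survival: keep the best POP//2 individuals within each task,
    # pooled over parents and children (rank-based within tasks).
    pool = np.vstack([pop, children])
    pool_skill = np.concatenate([skill, child_skill])
    pool_fit = np.concatenate([fit, child_fit])
    keep = []
    for t in range(2):
        idx = np.where(pool_skill == t)[0]
        keep.extend(idx[np.argsort(pool_fit[idx])][:POP // 2])
    pop, skill, fit = pool[keep], pool_skill[keep], pool_fit[keep]

for t in range(2):
    print(f"task {t}: best fitness {fit[skill == t].min():.6f}")
```

Setting RMP to zero reduces the loop to two independent single-task evolutionary searches, which makes the cross-task crossover the sole channel of knowledge transfer in this sketch.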

Applications

Algorithms for multitask optimization span a wide array of real-world applications. Recent studies highlight the potential for speedups in the optimization of engineering design parameters when related designs are optimized jointly in a multitask manner.[9] In machine learning, the transfer of optimized features across related datasets can enhance the efficiency of the training process as well as improve the generalization capability of learned models.[11][12] In addition to the above, the concept of multitasking has led to advances in the automatic hyperparameter optimization of machine learning models and in ensemble learning.[13][14]

Applications have also been reported in cloud computing,[15] with future developments geared toward a cloud-based on-demand optimization service that can cater to multiple customers simultaneously.[2][16]

References

  1. Gupta, A., Ong, Y. S., & Feng, L. (2018). Insights on transfer optimization: Because experience is the best teacher. IEEE Transactions on Emerging Topics in Computational Intelligence, 2(1), 51-64.
  2. Gupta, A., Ong, Y. S., & Feng, L. (2016). Multifactorial evolution: toward evolutionary multitasking. IEEE Transactions on Evolutionary Computation, 20(3), 343-357.
  3. Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359.
  4. Caruana, R., "Multitask Learning", pp. 95-134 in Pratt & Thrun 1998
  5. Cheng, M. Y., Gupta, A., Ong, Y. S., & Ni, Z. W. (2017). Coevolutionary multitasking for concurrent global optimization: With case studies in complex engineering design. Engineering Applications of Artificial Intelligence, 64, 13-24.
  6. Cabi, S., Colmenarejo, S. G., Hoffman, M. W., Denil, M., Wang, Z., & De Freitas, N. (2017). The intentional unintentional agent: Learning to solve many continuous control tasks simultaneously. arXiv preprint arXiv:1707.03300.
  7. Swersky, K., Snoek, J., & Adams, R. P. (2013). Multi-task bayesian optimization. Advances in neural information processing systems (pp. 2004-2012).
  8. Bonilla, E. V., Chai, K. M., & Williams, C. (2008). Multi-task Gaussian process prediction. Advances in neural information processing systems (pp. 153-160).
  9. Ong, Y. S., & Gupta, A. (2016). Evolutionary multitasking: a computer science view of cognitive multitasking. Cognitive Computation, 8(2), 125-142.
  10. Feng, L., Zhou, L., Zhong, J., Gupta, A., Ong, Y. S., Tan, K. C., & Qin, A. K. (2018). Evolutionary Multitasking via Explicit Autoencoding. IEEE transactions on cybernetics, (99).
  11. Chandra, R., Gupta, A., Ong, Y. S., & Goh, C. K. (2016, October). Evolutionary multi-task learning for modular training of feedforward neural networks. In International Conference on Neural Information Processing (pp. 37-46). Springer, Cham.
  12. Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). How transferable are features in deep neural networks? In Advances in neural information processing systems (pp. 3320-3328).
  13. Wen, Y. W., & Ting, C. K. (2016, July). Learning ensemble of decision trees through multifactorial genetic programming. In Evolutionary Computation (CEC), 2016 IEEE Congress on (pp. 5293-5300). IEEE.
  14. Zhang, B., Qin, A. K., & Sellis, T. (2018, July). Evolutionary feature subspaces generation for ensemble classification. In Proceedings of the Genetic and Evolutionary Computation Conference (pp. 577-584). ACM.
  15. Bao, L., Qi, Y., Shen, M., Bu, X., Yu, J., Li, Q., & Chen, P. (2018, June). An Evolutionary Multitasking Algorithm for Cloud Computing Service Composition. In World Congress on Services (pp. 130-144). Springer, Cham.
  16. Tang, J., Chen, Y., Deng, Z., Xiang, Y., & Joy, C. P. (2018). A Group-based Approach to Improve Multifactorial Evolutionary Algorithm. In IJCAI (pp. 3870-3876).