Automated machine learning and neural architecture optimization

Published

27-12-2023

DOI:

https://doi.org/10.58414/SCIENTIFICTEMPER.2023.14.4.42

Keywords:

Automated machine learning, Neural architecture optimization, Classifier accuracy, Model selection, Learning curves.

Section

SECTION C: ARTIFICIAL INTELLIGENCE, ENGINEERING, TECHNOLOGY

Authors

  • Pravin P. Adivarekar, Department of Computer Engineering, A.P.Shah Institute of Technology, Thane, Maharashtra, India
  • Amarnath Prabhakaran A, Department of BME, Nandha Engineering College, Erode, Tamil Nadu, India
  • Sukhwinder Sharma, Department of Data Science & Engineering, Manipal University Jaipur, Jaipur, Rajasthan, India
  • Divya P, Department of Computer Science, ACS College of Engineering, Bangalore, India
  • Muniyandy Elangovan, Department of Biosciences, Saveetha School of Engineering, Saveetha Nagar, Thandalam, India; Department of R&D, Bond Marine Consultancy, London EC1V 2NX, UK
  • Ravi Rastogi, Electronics Division, NIELIT Gorakhpur, MMMUT Campus, Deoria Road, Gorakhpur, Uttar Pradesh, India

Abstract

Automated machine learning (AutoML) and neural architecture optimization (NAO) represent pivotal components in the landscape of machine learning and artificial intelligence. This paper extensively explores these domains, aiming to delineate their significance, methodologies, cutting-edge techniques, challenges, and emerging trends. AutoML streamlines and democratizes machine learning by automating intricate procedures such as algorithm selection and hyperparameter tuning. NAO, in turn, automates the design of neural network architectures, a critical factor in optimizing deep learning model performance. Both domains have made substantial advancements, significantly impacting research, industry practices, and societal applications. A series of experiments investigated classifier accuracy, NAO model selection based on hidden unit count, and learning-curve behaviour. The results underscored the efficacy of machine learning models, the substantial impact of architectural choices on test accuracy, and the importance of selecting an appropriate number of training epochs for model convergence. These findings offer valuable insights into the potential and limitations of AutoML and NAO, emphasizing the transformative potential of automation and optimization within the machine learning field. Additionally, this study highlights the need for further research into synergies between AutoML and NAO, aiming to bridge the gap between model selection, architecture design, and hyperparameter tuning. Such endeavors promise to open new frontiers in automated machine learning methodologies.
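The experimental pipeline summarized in the abstract lends itself to a compact illustration. The sketch below, written against scikit-learn, mirrors the three ingredients named above: comparing classifier accuracy, selecting an architecture by hidden-unit count, and tracing a learning curve over training epochs. The dataset (load_digits), the candidate models, and the search grid are illustrative assumptions rather than the paper's actual experimental setup.

# Minimal sketch of the kind of experiments described in the abstract:
# (1) classifier accuracy comparison, (2) NAO-style selection over hidden
# unit counts, (3) a learning curve over training epochs. All choices here
# are assumptions for illustration, not the paper's reported configuration.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# (1) Classifier accuracy: compare a few off-the-shelf models.
candidates = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "mlp_64": MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0),
}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))

# (2) Architecture selection: grid search over the hidden unit count.
search = GridSearchCV(
    MLPClassifier(max_iter=300, random_state=0),
    param_grid={"hidden_layer_sizes": [(16,), (32,), (64,), (128,)]},
    cv=3,
)
search.fit(X_train, y_train)
print("best hidden units:", search.best_params_, "cv accuracy:", search.best_score_)

# (3) Learning curve: track held-out accuracy per training epoch
# (warm_start=True keeps training the same network one epoch at a time).
mlp = MLPClassifier(hidden_layer_sizes=search.best_params_["hidden_layer_sizes"],
                    max_iter=1, warm_start=True, random_state=0)
for epoch in range(1, 31):
    mlp.fit(X_train, y_train)
    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}  held-out accuracy: {mlp.score(X_test, y_test):.3f}")

Plotting the per-epoch accuracies from step (3) gives the learning curve used to judge when additional training epochs stop improving convergence.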

How to Cite

Adivarekar, P. P., A, A. P., Sharma, S., P, D., Elangovan, M., & Rastogi, R. (2023). Automated machine learning and neural architecture optimization. The Scientific Temper, 14(04), 1345–1351. https://doi.org/10.58414/SCIENTIFICTEMPER.2023.14.4.42
