Multi-Metric Evaluation Framework for Machine Learning-Based Load Prediction in e-Governance Systems
DOI: https://doi.org/10.58414/SCIENTIFICTEMPER.2026.17.01.20
Keywords: e-Governance, Load Prediction, Machine Learning, Resource Management, Scalability Analysis, Ensemble Learning, Inference Latency, Model Selection, Cloud Computing, Performance Evaluation
License
Copyright (c) 2026 The Scientific Temper

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Abstract
The rapid growth of e-Governance platforms necessitates a shift from reactive to proactive workload handling and resource management. Because e-Governance workloads are highly dynamic, load predictions must be accurate enough to support efficient resource provisioning, SLA compliance, and systematic workload management. Machine learning approaches offer strong predictive capability, but deploying them in an e-Governance environment requires careful attention to predictive accuracy, computational cost, scalability, and robustness. This paper presents a comprehensive multi-metric evaluation framework for assessing load prediction models on e-Governance platforms. The framework covers linear regression, instance-based learning, and ensemble approaches including Random Forest, Gradient Boosting, XGBoost, LightGBM, and CatBoost, and evaluates each model not only on traditional accuracy metrics but also on training time, prediction latency, training memory consumption, inference throughput, worst-case error percentiles, and scalability with respect to dataset size. Experimental results show that ensemble and gradient boosting models significantly outperform conventional baselines in predictive accuracy.
Weighing all of these metrics together, LightGBM offers the best overall combination of accuracy, scalability, inference efficiency, and memory usage. The results provide practical insights for deploying intelligent load prediction solutions that improve the performance and reliability of e-Governance platforms.
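The paper's own evaluation code is not reproduced here, but the multi-metric profile it describes (accuracy, training time, per-sample prediction latency, and a worst-case error percentile) can be sketched as a small model-agnostic harness. The sketch below is a hypothetical illustration using only NumPy, with an ordinary-least-squares regressor standing in for the baseline linear model; the function and variable names are our own, not the authors'.

```python
import time
import numpy as np

def evaluate_model(fit, predict, X_train, y_train, X_test, y_test):
    """Collect a multi-metric profile of a regressor: accuracy (MAE, RMSE),
    training time, per-sample prediction latency, and the 95th-percentile
    absolute error as a worst-case measure."""
    t0 = time.perf_counter()
    model = fit(X_train, y_train)
    train_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    y_pred = predict(model, X_test)
    latency = (time.perf_counter() - t0) / len(X_test)  # seconds per sample

    err = np.abs(y_test - y_pred)
    return {
        "mae": float(err.mean()),
        "rmse": float(np.sqrt((err ** 2).mean())),
        "p95_abs_error": float(np.percentile(err, 95)),
        "train_time_s": train_time,
        "latency_s_per_sample": latency,
    }

# Ordinary least squares as a stand-in for the linear-regression baseline.
def ols_fit(X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append intercept column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef

def ols_predict(coef, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ coef

# Synthetic load data: 3 features with a linear signal plus small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=500)
metrics = evaluate_model(ols_fit, ols_predict, X[:400], y[:400], X[400:], y[400:])
```

Because every candidate model is reduced to the same `fit`/`predict` pair, gradient boosting or instance-based learners can be dropped into the same harness, and the resulting dictionaries compared side by side across growing dataset sizes to assess scalability.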
Similar Articles
- Pravin P. Adivarekar, Amarnath Prabhakaran A, Sukhwinder Sharma, Divya P, Muniyandy Elangovan, Ravi Rastogi, Automated machine learning and neural architecture optimization, The Scientific Temper: Vol. 14 No. 04 (2023): The Scientific Temper
- V. Infine Sinduja, P. Joesph Charles, A hybrid approach using attention bidirectional gated recurrent unit and weight-adaptive sparrow search optimization for cloud load balancing , The Scientific Temper: Vol. 16 No. 05 (2025): The Scientific Temper
- Sheena Edavalath, Manikandasaran S. Sundaram, Cost-based resource allocation method for efficient allocation of resources in a heterogeneous cloud environment , The Scientific Temper: Vol. 14 No. 04 (2023): The Scientific Temper
- Anita M, Shakila S, Stochastic kernelized discriminant extreme learning machine classifier for big data predictive analytics , The Scientific Temper: Vol. 15 No. spl-1 (2024): The Scientific Temper
- V. Selvi, T. S. Poornappriya, R. Balasubramani, Cloud computing research productivity and collaboration: A scientometric perspective , The Scientific Temper: Vol. 15 No. spl-1 (2024): The Scientific Temper
- A. Anand, A. Nisha Jebaseeli, AI-driven real-time performance optimization and comparison of virtual machines and containers in cloud environments , The Scientific Temper: Vol. 15 No. spl-1 (2024): The Scientific Temper
- P S Renjeni, B Senthilkumaran, Ramalingam Sugumar, L. Jaya Singh Dhas, Gaussian kernelized transformer learning model for brain tumor risk factor identification and disease diagnosis , The Scientific Temper: Vol. 16 No. 02 (2025): The Scientific Temper
- Jyoti Vishwakarma, Sunil Kumar, Mapping Research on ESG Disclosure and Firm Performance: A Systematic Bibliometric Analysis , The Scientific Temper: Vol. 16 No. 09 (2025): The Scientific Temper
- Ayalew Ali, Sitotaw Wodajo, Taye Teshoma, The link between corporate governance and earnings management of insurance companies in Ethiopia , The Scientific Temper: Vol. 16 No. 07 (2025): The Scientific Temper
- G. Vijayalakshmi, M. V. Srinath, Student’s Academic Performance Improvement Using Adaptive Ensemble Learning Method , The Scientific Temper: Vol. 16 No. 11 (2025): The Scientific Temper
Most read articles by the same author(s)
- Kanchan Chaudhary, Saurabh Charaya, The Implementation of Artificial Intelligence-Based Models of Postoperative Care in Paediatric Healthcare Settings , The Scientific Temper: Vol. 16 No. 12 (2025): The Scientific Temper
- Surendra Singh Bisht, Saurabh Charaya, Rachna Mehta, A Comparative and Hybrid Machine Learning Framework for IoT-Based Predictive Maintenance of Rotating Machinery , The Scientific Temper: Vol. 17 No. 02 (2026): The Scientific Temper

