Priority-based parallel processing multi-user multi-task scheduling algorithm
DOI: https://doi.org/10.58414/SCIENTIFICTEMPER.2025.16.2.04
Keywords: Task scheduling, Multi-user, Parallel processing, Edge server, Data centre
License
Copyright (c) 2025 The Scientific Temper

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Abstract
Mobile edge computing is an emerging field in cloud environments in which numerous user applications leverage a wide range of powerful resources. To ensure optimal utilization, cloud computing resources such as storage, applications, and other services require effective management and scheduling. Resource management is particularly challenging in scientific workflows, which involve extensive computation and interdependent operations. Task scheduling is a crucial challenge in this setting: because the edge infrastructure is deployed close to the user's environment, most of the computation is handled by the edge server. Various algorithms and techniques have been proposed to address this issue. This paper explores a novel method for scheduling tasks offloaded by different users in a multi-user access computing paradigm. Task priority is taken into account when tasks from mobile users are assigned to the data center, and tasks are scheduled to the data centers in parallel according to that priority. Completion time and CPU utilization are significantly improved by the proposed Priority-Based Parallel Processing Multi-User Multi-Task Scheduling Algorithm (PBPPMUMTSA).
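The abstract describes priority-aware parallel dispatch of multi-user tasks to data-center servers. The sketch below illustrates the general idea only: tasks from several users are ordered by priority and each is assigned to the server that becomes free earliest, after which completion time (makespan) and CPU utilization are computed. The `Task` fields, the greedy earliest-free-server rule, and all numbers are illustrative assumptions, not the paper's actual PBPPMUMTSA algorithm.

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    # Hypothetical task model: lower priority value = more urgent.
    priority: int
    user: str = field(compare=False)     # owning mobile user (not compared)
    length: float = field(compare=False) # CPU time units required

def schedule(tasks, num_servers):
    """Assign tasks to servers in priority order; each task goes to the
    server that frees up earliest (greedy parallel dispatch sketch)."""
    ordered = sorted(tasks)                  # priority order, stable for ties
    servers = [0.0] * num_servers            # next-free time of each server
    assignment = []
    for task in ordered:
        s = min(range(num_servers), key=servers.__getitem__)
        start = servers[s]
        servers[s] = start + task.length
        assignment.append((task.user, task.priority, s, start))
    makespan = max(servers)
    utilization = sum(t.length for t in tasks) / (num_servers * makespan)
    return assignment, makespan, utilization

# Illustrative run: four tasks from three users on two servers.
tasks = [Task(2, "u1", 4.0), Task(1, "u2", 3.0),
         Task(3, "u1", 2.0), Task(1, "u3", 5.0)]
assignment, makespan, util = schedule(tasks, 2)
```

Here the two priority-1 tasks are dispatched first to the two idle servers; the remaining tasks fill in behind them, giving a makespan of 7.0 time units at full utilization for this particular workload.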
Similar Articles
- Raja Selvaraj, Manikandasaran S. Sundari, EAM: Enhanced authentication method to ensure the authenticity and integrity of the data in VM migration to the cloud environment , The Scientific Temper: Vol. 14 No. 01 (2023): The Scientific Temper
- A. Jabeen, AR Mohamed Shanavas, Bradley Terry Brownboost and Lemke flower pollinated resource efficient task scheduling in cloud computing , The Scientific Temper: Vol. 16 No. 05 (2025): The Scientific Temper
- V. Babydeepa, K. Sindhu, Piecewise adaptive weighted smoothing-based multivariate rosenthal correlative target projection for lung and uterus cancer prediction with big data , The Scientific Temper: Vol. 15 No. 03 (2024): The Scientific Temper
- V. Umadevi, S. Ranganathan, IoT based energy aware local approximated MapReduce fuzzy clustering for smart healthcare data transmission , The Scientific Temper: Vol. 15 No. 03 (2024): The Scientific Temper
- M. Iniyan, A. Banumathi, Brower blowfish nash secured stochastic neural network based disease diagnosis for medical WBAN in cloud environment , The Scientific Temper: Vol. 15 No. 03 (2024): The Scientific Temper
- Sahaya Jenitha A, Sinthu J. Prakash, A general stochastic model to handle deduplication challenges using hidden Markov model in big data analytics , The Scientific Temper: Vol. 14 No. 04 (2023): The Scientific Temper
- Madhuri Prashant Pant, Jayshri Appaso Patil, Unlocking the potential of big data and analytics significance, applications in diverse domains and implementation of Apache Hadoop map/reduce for citation histogram , The Scientific Temper: Vol. 16 No. Spl-2 (2025): The Scientific Temper
- Raja Selvaraj, Manikandasaran S Sundaram, ECM: Enhanced confidentiality method to ensure the secure migration of data in VM to cloud environment , The Scientific Temper: Vol. 14 No. 03 (2023): The Scientific Temper
- Shaik Khaleel Ahamed, Neerav Nishant, Ayyakkannu Selvaraj, Nisarg Gandhewar, Srithar A, K.K.Baseer, Investigating privacy-preserving machine learning for healthcare data sharing through federated learning , The Scientific Temper: Vol. 14 No. 04 (2023): The Scientific Temper
- Sabeerath K, Manikandasaran S. Sundaram, BTEDD: Block-level tokens for efficient data deduplication in public cloud infrastructures , The Scientific Temper: Vol. 15 No. 03 (2024): The Scientific Temper

