Optimizing Resource Allocation in Cloud-Based Information Systems through Machine Learning Algorithms
Abstract
Efficient resource allocation has become essential for improving system performance and lowering operating costs as cloud-based information systems continue to serve a wide range of large-scale applications. Traditional resource allocation methods often perform poorly under dynamic workloads, leading to wasted resources and higher costs. This article proposes a machine learning-based framework for optimizing cloud resource allocation, focusing on its ability to predict and adapt to workload changes in real time. Our approach combines supervised and unsupervised learning to accurately forecast resource demand and to distribute resources across virtual machines with minimal latency and maximal cost-effectiveness. Using real-world cloud workload data, the study ran extensive simulations comparing standard heuristic methods against machine learning-based allocation. The results show substantial efficiency gains, with up to 35% less resource wastage and 25% faster response times. We examine how model selection (decision trees, neural networks, and support vector machines) affects prediction accuracy and computational overhead. The study also addresses the scalability of the machine learning framework, demonstrating its adaptability to different cloud platforms and application types. By automating resource allocation, the proposed approach reduces the need for manual intervention, enabling cloud providers to manage resources more effectively and improve overall user satisfaction. This work contributes to the growing field of cloud resource optimization and highlights the role machine learning techniques will play in designing future cloud infrastructure. The results demonstrate that machine learning offers an effective, scalable approach to resource management in increasingly complex cloud environments.
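For illustration only, the following minimal Python sketch pairs a supervised demand predictor (a decision tree regressor, one of the model families compared in the study) with a simple first-fit-decreasing placement of predicted virtual machine demand onto hosts. The synthetic data and helper names such as make_workload and allocate are assumptions made for this sketch and are not the framework described in the article.

```python
# Sketch: predict per-VM CPU demand from recent utilization history with a
# decision tree regressor, then greedily pack the predicted demand onto hosts.
# Synthetic data and helper names (make_workload, allocate) are illustrative
# assumptions, not the article's actual implementation.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)

def make_workload(n_samples=500, window=6):
    """Generate synthetic CPU-utilization histories and the next-step demand."""
    base = rng.uniform(0.2, 0.8, size=(n_samples, 1))
    history = np.clip(base + rng.normal(0, 0.05, size=(n_samples, window)), 0, 1)
    next_demand = np.clip(history.mean(axis=1) + rng.normal(0, 0.03, n_samples), 0, 1)
    return history, next_demand

# Train the demand predictor on most of the synthetic traces.
X, y = make_workload()
predictor = DecisionTreeRegressor(max_depth=6).fit(X[:400], y[:400])

def allocate(predicted_demand, host_capacity=1.0):
    """First-fit-decreasing placement of predicted VM demand onto hosts;
    returns the number of hosts needed."""
    hosts = []  # remaining capacity per host
    for demand in sorted(predicted_demand, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] = free - demand
                break
        else:
            hosts.append(host_capacity - demand)
    return len(hosts)

# Predict demand for a batch of VMs and place them.
predicted = predictor.predict(X[400:420])
print("hosts needed:", allocate(predicted))
```

In this sketch the predictor could be swapped for a neural network or support vector machine to mirror the model comparison discussed in the abstract; the placement step stands in for the cost- and latency-aware allocation policy the framework automates.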