Comparative Analysis of Optimization Techniques for Reducing Energy Consumption in AI Training and Inference
Abstract
The rapid growth of Artificial Intelligence (AI) has driven a sharp increase in computational demand and, consequently, in energy consumption. This paper surveys and compares optimization techniques for reducing energy usage during both AI model training and inference, including algorithmic optimization, hardware acceleration, quantization, and pruning, examining their effectiveness, trade-offs, and applicability across different AI tasks and architectures. Through this analysis, we aim to provide a comprehensive view of the current landscape of energy-efficient AI practices and to highlight future directions for sustainable AI development.
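To make two of the surveyed techniques concrete, the sketch below illustrates symmetric int8 post-training quantization and magnitude-based pruning on a small weight matrix. This is a minimal illustration, not the paper's own implementation; the weight values, the per-tensor scale scheme, and the 50% sparsity target are all assumptions chosen for the example.

```python
import numpy as np

# Hypothetical weight matrix standing in for one trained layer.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)

# --- Quantization: map float32 weights to int8 (symmetric, per-tensor) ---
scale = np.abs(weights).max() / 127.0               # one scale for the whole tensor
q_weights = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q_weights.astype(np.float32) * scale  # what inference would "see"

# --- Pruning: zero out the smallest-magnitude 50% of weights ---
threshold = np.quantile(np.abs(weights), 0.5)
mask = np.abs(weights) >= threshold
pruned = weights * mask

# Both techniques trade a small accuracy loss for lower memory traffic and
# arithmetic cost, which is where the inference-time energy savings come from.
quant_error = float(np.abs(weights - dequantized).max())
sparsity = 1.0 - float(mask.mean())
```

Rounding each weight to the nearest quantization step bounds the per-weight error by half a step (`scale / 2`), and the pruning mask leaves exactly half the entries nonzero; real deployments would additionally fine-tune or calibrate to recover accuracy.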