Optimizing Hyperparameters for Enhanced Satsuma Fruit Disease Detection
Abstract
Hyperparameter tuning is an important process for optimizing the performance of machine learning models by fine-tuning parameters such as learning rate, batch size, and the number of epochs. This study systematically explored these parameters using a grid search optimization approach, conducting 120 experiments to enhance model accuracy and minimize loss. Key performance metrics, namely accuracy and loss, were used to evaluate the system's performance. Visualizations such as line graphs, heatmaps, and pair plots provided insights into parameter interactions. The optimal configuration identified consisted of a learning rate of 0.001, a batch size of 32, and 50 epochs, achieving a test accuracy of 100.0% and a test loss of 0.0027. These results represented a significant improvement over the approximated baseline configuration, which yielded a test accuracy of 90.4% and a test loss of 0.4164. The findings underscore the importance of moderate parameter values in ensuring stable convergence, efficient training, and prevention of overfitting. By achieving substantial gains in accuracy and reductions in loss, the study demonstrates the transformative impact of hyperparameter tuning on model performance.
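The grid search described above can be sketched as follows. The abstract does not state the specific grid values, so the learning rates, batch sizes, and epoch counts below are assumptions chosen to give the reported 120 combinations (4 × 5 × 6), and `evaluate` is a hypothetical stand-in for training the disease-detection model and scoring it on the test set:

```python
import itertools

# Assumed grid values for illustration only: 4 x 5 x 6 = 120 experiments,
# matching the number of runs reported in the study.
learning_rates = [0.0001, 0.0005, 0.001, 0.01]
batch_sizes = [16, 24, 32, 48, 64]
epoch_counts = [10, 20, 30, 40, 50, 60]

def evaluate(lr, batch_size, epochs):
    """Hypothetical stand-in for one experiment: train the detector with
    the given hyperparameters and return (test_accuracy, test_loss).
    A real implementation would fit the model on the satsuma image data."""
    # Toy scoring that penalizes distance from moderate values, so the
    # sketch reproduces the paper's qualitative finding; not a real model.
    penalty = (abs(lr - 0.001) * 100
               + abs(batch_size - 32) / 64
               + abs(epochs - 50) / 100)
    accuracy = max(0.0, 1.0 - penalty)
    return accuracy, penalty

# Exhaustive grid search: run every combination, keep the best by accuracy.
grid = list(itertools.product(learning_rates, batch_sizes, epoch_counts))
results = {cfg: evaluate(*cfg) for cfg in grid}
best_cfg, (best_acc, best_loss) = max(results.items(), key=lambda kv: kv[1][0])
print(f"ran {len(grid)} experiments; best config: {best_cfg}")
```

In practice the per-configuration results would also feed the line graphs, heatmaps, and pair plots used to visualize parameter interactions.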