Optimizing Food SME Inventory Using a Deep Reinforcement Learning Framework
Abstract
Introduction:
Inventory management is a critical challenge for small and medium-sized enterprises (SMEs), particularly in the food sector, due to the perishable nature of goods and fluctuating demand. Traditional inventory models often fail to adapt to these dynamic conditions, leading to inefficiencies and increased costs.
Objectives:
This study aims to develop a cost-effective and adaptive framework for inventory optimization in food-sector SMEs. The focus is on applying deep reinforcement learning in a single-stage, single-agent setting, which simplifies otherwise complex inventory systems and improves decision-making.
Methods:
The proposed framework employs a Deep Q-Network (DQN) model to estimate optimal order quantities based on key inputs: demand, safety stock, on-hand inventory, and sales data. The model minimizes long-term cumulative costs, including holding, ordering, and fixed costs such as administrative and transportation expenses. By interacting with the environment, the DQN agent learns policies that balance cost-effectiveness with inventory sufficiency.
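The cost structure and learning loop described above can be sketched as follows. This is a minimal illustration, not the study's implementation: all cost parameters, the demand distribution, and the inventory bounds are assumed values, and a tabular Q-learning update stands in for the DQN (the actual framework replaces the table with a neural network over a richer state including demand, safety stock, and sales data).

```python
import random

# Illustrative cost parameters -- assumptions, not values from the study.
HOLDING_COST = 1.0      # cost per unit held per period
ORDER_COST = 2.0        # variable cost per unit ordered
FIXED_COST = 5.0        # fixed (administrative/transport) cost per order placed
STOCKOUT_COST = 8.0     # penalty per unit of unmet demand
MAX_INV = 20            # inventory capacity (bounds the state space)
ACTIONS = range(0, 11)  # candidate order quantities

def step(inventory, order, demand):
    """One period of the inventory environment: receive the order, meet
    demand, and return (next_inventory, period_cost)."""
    inventory = min(inventory + order, MAX_INV)
    unmet = max(demand - inventory, 0)
    inventory = max(inventory - demand, 0)
    cost = (HOLDING_COST * inventory
            + ORDER_COST * order
            + (FIXED_COST if order > 0 else 0.0)
            + STOCKOUT_COST * unmet)
    return inventory, cost

def train(episodes=2000, horizon=30, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning stand-in for the DQN: state = on-hand inventory,
    action = order quantity, reward = negative period cost."""
    Q = [[0.0] * len(ACTIONS) for _ in range(MAX_INV + 1)]
    rng = random.Random(0)
    for _ in range(episodes):
        s = rng.randint(0, MAX_INV)
        for _ in range(horizon):
            # Epsilon-greedy action selection over order quantities.
            a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
            demand = rng.randint(0, 8)  # assumed demand distribution
            s2, cost = step(s, ACTIONS[a], demand)
            # Minimize long-run cumulative cost = maximize negative cost.
            target = -cost + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = train()
# Greedy policy: recommended order quantity for each on-hand inventory level.
policy = [ACTIONS[max(range(len(ACTIONS)), key=lambda a: Q[s][a])]
          for s in range(MAX_INV + 1)]
```

The learned `policy` maps each on-hand inventory level to an order quantity that trades off holding, ordering, fixed, and stockout costs, mirroring the balance between cost-effectiveness and inventory sufficiency described above.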
Results:
The model reduces unnecessary inventory costs while ensuring timely order fulfillment. It adapts to real-time conditions, optimizing ordering decisions and minimizing waste. Simulation results demonstrate the DQN's capability to maintain efficient inventory levels and significantly improve cost performance in SMEs.
Conclusions:
The deep reinforcement learning-based framework provides a viable solution for inventory optimization in food SMEs. It bridges the gap between theoretical modeling and practical application, supporting SMEs in achieving operational efficiency, cost reduction, and improved inventory control.