Hybrid Model Compression Pipeline for Efficient Object Detection using Parallel Recurrent Convolutional Neural Networks

Shraddha S. More, Rajesh Bansode

Abstract

Several deep learning (DL) models have demonstrated impressive performance across object detection problems, but large DL-based detectors are typically computationally and memory-intensive. Hence, this paper presents model compression strategies for object detection with Parallel Recurrent Convolutional Neural Networks (MCS-OD-PRCNN). Input images are drawn from the Common Objects in Context (COCO) 2017 dataset and pre-processed using Improved Bilateral Texture Filtering (IBTF). The pre-processed images are then fed to the proposed deep learning model, a Parallel Recurrent Convolutional Neural Network (PRCNN), which identifies and localizes the objects in each image. After the PRCNN is trained and validated on the pre-processed dataset, the model is compressed: quantization reduces numerical precision, pruning eliminates redundant weights and connections, and knowledge distillation trains a smaller, more efficient student model from the larger PRCNN teacher. To ensure optimal performance, a hybrid Fox and Chimp Optimization Algorithm (Hyb-FCOA) is employed to tune the compressed model's parameters. The proposed methodology is implemented in a Python environment, and its performance is evaluated with standard metrics: Accuracy, Precision, Recall, F-Measure, mean Average Precision (mAP), Matthews Correlation Coefficient (MCC), Intersection over Union (IoU), and Positive Predictive Value (PPV). The proposed method attains 20.08%, 23.35%, and 27.79% higher accuracy than existing techniques, namely guided hybrid quantization with one-to-one self-teaching for remote sensing object detection (GHOST-GQSD), Fast Region-Based Convolutional Neural Network (Fast R-CNN), and You Only Look Once version 4 (YOLOv4), respectively.
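As a concrete illustration of the pre-processing stage, the sketch below loads an image and applies edge-preserving bilateral smoothing with OpenCV. The abstract does not detail the IBTF variant, so the standard cv2.bilateralFilter call and the parameter values are stand-in assumptions, not the paper's method.

# Minimal pre-processing sketch: standard bilateral filtering as a
# stand-in for the paper's Improved Bilateral Texture Filtering (IBTF).
import cv2

def preprocess(image_path):
    """Load a COCO image and apply edge-preserving bilateral smoothing."""
    img = cv2.imread(image_path)  # BGR uint8 array, or None on failure
    if img is None:
        raise FileNotFoundError(image_path)
    # d is the neighbourhood diameter; sigmaColor and sigmaSpace control
    # how strongly intensity and spatial differences are averaged out.
    return cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)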
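The three compression steps named above (pruning, quantization, and distillation into a student model) can be sketched with standard PyTorch utilities. Here the student is the compressed detector and the teacher logits come from the full PRCNN; the pruning ratio, quantized layer set, temperature, and loss weighting below are illustrative assumptions rather than values from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def compress(student):
    # Pruning: zero out the 30% smallest-magnitude weights per layer,
    # eliminating redundant weights and connections.
    for module in student.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")  # make the pruning permanent
    # Quantization: dynamic int8 quantization lowers the numerical
    # precision of the Linear layers (conv layers would require static
    # post-training quantization instead).
    return torch.quantization.quantize_dynamic(
        student, {nn.Linear}, dtype=torch.qint8)

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Soft teacher targets blended with hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard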
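The hybrid Fox and Chimp Optimization Algorithm used for parameter tuning is only named in the abstract, so the skeleton below shows a generic population-based search of the kind both metaheuristics instantiate, not the paper's Hyb-FCOA update rules. The fitness function (for example, validation mAP as a function of pruning ratio and distillation temperature) and the parameter bounds are placeholders.

import random

def tune(fitness, bounds, pop_size=20, iters=50):
    """Generic population-based search; NOT the paper's Hyb-FCOA rules."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(iters):
        for i, cand in enumerate(pop):
            # Move each candidate toward the current best with random
            # jitter, then clip back into the feasible box.
            new = [c + random.random() * (b - c) for c, b in zip(cand, best)]
            new = [min(max(v, lo), hi) for v, (lo, hi) in zip(new, bounds)]
            if fitness(new) > fitness(cand):
                pop[i] = new
        best = max(pop, key=fitness)
    return best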
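Several of the reported evaluation metrics reduce to short closed-form computations. The sketch below shows IoU, PPV, and MCC, assuming [x1, y1, x2, y2] corner coordinates for boxes and confusion-matrix counts for the classification metrics; the abstract does not state these conventions.

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def ppv(tp, fp):
    """Positive Predictive Value (precision): TP / (TP + FP)."""
    return tp / (tp + fp + 1e-9)

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return num / (den + 1e-9)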
