An Optimization of Answer Grading System with Deep Learning Algorithm & Optimizer

Rudragouda G. Patil, Mahantesh N. Birje, Manisha T. Tapale, Nagaraj V. Dharwadkar

Abstract

Introduction: Evaluating student answers is crucial to educational assessment, as it directly affects learning outcomes and academic success. Traditional grading methods often suffer from inconsistency and subjectivity, so different evaluators may assign different scores to similar answers. This study proposes a comprehensive methodology that employs advanced computational techniques to improve the consistency and objectivity of grading student answers. The primary aim is to design a standardized grading framework that reduces scoring variability caused by human subjectivity. The study explores several deep learning algorithms to automate grade prediction and recommendation, streamlining the grading process and providing data-driven insights for evaluators. The proposed framework addresses the shortcomings of traditional grading by leveraging Natural Language Processing (NLP) techniques to analyze and assess student answers. A Kaggle dataset of manually graded essays is used for training and testing, and model performance is compared using measures such as Quadratic Weighted Kappa (QWK). Results indicate that an LSTM model with the Generative Pre-trained Transformer 2 (GPT-2) tokenizer, optimized with the Grey Wolf Optimizer (GWO), outperforms other models, including BERT with its own tokenizer. The GPT-2-LSTM model achieves the highest QWK, accuracy, and stability, together with the lowest Mean Squared Error (MSE) and variance, indicating superior performance in automated grading. These findings suggest that the proposed framework can effectively enhance the consistency and objectivity of student answer grading, reducing human bias and improving the overall assessment process.
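As a brief illustration of the evaluation measure named in the abstract, the sketch below computes Quadratic Weighted Kappa between two sets of grades using scikit-learn's `cohen_kappa_score` with quadratic weights. The scores shown are hypothetical and not taken from the study's dataset.

```python
# Minimal sketch: agreement between human and model grades via
# Quadratic Weighted Kappa (QWK). Grade values here are hypothetical.
from sklearn.metrics import cohen_kappa_score

human_scores = [2, 3, 4, 4, 1, 3, 2, 4]  # hypothetical human-assigned grades
model_scores = [2, 3, 3, 4, 1, 3, 2, 4]  # hypothetical model-predicted grades

# weights="quadratic" penalizes large disagreements more heavily than
# near-misses, which suits ordinal grading scales.
qwk = cohen_kappa_score(human_scores, model_scores, weights="quadratic")
print(round(qwk, 3))
```

QWK ranges from below 0 (worse than chance) to 1 (perfect agreement), which is why it is a common headline metric for automated essay scoring.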
