Optimization of a Fine-Tuned DistilBERT Model for Classification of the Status and Type of Laws and Regulations

Yorissa Silviana, Rianto, Vega Purwayoga

Abstract

Data management of laws and regulations is critical to support an efficient legal information system, yet the complexity of legal language, the diversity of document structures, and the large volume of data are the main challenges for automatic classification. This research aims to optimize the DistilBERT model through a fine-tuning approach with a multi-task learning scheme that predicts two labels simultaneously: regulation status (Applicable / Not Applicable) and regulation type/form. The research stages comprise data collection, preprocessing, model training, and model evaluation. The model achieved high performance on both classification tasks, with 96% accuracy, 94% precision, 96% recall, and a 94% F1-score for regulation status classification, and perfect (100%) results on all evaluation metrics for regulation type/form classification, demonstrating the model's accuracy and reliability in understanding and classifying legal documents as a whole. This finding confirms that the optimized model is highly reliable for classifying the status of laws and regulations.
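To illustrate the multi-task scheme described in the abstract, the sketch below shows one way to attach two classification heads (regulation status and regulation type/form) to a shared DistilBERT encoder and train them with a joint loss. This is not the authors' implementation: the PyTorch/Hugging Face Transformers stack, the distilbert-base-uncased checkpoint, the number of type classes (n_type), the dropout rate, and the equal weighting of the two loss terms are assumptions made for illustration only.

```python
import torch.nn as nn
from transformers import DistilBertModel, DistilBertTokenizerFast


class MultiTaskDistilBert(nn.Module):
    """Shared DistilBERT encoder with two task-specific classification heads."""

    def __init__(self, n_status=2, n_type=10, model_name="distilbert-base-uncased"):
        super().__init__()
        self.encoder = DistilBertModel.from_pretrained(model_name)
        hidden = self.encoder.config.dim                # 768 for distilbert-base
        self.dropout = nn.Dropout(0.1)                  # assumed dropout rate
        self.status_head = nn.Linear(hidden, n_status)  # Applicable / Not Applicable
        self.type_head = nn.Linear(hidden, n_type)      # regulation type/form (count assumed)

    def forward(self, input_ids, attention_mask, status_labels=None, type_labels=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = self.dropout(out.last_hidden_state[:, 0])  # first-token representation
        status_logits = self.status_head(cls)
        type_logits = self.type_head(cls)
        loss = None
        if status_labels is not None and type_labels is not None:
            ce = nn.CrossEntropyLoss()
            # Joint objective: unweighted sum of the two task losses (equal weighting assumed).
            loss = ce(status_logits, status_labels) + ce(type_logits, type_labels)
        return loss, status_logits, type_logits


# Example forward pass on a single hypothetical document.
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = MultiTaskDistilBert()
batch = tokenizer(["Law No. 11 of 2020 on Job Creation ..."],
                  truncation=True, padding=True, return_tensors="pt")
_, status_logits, type_logits = model(batch["input_ids"], batch["attention_mask"])
print(status_logits.argmax(-1), type_logits.argmax(-1))
```

In this sketch both heads share one encoder, so a single forward pass produces predictions for both labels; fine-tuning then updates the encoder and both heads jointly on the combined loss.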
