Analyzing ChatGPT's Problem-Solving Capabilities in Arabic-Based Software Tasks

Hani Al-Bloush, Laith Al Shehab, Ashraf A. Odeh

Abstract

This study investigates ChatGPT’s performance in generating and optimizing Python code in both English and Arabic, addressing the challenges of low-resource languages. The objective is to compare how ChatGPT versions 3.5 and 4.0 handle standard algorithms (Linear Search, Binary Search, and Quick Sort) in terms of code generation, optimization, and readability. Using a comparative experimental design, key performance metrics such as time complexity, execution speed, and error rates were analyzed. The results reveal substantial disparities between the two languages: English prompts yielded efficient code generation, minimal errors, and improved optimization, while Arabic prompts produced higher error rates, slower execution, and limited performance gains despite optimization. These findings highlight the limitations of AI models in low-resource linguistic environments, underscoring the need for fine-tuning to enhance global applicability. This study contributes to advancing the understanding of AI coding tools and their ability to support diverse linguistic contexts, particularly in underrepresented languages like Arabic.
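For reference, the three algorithms the study evaluates can be sketched in Python as follows. These are standard textbook implementations, not the code ChatGPT actually produced during the experiments:

```python
def linear_search(items, target):
    """Scan left to right; O(n) time, works on unsorted lists."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(items, target):
    """Halve the sorted search range each step; O(log n) time."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def quick_sort(items):
    """Partition around a pivot and recurse; O(n log n) average time."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)

data = [7, 2, 9, 4, 4, 1]
print(linear_search(data, 9))          # → 2
print(quick_sort(data))                # → [1, 2, 4, 4, 7, 9]
print(binary_search(sorted(data), 4))  # → 2
```

Note that Binary Search assumes sorted input, which is why the metrics compared in the study (time complexity and execution speed) differ sharply between the search algorithms.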
