Enhancing mathematical reasoning in large language models

Publisher

University of El Oued (جامعة الوادي)

Abstract

The automation of mathematical problem solving is a growing field driven by the evolution of language models. This work explores the use of four pre-trained large language models (GPT-2, Qwen2.5-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-Qwen-1.5B) to generate accurate solutions to mathematical problems. The models were fine-tuned using recent methods, notably Low-Rank Adaptation (LoRA), to enhance performance while optimizing resource usage. Evaluating these models allowed their performance to be compared and highlighted the most effective approaches. The results demonstrate the potential of combining advanced language models with efficient fine-tuning to support mathematics education in Arabic and to automate problem-solving tasks.
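To illustrate the kind of fine-tuning the abstract refers to, the sketch below shows a minimal LoRA setup with Hugging Face Transformers and PEFT. It is not the authors' implementation: the checkpoint name, rank, scaling factor, and target modules are assumed, typical values, and the training data and hyperparameters used in the thesis are not reproduced here.

```python
# Minimal LoRA fine-tuning sketch (illustrative only; hyperparameters are assumed).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2.5-7B"  # one of the four models compared in the work
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Low-Rank Adaptation trains small rank-decomposition matrices instead of
# updating all base-model weights, keeping memory and compute requirements modest.
lora_config = LoraConfig(
    r=16,                                  # rank of the update matrices (assumed)
    lora_alpha=32,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # attention projections (a common choice)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model
```

With this setup, only the LoRA adapter weights are updated during training, which is what makes fine-tuning large checkpoints such as DeepSeek-R1-Distill-Qwen-14B feasible on limited hardware.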

Citation

Doga Imane, Lila, Fatma Zohra. Enhancing mathematical reasoning in large language models. Computer Science Department, Faculty of Exact Sciences, University of El Oued, 2025.
