Enhancing mathematical reasoning in large language models

dc.contributor.authorDoga, Imane
dc.contributor.authorLila, Fatma Zohra
dc.date.accessioned2025-10-01T10:39:41Z
dc.date.issued2025-10-01
dc.description.abstractThe automation of mathematical problem solving is a growing field driven by the evolution of language models. This work explores the use of four pre-trained large language models (GPT-2, Qwen2.5-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-Qwen-1.5B) to generate accurate solutions for math problems. The models were fine-tuned using recent methods, including Low-Rank Adaptation (LoRA), to enhance performance while optimizing resource usage. Evaluation across the different models allowed for performance comparison and highlighted the most effective approaches. The results demonstrate the potential of combining advanced language models with efficient fine-tuning to support Arabic-language mathematics education and automate problem-solving tasks.
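The abstract describes fine-tuning with Low-Rank Adaptation (LoRA). As a minimal conceptual sketch (not the thesis's actual training code; all dimensions and names here are illustrative assumptions), LoRA freezes a pretrained weight matrix W and learns only a low-rank update B @ A:

```python
import numpy as np

# Conceptual sketch of Low-Rank Adaptation (LoRA): instead of updating the
# full weight matrix W (d x k), train only a rank-r update B @ A.
# All dimensions below are illustrative, not the thesis's settings.
d, k, r = 64, 64, 4
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable, rank r
B = np.zeros((d, r))                     # trainable, zero-initialized

def lora_forward(x):
    # Effective weight is W + B @ A; since B = 0 at init,
    # the adapted model starts out identical to the base model.
    return x @ (W + B @ A).T

x = rng.standard_normal((1, k))
assert np.allclose(lora_forward(x), x @ W.T)  # B = 0 => no change at init

# Parameter savings: d*r + r*k trainable values vs d*k for full fine-tuning.
trainable = d * r + r * k
full = d * k
print(f"trainable fraction: {trainable / full:.3f}")
```

For d = k = 64 and r = 4 this trains only 512 of 4096 parameters, which illustrates the resource savings the abstract refers to; libraries such as Hugging Face PEFT apply the same idea to the attention projections of large models.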
dc.identifier.citationDoga, Imane; Lila, Fatma Zohra. Enhancing mathematical reasoning in large language models. Computer Science Department, Faculty of Exact Sciences, University of El Oued, 2025.
dc.identifier.urihttps://archives.univ-eloued.dz/handle/123456789/39297
dc.language.isoen
dc.publisherUniversity of El Oued (جامعة الوادي)
dc.subjectLLMs
dc.subjectLoRA
dc.subjectQwen2.5-7B
dc.subjectDeepSeek
dc.subjectMath Problem Solving
dc.subjectGPT-2
dc.titleEnhancing mathematical reasoning in large language models
dc.typemaster

Files

Original bundle

Name:
memoire_1_tex__1_ (1) - O- Ima.pdf
Size:
1.66 MB
Format:
Adobe Portable Document Format

License bundle

Name:
license.txt
Size:
1.71 KB
Format:
Plain Text
Description:
Item-specific license agreed upon submission