Enhancing mathematical reasoning in large language models
| dc.contributor.author | Doga, Imane | |
| dc.contributor.author | Lila, Fatma Zohra | |
| dc.date.accessioned | 2025-10-01T10:39:41Z | |
| dc.date.issued | 2025-10-01 | |
| dc.description.abstract | The automation of mathematical problem solving is a growing field driven by the evolution of language models. This work explores the use of four pre-trained large language models (GPT-2, Qwen2.5-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-Qwen-1.5B) to generate accurate solutions for math problems. The models were fine-tuned using recent methods, including Low-Rank Adaptation (LoRA), to enhance performance while optimizing resource usage. Evaluation across the different models allowed for performance comparison and highlighted the most effective approaches. The results demonstrate the potential of combining advanced language models with efficient fine-tuning to support Arabic-language mathematics education and automate problem-solving tasks. | |
| dc.identifier.citation | Doga, Imane; Lila, Fatma Zohra. Enhancing mathematical reasoning in large language models. Computer Science Department, Faculty of Exact Sciences, University of El Oued, 2025. | |
| dc.identifier.uri | https://archives.univ-eloued.dz/handle/123456789/39297 | |
| dc.language.iso | en | |
| dc.publisher | University of El Oued جامعة الوادي | |
| dc.subject | LLMs | |
| dc.subject | LoRA | |
| dc.subject | Qwen2.5-7B | |
| dc.subject | DeepSeek | |
| dc.subject | Math Problem Solving | |
| dc.subject | GPT-2 | |
| dc.title | Enhancing mathematical reasoning in large language models | |
| dc.type | master |
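The abstract reports fine-tuning the listed models with Low-Rank Adaptation (LoRA). Below is a minimal sketch of what such a setup can look like, assuming the Hugging Face transformers and peft libraries; the choice of GPT-2 and the hyperparameters (rank 8, alpha 16) are illustrative assumptions, not the configuration used in the thesis.

```python
# Minimal LoRA fine-tuning sketch (Hugging Face transformers + peft).
# Hyperparameters and the choice of GPT-2 are illustrative assumptions,
# not the configuration used in the thesis.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # smallest of the four models named in the abstract
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the pre-trained weights W and learns a low-rank update
# BA (rank r << hidden size), so the effective weight is W + BA.
config = LoraConfig(
    r=8,                        # rank of the update matrices (assumed)
    lora_alpha=16,              # scaling factor for the update (assumed)
    target_modules=["c_attn"],  # GPT-2's fused QKV attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights train
```

Because only the low-rank adapters are trained, the same recipe scales to the larger models in the record (Qwen2.5-7B and the DeepSeek-R1 distills) on comparatively modest hardware, which matches the abstract's stated goal of optimizing resource usage.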