Polygence Scholar 2024

Ishaan Domkundwar

Class of 2025 | Pune, Maharashtra

Projects

  • "Improving the ability of transformers to perform pattern recognition and reasoning tasks" with mentor Alex (Sept. 28, 2024)

Ishaan's Symposium Presentation

Project Portfolio

Improving the ability of transformers to perform pattern recognition and reasoning tasks

Started May 23, 2024

Abstract or project description

This paper provides a comprehensive evaluation of advanced prompting techniques and the improvements they yield in logical reasoning for large language models (LLMs), with a particular focus on transformer architectures. Strong reasoning is essential for LLMs to propose effective solutions, especially in technical settings. The study addresses the limitations of current LLMs in handling complex pattern recognition and sequence prediction tasks by evaluating these prompting techniques across several models. We assess the effectiveness of methods such as Chain-of-Thought (CoT) prompting on LLMs including GPT-4o, Meta Llama 3.1-70B, Mixtral 8x7B v0.1, and Google Gemma 2. We find that multiple prompting techniques consistently enhance the reasoning capabilities of LLMs, leading to notable improvements on complex tasks, especially for GPT-4o and Meta Llama 3.1-70B. While techniques such as zero-shot CoT and retrieval-based prompting show promise, CoT stands out as the most effective, raising GPT-4o's score on the hard test set from 47% to 90%. Some models introduce arithmetic errors as a side effect of following the prompting techniques. The paper's findings offer insight into the strengths and limitations of current LLM prompting strategies, with implications for improving future model development through prompt-aware fine-tuning and architectural adaptations.
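
For readers unfamiliar with the comparison, the sketch below illustrates the difference between direct prompting and zero-shot CoT prompting of the kind the project evaluates. The sequence task, the model name, and the use of the OpenAI Python client are illustrative assumptions, not the study's actual evaluation harness.

```python
# Minimal sketch of zero-shot Chain-of-Thought (CoT) prompting versus direct
# prompting. The task, model, and client are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

task = "What is the next number in the sequence 2, 6, 12, 20, 30, ...?"

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Direct prompting: the model is asked for the answer with no further cue.
direct_answer = ask(task)

# Zero-shot CoT: the same task plus a cue that elicits step-by-step reasoning
# before the final answer ("Let's think step by step").
cot_answer = ask(task + "\nLet's think step by step.")

print("Direct:", direct_answer)
print("Zero-shot CoT:", cot_answer)
```

The appeal of zero-shot CoT in an evaluation like this one is that it changes only the prompt, not the model, so any score difference on the same test set can be attributed to the prompting technique itself.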