
Emilio Medina

Class of 2025 · Mexico City, Mexico City

About

Projects

  • "Comparison of models for text generation" with mentor Joe (May 1, 2024)

Project Portfolio

Comparison of models for text generation

Started July 14, 2023

Abstract or project description

Sequence-to-sequence models are a family of machine learning encoder-decoder architectures designed for tasks involving sequential data. Sequential data is vast and of great significance, yet little research directly compares the performance of different sequence-to-sequence models. This paper gives a quantitative and qualitative comparison of an RNN, a GRU, an LSTM, and a Transformer model. The comparison uses the best-known sequence-to-sequence metrics: ROUGE, BLEU, and BERTScore. The analysis was carried out on the task of generating text in Homer's writing style from a small corpus. It was observed that, under these conditions, the automated scores (ROUGE and BLEU) are of little use, since they reward mimicking a reference sentence rather than capturing similarity to the writing style. It was also noted that the lack of data hurt the performance of the more complex models, supporting the claim that when little data is available, less complex models are more efficient. These findings are relevant because they offer a comparison between models for text generation tasks and suggest the need for more, and different, sequence-to-sequence evaluation metrics.
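The three metrics named in the abstract can each be computed with widely used open-source packages. The sketch below is not the project's actual code; it scores a single generated sentence against one reference line, assumes the `nltk`, `rouge-score`, and `bert-score` Python packages are installed, and uses illustrative placeholder sentences.

```python
# Minimal sketch (not the author's code): computing BLEU, ROUGE, and BERTScore
# for one generated sentence against one reference sentence.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer
from bert_score import score as bert_score

# Placeholder sentences standing in for a reference line and a model output.
reference = "Sing, O goddess, the anger of Achilles son of Peleus"
generated = "Sing, goddess, of the wrath of Achilles, Peleus' son"

# BLEU: n-gram precision against the reference (smoothed for short texts).
bleu = sentence_bleu(
    [reference.split()], generated.split(),
    smoothing_function=SmoothingFunction().method1,
)

# ROUGE: unigram and longest-common-subsequence overlap with the reference.
rouge = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge_scores = rouge.score(reference, generated)

# BERTScore: token similarity in contextual embedding space, which can credit
# stylistic or paraphrased matches that exact n-gram overlap misses.
_, _, f1 = bert_score([generated], [reference], lang="en")

print(f"BLEU:         {bleu:.3f}")
print(f"ROUGE-1 F:    {rouge_scores['rouge1'].fmeasure:.3f}")
print(f"ROUGE-L F:    {rouge_scores['rougeL'].fmeasure:.3f}")
print(f"BERTScore F1: {f1.item():.3f}")
```

The contrast between the n-gram overlap scores and the embedding-based BERTScore reflects the abstract's observation that BLEU and ROUGE reward verbatim mimicry of a sentence rather than similarity to a writing style.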