TeenyTinyLlama: Open-source tiny language models trained in Brazilian Portuguese

Published in Machine Learning with Applications, 2024

Abstract

Large language models (LLMs) have significantly advanced natural language processing, but their progress has not been equal across languages. While most LLMs are trained in high-resource languages like English, multilingual models generally underperform monolingual ones. Additionally, aspects of their multilingual foundation, such as computational demands and licensing regimes, sometimes constrain the outputs they produce. In this study, we document the development of open-foundation models tailored for use in low-resource settings, their limitations, and their benefits. This is the TeenyTinyLlama pair: two compact models for Brazilian Portuguese text generation. We release them under the permissive Apache 2.0 license on GitHub and Hugging Face for community use and further development.
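
Since the models are released on Hugging Face, they can be loaded with the standard transformers API. The sketch below is illustrative only; the repository id is an assumption about where the checkpoints are hosted and may need to be adjusted.

# Minimal sketch: loading a TeenyTinyLlama checkpoint from the Hugging Face Hub
# and generating a short Portuguese continuation. The repository id below is an
# assumption, not confirmed by the abstract.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nicholasKluge/TeenyTinyLlama-160m"  # assumed Hub repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "A capital do Brasil é"  # "The capital of Brazil is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))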

BibTeX

@article{correa2024teenytinyllama,
  title={{TeenyTinyLlama}: open-source tiny language models trained in Brazilian Portuguese},
  author={Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
  journal={Machine Learning with Applications},
  volume={16},
  pages={100558},
  year={2024},
  publisher={Elsevier}
}