Tucano: Advancing neural text generation for Portuguese
Published in Patterns, 2025
Abstract
Natural language processing has seen substantial progress in recent years. However, current deep-learning-based language models demand extensive data and computational resources. This data-intensive paradigm has led to a divide between high-resource languages, where development is thriving, and low-resource languages, which lag behind. To address this disparity, this study introduces a new set of resources to advance neural text generation for Portuguese. Here, we document the development of GigaVerbo, a Portuguese text corpus amounting to 200 billion tokens. Using this corpus, we trained Tucano, a family of decoder-only transformer models. Our models consistently outperform comparable Portuguese and multilingual models on several benchmarks. All models, datasets, and tools developed in this work are openly available to the community to support reproducible research.
BibTeX
@article{correa2025tucano,
  title={Tucano: Advancing neural text generation for Portuguese},
  author={Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
  journal={Patterns},
  volume={6},
  number={11},
  year={2025},
  publisher={Elsevier}
}
