Recent Publications
Tucano: Advancing Neural Text Generation for Portuguese
Significant advances have been made in natural language processing in recent years. However, our current deep learning approach to language
modeling requires substantial resources in terms of data and computation. One of the side effects of this data-hungry
paradigm is the current schism between languages, separating those considered high-resource, where most of the development
happens and resources are available, and the low-resource ones, which struggle to attain the same level of performance and
autonomy. This study aims to introduce a new set of resources to stimulate the future development of neural text generation
in Portuguese. In this work, we document the development of
GigaVerbo, a concatenation of deduplicated
Portuguese text corpora amounting to 200 billion tokens. Via this corpus, we trained a series of decoder-transformers
named
Tucano. Our models perform on par with or better than other Portuguese and multilingual language models of similar size on several
Portuguese benchmarks. The evaluation of our models also reveals that model performance on many currently available benchmarks
used by the Portuguese NLP community has
little to no correlation with the scaling of token ingestion during
training, highlighting the limitations of such evaluations when it comes to the assessment of Portuguese generative language models.
All derivatives of our study are openly released on
GitHub and
Hugging Face.
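
The correlation claim above is easy to illustrate in code. Below is a minimal sketch, in Python, of how one could correlate tokens ingested with per-checkpoint benchmark scores; all numbers are placeholders, not results from the paper.

# Minimal sketch: correlating tokens ingested with benchmark performance
# across training checkpoints. All values are placeholders, NOT results
# reported in the Tucano paper.
from scipy.stats import pearsonr, spearmanr

# Tokens seen at each saved checkpoint (in billions).
tokens_seen = [10, 25, 50, 100, 150, 200]
# Hypothetical scores of each checkpoint on one Portuguese benchmark.
benchmark_scores = [0.31, 0.29, 0.33, 0.30, 0.32, 0.31]

r, r_p = pearsonr(tokens_seen, benchmark_scores)
rho, rho_p = spearmanr(tokens_seen, benchmark_scores)
print(f"Pearson r = {r:.2f} (p = {r_p:.2f})")
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.2f})")
# Coefficients near zero, as in this toy series, would indicate that extra
# training tokens did not translate into better benchmark scores.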
Dynamic Normativity: Necessary and Sufficient Conditions for Value Alignment
The critical inquiry pervading the realm of Philosophy, and perhaps extending its influence across all Humanities disciplines, revolves around the
intricacies of morality and normativity. Surprisingly, in recent years, this thematic thread has woven its way into an unexpected domain, one not
conventionally associated with pondering "what ought to be": the field of artificial intelligence (AI) research. Central to morality and AI, we find
"alignment", a problem related to the challenges of expressing human goals and values in a manner that artificial systems can follow without leading
to unwanted adversarial effects. More explicitly and with our current paradigm of AI development in mind, we can think of alignment as teaching human
values to non-anthropomorphic entities trained through opaque, gradient-based learning techniques. This work addresses alignment as a technical-philosophical
problem that requires solid philosophical foundations and practical implementations that bring normative theory to AI system development.
To accomplish this, we propose two sets of necessary and sufficient conditions that, we argue, should be considered in any alignment process.
While necessary conditions serve as metaphysical and metaethical roots that pertain to the permissibility of alignment, sufficient conditions
establish a blueprint for aligning AI systems under a learning-based paradigm. After laying such foundations, we present implementations of this
approach by using state-of-the-art techniques and methods for aligning general-purpose language systems. We call this framework
Dynamic Normativity.
Its central thesis is that any alignment process under a learning paradigm that cannot fulfill its necessary and sufficient conditions will fail to
produce aligned systems.
Crossing the principle-practice gap in AI ethics with ethical problem-solving
Recent years have seen a surge in artificial intelligence (AI) development, fueled by breakthroughs in deep learning, increased computational
power, and substantial investments in the field. Given the generative capabilities of more recent AI systems, the era of
large-scale AI models has transformed various domains that intersect our daily lives. However, this progress raises concerns
about the balance between technological advancement, ethical considerations, safety measures, and financial interests.
Moreover, using such systems in sensitive areas amplifies our general ethical awareness, prompting a re-emergence of
debates on governance, regulation, and human values. Yet, amidst this landscape, how to bridge the principle-practice
gap separating ethical discourse from the technical side of AI development remains an open problem. In response to this
challenge, the present work proposes a framework to help shorten this gap: ethical problem-solving (EPS). EPS is a
methodology promoting responsible, human-centric, and value-oriented AI development. The framework's core resides in
translating principles into practical implementations using impact assessment surveys and a differential recommendation
methodology. We utilize EPS as a blueprint to propose the implementation of an Ethics as a Service Platform, currently
available as a simple demonstration. We release all framework components openly under a permissive license, hoping
the community will adopt and extend our efforts into other contexts. Available at
this URL.
TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese
Large language models (LLMs) have significantly advanced natural language processing, but their progress has not been
equal across languages. While most LLMs are trained on high-resource languages like English, multilingual models generally
underperform monolingual ones. Additionally, aspects of their multilingual foundations, like heavier computational demands
and restrictive licensing regimes, can limit the byproducts they produce. In this study, we document the development of
open-foundation models tailored for use in low-resource settings, their limitations, and their benefits. This is the
TeenyTinyLlama
pair: two compact models for Brazilian Portuguese text generation. We release them under the permissive Apache 2.0 license
on
GitHub and
Hugging Face for community use and further development.
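
A minimal usage sketch with the transformers library follows; the repository id is our assumption about the release namespace, so consult the linked Hugging Face page for the exact model names.

# Minimal sketch: loading a TeenyTinyLlama checkpoint for text generation.
# The repository id is an assumed example; check the Hugging Face page
# linked above for the exact names.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "nicholasKluge/TeenyTinyLlama-160m"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Generate a short continuation for a Brazilian Portuguese prompt.
inputs = tokenizer("A capital do Brasil é", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))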
Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance
The utilization of artificial intelligence (AI) applications has experienced tremendous growth in recent years, bringing forth
numerous benefits and conveniences. However, this expansion has also provoked ethical concerns, such as privacy breaches,
algorithmic discrimination, security and reliability issues, transparency, and other unintended consequences.
To determine whether a global consensus exists regarding the ethical principles that should govern AI applications
and to contribute to the formation of future regulations, this paper conducts a meta-analysis of 200 governance
policies and ethical guidelines for AI usage published by public bodies, academic institutions, private companies,
and civil society organizations worldwide.
We identified at least 17 resonating principles prevalent in the policies and guidelines of our dataset, which we release as an
open-source database and tool. We discuss the limitations of performing a global-scale analysis, paired with a
critical assessment of our findings, and highlight areas of consensus that should be incorporated into future
regulatory efforts.
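
As a rough illustration of the tallying behind such a meta-analysis, the sketch below counts principle mentions across annotated documents; the labels and data are invented for illustration, not figures from the released database.

# Toy sketch: tallying ethical principles across annotated policy documents.
# Labels and data are invented for illustration, not taken from the dataset.
from collections import Counter

# Each policy document annotated with the principles it endorses.
documents = {
    "policy_001": ["transparency", "privacy", "accountability"],
    "policy_002": ["transparency", "fairness"],
    "policy_003": ["privacy", "fairness", "transparency"],
}

counts = Counter(p for principles in documents.values() for p in principles)
for principle, n in counts.most_common():
    print(f"{principle}: in {n}/{len(documents)} documents ({n / len(documents):.0%})")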
Risks of Using Facial Recognition Technologies in Public Security Applications
Counterfactual Analysis by Algorithmic Complexity: A metric between possible worlds
Counterfactuals have become an important area of interdisciplinary interest, especially in logic, philosophy of language, epistemology,
metaphysics, psychology, decision theory, and even artificial intelligence. In this study, we propose a new form of analysis for
counterfactuals: analysis by algorithmic complexity, inspired by Lewis-Stalnaker's Possible Worlds Semantics.
Engaging in a dialogue with the literature, this study seeks to bring new insights and tools to the debate so that the object of interest,
counterfactuals, may be understood in an intuitively plausible and philosophically justifiable manner, aligned with the way we
usually think about counterfactual propositions and our imaginative reasoning.
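
Since Kolmogorov complexity is uncomputable, compression-based proxies are a common stand-in for metrics of this kind. The sketch below uses the normalized compression distance (NCD) as a generic approximation for our illustration; it is not the specific measure defined in the paper.

# Illustrative sketch: approximating an algorithmic-complexity distance
# between descriptions of possible worlds with the normalized compression
# distance (NCD). This is a generic proxy, not the paper's exact metric.
import zlib

def c(s: bytes) -> int:
    """Compressed length as a proxy for algorithmic complexity."""
    return len(zlib.compress(s, 9))

def ncd(x: str, y: str) -> float:
    """Normalized compression distance between two world descriptions."""
    bx, by = x.encode(), y.encode()
    cx, cy, cxy = c(bx), c(by), c(bx + by)
    return (cxy - min(cx, cy)) / max(cx, cy)

actual = "Kennedy was shot in Dallas in 1963 and died."
counterfactual = "Kennedy was shot in Dallas in 1963 and survived."
print(f"Distance between worlds: {ncd(actual, counterfactual):.3f}")
# In Lewis-Stalnaker semantics, worlds at a smaller distance from the actual
# world count as "closer" when evaluating a counterfactual.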
On the efficiency of ethics as a governing tool for artificial intelligence
Meta-analyses of the AI Ethics research field point to convergence on certain ethical principles that supposedly govern
the AI industry. However, little is known about the effectiveness of this form of "Ethics." In this paper, we would like
to conduct a critical analysis of the current state of AI Ethics and suggest that this form of governance based on principled
ethical guidelines is not sufficient to norm the AI industry and its developers. We believe that drastic changes are necessary,
both in the training processes of professionals in the fields related to the development of software and intelligent systems
and in the increased regulation of these professionals and their industry.
To this end, we suggest that law should benefit from recent contributions from bioethics to make the contributions of AI ethics
to governance explicit in legal terms.
Progress in the Federal Senate 2022: PL 21/20 - PL 5051/19 - PL 872/21
Good AI for the Present of Humanity: Democratizing AI Governance
Singularity and Coordination Problems: Pandemic Lessons from 2020
Metanormativity: Solving questions of moral and empirical uncertainty
How can someone reconcile the desire to eat meat and a tendency toward vegetarian ideals? How should we reconcile
contradictory moral values? How can we aggregate different moral theories? How can individual preferences be
fairly aggregated to represent a will, norm, or social decision? Conflict resolution and preference aggregation
are tasks that intrigue philosophers, economists, sociologists, decision theorists, and many other scholars,
being a rich interdisciplinary area for research. When trying to solve questions of moral uncertainty, a
meta-level understanding of the concept of normativity can help us develop strategies to deal with norms themselves.
Second-order normativity, or norms about norms, is a hierarchical way to think about how to combine many
different normative structures and preferences into a single coherent decision. That is what metanormativity
is all about, a way to answer: what should we do when we don't know what to do? In this study, we will review
a decision-making strategy dealing with moral uncertainty, Maximization of Expected Choice-Worthiness.
Given the similarity of this metanormative strategy to expected utility theory, we will also show that it is possible
to integrate both models to address decision-making problems in situations of empirical and moral uncertainty.
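
As a concrete illustration of the strategy reviewed here, the sketch below implements Maximization of Expected Choice-Worthiness over toy credences and choice-worthiness values; all numbers are illustrative, not taken from the study.

# Minimal sketch of Maximization of Expected Choice-Worthiness (MEC):
# weight each theory's choice-worthiness assignment by the agent's credence
# in that theory, then pick the option with the highest expected value.
# Credences and choice-worthiness values are illustrative placeholders.

# Credence (subjective probability) in each moral theory; sums to 1.
credences = {"utilitarianism": 0.6, "deontology": 0.4}

# Choice-worthiness of each option under each theory (cardinal scale).
choice_worthiness = {
    "eat_meat":       {"utilitarianism": -2.0, "deontology": 1.0},
    "eat_vegetarian": {"utilitarianism": 3.0, "deontology": 0.5},
}

def expected_cw(option: str) -> float:
    """Expected choice-worthiness, mirroring expected utility theory."""
    return sum(credences[t] * cw for t, cw in choice_worthiness[option].items())

for option in choice_worthiness:
    print(f"{option}: {expected_cw(option):.2f}")
print("MEC recommends:", max(choice_worthiness, key=expected_cw))

The structural parallel with expected utility theory (credences playing the role of probabilities, choice-worthiness the role of utility) is what makes the integration of the two models discussed above possible.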