NK-Correa.

Hello! I am Nicholas Kluge.

Machine Learning Engineer

I am a postdoctoral researcher at the Center for Science and Thought at the University of Bonn (Germany). I am also the founder of the PUC-RS chapter of the AI Robotics Ethics Society (AIRES).

I work as an AI Researcher/Machine Learning Engineer, and my main areas of research are AI Ethics, AI Safety, and AI Alignment. I like writing, developing code, studying, skateboarding, and playing D&D. If you would like to work with me, check out my ongoing research projects!

Skills

Projects

Tucano

Tucano is a series of decoder-transformers based on the Llama 2 architecture, pretrained natively in Portuguese and designed to help democratize LLMs for low-resource languages (a minimal usage sketch follows the tags below).

  • Transformers
  • PyTorch
  • Hugging Face
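
A minimal text-generation sketch using the Hugging Face Transformers API. The model ID below is an assumption based on the project's Hub releases; check the Hub for the sizes actually available.

```python
# Hedged sketch: "TucanoBR/Tucano-1b1" is an assumed Hub ID, not verified here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TucanoBR/Tucano-1b1"  # assumed model ID; other sizes may exist
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short Portuguese continuation.
inputs = tokenizer("A capital do Brasil é", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```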

Ethical Problem-Solving

Ethical Problem-Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence.

  • Abstra

TeenyTinyLlama

TeenyTinyLlama is a pair of open-source compact language models based on the Llama 2 architecture and trained on a Brazilian Portuguese corpus (see the sketch after the tags below).

  • Transformers
  • PyTorch
  • Hugging Face
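
For a quicker start, the high-level pipeline API also works; the model ID here is likewise an assumption based on the project's Hugging Face releases.

```python
# Hedged sketch: "nicholasKluge/TeenyTinyLlama-460m" is an assumed Hub ID.
from transformers import pipeline

generator = pipeline("text-generation", model="nicholasKluge/TeenyTinyLlama-460m")
print(generator("O futebol brasileiro é", max_new_tokens=30)[0]["generated_text"])
```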

Aira

Aira is a series of chatbots built via instruction tuning and DPO, developed as part of my doctoral dissertation, "Dynamic Normativity: Necessary and Sufficient Conditions for Value Alignment" (the DPO objective is sketched below).

  • Transformers
  • PyTorch
  • Hugging Face
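
Since the project names DPO, here is a minimal sketch of the DPO objective itself; the function and variable names are illustrative, not taken from Aira's training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss from per-sequence log-probabilities.

    Each argument is a tensor of log p(y | x) for a batch of completions,
    under either the policy being trained or the frozen reference model.
    """
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    # Push the policy to prefer chosen completions over rejected ones.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs.
print(dpo_loss(*[torch.randn(4) for _ in range(4)]))
```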

Worldwide AI Ethics

Worldwide AI Ethics is a systematic review of 200 documents containing ethical guidelines for AI governance, which produced one of the most comprehensive databases of AI ethics principles (a minimal dashboard sketch follows the tags below).

  • Python
  • Dash
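
A minimal sketch of what a Dash dashboard over the database could look like; the principle counts below are placeholders, not the project's actual figures.

```python
from dash import Dash, dcc, html
import plotly.express as px

# Placeholder counts (illustrative only, not the real database).
counts = {"Transparency": 165, "Justice/Fairness": 157, "Accountability": 143}
fig = px.bar(x=list(counts), y=list(counts.values()),
             labels={"x": "Principle", "y": "Documents citing it"})

app = Dash(__name__)
app.layout = html.Div([html.H1("AI ethics principles"), dcc.Graph(figure=fig)])

if __name__ == "__main__":
    app.run(debug=True)
```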

Teeny-Tiny Castle

Learn how to tackle ethical and safety issues in AI. This project is a series of interactive tutorials that teach you how to use AI responsibly (a small example in that spirit follows the tags below).

  • Scikit-learn
  • TensorFlow
  • PyTorch
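
In the spirit of those tutorials (though not copied from them), here is a tiny fairness check: comparing a classifier's positive prediction rate across two groups on synthetic data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
group = rng.integers(0, 2, size=1000)  # synthetic sensitive attribute
y = (X[:, 0] + 0.5 * group + rng.normal(size=1000) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)

# Demographic parity difference: gap in positive prediction rates.
rates = [pred[group == g].mean() for g in (0, 1)]
print(f"positive rates: {rates}, gap: {abs(rates[0] - rates[1]):.3f}")
```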

Model Library

The Model Library is a project that maps the risks associated with modern machine learning systems. Here, we assess some of the most recent and capable AI systems ever created (an interface sketch follows the tags below).

  • Gradio
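
A minimal Gradio sketch of a model-risk browser in this style; the entries are hypothetical placeholders, not the Model Library's actual assessments.

```python
import gradio as gr

# Hypothetical entries for illustration only.
RISK_NOTES = {
    "Model A": "Strong text generation; documented misuse and bias risks.",
    "Model B": "Multimodal system; assessment pending.",
}

def lookup(model_name: str) -> str:
    return RISK_NOTES.get(model_name, "No assessment available.")

demo = gr.Interface(fn=lookup,
                    inputs=gr.Dropdown(choices=list(RISK_NOTES)),
                    outputs="text",
                    title="Model risk browser (sketch)")

if __name__ == "__main__":
    demo.launch()
```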

Sk8 Trick Classifier

This repository contains a dataset of accelerometry signals from skateboarding tricks, together with an ensemble of neural networks trained to distinguish those tricks (one ensemble member is sketched below).

  • Scikit-learn
  • TensorFlow
  • PyTorch
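
A sketch of one possible ensemble member: a small 1D CNN over accelerometry windows, with ensemble prediction by averaging softmax outputs. Window length, channel count, and number of tricks are illustrative, not the dataset's actual values.

```python
import tensorflow as tf

def make_member(window_len=200, channels=3, n_tricks=5):
    # Small 1D CNN over (time, accelerometer-axis) windows.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window_len, channels)),
        tf.keras.layers.Conv1D(32, 5, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(n_tricks, activation="softmax"),
    ])

# Ensemble prediction: average the class probabilities of the members
# (each would be trained independently before averaging).
members = [make_member() for _ in range(3)]
x = tf.random.normal((1, 200, 3))  # one fake accelerometry window
avg_probs = tf.reduce_mean([m(x) for m in members], axis=0)
print(avg_probs.numpy())
```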

AIRES at PUCRS

AIRES at PUCRS is the first international chapter of AIRES, focused on educating tomorrow's AI leaders in ethical principles so that AI is created ethically and responsibly.

  • PUC-RS
  • Philosophy
  • Computer Science

Recent Publications

Significant advances have been made in natural language processing in recent years. However, our current deep learning approach to language modeling requires substantial resources in terms of data and computation. One of the side effects of this data-hungry paradigm is the current schism between languages, separating those considered high-resource, where most of the development happens and resources are available, and the low-resource ones, which struggle to attain the same level of performance and autonomy. This study aims to introduce a new set of resources to stimulate the future development of neural text generation in Portuguese. In this work, we document the development of GigaVerbo, a concatenation of deduplicated Portuguese text corpora amounting to 200 billion tokens. Via this corpus, we trained a series of decoder-transformers named Tucano. Our models perform on par with or better than other Portuguese and multilingual language models of similar size on several Portuguese benchmarks. The evaluation of our models also reveals that model performance on many currently available benchmarks used by the Portuguese NLP community has little to no correlation with the scaling of token ingestion during training, highlighting the limitations of such evaluations when it comes to the assessment of Portuguese generative language models. All derivatives of our study are openly released on GitHub and Hugging Face.
This whitepaper offers normative and practical guidance for developers of artificial intelligence (AI) systems to achieve "Trustworthy AI". In it, we present overall ethical requirements and six ethical principles, with value-specific recommendations for tools to implement these principles into technology. Our value-specific recommendations address the principles of fairness, privacy and data protection, safety and robustness, sustainability, transparency and explainability, and truthfulness. For each principle, we also present examples of criteria for risk assessment and categorization of AI systems and applications, in line with the categories of the European Union (EU) AI Act. Our work is aimed at stakeholders who can take it as a potential blueprint to fulfill minimum ethical requirements for trustworthy AI and AI Certification.
The critical inquiry pervading the realm of Philosophy, and perhaps extending its influence across all Humanities disciplines, revolves around the intricacies of morality and normativity. Surprisingly, in recent years, this thematic thread has woven its way into an unexpected domain, one not conventionally associated with pondering "what ought to be": the field of artificial intelligence (AI) research. Central to morality and AI, we find "alignment", a problem related to the challenges of expressing human goals and values in a manner that artificial systems can follow without leading to unwanted adversarial effects. More explicitly and with our current paradigm of AI development in mind, we can think of alignment as teaching human values to non-anthropomorphic entities trained through opaque, gradient-based learning techniques. This work addresses alignment as a technical-philosophical problem that requires solid philosophical foundations and practical implementations that bring normative theory to AI system development. To accomplish this, we propose two sets of necessary and sufficient conditions that, we argue, should be considered in any alignment process. While necessary conditions serve as metaphysical and metaethical roots that pertain to the permissibility of alignment, sufficient conditions establish a blueprint for aligning AI systems under a learning-based paradigm. After laying such foundations, we present implementations of this approach by using state-of-the-art techniques and methods for aligning general-purpose language systems. We call this framework Dynamic Normativity. Its central thesis is that any alignment process under a learning paradigm that cannot fulfill its necessary and sufficient conditions will fail in producing aligned systems.
The past years have seen a surge in artificial intelligence (AI) development, fueled by breakthroughs in deep learning, increased computational power, and substantial investments in the field. Given the generative capabilities of more recent AI systems, the era of large-scale AI models has transformed various domains that intersect our daily lives. However, this progress raises concerns about the balance between technological advancement, ethical considerations, safety measures, and financial interests. Moreover, using such systems in sensitive areas amplifies our general ethical awareness, prompting a re-emergence of debates on governance, regulation, and human values. Amidst this landscape, how to bridge the principle-practice gap separating ethical discourse from the technical side of AI development remains an open problem. In response to this challenge, the present work proposes a framework to help shorten this gap: ethical problem-solving (EPS). EPS is a methodology promoting responsible, human-centric, and value-oriented AI development. The framework's core resides in translating principles into practical implementations using impact assessment surveys and a differential recommendation methodology. We utilize EPS as a blueprint to propose the implementation of an Ethics as a Service Platform, currently available as a simple demonstration. We released all framework components openly and with a permissive license, hoping the community would adopt and extend our efforts into other contexts. Available at this URL.
Large language models (LLMs) have significantly advanced natural language processing, but their progress has yet to be equal across languages. While most LLMs are trained in high-resource languages like English, multilingual models generally underperform monolingual ones. Additionally, aspects of their multilingual foundation sometimes restrict the byproducts they produce, like computational demands and licensing regimes. In this study, we document the development of open-foundation models tailored for use in low-resource settings, their limitations, and their benefits. This is the TeenyTinyLlama pair: two compact models for Brazilian Portuguese text generation. We release them under the permissive Apache 2.0 license on GitHub and Hugging Face for community use and further development.
The utilization of artificial intelligence (AI) applications has experienced tremendous growth in recent years, bringing forth numerous benefits and conveniences. However, this expansion has also provoked ethical concerns, such as privacy breaches, algorithmic discrimination, security and reliability issues, transparency, and other unintended consequences. To determine whether a global consensus exists regarding the ethical principles that should govern AI applications, and to contribute to the formation of future regulations, this paper conducts a meta-analysis of 200 governance policies and ethical guidelines for AI usage published by public bodies, academic institutions, private companies, and civil society organizations worldwide. We identified at least 17 resonating principles prevalent in the policies and guidelines of our dataset, released as an open-source database and tool. We present the limitations of performing a global-scale analysis of this kind, paired with a critical analysis of our findings, and highlight areas of consensus that should be incorporated into future regulatory efforts.
In this paper, "Technical Note: Risks of Using Facial Recognition Technologies for Public Safety Purposes," researchers from the AI Robotics Ethics Society (AIRES) and the Network for Ethical and Safe Artificial Intelligence (RAIES) present arguments from the technical (regarding the technology), legal (regarding the legality of FRTs), and ethical (regarding the ethical problems we face when using FRTs) arenas. We hope our work can be used to inform the discussion regarding the use of FRTs for public safety purposes.
Counterfactuals have become an important area of interdisciplinary interest, especially in logic, philosophy of language, epistemology, metaphysics, psychology, decision theory, and even artificial intelligence. In this study, we propose a new form of analysis for counterfactuals: analysis by algorithmic complexity, inspired by Lewis-Stalnaker's Possible Worlds Semantics. Engaging in a dialogue with the literature, this study seeks to bring new insights and tools to the debate, so that counterfactuals may be understood in an intuitively plausible and philosophically justifiable manner, aligned with the way we usually think about counterfactual propositions and our imaginative reasoning.
Meta-analyses of the AI Ethics research field point to convergence on certain ethical principles that supposedly govern the AI industry. However, little is known about the effectiveness of this form of "Ethics." In this paper, we conduct a critical analysis of the current state of AI Ethics and suggest that this form of governance, based on principled ethical guidelines, is not sufficient to norm the AI industry and its developers. We believe that drastic changes are necessary, both in the training processes of professionals in the fields related to the development of software and intelligent systems and in the increased regulation of these professionals and their industry. To this end, we suggest that law should benefit from recent contributions from bioethics to make the contributions of AI ethics to governance explicit in legal terms.
To contribute to the ethical-normative guidelines already proposed by Brazilian Bills no. 21/2020, no. 5051/2019, and no. 872/2021, and to assist in the ethical and safe development of AI, avoiding distortions in the expected results and supporting the accountability of those involved (when applicable), AIRES at PUCRS and the PPGD of the PUCRS Law School present this technical note, structured around the main topics addressed in the three bills, with Bill no. 21/2020 as the main focus. The analysis was made with objective arguments and in accessible language, comparing each of the articles in Bill no. 21/2020 with Bills no. 5051/2019 and no. 872/2021, which are being discussed together.
What do Cyberpunk and AI Ethics have to do with each other? One similarity between AI Ethics and Cyberpunk literature is that both seek to explore future social and ethical problems that our technological advances may bring upon society. In recent years, an increasing number of ethical matters involving AI have been raised and debated, and several ethical principles and guides have been suggested as governance policies for the tech industry. However, would this be the role of AI Ethics? To serve as a soft and ambiguous version of the law? In this study, we seek to expose some of the deficits of the underlying power structures of the AI industry and suggest that AI governance be subject to public opinion, so that 'good AI' can become 'good AI for all'.
Are there any indications that a Technological Singularity may be on the horizon? In trying to answer this question, the authors give a short introduction to the area of safety research in artificial intelligence. They review some of the current paradigms in the development of autonomous intelligent systems, searching for evidence that may indicate the coming of a possible Technological Singularity. Finally, they present a reflection using the COVID-19 pandemic, which showed that our biggest problem in managing existential risks is our lack of coordination as a global society.
How can someone reconcile the desire to eat meat with a tendency toward vegetarian ideals? How should we reconcile contradictory moral values? How can we aggregate different moral theories? How can individual preferences be fairly aggregated to represent a will, norm, or social decision? Conflict resolution and preference aggregation are tasks that intrigue philosophers, economists, sociologists, decision theorists, and many other scholars, making this a rich interdisciplinary area for research. When trying to solve questions about moral uncertainty, a meta-understanding of the concept of normativity can help us develop strategies to deal with norms themselves. Second-order normativity, or norms about norms, is a hierarchical way to think about how to combine many different normative structures and preferences into a single coherent decision. That is what metanormativity is all about: a way to answer the question, what should we do when we don't know what to do? In this study, we review a decision-making strategy for dealing with moral uncertainty, Maximization of Expected Choice-Worthiness (MEC). Given the similarity of this metanormative strategy to expected utility theory, we also show that it is possible to integrate both models to address decision-making problems in situations of empirical and moral uncertainty. A toy computation of MEC is sketched below.
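
A toy computation of the MEC strategy described above: weight each theory's choice-worthiness by your credence in that theory and pick the option with the highest expected value. All numbers are illustrative.

```python
import numpy as np

credences = np.array([0.6, 0.4])   # credence in each moral theory
# Choice-worthiness of each option under each theory (rows: theories).
cw = np.array([[10.0, 2.0],        # e.g., a consequentialist theory
               [-5.0, 4.0]])       # e.g., a deontological theory
options = ["eat meat", "eat vegetarian"]

expected_cw = credences @ cw       # expected choice-worthiness per option
best = options[int(np.argmax(expected_cw))]
print(dict(zip(options, expected_cw)), "->", best)
```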

Contact

Contact me