NK-Correa.

Hello! I am Nicholas Kluge.

Machine Learning Engineer

I am a postdoctoral researcher at the Center for Science and Thought at the University of Bonn (Germany). I am also the president of the PUC-RS chapter of the AI Robotics Ethics Society.

I work as a Machine Learning Engineer, and my main areas of research are AI Ethics, AI Safety, and AI Alignment. I like writing, developing code, studying, skateboarding, and playing D&D. If you would like to work with me, check out my ongoing research projects!

Projects

TeenyTinyLlama

TeenyTinyLlama is a pair of compact, open-source language models based on the Llama 2 architecture and trained on a Brazilian Portuguese corpus.

  • Transformers
  • PyTorch
  • Hugging Face

Aira

Aira is a series of chatbots built via instruction tuning and Direct Preference Optimization (DPO), developed as part of my doctoral dissertation, "Dynamic Normativity: Necessary and Sufficient Conditions for Value Alignment."

  • Transformers
  • PyTorch
  • Hugging Face
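The DPO technique mentioned above optimizes a policy directly on preference pairs, without a separate reward model. Here is a minimal, self-contained sketch of the per-pair DPO loss on toy log-probabilities (the function name and values are illustrative, not code from Aira):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair: -log(sigmoid(beta * margin)), where
    the margin measures how much more the policy (relative to a frozen
    reference model) prefers the chosen response over the rejected one."""
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_logratio - rejected_logratio)
    # -log(sigmoid(margin)) rewritten as log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))

# If the policy matches the reference, the margin is zero and the loss is
# log(2); shifting probability mass toward the chosen response lowers it.
baseline = dpo_loss(-5.0, -6.0, -5.0, -6.0)
improved = dpo_loss(-4.0, -7.0, -5.0, -6.0)
```

Training minimizes this loss over a dataset of (prompt, chosen, rejected) triples; beta controls how far the policy may drift from the reference model.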

Worldwide AI Ethics

Worldwide AI Ethics is a systematic review of 200 governance policies and ethical guidelines for AI that produced one of the most comprehensive databases of AI ethics principles.

  • Python
  • Dash

Teeny-Tiny Castle

Learn how to tackle ethical and safety issues in AI. This project is a series of interactive tutorials that teach you how to use AI responsibly.

  • Scikit-learn
  • TensorFlow
  • PyTorch

Model Library

The Model Library is a project that maps the risks associated with modern machine learning systems. Here, we assess some of the most recent and capable AI systems ever created.

  • Gradio

Sk8 Trick Classifier

This repository contains a dataset of accelerometry signals from skateboarding tricks, together with a trained ensemble of neural networks that distinguishes these tricks.

  • Scikit-learn
  • TensorFlow
  • PyTorch
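As a sketch of this classification approach (an ensemble of neural networks voting on windowed accelerometer features), here is a minimal scikit-learn example on synthetic data; the trick names, window size, and architectures are illustrative, not the repository's actual configuration:

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in for windowed accelerometry: three hypothetical trick
# classes, each window flattened to 30 features (e.g., 3 axes x 10 samples).
n_per_class, n_features = 100, 30
X = np.vstack([
    rng.normal(loc=offset, scale=1.0, size=(n_per_class, n_features))
    for offset in (-1.0, 0.0, 1.0)
])
y = np.repeat(["ollie", "kickflip", "shove-it"], n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Soft-voting ensemble of small MLPs with different hidden-layer sizes.
ensemble = VotingClassifier(
    estimators=[
        (f"mlp{h}", MLPClassifier(hidden_layer_sizes=(h,),
                                  max_iter=1000, random_state=h))
        for h in (16, 32, 64)
    ],
    voting="soft",  # average the members' predicted class probabilities
)
ensemble.fit(X_train, y_train)
accuracy = ensemble.score(X_test, y_test)
```

Soft voting averages each member's class probabilities, which tends to smooth out the individual networks' mistakes on ambiguous windows.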

AIRES at PUCRS

AIRES at PUCRS is the first international chapter of AIRES, where we focus on educating tomorrow's AI leaders in ethical AI principles, helping to ensure that AI is created ethically and responsibly.

  • PUC-RS
  • Philosophy
  • Computer Science

RAIES

RAIES (Network for Ethical and Safe Artificial Intelligence) seeks solutions that enable developers and companies building intelligent-system applications to institute policies that foster the development of ethical and safe AI.

  • PUC-RS
  • Navi
  • FAPERGS

Greenmotor.ai

Greenmotor is a startup that uses predictive models to reduce supermarket food waste.

  • Scikit-learn
  • TensorFlow
  • PyTorch

Recent Publications

Large language models (LLMs) have significantly advanced natural language processing, but their progress has yet to be equal across languages. While most LLMs are trained in high-resource languages like English, multilingual models generally underperform monolingual ones. Additionally, aspects of their multilingual foundation sometimes restrict the byproducts they produce, like computational demands and licensing regimes. In this study, we document the development of open-foundation models tailored for use in low-resource settings, their limitations, and their benefits. This is the TeenyTinyLlama pair: two compact models for Brazilian Portuguese text generation. We release them under the permissive Apache 2.0 license on GitHub and Hugging Face for community use and further development.
The utilization of artificial intelligence (AI) applications has experienced tremendous growth in recent years, bringing forth numerous benefits and conveniences. However, this expansion has also provoked ethical concerns, such as privacy breaches, algorithmic discrimination, security and reliability issues, transparency, and other unintended consequences. To determine whether a global consensus exists regarding the ethical principles that should govern AI applications and to contribute to the formation of future regulations, this paper conducts a meta-analysis of 200 governance policies and ethical guidelines for AI usage published by public bodies, academic institutions, private companies, and civil society organizations worldwide. We identified at least 17 resonating principles prevalent in the policies and guidelines of our dataset, released as an open-source database and tool. We present the limitations of performing a global-scale analysis, paired with a critical analysis of our findings, highlighting areas of consensus that should be incorporated into future regulatory efforts.
In this paper, "Technical Note: Risks of Using Facial Recognition Technologies for Public Safety Purposes," researchers from the AI Robotics Ethics Society (AIRES) and the Network for Ethical and Safe Artificial Intelligence (RAIES) present arguments from the technical (regarding the technology), legal (regarding the legality of FRTs), and ethical (regarding the ethical problems we face when using FRTs) arenas. We hope our work can be used to inform the discussion regarding the use of FRTs for public safety purposes.
Counterfactuals have become an important area of interdisciplinary interest, especially in logic, philosophy of language, epistemology, metaphysics, psychology, decision theory, and even artificial intelligence. In this study, we propose a new form of analysis for counterfactuals: analysis by algorithmic complexity, inspired by Lewis-Stalnaker's Possible Worlds Semantics. Engaging in a dialogue with the literature, this study seeks to bring new insights and tools to the debate so that the object of interest, counterfactuals, may be understood in an intuitively plausible and philosophically justifiable manner, aligned with the way we usually think about counterfactual propositions and our imaginative reasoning.
Meta-analyses of the AI Ethics research field point to convergence on certain ethical principles that supposedly govern the AI industry. However, little is known about the effectiveness of this form of "Ethics." In this paper, we conduct a critical analysis of the current state of AI Ethics and suggest that this form of governance, based on principled ethical guidelines, is not sufficient to norm the AI industry and its developers. We believe that drastic changes are necessary, both in the training processes of professionals in the fields related to the development of software and intelligent systems and in the increased regulation of these professionals and their industry. To this end, we suggest that law should benefit from recent contributions from bioethics to make the contributions of AI ethics to governance explicit in legal terms.
To contribute to the ethical-normative guidelines already proposed by Bills no. 21/2020, no. 5051/2019, and no. 872/2021, and to assist in the ethical and safe development of AI (avoiding distortions in the expected results and ensuring the accountability of those involved, when applicable), AIRES at PUCRS and the PPGD of the PUCRS Law School present this technical note, structured around the main topics addressed in the three bills, with Bill no. 21/2020 as the main focus. The analysis was made with objective arguments in accessible language, comparing each article of Bill no. 21/2020 with Bills no. 5051/2019 and no. 872/2021, which are being discussed together.
What do Cyberpunk and AI Ethics have to do with each other? One similarity between AI Ethics and Cyberpunk literature is that both seek to explore the future social and ethical problems that our technological advances may bring upon society. In recent years, an increasing number of ethical matters involving AI have been raised and debated, and several ethical principles and guides have been suggested as governance policies for the tech industry. However, would this be the role of AI Ethics? To serve as a soft and ambiguous version of the law? In this study, we seek to expose some of the deficits of the underlying power structures of the AI industry and suggest that AI governance be subject to public opinion, so that 'good AI' can become 'good AI for all'.
Are there any indications that a Technological Singularity may be on the horizon? In trying to answer this question, the authors provide a brief introduction to the field of AI safety research. The authors review some of the current paradigms in the development of autonomous intelligent systems, searching for evidence that may indicate the coming of a possible Technological Singularity. Finally, the authors present a reflection on the COVID-19 pandemic, which showed that global society's biggest problem in managing existential risks is its lack of coordination.
How can someone reconcile the desire to eat meat with a tendency toward vegetarian ideals? How should we reconcile contradictory moral values? How can we aggregate different moral theories? How can individual preferences be fairly aggregated to represent a will, norm, or social decision? Conflict resolution and preference aggregation are tasks that intrigue philosophers, economists, sociologists, decision theorists, and many other scholars, making this a rich interdisciplinary area for research. When trying to solve questions about moral uncertainty, a meta-understanding of the concept of normativity can help us develop strategies to deal with norms themselves. 2nd-order normativity, or norms about norms, is a hierarchical way to think about how to combine many different normative structures and preferences into a single coherent decision. That is what metanormativity is all about: a way to answer the question, what should we do when we don't know what to do? In this study, we review a decision-making strategy for dealing with moral uncertainty, Maximization of Expected Choice-Worthiness. Given the similarity of this metanormative strategy to expected utility theory, we also show that it is possible to integrate both models to address decision-making problems in situations of empirical and moral uncertainty.

Contact
