
NK-Correa.

Hello! I am Nicholas Kluge.

Machine Learning Engineer

I am a Ph.D. student and hold a Master's degree in Electrical Engineering from the Pontifical Catholic University of Rio Grande do Sul (PUC-RS). Currently, I am finishing my Ph.D. at the University of Bonn (Germany).

I am also the president of the PUC-RS chapter of the AI Robotics Ethics Society.

I work as a Machine Learning Engineer, and my main areas of research are AI Ethics, AI Safety, and AI Alignment. I like writing, programming, studying, skateboarding, and playing D&D. If you would like to work with me, check out my ongoing research projects!

Skills

Projects

Teeny-Tiny Castle

Learn how to create and use tools for addressing safety issues in AI.
You can also find an introductory course on ML organized by AIRES at PUCRS!

  • Scikit-learn
  • Keras
  • PyTorch

AIRES Playground

The AIRES Playground is a web app that contains all of the tools and research results made possible by AIRES at PUCRS. There, one can find language model playgrounds, chatbots, tutorials, research results, and much more.

  • Python
  • Dash
  • Hugging Face

Ai.ra

Ai.ra is a chatbot designed to provide definitions on topics related to artificial intelligence, machine learning, AI ethics, and AI safety.

  • Python
  • Keras
  • Dash

AIRES at PUCRS

AIRES at PUCRS is the first international chapter of AIRES, where we focus on educating tomorrow's AI leaders in ethical AI principles to ensure AI is created ethically and responsibly.

  • PUC-RS
  • Brown
  • UCLA

Worldwide AI Ethics

Here you can find the source code used to create our Worldwide AI Ethics dashboard, a systematic review of 200 documents related to AI ethics and governance.

  • Python
  • Dash

RAIES

RAIES (Network for Ethical and Safe Artificial Intelligence) seeks solutions that enable developers and companies producing intelligent-system applications to institute policies that foster the development of ethical and safe AI.

  • PUC-RS
  • Navi
  • FAPERGS

Sk8 Trick Classifier

This repository contains accelerometry signals from skateboarding tricks, together with a trained neural network to distinguish these tricks.

  • Python
  • Keras
  • TensorFlow

Password Cracking

A simple Dash app designed to illustrate the vulnerabilities associated with having weak passwords (encrypted with broken hashes).

  • Python
  • Dash

Greenmotor.ai

Greenmotor is a startup that uses artificial intelligence to reduce supermarket food waste. Greenmotor provides accurate sales forecasts, ideal stock levels, purchase proposals, and pricing for the supermarket segment.

  • Python
  • TensorFlow

Recent Publications

In the last decade, a great number of organizations have produced documents intended to standardize, in the normative sense, and promote guidance for our recent and rapid AI development. However, the full content and divergence of ideas presented in these documents have not yet been analyzed, except for a few meta-analyses and critical reviews of the field. In this work, we seek to expand on the work done by past researchers and create a tool for better data visualization of the contents and nature of these documents. We also provide our critical analysis of the results acquired by applying our tool to a sample of 200 documents.
In this paper, "Technical Note: Risks of Using Facial Recognition Technologies for Public Safety Purposes," researchers from the AI Robotics Ethics Society (AIRES) and the Network for Ethical and Safe Artificial Intelligence (RAIES) present arguments from the technical (regarding the technology), legal (regarding the legality of FRTs), and ethical (regarding the ethical problems we face when using FRTs) arenas. We hope our work can be used to inform the discussion regarding the use of FRTs for public safety purposes.
Counterfactuals have become an important area of interdisciplinary interest, especially in logic, philosophy of language, epistemology, metaphysics, psychology, decision theory, and even artificial intelligence. In this study, we propose a new form of analysis for counterfactuals: analysis by algorithmic complexity, inspired by Lewis-Stalnaker's Possible Worlds Semantics. Engaging in a dialogue with the literature, this study seeks to bring new insights and tools to the debate, so that the object of interest, counterfactuals, may be understood in an intuitively plausible and philosophically justifiable manner, aligned with the way we usually think about counterfactual propositions and our imaginative reasoning.
Meta-analyses of the AI Ethics research field point to convergence on certain ethical principles that supposedly govern the AI industry. However, little is known about the effectiveness of this form of "Ethics." In this paper, we would like to conduct a critical analysis of the current state of AI Ethics and suggest that this form of governance based on principled ethical guidelines is not sufficient to norm the AI industry and its developers. We believe that drastic changes are necessary, both in the training processes of professionals in the fields related to the development of software and intelligent systems and in the increased regulation of these professionals and their industry. To this end, we suggest that law should benefit from recent contributions from bioethics, to make the contributions of AI ethics to governance explicit in legal terms.
To contribute to the ethical-normative guidelines already proposed by Bills no. 21/2020, no. 5051/2019, and no. 872/2021, and to assist in the ethical and safe development of AI, avoiding distortions in the expected results and ensuring the accountability of those involved (when applicable), AIRES at PUCRS and the PPGD of the PUCRS Law School present this technical note, structured around the main topics addressed in the three bills, with Bill no. 21/2020 as the main focus. The analysis was made with objective arguments and in accessible language, comparing each of the articles in Bill no. 21/2020 with Bills no. 5051/2019 and no. 872/2021, which are being discussed together.
What do Cyberpunk and AI Ethics have to do with each other? One similarity between AI Ethics and Cyberpunk literature is that both seek to explore future social and ethical problems that our technological advances may bring upon society. In recent years, an increasing number of ethical matters involving AI have been pointed out and debated, and several ethical principles and guides have been suggested as governance policies for the tech industry. However, would this be the role of AI Ethics? To serve as a soft and ambiguous version of the law? In this study, we seek to expose some of the deficits of the underlying power structures of the AI industry, and suggest that AI governance be subject to public opinion, so that 'good AI' can become 'good AI for all'.
Are there any indications that a Technological Singularity may be on the horizon? In trying to answer this question, the authors provide a brief introduction to the area of safety research in artificial intelligence. The authors review some of the current paradigms in the development of autonomous intelligent systems, searching for evidence that may indicate the coming of a possible Technological Singularity. Finally, the authors present a reflection using the COVID-19 pandemic, which showed that global society's biggest problem in managing existential risks is its lack of coordination skills.
How can someone reconcile the desire to eat meat with a tendency toward vegetarian ideals? How should we reconcile contradictory moral values? How can we aggregate different moral theories? How can individual preferences be fairly aggregated to represent a will, norm, or social decision? Conflict resolution and preference aggregation are tasks that intrigue philosophers, economists, sociologists, decision theorists, and many other scholars, making this a rich interdisciplinary area for research. When trying to solve questions about moral uncertainty, a meta-understanding of the concept of normativity can help us develop strategies to deal with norms themselves. 2nd-order normativity, or norms about norms, is a hierarchical way to think about how to combine many different normative structures and preferences into a single coherent decision. That is what metanormativity is all about, a way to answer: what should we do when we don't know what to do? In this study, we review a decision-making strategy for dealing with moral uncertainty, Maximization of Expected Choice-Worthiness. Given the similarity of this metanormative strategy to expected utility theory, we also show that it is possible to integrate both models to address decision-making problems in situations of empirical and moral uncertainty.
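As a toy illustration of how Maximization of Expected Choice-Worthiness works (the options, theories, and numbers below are hypothetical, not from the paper): each moral theory assigns a choice-worthiness score to each option, the agent holds a credence in each theory, and MEC selects the option with the highest credence-weighted score.

```python
# Credences (subjective probabilities) the agent assigns to each moral theory.
credences = {"utilitarian": 0.6, "deontological": 0.4}

# Choice-worthiness each theory assigns to each option (illustrative numbers).
choice_worthiness = {
    "eat_meat":   {"utilitarian": 5, "deontological": -10},
    "vegetarian": {"utilitarian": 3, "deontological": 8},
}

def expected_choice_worthiness(option: str) -> float:
    """Credence-weighted sum of choice-worthiness across moral theories."""
    return sum(credences[t] * w for t, w in choice_worthiness[option].items())

# MEC: pick the option maximizing expected choice-worthiness.
best = max(choice_worthiness, key=expected_choice_worthiness)
print(best)  # → vegetarian (0.6*3 + 0.4*8 = 5.0 vs 0.6*5 + 0.4*(-10) = -1.0)
```

The structure mirrors expected utility theory, with moral theories playing the role of states of the world, which is what allows the two models to be integrated.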

Contact
