This is the first of three microposts laying out words of caution about Generative AI. Here we will outline the usage risks, and in future posts we will talk about voluntary misuse and risks of authority perception.

Generative AI refers to AI techniques that learn a representation of artifacts from data and use it to generate brand-new, unique artifacts that resemble but don’t repeat the original data.

Generative AI is a very powerful and promising set of technologies, but the hype around it tends to oversimplify, so it is of utmost importance to understand the limitations and risks that come with it. Hopefully the following gives some quick-reference vocabulary and concepts for aspects to be prudent about:

  • Lack of context. The nature of generative AI models, especially with basic prompts, can lead to reductive or oversimplified results. Context is mostly supplied to the model through the conversation with the user; grounding in pre-existing knowledge from authoritative data sources is not guaranteed.
  • Lack of creativity. Generative AI is designed to mimic observations from the real world. Yes, results can spark new ideas, combine patterns in infrequent ways, and boost creativity in human users, but generative AI is not creative by itself; it lacks the ability to understand and synthesize concepts the way humans do.
  • No explainability. Generative AI is a Black Box. Period. You don’t really know what is going on inside Deep Learning algorithms, and if you tried to find out, it would not make sense to your human brain. So the only way to add a level of interpretability and control that enables trust in Generative AI is to reinforce its results with humans in the loop.
  • Inherited bias. Trained on enormous amounts of public data, Generative AI feeds from many data sources that are incomplete or improperly governed. This leads to bias reflecting prejudices present in the data, which are inadvertently perpetuated in downstream systems.
  • Hallucinations. Generative AI is designed for plausibility, not for accuracy. There are many cases where it generates incorrect content, and the algorithms are incapable of detecting this (or even of admitting they don’t know). As a consequence, incorrect outputs become seed tokens for further generations, compounding into nonsensical content.
  • Unethical AI. As a result of the limitations above, it is not rare for Generative AI to create content that is misinformed, opaque, harmful, or offensive. Since ethics are created by humans, it is usually humans who need to be in the loop to train the AI away from toxic results.

As a result, for the moment the only sound approach to using Generative AI is to ensure there are verifications and guardrails around the produced results. Only these practices can reduce issues of trust, transparency, and ethics. Thus it is crucial to pair generative AI with a human in the loop who polices the results.

Then again, we humans make our own share of mistakes too.
