This is the last of three microposts laying out words of caution about Generative AI. In the previous two microposts we talked about its risks and misuses; here we will outline the dangers of perceiving GenAI as an authority.

Generative AI refers to AI techniques that learn a representation of artifacts from data and use it to generate brand-new, unique artifacts that resemble but don’t repeat the original data.

We are all super impressed by Generative AI. We ask it questions and its responses look very real. Maybe we fact-check one or two items from a response just to be sure, and then we are ready to believe. Below are some characteristics that lead our human psychology to rely too much on Generative AI. Generative AI is:

Sensorial. The most popular use cases of Generative AI are, by nature, generators of perceptions: synthesis of images and videos, Large Language Models (LLMs), chatbots, generation of marketing content, and note-taking. This information is consumed directly by our senses and does not necessarily make it all the way to our logical brain.

Plausible. Generative AI is a perfect salesperson. It will not necessarily have the right context or information sources to build the answer it gives you, but it will compose that answer in a way that is grammatically or aesthetically coherent and pleasing to its human audience. That makes us more inclined to believe the content is real and correct, even if it is a deepfake, without performing conscious reality checks.

Authoritative. We already said Generative AI tools can hallucinate or be biased. However, they operate on the premise that they are correct. Unless configured to do so, they will rarely tell you that they do not know the answer or that it may be inaccurate. They will build an entire stream of arguments to support the thesis underlying their responses. It is hard to say no to such an overwhelmingly confident interlocutor.
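The "unless configured" caveat can be made concrete. Below is a minimal, hypothetical sketch of an abstention guard. The wrapper class and its single confidence score are illustrative assumptions, not a real vendor API: most LLM services do not expose one calibrated confidence value, so treat this as a sketch of the pattern, not an implementation.

```python
from dataclasses import dataclass

# Hypothetical wrapper: a model answer paired with a confidence score
# in [0, 1]. This is an illustrative assumption, not a real LLM API.
@dataclass
class ModelAnswer:
    text: str
    confidence: float

def guarded_answer(answer: ModelAnswer, threshold: float = 0.7) -> str:
    """Return the model's text only when its confidence clears a threshold;
    otherwise abstain instead of sounding authoritative."""
    if answer.confidence >= threshold:
        return answer.text
    return "I am not sure about this; please verify with a primary source."

print(guarded_answer(ModelAnswer("Paris is the capital of France.", 0.95)))
print(guarded_answer(ModelAnswer("The moon is made of cheese.", 0.30)))
```

The design point is simply that abstention must be added deliberately: the default behavior of the confident interlocutor is to answer.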

Human-like. We all love talking to a warm human soul. And Generative AI can achieve that feeling. However, we have not achieved Artificial General Intelligence yet. We do not understand how algorithms do what they do, and we certainly know there is nothing similar to a conscious being behind the generation of content. This seems obvious, but we all subconsciously tend to forget it when we are in front of Generative AI.

To sum up, Generative AI is powerful and can work with impressive autonomy. However, it is only as good as its training data, beyond which it cannot (yet) deliver reliable results. When drift occurs in the underlying data, schema, or configuration, models need to be retrained and algorithms may need to be adjusted to maintain the effectiveness and accuracy that make their content trustworthy.
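As an illustration of how drift can be caught, here is a minimal sketch (not any specific product's method): it flags that retraining may be needed when a simple statistic of incoming data shifts too far from the training baseline. The threshold and the z-score-style statistic are illustrative choices.

```python
from statistics import mean, stdev

def drift_score(baseline: list[float], current: list[float]) -> float:
    """Standardized shift of the current mean relative to the baseline:
    |mean(current) - mean(baseline)| / stdev(baseline)."""
    return abs(mean(current) - mean(baseline)) / stdev(baseline)

def needs_retraining(baseline: list[float],
                     current: list[float],
                     threshold: float = 2.0) -> bool:
    # Flag retraining when the mean has shifted by more than `threshold`
    # baseline standard deviations (a crude z-score style check).
    return drift_score(baseline, current) > threshold

baseline = [10.0, 11.0, 9.5, 10.5, 10.2]   # stand-in for training data
stable   = [10.1, 9.9, 10.4]               # new data, no drift
shifted  = [15.0, 15.5, 14.8]              # new data, clear drift

print(needs_retraining(baseline, stable))   # small shift: no retraining
print(needs_retraining(baseline, shifted))  # large shift: retrain
```

Production drift monitoring uses richer tests (e.g. over distributions, not just means), but the principle is the same: the model does not notice drift by itself; something outside it has to watch for it.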

And yet, it looks so real…

 
