This is the second of three microposts laying out words of caution about Generative AI. In the first, we talked about the risks of using Generative AI inappropriately. Here we outline the potential for deliberate misuse of this technology to intentionally cause harm. In a future post, we will look at how human psychology perceives Generative AI authority.

 

Generative AI refers to AI techniques that learn a representation of artifacts from data and use it to generate brand-new, unique artifacts that resemble but don’t repeat the original data.
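To make that definition concrete, here is a minimal sketch of generation from a pretrained model, assuming the Hugging Face transformers library and the publicly available gpt2 checkpoint; the prompt and sampling settings are illustrative assumptions, not part of this post:

    # Minimal sketch: sample new text from a model that has learned a
    # representation of its training data (assumes `pip install transformers`).
    from transformers import pipeline

    # Load a small, publicly available pretrained text-generation model.
    generator = pipeline("text-generation", model="gpt2")

    # Sampling produces novel continuations that resemble the training
    # distribution without repeating it verbatim.
    outputs = generator(
        "Generative AI learns from data to",
        max_new_tokens=40,
        num_return_sequences=2,
        do_sample=True,
    )

    for out in outputs:
        print(out["generated_text"])

The same learn-a-representation-then-sample pattern applies to images, audio, and structured records, which is what makes the misuse cases below possible.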

With great power comes great responsibility. The game-changing capabilities of Generative AI make it a dangerous tool if left unchecked in the wrong hands. The list below outlines the main misuse cases as a quick-reference guide to vocabulary and concepts:

Misrepresentation. If humans are biased, Generative AI can take that bias to the extreme, since it is trained on data and materials curated by humans. Fed data from sources of questionable origin, Generative AI can create content that intentionally suppresses the perspectives of disadvantaged minorities.

Deepfakes. Generative AI is designed for plausibility. It is now much easier to create content that looks realistic but is fake, and to use it to influence voters' decisions or the stock market. The power of widespread deepfakes to instill false beliefs should not be underestimated.

Fraud. Tax forms, patient records, credit records: these deeply personal data points can be generated very easily in a realistic-looking way with Generative AI, allowing for the creation of synthetic individual records. These false identities, or fragments of them, can be used to impersonate real individuals in sophisticated scam schemes.

Plagiarism. We all have a nephew who used Generative AI to write their dissertation for them. While this technology generates what looks like new text, without the right authoring checks it can easily appropriate and misrepresent original ideas without ever flagging the need to cite sources.

Deceptive advertising. Generative AI is very useful for generating targeted content for specific audiences. However, there is a fine line between persuading consumers and leading them into harmful or risky decisions. For this technology, generating just the right amount of misinformation and bias to mislead buyers is very easy.

What can be done to mitigate these risks? Media literacy and public awareness are essential to teach society to be skeptical of the information it consumes. Creating a population of natural fact-checkers is arguably the ultimate line of defense. But strict regulation of ethical guidelines and transparency, together with effective enforcement practices, may also be necessary, and it is already happening.

 

 
