Have you ever found yourself questioning the authenticity of the images, videos, or other content you encounter? The rise of AI has blurred the boundary between what is real and what is not. This technological advancement has given fraudsters the power to craft convincing deepfakes, voice clones, and machine-generated messages, making their deceitful schemes increasingly potent and successful.
Although using AI for criminal purposes is a clear-cut abuse of the technology, even well-intentioned businesses can violate the Federal Trade Commission (FTC) Act. Section 5 of the FTC Act, “Unfair or Deceptive Acts or Practices,” prohibits any material representation, omission, or practice that’s likely to mislead consumers under ordinary circumstances. Here’s how to reduce the risk of violations.
Limiting The Technology’s Risk Of FTC Violations
Using AI can help improve products, increase production efficiency, and enable your company to stand out in a crowded marketplace. But AI use can also lead to misrepresentation and unintentional violation of the FTC Act.
If you design AI-based solutions, set aside time to consider how they could be abused. Suppose you’re designing an application that uses AI to analyze a voice and create a new recording that mimics that individual. How might a fraudster use the technology to engage in illegal activity? If you can envision how someone might abuse your app, criminals certainly can, too. Don’t rush a product to market only to take risk-management measures after customers (and criminals) start using it. Embed controls in your AI products before release.
For example, when developing a voice cloning application, you might want to take steps like the following (sketched in code after the list):
- Secure consent from the individuals whose voices will be cloned
- Embed a watermark in the audio indicating it was generated by cloning
- Limit the number of voices a user can clone
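To make these controls concrete, here is a minimal Python sketch of how a cloning service might enforce them before any audio leaves the system. Everything here is a hypothetical illustration: the class name, the consent store, the per-user limit, and the watermarking helper are stand-ins for what your real product would implement.

```python
from dataclasses import dataclass, field

# Hypothetical per-user limit on distinct cloned voices
MAX_CLONES_PER_USER = 3

@dataclass
class CloneService:
    # subject_id -> True once signed consent is on file (hypothetical store)
    consent_records: dict[str, bool] = field(default_factory=dict)
    # user_id -> number of voices the user has already cloned
    clone_counts: dict[str, int] = field(default_factory=dict)

    def create_clone(self, user_id: str, subject_id: str) -> bytes:
        # Control 1: refuse to clone any voice without documented consent
        if not self.consent_records.get(subject_id, False):
            raise PermissionError("no signed consent on file for this voice")

        # Control 3: cap how many voices a single user can clone
        if self.clone_counts.get(user_id, 0) >= MAX_CLONES_PER_USER:
            raise PermissionError("per-user clone limit reached")

        audio = self._synthesize(subject_id)
        self.clone_counts[user_id] = self.clone_counts.get(user_id, 0) + 1

        # Control 2: mark the output so downstream tools can flag it as synthetic
        return self._embed_watermark(audio)

    def _synthesize(self, subject_id: str) -> bytes:
        # Placeholder for the actual voice cloning model
        return b"..."

    def _embed_watermark(self, audio: bytes) -> bytes:
        # A metadata prefix stands in for a real (inaudible) audio watermark
        return b"AI-GENERATED:" + audio
```

In a real system, the watermark would typically be an inaudible signal embedded in the waveform itself rather than a metadata tag, and consent records would live in your compliance system. The point of the sketch is the design choice: every control runs before any synthesized audio is released.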
Robust user authentication and verification, analytics to detect abuse, and a strict data retention policy can also help mitigate AI’s inherent marketplace risk, as in the sketch below.
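Two of those measures, abuse detection and data retention, can be expressed as simple guardrails. The thresholds and function names below are hypothetical illustrations, not recommendations:

```python
import time
from collections import defaultdict, deque

REQUESTS_PER_HOUR_LIMIT = 20        # hypothetical abuse threshold
RETENTION_SECONDS = 30 * 24 * 3600  # hypothetical 30-day retention window

# user_id -> timestamps of that user's recent requests
request_log: dict[str, deque] = defaultdict(deque)

def record_and_check(user_id: str) -> bool:
    """Log a request; return False if the user's hourly volume looks abusive."""
    now = time.time()
    log = request_log[user_id]
    log.append(now)
    # Keep only the last hour of activity before counting
    while log and now - log[0] > 3600:
        log.popleft()
    return len(log) <= REQUESTS_PER_HOUR_LIMIT  # False -> route to human review

def purge_expired(records: list[tuple[float, bytes]]) -> list[tuple[float, bytes]]:
    """Enforce the retention policy on stored (timestamp, audio) pairs."""
    cutoff = time.time() - RETENTION_SECONDS
    return [(ts, audio) for ts, audio in records if ts >= cutoff]
```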
Responsibility To Customers
Although the technology for identifying AI-generated content improves every day, it often lags behind the techniques used to evade detection. Consumers therefore may not know when AI is involved or be able to detect it on their own. But that shouldn’t be their responsibility. It’s better for your company to disclose its AI use, both to preserve customer loyalty and to avoid negative media coverage.
The same goes for using AI in advertising. Suppose, for instance, that your ads use AI to create an image, a voice, or written copy; you don’t disclose it, and consumers believe AI wasn’t involved. That omission could attract regulatory scrutiny. In other words, if your company’s ads mislead, you could face FTC enforcement action.
Be Proactive & Talk To Experts
Deceiving consumers isn’t your company’s objective. But you must be proactive and act responsibly when using AI in products, services, and advertising. Take time to evaluate how the technology could mislead customers and put you in violation of the FTC Act. Consult your attorney, and contact us with questions about how to embed checks and balances and limit the technology’s risk.