The Federal Trade Commission (FTC) is taking action on deepfakes and unauthorized impersonation using artificial intelligence (AI).
The FTC just announced an open period for public comment on a supplemental notice of proposed rulemaking that would prohibit the impersonation of individuals.
🛑 Why does this matter to creators? 🧐
This announcement comes right on the heels of a controversy brewing across the creator economy over the unauthorized development of AI tools that impersonate well-known creators.
I actually provided some comments for a Passionfruit article by Charlotte Colombo covering “Agent Gold” — an AI service that (originally) let fans chat with AI versions of their favorite creators. There’s only one small problem… many of the creators didn’t consent to the service and have no affiliation with it!
Read more, including what Marques Brownlee had to say about it:
🤓 Current Issues with AI and NIL
The intersection of AI and name, image, and likeness (“NIL”) rights remains largely unregulated, which is why we are seeing this activity from the FTC and also from the U.S. Congress drafting new regulations in an attempt to fill the gaps in current laws.
My friend Eliana Torres wrote up a fantastic breakdown of NIL and AI issues.
Creators may be able to rely on existing right of publicity laws, but their options depend on where they live and which state laws they can invoke.
However, the fight becomes more challenging when creators are left to take on the platforms themselves.
😇 Platform Liability (Maybe)
It’s arguably an easier fight if the platform is actively developing an AI service that commercializes an individual creator’s NIL.
But that’s not always how these unauthorized experiences and deepfakes are being created.
If someone instead uses an off-the-shelf service, such as those offered by OpenAI or Midjourney, and fine-tunes or prompt engineers it to generate synthetic content based on a creator, the platform’s liability and obligations to that creator become less clear.
Platforms typically have not been held liable for the actions of their users, thanks to protections such as Section 230 and the DMCA’s safe harbors.
Proposed legislation — such as the federal NO FAKES Act and No AI Fraud Act, and Tennessee’s ELVIS Act — aims to bring platform liability into the discussion.
Agent Gold’s response on Twitter/X? “We recognize the importance of upholding creator rights, and it was never our intention to misrepresent or misuse your likeness. We’re going to get this right.”
Read their full response:
The company’s response could open it up to liability, which is why we’ve seen companies such as OpenAI add restrictions that block prompts referencing public or well-known individuals or prompts that would generate potentially infringing outputs under copyright law.
Since this issue went viral, Agent Gold has changed its messaging.
Let’s see what comes of this new FTC activity!
