Creator Economy Law – Issue #16

Licensing to build your own AI? Here’s how, UK online safety bill, YouTube’s trademark issues, Twitter cuts off dev partners (but why?), and more!


This is Creator Economy Law, a newsletter dedicated to exploring and analyzing the legal issues surrounding the creator economy, creators, and internet platforms. If you enjoy what you’re reading, share it with friends, invite them to subscribe using the button above, and spread the word with #CreatorEconomyLaw.


Are you over 40? As Aaliyah sang, “age ain’t nothing but a number,” and there just might be a sponsorship opportunity with L’Oréal! At the end of last year, the company partnered with 10 influencers on Instagram for a campaign, and it isn’t the only brand launching more diverse, age-inclusive campaigns. Time to rock the boat!

I’m excited to share that this week I surpassed my first 1,000 subscribers!! 🥳 I started this newsletter in October 2022 with the goal of reaching 500 subs within a couple of months. Thanks to everyone who subscribed and reads 🫶🏻 and here’s to many more future celebrations! 🥂


Here’s what’s been happening in the world of Creator Economy Law.


What You Should Know

Twitter cuts off third-party developers, claims rule violations

Late last week, third-party apps for accessing Twitter, such as Tweetbot, Twitterrific, and Fenix, began unexpectedly experiencing outages or going down completely. The Information reviewed internal Slack messages indicating the shutdown was intentional, and Twitter has since confirmed it. This, of course, impacts the developers of third-party clients for accessing and using Twitter services. And it makes sense why the now-private company would want to take down apps that reduce its revenues by stripping out advertisements and promoted content.

📖 Read:

🗣 Franklin’s Take: These actions by Twitter raise the question: where does this behavior stop? Take Hootsuite and Sprout Social as examples. Both offer a wide range of subscription options, notably corporate and enterprise social media management tools (among other things). There are also social listening companies that aggregate data from across the web, including from social media platforms like Twitter.

These types of companies have customer bases locked into multi-year deals, so they need some sense of stability from the partnerships teams within the platforms. It doesn’t go well for third-party developers if they’re scrambling to communicate with clients who suddenly can’t access all or a portion of what they’re paying for.

I haven’t heard of any issues like those I describe above; however, if I were a product counsel or commercial counsel supporting teams at a company that relied, in whole or in part, on an (arguably) unstable platform partner like Twitter, I’d be:

✅ working diligently to ensure customer contract terms don’t inadvertently extend liability back on the company (was Twitter or any other platform(s) a material component to offering the services?);

✅ reviewing deals with platform partners to understand the financial, legal, security, or other potential areas of exposure for the company (how often and by how much can API access pricing be changed?); and

✅ meeting more regularly with my company’s partnerships teams to ensure there’s ongoing communication between legal and the teams on the front line.

What would you do?

Getty Images sues Stability AI in the UK

Getty Images is suing Stability AI in the UK for copyright infringement. “It is Getty Images’ position,” the company writes in a press release, “that Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images absent a license to benefit Stability AI’s commercial interests and to the detriment of the content creators.”

📖 Read:

🗣 Franklin’s Take: I find this one important due to the UK’s text and data mining exception under section 29A of the UK Copyright, Designs and Patents Act 1988 (CDPA). This provision creates an exception allowing copyrighted works to be copied for computational analysis for non-commercial research purposes, without a license. It will be interesting to see how, if at all, this law that’s unique to the UK will shape the arguments made by either side. Check out my previous post diving into the UK IPO’s response to the recent request for evidence and views on a range of options for how AI should be dealt with in the patent and copyright systems.

3 artists bring a class action against Stable Diffusion, Midjourney, and DeviantArt

A new class action was filed against Stability AI (Stable Diffusion), Midjourney, and DeviantArt, Inc. (DreamUp) on behalf of artists whose works were used to train AI/ML algorithms. The named artists point to haveibeentrained.com as proof that their works were used to build the Stable Diffusion and Midjourney tools. The complaint details how each company developed its specific tool and then offered it to the public.

The complaint dives deep into the details of how Stable Diffusion works. It also highlights how Stability paid LAION (“Large-Scale Artificial Intelligence Open Network”) to put together LAION-5B, a dataset of 5.85 billion images. Additionally, the complaint references the use of Midjourney by artist Kristina Kashtanova (currently defending the registration of her work before the U.S. Copyright Office) and Jason Allen for his submission to the Colorado State Fair art competition.

The claims include allegations of: 1️⃣ Copyright infringement (direct + vicarious); 2️⃣ DMCA violations by removing copyright management information (CMI); 3️⃣ Violations of privacy rights; and 4️⃣ Violations of CA unfair competition laws.

📖 Read:

🗣 Franklin’s Take: Why aren’t Microsoft, Google, Meta, Apple, or Amazon releasing their own versions of OpenAI’s ChatGPT or Midjourney? I have my thoughts from an IP perspective…

Any company developing an AI/ML tool starts with a robust dataset to train the tool. In the case of tools that involve generative work outputs, there’s likely no other way to get a model to a functional place without pre-existing works.

If they were in the UK, they might be able to take advantage of the text and data mining (TDM) exception, which carves out from copyright infringement the scraping, copying, and ingestion of pre-existing, copyrighted works of authorship, though only for non-commercial research purposes.

But, at least for now, they face an argument of copyright infringement when they have openly admitted to using pre-existing works. That leaves them with a huge court battle and/or a massive regulatory push to get something similar to the UK’s TDM exception in place. In both the US and the EU, courts haven’t reached a conclusion on this type of scraping/copying activity, and regulators are still exploring what it means and how to draft rules around it. This lack of legal clarity is not helpful for OpenAI and similar “non-profit” or “research” companies that are now trying to pivot their entire corporate structures or commercialize products built on their non-commercial activities.

The bigger questions will come once the capabilities to train and build the models that power these types of AI platforms/tools become easier and more accessible both from a legal & policy perspective (copyright, bias, deepfakes/social harm, etc.) and from a technological perspective (cheaper/eco-friendly computing power, data access, etc.).

I think it’s naive to believe a Google, Microsoft, Meta, or Amazon couldn’t already create a ChatGPT competitor, or blow it out of the water. It really comes down to asking:

1️⃣ What would their end-user response be to them doing that with all the data they have (see GitHub Copilot litigation/backlash going on)?

2️⃣ What would the social impact be, and how do they maintain control over the tech once it’s released? Compare this to the more playful, “mild” product features we have now, such as auto-captions on videos or Zoom calls, or auto-complete in emails and docs. It’s likely about getting us comfortable with the tech over time. Plus, the New York Times article about Google’s “code red” highlights the potential business-destroying aspects of some of these technologies.

3️⃣ Right now, they can rely on a third party to bear responsibility for the (potential) copyright infringement involved in building a model, without being involved in the underlying training datasets (which are often discarded once a model is built). You can’t fault them for taking that path: it leaves them defending only the (arguably) more difficult-to-bring claims over the outputs, whether infringement claims by copyright owners or user/privacy violations (or other legal causes of action).

Do you agree? Disagree? Let me know in the comments.


Don't Miss

Learn With Me

Licensing to Build Your Own AI

With all of the lawsuits flying around, I figured now is the perfect time to share some of my favorite resources and examples of licenses for AI/ML technology developers. These can help ensure a developer captures a license, or permission, to use a creator’s works (such as art, writings, or music) or other types of data when building a dataset to train a model.


Music Video of the Week

Does anybody feel like singing the blues? This week, prepare to get down with Aretha Franklin! I first really started listening to Aretha when I was in college. The “Rare and Unreleased Recordings” compilation was released during that time and I gave it a try, which unlocked a whole new world from the Queen of Soul.

One of my favorite songs is “Dr. Feelgood.” There’s so much to it, and this 2018 piece in The New Yorker by Emily Lordi captures it all perfectly, including my favorite live version of the song from March 1971.

Watch on YouTube or listen on Apple Music.


Editor's Notes

Affiliate Links. As an Amazon Associate, I earn from qualifying purchases. I have noted above where links to products on Amazon may earn me a commission if you make a purchase. Thanks for supporting my work!

No Legal Advice. This newsletter is published solely for educational and entertainment value. Nothing in this newsletter should be considered legal advice. If you need legal assistance or have specific questions, you should consult a licensed attorney in your jurisdiction. I am not your attorney. Do not share any information in the comments you should keep confidential.

Personal Opinions. The opinions and thoughts shared in this newsletter are my own, and not those of my employer or any of the third parties mentioned or linked to in this newsletter. No affiliation or endorsement is implied or otherwise intended with third parties that are referenced or linked.


Enjoying this? Share with someone you think might be interested! If this was forwarded to you, jump over to LinkedIn and subscribe for free.
