Getty Images tells AI companies: "Stop using our photos"
Why Twitter is angry at a mental health tool, and more at the Status Code.
Morning. The Status Code is here.
The weekly newsletter that doesn’t bore you like ChatGPT (ChatGPT, can I say that?)
Here’s what we’ve got for you today:
- Twitter is not happy with this mental health tool. They say it’s lying to people.
- One famous publisher is using AI to write its articles
- Getty Images is suing an AI company for “stealing their photos.”
Now, onto the good stuff…
Not subscribed yet? Stay in the loop on AI every week by reading the Status Code for five minutes.
Meet Koko. It’s an emotional support tool where you can talk to an anonymous volunteer.
You can ask Koko anything. Ask for relationship advice, or pour out your frustrations on your not-so-good days. It's a friend, except you don't have to worry about dealing with them later.
Sounds great. But why are you telling me about Koko, the Status Man?
Well. Because it is facing some heat on Twitter right now. Why? Let me introduce you to this dude.
He's Robert Morris, founder of Koko. It seems like Robert and his team had an idea.
They wanted to use OpenAI's language model, GPT-3, to craft responses when people like you and me turn to Koko to talk.
But you and I turn to Koko because we want to talk to a human (albeit an anonymous one).
So even though Robert did say that a human had the final say, Twitter was angry.
People criticized it as "unethical" and an "act of deceit." The worst part? Koko never disclosed to its users that it was using AI.
What are your thoughts? Should companies use AI without disclosing it?
Think about the times you’ve interacted with customer support. Is it ethical when a company makes it seem like you're talking to a human when it's actually an AI?
Don’t be shy! Let me know what you think.
This publisher is using GPT-3 to write its articles
This brings me to another ChatGPT experiment I found (god knows how many of them are out there now).
It's about CNET, a popular technology publisher.
So Futurism and a few other sources found that CNET has been using AI to write articles since November.
How did they find out? Through CNET itself. To its credit, CNET added notes disclosing that it was using AI in the articles.
But the number is what surprised people: CNET has published about 73 AI-generated articles.
This, after one of its writers covered ChatGPT a few months back and said journalists' jobs were safe from AI.
What does CNET say? They ask you not to worry, because their editors retain full control of the content from “ideation to publication.”
And they continue to publish articles with “integrity,” yadda, yadda, yadda.
Pay-per-view: Getty Images vs. Stability AI
Today's pay-per-view show is between Getty Images and Stability AI.
If you don't know, Getty Images is one of the biggest suppliers of stock images. And Stability AI created the AI art tool Stable Diffusion. Give Stable Diffusion a text prompt, and it will create pictures for you. And unlike DALL-E (OpenAI's model), it can run on modest GPUs.
Anyway, Getty is suing Stability AI. Here is what they allege:
Stability AI is using millions of images without permission.
Getty Images says it has lost its patience.
It says Stability AI is using its copyrighted images and their associated metadata. But Stability never got a license for that use, violating content creators' rights.
Other technology companies got their licenses
Getty also wants you to know that it has provided licenses to other technology companies. They love technological progress and are not against it.
Getty wants Stability to be like Spotify, not Napster
CEO Craig Peters compares the situation to digital music.
Napster was the first to offer digital music to the masses, but it didn't do things the right way. So it had to shut down.
But when Spotify found a better (and legal) way to offer the same service, it forever changed the music industry.
So, Getty says they are asking companies to respect intellectual property. Get the licenses, please (and help their bottom line, of course).
Meanwhile, Getty isn't seeking financial damages (for now), and they don't want the development to stop.
This legal battle might be consequential.
Anyway, the Status Man thinks this one matters for the future of data, and for what companies can and can't do with it.
And in the case of Stability AI, well, they're taking the hit for being too open.
The team at Stability AI openly documented their data sources, unlike other companies (Google and OpenAI) that refuse to disclose theirs.
So, what do you think? Is it ethical not to disclose your data sources? Can you use content created by millions of people on the web to train AI models? Could an AI use this letter from the Status Man to train its model?
Ponder over the question, and let me know what you think.
See ya, codies!