JoeWilson
Mame
I saw many people (not here) arguing that we shouldn't generate AI images because it takes a lot of electricity to generate them, therefore hurting the planet. That's where my argument about wasting electricity on entertainment comes from. I was not arguing that anyone is against entertainment.
As far as "stealing" art goes, that is one heck of an argument. Is emulating a style stealing? I don't think AI is going into the Louvre and bringing me the Mona Lisa. "Stealing" a style has been done by artists since the beginning of art, and happens in every single man-made thing we can possibly imagine. I doubt whoever designed your clothing had all of their ideas originate from their own brain. I doubt whoever designed my car came up with the design in a dream. If you practice bonsai as an art form, are you stealing the Japanese style and taking jobs from Japanese bonsai artists?
The only valid argument, in my humble and unknowledgeable opinion, is that many people (not only artists, but programmers like me, writers, engineers, etc etc etc) are going to lose work down the line.
As I mentioned a post or two back, copyright for an art style is a gray area. But this is largely an academic point: designs, characters, and original intellectual property in any form are not only copyrightable, the author of the work is granted copyright automatically. The reason the style argument is moot is that the same theft-based AI models (for example, Midjourney, which was trained on countless pieces of stolen artwork) generate both the input-based style variations of photos we saw in this thread and prompt-based generative AI images. All of it fundamentally runs on theft. Every generative AI model that produces a passable image or video was trained on copyrighted material, either directly or in its pre-training dataset.
There are numerous ongoing lawsuits that outline the degree to which AI companies have trained on and benefited from stolen, copyrighted material. For example: https://www.bbc.com/news/articles/cg5vjqdm1ypo
Here's another: Anthropic notoriously trained on seven million pirated books: https://www.bbc.com/news/articles/c77vr00enzyo
Some AI models claim to be "ethically trained", meaning trained only on content they own or that is in the public domain. But almost universally, they have licensed a training library that contains copyrighted material that was never meant for commercial use. There's a very common one that most AI companies use, but its name escapes me.
OpenAI famously let it slip that they trained on millions of hours of YouTube content. I can do this all day...
PS: Theft and energy use are far from the only reasons AI models are problematic. Resource use (water) is a big problem, especially for the small communities where new data centers are built, which are struggling to provide water to homes. Relying on AI is literally making people dumber: parts of your brain start to shut down as you rely more and more on the AI to solve problems and do tasks for you. It simply sucks a lot of the time; the answers it provides are often wrong, sometimes dangerously so. Relying on it for anything critical (like medical advice or information) is extremely risky. And on and on.