
AI this, AI that - you can't go anywhere without something trying to force AI on you. Usually a company trying to get you to buy into what they've wasted billions on. So indie devs have begun fighting back with their No Gen AI Seal.

There's an increasing number of developers using some form of AI generation, from small studios to the AAA lot, and so you might want to pick out games that are actually 100% human made. While Steam (being the easiest example) does have newer rules around AI disclosures, these are buried at the bottom of store pages and can be pretty easy to miss.

One way would be for developers to put a big badge on a store page to show off their human side, and that's exactly what some indie developers have chosen to start doing.

Announced by Alex Kanaris-Sotiriou of Polygon Treehouse (Mythwrecked & Röki) on Bluesky, they've launched the free to use No Gen AI Seal, available via the Polygon Treehouse website. Writing about why generative AI can be problematic, the website states:

Generative AI is a technology that can create pictures, movies, audio (music or voice action) and writing using artificial intelligence. The issue is that these generative technologies are trained on existing works by human artists who have not given their permission, or been compensated, for their work being utilised. Essentially their work has been stolen.

The seal looks like this:

You can see it on the store pages for the likes of Mythwrecked: Ambrosia Island, Rosewater and more.

Perhaps in future we might see stores add specific filters to select "No Gen AI". It's clearly a growing market though, which is what pushed Valve to add their new disclosure rules, and we're likely to see a whole lot more games use generative AI as time goes on. Things will only get messier and more confusing for consumers, at least until the ridiculous bubble finally bursts.

There's a lot of silliness going around, like Phil Spencer of Microsoft Gaming thinking generative AI will help game preservation and the Take-Two CEO believing AI will increase employment and productivity. Going by the latest 2025 GDC Survey, there are clearly a lot of developers concerned about it, and plenty working in companies currently using it.

Article taken from GamingOnLinux.com.
Tags: AI, Game Dev, Misc
About the author -
I am the owner of GamingOnLinux. After discovering Linux back in the days of Mandrake in 2003, I constantly checked on the progress of Linux until Ubuntu appeared on the scene and it helped me to really love it. You can reach me easily by emailing GamingOnLinux directly. You can also follow my personal adventures on Bluesky.

tuubi 6 hours ago
  • Supporter Plus
Most of the AI models I am aware of used scrapers to collect openly accessible texts, images, etc. Whether that's even illegal under current law is a different question.
Legality aside, in my opinion this is only ethical if openly accessible means public domain or otherwise permissively licenced work.

In any case, I'll just do my best to avoid using this tech until these important questions are addressed to my satisfaction. And if any future laws or regulations make it harder to make money off of LLM tech or off products created using them, I can live with that. My employer won't be happy, but I can live with that too.
Kimyrielle 6 hours ago
Legality aside, in my opinion this is only ethical if openly accessible means public domain or otherwise permissively licenced work.

There was at least one proof-of-concept image generation model that used only CC0/public domain images for training. It was pretty good at classic art, from what I could see (which makes sense). But alas, while there is plenty of OSS code around to train coding-oriented models with, text and art are a different affair. People aren't nearly as liberal with placing those under free licenses.

For most of these models, "openly accessible" meant that they downloaded whatever was NOT behind a paywall or login barrier, and trained their models with that. Downloading unprotected assets from the internet is not considered a copyright violation, so legally that's fine up to that point. Whether redistributing models trained on such data is a violation is currently what the lawyers argue about. The sticking point is that there is no trace of the original data in the weights, so arguing copyright violation is harder than some people seem to think.

The ethical side is a different affair, of course. The problem is that different people have different ethical standards, which is why we urgently need legal clarification and/or new regulations. Requiring individual consent is prohibitively impractical (unless we want to strangle AI model creation by requiring hundreds of millions of signatures from content creators first). That's why I hope for some sort of taxation of commercial models (while keeping open source models exempt). But that's just me. Opinions on what should be done are all over the place.


Last edited by Kimyrielle on 24 Feb 2025 at 9:15 pm UTC
Caldathras 5 hours ago
What we're effectively talking about here is IP piracy. In the end, you are never going to get a consensus among creators on this matter. Look at the situation with both music and books. Some musicians/authors were quite vehemently outspoken in their opposition to the piracy of their works. Others were okay with it -- they considered it a compliment. Still others were indifferent -- they didn't care either way. In the end, the law always decides in favour of those that care -- because they are generally the ones that are most vocal about it.

It's funny -- if not ironic -- because copyright originally came about to protect the public domain's rights to access the material, not the creator's ownership of their IP (i.e., copyright means "the right to copy"). In many ways, the FOSS and copyleft movements are a direct result of the for-profit sector's corruption of copyright's original purpose.


Last edited by Caldathras on 24 Feb 2025 at 10:03 pm UTC
Caldathras 5 hours ago
@TheSHEEEP
Ah, yes, the "AI bad" bandwagon.
We'll see how that is going in 5-10 years.
You're assuming that the resources will still exist to maintain this corporate boondoggle after that length of time.

I have no problem with indie devs using this seal.

Personally, I want nothing to do with this LLM tech that they are falsely spinning as AI. It is not true AI and never will be. I consider it just a costly waste of energy. But that's my choice, and it definitely doesn't have to be anyone else's.


Last edited by Caldathras on 24 Feb 2025 at 10:06 pm UTC
You're assuming that the resources will still exist to maintain this corporate boondoggle after that length of time.
Small, local LLMs aren't going away. LLMs aren't likely to get much better, but they certainly aren't going away, however you feel about it.
Salvatos 3 hours ago
For most of these models, "openly accessible" meant that they downloaded whatever was NOT behind a paywall or login barrier, and trained their models with that. Downloading unprotected assets from the internet is not considered a copyright violation, so legally that's fine up to that point.
I would like a great big "citation needed" on that, not to mention "under which jurisdiction?". Let's take a look at the EU, for instance (emphasis mine):
https://op.europa.eu/en/publication-detail/-/publication/8ca54353-87f9-11ec-8c40-01aa75ed71a1/language-en
The use of works available on the Internet usually requires prior authorisation of the copyright owner. That applies to pictures, marketing videos, clips, articles published in newspapers, corporate brochures, website design, etc. The mere fact that a work is available digitally does not mean copyright law does not protect it.

Quite to the contrary, when it comes to benefiting from copyright protection, the manner of fixation is irrelevant and often fixation is not even required at all[29] . Downloading content from any website is, in fact, making a copy of that content, which can be compared to making copies of a book in a library. Such action may therefore constitute a copyright infringement.
That sounds rather contrary to your assertion.


Requiring individual consent is prohibitively impractical (unless we want to strangulate AI model creation by requiring them to get hundreds of millions of signatures from content creators first).
I doubt they would need to engage hundreds of millions of content creators either way. In many cases, just getting approval from the rights holders of vast collections of works would be enough to cover them legally – hopefully said rights holders would obtain the necessary consents from the original creators at the individual level, but that already greatly dilutes the burden on the model makers. We can think of things like music labels, publishing houses and newspapers, for example, where one legal entity is able to license a considerable amount of material at once or in sizeable chunks.

It’s not like no one would accept given the chance. I’ve seen plenty of job ads asking for people to produce voice samples and chat bot queries, or manually validate and correct LLM responses and other "AI" output to improve quality. These companies are paying people to provide and improve the training materials for them, so why would others get a pass on just siphoning everything they can find on the Internet?

And regardless, such authorizations being difficult or time-consuming to obtain hardly trumps the copyright holders’ rights. If the AI model makers need to spend more money to train their models and take longer to improve them, that’s not anyone’s problem but theirs. And if specific right holders withhold their consent, tough luck, no AI model gets made based on their work. I’m not going to let a logging company harvest all the trees in my county at will just because it would be so much easier than obtaining contracts or permits from individual land owners and it’s not fair that we’re making it harder for them to make money.
emphy 3 hours ago
There's going to be so many issues with this. People using AI not declaring it, or using it for other aspects. Say, translation. What if I write a game in french and use the help of ChatGPT to translate it?

Not sure of the definitions, but I always understood translations not to be generative. As long as you use a dedicated AI translation tool (DeepL, Google Translate) and are open about its use, you would be in the clear.

If it were to become an issue: quite a few indies rely on community translations.