With AI-generated content continuing to spread everywhere, the itch.io store has made a change to require AI-generated content disclosures.
In the announcement it specifically mentions the disclosure is now required for "Asset Creators", meaning creators who provide things like graphics, sounds and more for developers to use in their games. All pages get the option though, so game developers can also tag their creations as using or not using AI.
From their updated guidelines:
We recently added the AI Disclosure feature. You will have time to review and update your pages accordingly, but we are strictly enforcing disclosure for all game asset pages due to legal ambiguity around rights associated with Generative AI content. Failure to tag your asset page may result in delisting.
Valve made a similar move for Steam back in early January.
This is a good change, because the store is getting flooded with such content. A quick search today (keeping in mind this only counts pages that are correctly tagged) showed 1,214 assets on itch.io made using some form of AI generation, versus 13,536 now tagged as not using AI generation.
A number of game developers have updated their itch.io pages too, with 337 games tagged as made using AI generation at the time of writing.
...because some time ago, when I expressed my concerns about the extensive and almost exclusive use of game engines like Unity and Unreal, the most frequent response was that it was fine because it allowed anyone, even those who didn’t know how to program, to develop games.
The engine takes care of the programming to a certain extent, you add the graphics and audio, and everyone is happy.
Now, the extensive use of AI for image generation seems to be perceived differently, but the principle remains the same: you handle the programming and audio while paying relatively little attention to the graphical aspect, yet... not everyone is happy.
Last edited by kokoko3k on 21 November 2024 at 3:02 pm UTC
Because the code that you use was written by the company that you licensed the engine from. With AI it is usually unclear where the content comes from and who the rightful owner of the source material is. Stores are simply attempting to cover their responsibility.
Engines can be reused depending on their license, or the developer's goodwill in the case of mods. People pay attention to the engine licensing terms because they expect they will have to obey them, hence the reaction to the Unity runtime fee.
The current GenAI trend is driven by datasets of every picture ever made, ingested without the artists' consent.
The proponents of the bubble do not expect to obey artists' rights such as copyright. So I am glad itch is requiring disclosures, it's a good first step against both slop and copyright laundering.
https://www.vice.com/en/article/ai-spits-out-exact-copies-of-training-images-real-people-logos-researchers-find/
https://www.salon.com/2024/01/09/impossible-openai-admits-chatgpt-cant-exist-without-pinching-copyrighted-work/
Because the code that you use was written by the company that you licensed the engine from. With AI it is usually unclear where the content comes from and who the rightful owner of the source material is. Stores are simply attempting to cover their responsibility.
That covers the stores, not the people pointing their fingers at this but not at that.
I would think it obvious that the potential legal issue also points to an ethical issue, that people might perhaps care about if they, you know, care about ethics. Also, there's an aesthetic issue--people often think AI assets will tend to be bland and crappy, and extensive use of them suggests a developer without aesthetic standards or a vision of their own.
More broadly, some people worry about a kind of economic/ecological impact of widespread AI use. If everyone's using AI for written and/or artistic content, there are two potential impacts: One, nobody will be paying writers or artists and they will all lose their livelihoods. Two, because AI depends on original human writing and art as its source material and seems to get iteratively crappier if it is drawing on mostly AI stuff, the takeover of AI and loss of human artistic production would result in everything getting crappy as AI models are trained by scraping mostly the production of other AIs.
Last edited by Purple Library Guy on 21 November 2024 at 4:10 pm UTC
That covers the stores, not the people pointing their fingers at this but not at that.
The stores depend on artists, and there's the angle of labour rights, wealth concentration, the question of promoting creativity vs slop… Cory Doctorow has written a ton about those questions. itch.io in particular has to be sensitive to the mood of indies lest they get pushed out.
But although I agree with most of them, I still doubt this hype of "anti AI" is tied to those deep roots.
I still doubt this hype of "anti AI" is tied to those deep roots.
Artists feel threatened by AI-generated art, so they loudly and publicly disapprove of it.
A lot of people sympathize with artists who express those opinions.
If you want a good incident that shows where anti-AI sentiment is coming from, look no further than the Adobe incident this year. Adobe's policy change to scrape its users' data for AI training if you use their apps was taken very poorly by creatives across the board, to the point Adobe needed to issue a non-apology.
It's really the same thing as the Luddites. They felt threatened by new technology that could replace them and do the work more cheaply, and the end result was lower quality, but a lot of customers didn't care that much, so the more cheaply produced, higher margin product won. Employees (usually children) who used the new machines were also in a lot more danger, but that doesn't really relate.
The ultimate insult is that these AI generators use real artists' work as training data so they can eventually be used to replace the artists that "inspired" them.
I think AI-generated art will continue to get better, and it will be easier to get better, more consistent art assets with fewer hallucinations over the years. And fewer artists will be hired, especially the lower-skilled ones, because an AI can do it more cheaply and never needs a day off. The highest-skilled artists will probably remain around for a long time, even just for the sake of the craft, but good luck to anyone new trying to get into the field.
It's just sad, thinking about it. Will we all end up too fat and happy to care in the future like in WALL-E? If there's no point learning how to express yourself through writing and art because AI can do it faster and better, what does that leave us to do? Oil the machines and watch AI-generated TV shows all day?
I think AI has its place, but I don't want to see so much of it in my games and stories. I read stories and play games to connect with a writer and artist who're trying to tell me something. It just feels cheaper and inauthentic. In the future, when I'm no longer able to tell the difference, the idea will feel cheap and inauthentic. I feel that connection when I'm playing SuperGiant games. I just don't feel it with an asset flip or AI-generated art in games. I think the worst experience I ever had was reading some fanfiction and realizing part way through that it must have been AI-generated, because no person would write like this, for as long as this. In fanfiction.
I think AI-generated art will continue to get better, and it will be easier to get better, more consistent art assets with fewer hallucinations over the years.
I'm not so sure. Optimists about this technology tend to be so because they think of it as a new, starting field, and if the first results are this good there should be lots of room for improvement. But as far as I can figure, it isn't--the idea behind this, and research into it at gradually increasing scales, have been going on for quite some time, I think decades. It burst on us suddenly because we only saw it when the concept had been so thoroughly tested that someone was willing to step up, spend masses of dough to build models that scraped most of the internet, and get the likes of Google and Microsoft to stick it in front of our faces. So rather than a brand new nascent technology, I think it's actually quite a mature technology, and it's already taken a key ingredient, model size, about as far as it can be taken. I think it may have already plateaued.
There will be improvements, but they'll be like pivot charts in spreadsheets--the basic way spreadsheets work hasn't changed much since Lotus, but there are lots of nice little improvements. So like maybe for ChatGPT they'll add a thing that can tell when you're asking a math question, and passes it to a dedicated little do-the-math routine, bypassing the main model so you don't get totally wrong answers.
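For the curious, the routing idea described above is easy to demo. Here is a minimal, hypothetical sketch (no real chatbot API involved): a regex spots bare arithmetic and hands it to a safe evaluator, with query_language_model as an invented stand-in for the model fallback.

```python
# Hypothetical sketch of "route math questions around the model".
# No real chatbot API is used; query_language_model is a stand-in.
import ast
import operator
import re

# Whitelisted operators so we never eval() arbitrary input.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr):
    """Evaluate a plain arithmetic expression via the AST, not eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def answer(question):
    # Looks like bare arithmetic? Bypass the main model entirely.
    if re.fullmatch(r"[\d\s\.\+\-\*/\(\)\^]+", question.strip()):
        try:
            return str(safe_eval(question.replace("^", "**")))
        except (ValueError, SyntaxError, ZeroDivisionError):
            pass
    return query_language_model(question)  # hypothetical fallback

def query_language_model(question):
    return "(model-generated answer)"      # stand-in, not a real API

print(answer("12 * (3 + 4)"))          # 84 -- handled by the math routine
print(answer("Why is the sky blue?"))  # falls through to the "model"
```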
On top of that, if in fact it displaces human-produced content on a mass scale, as I said above future versions may actually degrade in quality as we get AI models trained on AI output that was trained on AI output that was trained on AI output.
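A toy simulation can illustrate that degradation loop. The sketch below is a deliberately stylized stand-in for generative training, not a real model: fit a Gaussian, sample from the fit, refit, repeat, and watch sampling error compound across generations. The printed numbers are illustrative only.

```python
# Toy version of "AI trained on AI output": repeatedly fit a Gaussian
# to samples drawn from the previous generation's fit.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0   # the original "human" source distribution
n = 200                # finite training set each generation

for gen in range(1, 31):
    data = rng.normal(mu, sigma, n)      # sample from the current "model"
    mu, sigma = data.mean(), data.std()  # refit on those samples
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")

# Each generation's sampling error compounds; sigma tends to drift
# downward over many generations, so the tails (the unusual outputs)
# are the first thing to disappear.
```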
I'm sure stuff that produces better output is possible, and indeed I'm pretty sure true AI is possible (although it won't be all-powerful like some of the rich weirdos imagine). But that will be a different technology, not just iterations on current large language model concepts.
I guess I'll say it since nobody else will.
The game engine does not make the game for you. The game engine provides a foundation and some building blocks for you, but you still have to know how to put them together and program the game itself. Not to mention creating all of the assets for the game. This is all hard, very time-consuming work. Not even using premade assets will spare you from the work it takes to create a high-quality, functioning game. There is a reason asset flips are so easy to spot, and there is a reason why almost nobody plays them.
This is like saying a compiler "does all the work for you" just because you're not programming directly in machine code. It is completely nonsensical.
The game engine does not make the game for you.
Sure, I never stated it does.
The game engine does not make the game for you.
This does kind of sit at the core of why customers should be wary of AI.
The hardest thing in game development is not programming, but game design. This is why indies can outshine AAA studios with hundred-million budgets. Designing interesting, engaging gameplay is not a trivial task. I know, I've tried, and I've been unable to get above mediocrity.
In parallel, you can see that in graphics, art design is the core of what makes things visually appealing. No amount of fidelity can make your art look good if the design of the visuals as a whole is not on point. For obvious reasons I've been playing Half-Life 2 again, and it's shocking how easy it is to get immersed in that game. Even though models aren't super detailed and texture resolution is very low, you look past it because the design of everything is so good.
Both of these design aspects require careful thought and planning to execute well. This is something AI simply can't do. Even though the "I" is in the name, there is nothing intelligent about it. It's just an algorithm that spits out patterns based on the patterns it was trained on. It does not have any capacity for thought and reason and can't design anything. And honestly, I don't see that changing in the future, because they'll run into the physical limits of current computer technology well before closing the gap with the abilities of skilled people.
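As a concrete illustration of "spits out patterns based on the patterns it was trained on", a bigram Markov chain is about the smallest possible example. The sketch below uses a made-up toy corpus, no real training data; the output is locally plausible word salad with no plan or meaning behind it.

```python
# The crudest possible pattern-spitter: a bigram Markov chain.
import random
from collections import defaultdict

corpus = ("the engine takes care of the programming and you add the "
          "graphics and the audio and everyone is happy").split()

# Count which word follows which in the "training data".
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(3)
word, out = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word] or corpus)  # fall back on dead ends
    out.append(word)
print(" ".join(out))  # locally plausible, globally meaningless
```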
Of course publishers would really like to use AI-generated graphics, because it would be cheap and beneficial to their bottom line. But as a consumer you should really think twice about whether you want to support this development, because it's not going to make future games any better. So disclosures help me choose the path I want to support.
I want to spend my time with something a human has made. I want to somehow "interact" with that human (these humans). I want to consume art, and I don't feel a generation tool can produce something that has earned the term "art" (even if I probably cannot tell the difference in many cases!), because it cannot "talk" to me about life.
Yes, you could probably create art with the help of AI. But I'd always be searching for the mistakes the AI makes, spoiling the fun.
BTW, I just played Stray. Wow.
Don't get me wrong, I don't like it either, and I totally agree with you.
I also think that programming itself is an art; I spend time and have a lot of fun just watching demos from scene.org or browsing shadertoy, so the more a single game engine gets used, the less fun there is for me.
There is also the "issue" that most games made with the same engine has a fingerprint probably due to the reuse of stock shaders (mainly Unreal).
I have to admit that procedurally generated levels put me on the borderline :)
It does not have any capacity for thought and reason and can't design anything. And honestly, I don't see that changing in the future, because they'll run into the physical limits of current computer technology well before closing the gap with the abilities of skilled people.
I'm really not sure about the physical limits of current computer technology. I do think that people who talk about how many transistor-equivalents modern computers have vs. how many the human brain has are vastly underestimating the human brain, because they seem to treat one neuron connection as equal to one computer transistor when it isn't remotely--the neuron connection is an analogue thing that can have a range of strengths, clearly much more complex than one transistor, and I don't think anyone has decently modeled how much more (or maybe someone has, but their paper got buried in the snowstorm of academic papers and nobody bases their thinking on it). There are a couple of other things about the ways neurons grow and branch and how the connections can be strengthened or weakened that are probably not comparable to basic computer hardware and also add complexity . . . you could probably simulate that behaviour in software, but that's just showing that it's behaviour that requires much more than one transistor to model.
So, human brains, often underestimated by computer people IMO. And the differences could be huge--say one neural connection is just 10 times as complex as one transistor (definitely an underestimate). Wouldn't 2 neural connections working together be 100 times as complex as two transistors? Put that to the power of the number of neural connections, it could be insane how many transistors it would take to model a brain.
But still--modern computers have tons of processing power, and they can stick a bunch of them together. I think the limitation is actually the model. The whole Large Language Model (or for art, large pattern model I guess) thing is based on a hypothesis: Maybe intelligence would arise if we just gave a computer so many examples of language in use that it could get the patterns of how words are used together from them; maybe the meaning of language, of words, is just somehow in the ways they are combined, and if a computer knew all the ways they are combined it would know what they mean. IMO that hypothesis, while honestly it has produced some quite impressive results, turns out to have been wrong, and pushing it harder will not change that. After all, at this point these models have absorbed huge multiples more language than any human ever has and they still clearly don't know what the words they are using mean. To get something that's really qualitatively better they will need a different hypothesis.
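That hypothesis can be shown in miniature. The sketch below (a made-up four-sentence corpus, all names illustrative) represents each word purely by its co-occurrence counts and compares words by cosine similarity; "cat" lands nearer "dog" than "coffee" purely from the company the words keep, which is the distributional idea scaled all the way down.

```python
# The distributional hypothesis in a jar: a word is the company it keeps.
import numpy as np

sents = ["the cat drinks milk", "the dog drinks water",
         "the cat chases the dog", "a man drinks coffee"]
vocab = sorted({w for s in sents for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Count words that appear in the same sentence together.
co = np.zeros((len(vocab), len(vocab)))
for s in sents:
    ws = s.split()
    for a in ws:
        for b in ws:
            if a != b:
                co[idx[a], idx[b]] += 1

def sim(a, b):
    u, v = co[idx[a]], co[idx[b]]
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(sim("cat", "dog"), sim("cat", "coffee"))  # cat~dog > cat~coffee
```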
So rather than a brand new nascent technology, I think it's actually quite a mature technology, and it's already taken a key ingredient, model size, about as far as it can be taken. I think it may have already plateaued.
Based on the past two years, this seems likely to be true. I haven't seen many large enhancements in ChatGPT in the past two years, but I have seen a lot of expansion into other fields. I haven't seen any noticeable change in machine translation since the rise of ChatGPT, likely because LLMs were already being used for machine translation for a long time.
But you can get some really good results with ChatGPT if you prompt it properly. I've seen good writing come out of ChatGPT, but I certainly can't get it to do that. So while this technology branch may not get much better, people will get better at using it. And maybe that will make a big difference. But I have heard people say, "ChatGPT is getting dumber." I don't know whether that's because the shine has worn off, or because the ouroboros is already close to eating itself.
I was very impressed when Photoshop was able to remove text from an image and paint over it with a consistent-looking background using its AI generation tool, because I did not have the original files. The Google Pixel Magic Eraser tool is similarly impressive if it works as advertised.
So, it seems to be quite good at editing images. As for fabricating art entirely...I have yet to be impressed.
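For reference, the text-removal trick described above can be approximated even without a generative model. The sketch below uses OpenCV's classical cv2.inpaint on a synthetic image it builds itself; it's a far simpler cousin of Photoshop's generative fill and only really plausible on smooth backgrounds.

```python
# Classical (non-generative) text removal with OpenCV inpainting.
import cv2
import numpy as np

# Build a synthetic photo: a smooth gradient with text stamped on it.
img = np.tile(np.linspace(60, 200, 320, dtype=np.uint8), (240, 1))
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.putText(img, "WATERMARK", (30, 130),
            cv2.FONT_HERSHEY_SIMPLEX, 1.2, (255, 255, 255), 2)

# Mask of the pixels to repaint (here: everything as bright as the text).
mask = cv2.inRange(img, (250, 250, 250), (255, 255, 255))
mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))  # cover the edges too

# Telea's method fills the masked region from the surrounding pixels.
clean = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("clean.png", clean)
```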
I'm sure stuff that produces better output is possible, and indeed I'm pretty sure true AI is possible (although it won't be all-powerful like some of the rich weirdos imagine). But that will be a different technology, not just iterations on current large language model concepts.
I completely agree.
But that will be a different technology, not just iterations on current large language model concepts.
Mirrors my thoughts on it. Current digital electronics have limitations that will break the pace of development when we get too close, just as happened with CPUs. We're in times where major leaps have turned into minor incremental changes. Parallelism is where the focus is now, and even there you can see that things complicate the process: communication/bandwidth/synchronisation puts some hefty constraints on what can be achieved. But a continuously running, self-learning neural network I don't see happening with our current computer technology. It just doesn't seem to be a good match.
There are a couple of other things about the ways neurons grow and branch and how the connections can be strengthened or weakened that are probably not comparable to basic computer hardware and also add complexity . . . you could probably simulate that behaviour in software, but that's just showing that it's behaviour that requires much more than one transistor to model.
I cannot remember transistors being compared to neurons. Modern neural networks do have such things though, being weakened and strengthened by the learning process.
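To make "weakened and strengthened by the learning process" concrete, the smallest possible example is a single connection weight nudged by gradient descent, as in the sketch below (toy data, one weight, plain numpy).

```python
# One "connection strength" being adjusted by gradient descent.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x                      # the relationship to be learned

w, lr = 0.0, 0.1                 # weight and learning rate
for step in range(200):
    pred = w * x
    grad = 2 * np.mean((pred - y) * x)  # d/dw of mean squared error
    w -= lr * grad               # strengthen or weaken the connection
print(round(w, 3))               # converges to ~3.0
```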
To get something that's really qualitatively better they will need a different hypothesis.
Here you go: AI needs a childhood. And "parents". It needs to be told, maybe over years, what it is doing right and what it is doing wrong. This is done in part already, but only by cheap click workers. Nobody wants to pay "real" AI parents.
Nobody wants to pay "real" AI parents.
Now I'm just imagining an angsty teenage AI yelling petulantly "You're not my real parents!" 🤣
Nobody wants to pay "real" AI parents.
Oh my, oh my, oh my.
I'm just picturing it with great disgust, and that kinda surprises me, because when I was younger I was always on the side of the AIs (Lt. Data my favourite).
It has to be the misuse we're making of it.