With AI-generated content continuing to spread everywhere, the itch.io store now requires AI-generated content disclosures.
The announcement specifically mentions it's now required for "Asset Creators", meaning people who create things like graphics, sounds and other assets for developers to use in their games. All pages get the option though, so game developers can also tag their creations as using or not using AI.
From their updated guidelines:
We recently added the AI Disclosure feature. You will have time to review and update your pages accordingly, but we are strictly enforcing disclosure for all game asset pages due to legal ambiguity around rights associated with Generative AI content. Failure to tag your asset page may result in delisting.
Valve made a similar move for Steam back in early January.
This is a good change, because the store is getting flooded with such content. A quick search today (keeping in mind this only counts what is correctly tagged) showed 1,214 assets on itch.io made using some form of AI generation, versus 13,536 tagged as not using AI generation.
A number of game developers have updated their itch.io pages too, with 337 games tagged as made using AI generation at the time of writing.
Quoting: kokoko3k
Something puzzles me.
The game engine does not make the game for you.
Sure, I never stated it does.
Quoting: yellow
The game engine does not make the game for you. The game engine provides a foundation and some building blocks for you, but you still have to know how to put them together and program the game itself. Not to mention creating all of the assets for the game. This is all hard, very time consuming work. Not even using premade assets will spare you from the work it takes to create a high-quality, functioning game. There is a reason asset flips are so easy to spot, and there is a reason why almost nobody plays them.
This does kind of sit at the core of why customers should be wary of AI.
The hardest thing in game development is not programming, but game design. This is why indies can outshine AAA studios with hundred-million budgets. Designing interesting, engaging gameplay is not a trivial task. I know, I've tried and I've been unable to get above mediocrity.
In parallel, you can see that in graphics, art design is the core of what makes things visually appealing. No amount of fidelity can make your art look good if the design of the visuals as a whole is not on point. For obvious reasons I've been playing Half-Life 2 again and it's shocking how easy it is to get immersed in that game. Even though the models aren't super detailed and the texture resolution is very low, you look past it because the design of everything is so good.
Both of these design aspects require careful thought and planning to execute well. This is something AI simply can't do. Even though the "I" is in the name, there is nothing intelligent about it. It's just an algorithm that spits out patterns based on the patterns it was trained on. It does not have any capacity for thought and reason and can't design anything. And honestly, I don't see that changing in the future because they'll run into physical limits of current computer technology well before closing the gap with the abilities of skilled people.
Of course publishers would really like to use AI-generated graphics because it would be cheap and good for their bottom line. But as a consumer you should really think twice about whether you want to support this development, because it's not going to make future games any better. So disclosure helps me choose the path I want to support.
Quoting: kokoko3k
Something puzzles me.
...because some time ago, when I expressed my concerns about the extensive and almost exclusive use of game engines like Unity and Unreal, the most frequent response was that it was fine because it allowed anyone, even those who didn’t know how to program, to develop games.
The engine takes care of the programming to a certain extent, you add the graphics and audio, and everyone is happy.
Now, the extensive use of AI for image generation seems to be perceived differently, but the principle remains the same: you handle the programming and audio while paying relatively little attention to the graphical aspect, yet... not everyone is happy.
I want to spend my time with something a human has made. I want to somehow "interact" with that human (these humans). I want to consume art, and I don't feel a generation tool can produce something that has earned the term "art" (even if I probably cannot tell the difference in many cases!), because it cannot "talk" to me about life.
Yes, you could probably create art with the help of AI. But I'd always be searching for the mistakes AI makes, spoiling the fun.
BTW, I just played Stray. Wow.
Quoting: Eike
Quoting: kokoko3k
Something puzzles me.
...because some time ago, when I expressed my concerns about the extensive and almost exclusive use of game engines like Unity and Unreal, the most frequent response was that it was fine because it allowed anyone, even those who didn’t know how to program, to develop games.
The engine takes care of the programming to a certain extent, you add the graphics and audio, and everyone is happy.
Now, the extensive use of AI for image generation seems to be perceived differently, but the principle remains the same: you handle the programming and audio while paying relatively little attention to the graphical aspect, yet... not everyone is happy.
I want to spend my time with something a human has made. I want to somehow "interact" with that human (these humans). I want to consume art, and I don't feel a generation tool can produce something that has earned the term "art" (even if I probably cannot tell the difference in many cases!), because it cannot "talk" to me about life.
Yes, you could probably create art with the help of AI. But I'd always be searching for the mistakes AI makes, spoiling the fun.
BTW, I just played Stray. Wow.
Don't get me wrong,
I don't like it either, and I totally agree with you.
I also think that programming itself is an art; I spend time and have a lot of fun just watching demos from scene.org or browsing Shadertoy, so the more a single game engine is used, the less fun it is for me.
There is also the "issue" that most games made with the same engine have a fingerprint, probably due to the reuse of stock shaders (mainly Unreal).
I have to admit that procedurally generated levels put me on the borderline :)
Quoting: Ehvis
It does not have any capacity for thought and reason and can't design anything. And honestly, I don't see that changing in the future because they'll run into physical limits of current computer technology well before closing the gap with the abilities of skilled people.

I'm really not sure about the physical limits of current computer technology. I do think that people who talk about how many transistor-equivalents modern computers have vs. how many the human brain has are vastly underestimating the human brain, because they seem to treat one neuron connection as == to one computer transistor when it isn't remotely, because the neuron connection is an analogue thing that can have a range of strengths, clearly much more complex than one transistor, and I don't think anyone has decently modeled how much more (or, maybe someone has, but their paper got buried in the snowstorm of academic papers and nobody bases their thinking on it). There are a couple of other things about the ways neurons grow and branch and how the connections can be strengthened or weakened that are probably not comparable to basic computer hardware and also add complexity . . . you could probably simulate that behaviour in software, but that's just showing that it's behaviour that requires much more than one transistor to model.
So, human brains, often underestimated by computer people IMO. And the differences could be huge--say one neural connection is just 10 times as complex as one transistor (definitely an underestimate). Wouldn't 2 neural connections working together be 100 times as complex as two transistors? Put that to the power of the number of neural connections, it could be insane how many transistors it would take to model a brain.
But still--modern computers have tons of processing power, and they can stick a bunch of them together. I think the limitation is actually the model. The whole Large Language Model (or for art, large pattern model I guess) thing is based on a hypothesis: Maybe intelligence would arise if we just gave a computer so many examples of language in use that it could get the patterns of how words are used together from them; maybe the meaning of language, of words, is just somehow in the ways they are combined, and if a computer knew all the ways they are combined it would know what they mean. IMO that hypothesis, while honestly it has produced some quite impressive results, turns out to have been wrong, and pushing it harder will not change that. After all, at this point these models have absorbed huge multiples more language than any human ever has and they still clearly don't know what the words they are using mean. To get something that's really qualitatively better they will need a different hypothesis.
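To make that hypothesis concrete: stripped of all the scale and engineering, the core trick is counting which words tend to follow which and then predicting continuations from those observed combinations. A real LLM is vastly more sophisticated (neural networks rather than lookup tables), but a toy, purely illustrative sketch of the underlying idea in plain Python might look like this:

import random
from collections import defaultdict

# A tiny "training corpus"; a real model would see trillions of words.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which (a bigram table): the crudest possible
# way of capturing the patterns of how words are used together.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=8):
    # Walk the table, repeatedly picking an observed continuation at random.
    word = start
    out = [word]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        out.append(word)
    return " ".join(out)

print(generate("the"))

It only ever recombines patterns it has seen, and it has no idea what any of the words mean; the open question is whether piling enough of this on ever amounts to understanding.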
Quoting: Purple Library Guy
So rather than a brand new nascent technology, I think it's actually quite a mature technology, and it's already taken a key ingredient, model size, about as far as it can be taken. I think it may have already plateaued.

Based on the past two years, this seems likely to be true. I haven't seen many large enhancements in ChatGPT in the past two years, but I have seen a lot of expansion into other fields. I haven't seen any noticeable change in machine translation since the rise of ChatGPT, likely because LLMs were already being used for machine translation for a long time.
But you can get some really good results with ChatGPT if you prompt it properly. I've seen good writing come out of ChatGPT, but I certainly can't get it to do that. So while this technology branch may not get much better, people will get better at using it. And maybe that will make a big difference. But I have heard people say, "ChatGPT is getting dumber." I don't know whether that's because the shine has worn off, or the ouroboros is already close to eating itself.
Because I did not have the original files, I was very impressed when Photoshop was able to remove text from an image and paint over it with a consistent-looking background using its AI generation tool. The Google Pixel Magic Eraser tool is similarly impressive, if it works as advertised.
So, it seems to be quite good at editing images. As for fabricating art entirely...I have yet to be impressed.
Quoting: Purple Library Guy
I'm sure stuff that produces better output is possible, and indeed I'm pretty sure true AI is possible (although it won't be all-powerful like some of the rich weirdos imagine). But that will be a different technology, not just iterations on current large language model concepts.

I completely agree.
Quoting: Purple Library Guy
I'm sure stuff that produces better output is possible, and indeed I'm pretty sure true AI is possible (although it won't be all-powerful like some of the rich weirdos imagine). But that will be a different technology, not just iterations on current large language model concepts.
Mirrors my thoughts on it. Current digital electronics have limitations that will break the pace of development once we get too close to them, just as happened with CPUs. We're in times where major leaps have turned into minor incremental changes. Parallelism is where the focus is now, and even there you can see how quickly things complicate. Communication, bandwidth and synchronisation put some hefty constraints on what can be achieved. But a continuously running, self-learning neural network? I don't see that happening with our current computer technology. It just doesn't seem to be a good match.
Quoting: Purple Library Guy
There are a couple of other things about the ways neurons grow and branch and how the connections can be strengthened or weakened that are probably not comparable to basic computer hardware and also add complexity . . . you could probably simulate that behaviour in software, but that's just showing that it's behaviour that requires much more than one transistor to model.
I cannot remember transistors being compared to neurons. Modern neural networks do have such things though: connection weights that are weakened and strengthened by the learning process.
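As a rough illustration of what "strengthened and weakened" means in software, here is a minimal sketch in plain Python (a toy single neuron, nothing like a real framework or a brain): each connection weight gets nudged up or down by an error signal until the outputs match the targets.

import math
import random

# Toy training data: inputs and targets for a simple AND-like rule.
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 0.0), ((1.0, 0.0), 0.0), ((1.0, 1.0), 1.0)]

# Connection weights and bias start out random; learning adjusts them.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)
learning_rate = 0.5

def predict(x):
    # Weighted sum of the inputs, squashed to (0, 1) with a sigmoid.
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-s))

for epoch in range(10000):
    for x, target in data:
        out = predict(x)
        error = target - out
        # Strengthen or weaken each connection in proportion to the error
        # and the input it carried (gradient descent on squared error).
        delta = learning_rate * error * out * (1.0 - out)
        w[0] += delta * x[0]
        w[1] += delta * x[1]
        b += delta

for x, target in data:
    print(x, "->", round(predict(x), 3), "target", target)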
Quoting: Purple Library Guy
To get something that's really qualitatively better they will need a different hypothesis.
Here you go: AI needs a childhood. And "parents". It needs to be told, maybe over years, what it is doing right and what it is doing wrong. This is done in part already, but only by cheap click workers. Nobody wants to pay "real" AI parents.
Quoting: Eike
Here you go: AI needs a childhood. And "parents". It needs to be told, maybe over years, what it is doing right and what it is doing wrong. This is done in part already, but only by cheap click workers. Nobody wants to pay "real" AI parents.

Now I'm just imagining an angsty teenage AI yelling petulantly "You're not my real parents!" 🤣
Quoting: Eike
Nobody wants to pay "real" AI parents.
Oh my, oh my, oh my.
I'm picturing it with great disgust, and that kind of surprises me, because when I was younger I was always on the side of the AIs (Lt. Data being my favourite).
It must be the misuse we're making of it.