Valve have now confirmed their rules for game developers using AI-generated content on Steam, and while they're not banning AI, they are going to ensure its use is clearly stated for players. Like it or not, "AI" use is only going to increase in the games industry, so Valve had to do something, and what they've announced sounds reasonable enough.
When developers fill out the survey Valve requires to get their game on Steam, it now includes an AI disclosure section, which is split into two categories:
- Pre-Generated: Any kind of content (art/code/sound/etc) created with the help of AI tools during development. Under the Steam Distribution Agreement, you promise Valve that your game will not include illegal or infringing content, and that your game will be consistent with your marketing materials. In our pre-release review, we will evaluate the output of AI generated content in your game the same way we evaluate all non-AI content - including a check that your game meets those promises.
- Live-Generated: Any kind of content created with the help of AI tools while the game is running. In addition to following the same rules as Pre-Generated AI content, this comes with an additional requirement: in the Content Survey, you'll need to tell us what kind of guardrails you're putting on your AI to ensure it's not generating illegal content.
Valve said they will also "include much of your disclosure on the Steam store page for your game, so customers can also understand how the game uses AI". So with that, if you plan to avoid AI games, at least Valve will give you a clear way to spot them.
On top of that, Valve said they're implementing a new system that lets players "report illegal content inside games that contain Live-Generated AI content", which you'll do via the Steam Overlay.
Together, these rules and features allow Valve to be "much more open to releasing games using AI technology on Steam", though Valve stated clearly that Adult Only Sexual Content created with Live-Generated AI is not currently allowed on Steam. That feels like an obvious one to disallow, for reasons I'm sure I don't need to go into.
What do you think of Valve's stance on this?
See the full announcement here.
Quoting: Cloversheen
Out of curiosity, what would you consider a non-pathetic way to get feedback?

. . . From a sentient being that knows the meaning of the words it uses? Why is this hard?
Quoting: Purple Library Guy
Really? People trust the feedback of those things? On stuff like themes? That's, um, the word that keeps coming to my head is "pathetic".

What I could imagine as a useful application of AI when writing is having it propose different ways to express things, and then choosing on your own which is better. (And I feel the former sentence could totally benefit. :D )

Last edited by Eike on 11 January 2024 at 12:47 pm UTC
Quoting: Purple Library Guy
Really? People trust the feedback of those things? On stuff like themes? That's, um, the word that keeps coming to my head is "pathetic".

They do, to a degree. It's all about using the tech for what it actually is. If the "AI" says that the tone is "this" and the themes are "that", then there's a pretty good chance that the text is at least similar to other pieces of text that have been classified as such by actual human beings. They're fancy pattern recognition engines, and when used as such, I don't see any issue with them.

Mind you, that's a completely different use-case from just using "AI" output as (or close to) the final product, which I presume is what this labelling is meant for.
Quoting: Purple Library Guy
. . . From a sentient being that knows the meaning of the words it uses? Why is this hard?

Completely agree, but a sentient being can't really be run in the background to do a real-time litmus check on your prose; "AI" can. When the first draft is done, I absolutely agree it should be checked by a real human.
Quoting: Cybolic
They do, to a degree. It's all about using the tech for what it actually is. [...] They're fancy pattern recognition engines, and when used as such, I don't see any issue with them.

Maybe I'm just vain. My position is that, since "AI" makes stodgy, mediocre text itself, its advice on how to write would tend to adjust my writing towards stodgy and mediocre. And since on my own I am of course totally awesome, I really don't like the idea of being regressed towards the mean. So, confronted with the idea of people checking their prose with an AI, my instinctive reaction is "Where's your pride?!" But I guess people's mileage varies.