Update 01/07/2023 - Valve sent over a statement; here's what they said:
We are continuing to learn about AI, the ways it can be used in game development, and how to factor it in to our process for reviewing games submitted for distribution on Steam. Our priority, as always, is to try to ship as many of the titles we receive as we can. The introduction of AI can sometimes make it harder to show a developer has sufficient rights in using AI to create assets, including images, text, and music. In particular, there is some legal uncertainty relating to data used to train AI models. It is the developer's responsibility to make sure they have the appropriate rights to ship their game.
We know it is a constantly evolving tech, and our goal is not to discourage the use of it on Steam; instead, we're working through how to integrate it into our already-existing review policies. Stated plainly, our review process is a reflection of current copyright law and policies, not an added layer of our opinion. As these laws and policies evolve over time, so will our process.
We welcome and encourage innovation, and AI technology is bound to create new and exciting experiences in gaming. While developers can use these AI technologies in their work with appropriate commercial licenses, they can not infringe on existing copyrights.
Lastly, while App-submission credits are usually non-refundable, we're more than happy to offer them in these cases as we continue to work on our review process.
Original article below:
Here's an interesting one on Steam publishing for you. Valve appear to be clamping down on AI art used in games due to the murky legal waters. AI art is a huge topic of discussion right now, as are other forms of "AI" like ChatGPT; I can't seem to get away from talk of it, from people both for and against it.
In a post on Reddit, a developer who tried to release their game on Steam got word back from Valve denying the listing. Here's what they sent the developer:
Hello,
While we strive to ship most titles submitted to us, we cannot ship games for which the developer does not have all of the necessary rights.
After reviewing, we have identified intellectual property in [Game Name Here] which appears to belong to one or more third parties. In particular, [Game Name Here] contains art assets generated by artificial intelligence that appear to be relying on copyrighted material owned by third parties. As the legal ownership of such AI-generated art is unclear, we cannot ship your game while it contains these AI-generated assets, unless you can affirmatively confirm that you own the rights to all of the IP used in the data set that trained the AI to create the assets in your game.
We are failing your build and will give you one (1) opportunity to remove all content that you do not have the rights to from your build.
If you fail to remove all such content, we will not be able to ship your game on Steam, and this app will be banned.
That developer mentioned they tweaked the artwork so it wasn't so obviously AI-generated and spoke to Valve again, but Valve once again rejected it, noting:
Hello,
Thank you for your patience as we reviewed [Game Name Here] and took our time to better understand the AI tech used to create it. Again, while we strive to ship most titles submitted to us, we cannot ship games for which the developer does not have all of the necessary rights. At this time, we are declining to distribute your game since it’s unclear if the underlying AI tech used to create the assets has sufficient rights to the training data.
App credits are usually non-refundable, but we’d like to make an exception here and offer you a refund. Please confirm and we’ll proceed.
Thanks,
Given the current issues with AI art and how it's generated, this really seems like a no-brainer for Valve: deny publishing for games that use AI art unless the developers can fully prove they own the rights. Their own guidelines are pretty clear on it: developers cannot publish games on Steam they don't have "adequate rights" to.
That said, this is a difficult topic to fully address. With whatever tools Valve will be using to flag these games, how will they deal with false positives? It's not likely Valve will have a human individually going over every game, and algorithms can be problematic. It's going to be interesting to see how this develops over time. It seems more developers will need to have everything ready to prove ownership of all their artwork.
I've reached out to Valve to see if they have any comments on it to share.
What do you think about this? Let me know in the comments.
This is not generally a good thing, in my view. Jobs will be lost and the quality of human creativity will be diluted and swamped by a tsunami of this stuff.
These people ... they cannot stop the future. And nobody even needs to find them!
As for Valve, it's sorta understandable, yet extremely disappointing. I hope that can change soon enough.
This rule seems more to be about taking Valve out of the line of fire coming from the anti-AI peanut gallery. As is evident from a surprising number of posters in this thread, humans still hate change and will fight everything new because it's new. Teachers fought calculators when they were new, and artists considered Photoshop the end of all art when it released, too. Now the newest object of hate is generative AI. It's not surprising. It's just how people are. In 10 years, there will be two kinds of game studios: the ones that embraced AI tools, and the bankrupt ones. Until then, the peanut gallery will throw a few more peanuts, I guess.
A different issue with ML technology is that it is most useful at recognizing patterns and automating things like mass surveillance (see China). It is a technology that can very easily be abused. Like the invention of the atomic bomb, it will change the world, probably not for good. We've seen it used by banks to automate credit ratings, where it was trained to be sexist. It's used by publishers to reject books that are not similar to past bestsellers. It's used in cameras to improve picture quality and focus, and there have been cases where it was shown to work better for people with white skin.
Sorry, but I paint mountains for a living and your terrain generator algorithms you used for your game graphics are infringing on my work. Those are my texture patterns! ( /sarcasm )
P.S. I should say that I don't think AI art should be copyrightable. If you use it, you don't get those protections.
People said similar things about music sharing--where did it go?

Yes, this is good. Gotta get rid of the AI generated images (it is difficult to call it art).
It's obvious that this will not happen, right?
I do understand - and share - such feelings, but in the end, it's like trying to get rid of photos in the early stages of photography because they're "not art".
Yeah, but the point is everything an AI produces is somebody's Yoda or Harry--maybe two or three somebodies mixed together if you're lucky.

They could paint Yoda and Harry already.
Painting Yoda or Harry is IP infringement unless one can prove fair use.
You misunderstood what I tried to say: they can paint Yoda and Harry already, and thus infringe copyright, just like they can with AI. So I see nothing new with regard to Yoda and Harry.
Did you actually try image making AI? I gave it an attempt, and while of course I don't know all the art out there, I'm confident this wasn't just something slightly morphed.

There's a lot of weird stuff on DeviantArt. I wouldn't want to bet too hard if I were you. Certainly when a couple of my friends were fiddling with AI art, they found that when they did variants on some pretty odd requests, there were strong patterns in what they got. Some definitely looked like the same picture was being warped in somewhat different ways.
When I think of it, probably the more "normal" a request, the less of this phenomenon you're likely to see, because there will be more source material and the results will be a blend of more sources.
Yeah, but the point is everything an AI produces is somebody's Yoda or Harry--maybe two or three somebodies mixed together if you're lucky.
Everything everybody creates has elements of something somebody else has created. It's how humans progress, by building on the work of others.
Right now AI is rudimentary, but soon it will be doing much more than mixing and matching samples it has learned.
What devs likely would - and will - do is e.g. generating backgrounds instead of painting them. I've already seen this in point and click announcements. Looks very nice at first sight; on second look you see some stuff is wrong.

This will probably be a technique that will be used some, but I wonder if it will turn out to work any better than other computer techniques people already use for landscapes that don't call themselves "artificial intelligence", whether it's procedural generation or just hitting real landscapes with filters.

If/When AI learns, and doesn't paint some nonsense, well, that's easily done and looks nice. And shouldn't infringe anything.
Yeah, but the point is everything an AI produces is somebody's Yoda or Harry--maybe two or three somebodies mixed together if you're lucky.
This is a widespread, but still false, assumption. Very close to 100% of the characters a generative AI will draw have never been drawn by anyone before. The AI has learned how to draw people by analyzing drawings of people. But it's not a "collage tool" (which that ridiculous lawsuit filed by some artists claims). There are no fragments of Yoda or Harry stored in the model, unlike what these people claim. If you prompt the AI to, then yes, it can draw an accurate Harry, because it has learned the concept of Harry by analyzing pictures of him. But the chance that it will draw Harry without you prompting for him is pretty close to zero. If you don't prompt for copyrighted characters, you are very likely to get an original one.
If you want to make sure to get something original and avoid overfitting (which would be prudent if you plan to use the assets in a game), you'd never prompt for just one artist's style anyway, you'd combine many.
Your first point, yeah, true enough. Take one of the most "creative", out-there poems ever written, "Kubla Khan": a guy wrote a whole book dissecting everything Samuel Taylor Coleridge had been reading over the previous few years to find the sources of all the elements that went into the poem. But Coleridge understood what those sources were, and did intentional, thoughtful things (and unconscious, stewing-in-the-background-of-his-mind things) to them when he brought them into his poem. I think there's a difference between that and what these art programs do.

Yeah, but the point is everything an AI produces is somebody's Yoda or Harry--maybe two or three somebodies mixed together if you're lucky.
Everything everybody creates has elements of something somebody else has created. It's how humans progress, by building on the work of others.
Right now AI is rudimentary, but soon it will be doing much more than mixing and matching samples it has learned.
On your second point, I'm not sure it will. The stuff they're calling "AI" right now isn't the result of some conceptual breakthrough or anything. It's just autocomplete on a whole lot of steroids. If you look at the output of ChatGPT, there are "tells" every so often that it doesn't actually understand what it's saying--or rather, that there isn't really an "it" to do any understanding. It'll say things that any fool can tell make no sense together, things that contradict each other and stuff. And the scale of the datasets is so large now that I think they've pretty much reached the limits of the "get a bigger hammer" approach--there isn't much more data to get.
Don't get me wrong, the results are impressive. It's quite surprising what ChatGPT can pull off. But we assume it will continue to get a lot better because we think of it as "artificial intelligence" that just isn't very smart yet. If that were so, it would have lots of potential to improve; it would just be a matter of learning to understand better. Problem is, it isn't artificial intelligence, it's just software, and software tends to plateau in its capabilities at a certain point. How much better have word processors or spreadsheets gotten in the last dozen years? The current art- and chat-oriented "AI" programs are not the product of a brand new concept; the ideas behind them have been maturing for some time, and getting trained on larger and larger datasets for quite a few years. They may not have much room left to improve. To get very much better, they would have to actually understand what the words mean that they're putting out--and they don't, and that would be a completely different kind of breakthrough that nobody currently has a handle on.
The hype would be quite a bit smaller if they weren't calling it "artificial intelligence", which it really isn't.
That's a strong statement. How do you know?

Yeah, but the point is everything an AI produces is somebody's Yoda or Harry--maybe two or three somebodies mixed together if you're lucky.
This is widespread, but still false assumption. A good approximation to 100% of all characters generative AI will draw have never been drawn by anyone before.
Because the people who trained these models aren't beginners who don't know how to avoid overfitting in the training process. Also, feel free to try it yourself: make a model generate, say, 1000 images and see how many close matches you get when feeding them to Google Images...
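The memorization check suggested above can also be approximated locally, without a reverse image search, by comparing generated images against a reference set with a perceptual hash and flagging pairs whose hashes are nearly identical. Below is only a rough sketch: a pure-Python "difference hash" (dHash) over grayscale pixel grids. The toy image representation, the assumption that images were already downscaled to roughly 9x8 pixels, and the distance threshold of 10 bits are my own illustrative choices, not anything Valve or anyone in this thread specified.

```python
# Sketch: flag near-duplicate images with a perceptual "difference hash" (dHash).
# An image here is a 2D list of grayscale values (0-255); real use would first
# downscale each image (e.g. to 9x8 pixels) with an image library.

def dhash(pixels):
    """Build a bit list: for each row, is each pixel brighter than its right neighbour?"""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes of equal length."""
    return sum(x != y for x, y in zip(a, b))

def near_duplicates(generated, references, threshold=10):
    """Return (i, j) pairs where generated image i is suspiciously close to reference j."""
    ref_hashes = [dhash(r) for r in references]
    hits = []
    for i, image in enumerate(generated):
        h = dhash(image)
        for j, ref_hash in enumerate(ref_hashes):
            if hamming(h, ref_hash) <= threshold:
                hits.append((i, j))
    return hits
```

In practice you would use an existing library (for example `imagehash` with Pillow) rather than hand-rolling this; the point of the sketch is just the comparison logic: a low Hamming distance between hashes of a generated image and a training/reference image is a hint of memorization worth inspecting by eye.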
The limited legal precedent suggests ML art is uncopyrightable, and hence Public Domain, so it should be possible to include.
I wonder what specifically tripped Valve's alarm system.
The project relies heavily on AI generated content and as such is an experiment in itself. The development progress is documented on the game's website. For good or for worse.
On its Discord the developer said:
I sent a message to Valve. It will be a great topic of research to see if they are willing to host this game or not.
Not sure if they will respond or not.
Interesting info about the use of AI in the Dev Blog
Yeah, but the point is everything an AI produces is somebody's Yoda or Harry--maybe two or three somebodies mixed together if you're lucky.

They could paint Yoda and Harry already.
Painting Yoda or Harry is IP infringement unless one can prove fair use.
You misunderstood what I tried to say: they can paint Yoda and Harry already, and thus infringe copyright, just like they can with AI. So I see nothing new with regard to Yoda and Harry.
Ok, I do get the point, but I don't believe the premise is true. AI can come up with stuff at the back that doesn't look like anything put in at the front. (When I think of it, my body ca... well, let's skip that. :D )