
After killing off and selling off various studios and causing lots of job losses, Embracer Group have put out their latest annual report and they have big plans for AI.

Embracer Group control the likes of THQ Nordic, PLAION, Coffee Stain, Amplifier Game Invest, DECA Games, Easybrain, Asmodee Group, Dark Horse Media, Freemode and Crystal Dynamics – Eidos. They have around 106 game development studios under their belt.

It was of course only a matter of time until they started talking up AI. Like other major game developers and publishers, and basically every big company that exists, they're all trying to shove AI into something so they can say "hey look, we have AI too!".

Embracer think AI and large language models (LLMs) will empower their developer teams, and they say they don't want AI to replace people. The Risk and Mitigation overview in their annual report states:

AI has the capability to massively enhance game development by increasing resource efficiency, adding intelligent behaviors, personalization, and optimization to gameplay experiences. By leveraging AI, we create more engaging and immersive experiences that provide each player with a unique, dynamic, and personalized experience. We also see great opportunities for AI in game development speed, logistics and planning. Embracer Group also understands the potential risks associated with the use of AI. Our aim is to empower our employees with AI applications.

So what they've done is create a "Group AI Policy" that includes various guiding principles, a risk assessment, and a risk framework that they're implementing across their companies, with the idea being "human empowerment". Here's what Tomas Hedman, Head of Privacy & AI Governance at Embracer Group, had to say about it:

"Certainly, one of the major risks for a company is not to use AI, as this would mean a competitive disadvantage vis-à-vis other industry players. Most companies will move forward on AI integration in different ways. For us, it is the way that we do this that is the most critical element.

We do not want to replace people with AI, we want to empower them. This is the core of our human-centric approach to leveraging the potential with AI.

It’s not just that AI enables our developers to do even more, and to become more efficient on certain tasks, it will also open up coding to a broader group of developers. Entry into the industry might be easier for individuals with disabilities who, for instance, cannot use a keyboard as easily as others.

AI is trained on historical data, which tilts in a certain direction. As a result, you can end up with imbalanced automated decision-making. Let’s say you’re building a village. If you use AI for this, depending on how it’s trained and the decisions it takes, you may end up with a village with a demography that displays some sort of imbalance.

As AI models become more powerful, we can leverage their capacity also in the creative process, for example, by identifying inconsistencies in scripts and storytelling. There will be tremendous benefits for our creative teams regarding scriptwriting, image creation, idea generation, quality control, and more. And, as models become more human-like, the interaction between players and AI-supported functions will be much more dynamic. If in a game scenario you bargain, AI can remember this the next time. That makes the whole gaming experience much more interesting and lifelike."
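To make the village-demography point a little more concrete: a generator that simply samples NPC traits from whatever skew its training data happens to have will reproduce that skew, unless the distribution is deliberately rebalanced. Here's a minimal, purely illustrative sketch in Python (the occupations and numbers are made up, and this isn't anything Embracer have actually described):

```python
import random

# Hypothetical, skewed distribution "learned" from historical data:
# a naive generator will reproduce this imbalance unless told otherwise.
learned_occupations = {"blacksmith": 0.70, "farmer": 0.25, "healer": 0.05}

# A deliberately rebalanced target distribution a designer might prefer instead.
rebalanced_occupations = {"blacksmith": 0.34, "farmer": 0.33, "healer": 0.33}

def generate_village(size, occupation_weights):
    """Sample NPC occupations from the given distribution."""
    occupations = list(occupation_weights)
    weights = list(occupation_weights.values())
    return [random.choices(occupations, weights=weights)[0] for _ in range(size)]

if __name__ == "__main__":
    naive = generate_village(100, learned_occupations)
    fixed = generate_village(100, rebalanced_occupations)
    print("naive:", {o: naive.count(o) for o in learned_occupations})
    print("rebalanced:", {o: fixed.count(o) for o in rebalanced_occupations})
```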


It's worth noting that the last GDC survey showed 31% of devs were already using AI.

What are your thoughts on this?

44 comments

Quoting: pb
Quoting: Salvatos
Quote: If in a game scenario you bargain, AI can remember this the next time.
Holy shit, they just invented persistent game states! Now I understand why everyone is hype about AI. Sign me up!

Imagine this: you buy an RPG and the NPCs view and treat you differently depending on whether you bought it at full price or on sale. If you bought it at -90%, your reputation is minimal and everyone sees you as a pariah. On the other hand, if your reputation is balanced, every cosmetic DLC you buy makes you more of a celebrity in the game's world.
An interesting/horrible idea, but it still wouldn't require AI in any way, shape or form.
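For what it's worth, "the NPC remembers you bargained last time" really is just ordinary persistent state. A rough sketch (the names, file layout and discount numbers are entirely made up for illustration):

```python
import json
from pathlib import Path

SAVE_FILE = Path("npc_memory.json")  # hypothetical save location

def load_memory():
    """Load whatever the NPCs 'remember' from the last session."""
    if SAVE_FILE.exists():
        return json.loads(SAVE_FILE.read_text())
    return {}

def save_memory(memory):
    """Persist the memory dict so it survives across sessions."""
    SAVE_FILE.write_text(json.dumps(memory))

def haggle(memory, npc, asking_price):
    """Plain arithmetic bargaining: the NPC 'remembers' by reading a dict."""
    times_haggled = memory.get(npc, 0)
    # The more you've bargained with this NPC before, the less they budge.
    discount = max(0.0, 0.20 - 0.05 * times_haggled)
    memory[npc] = times_haggled + 1
    return round(asking_price * (1 - discount), 2)

if __name__ == "__main__":
    memory = load_memory()
    print(haggle(memory, "shopkeeper_anna", 100))  # 80.0 on a fresh save
    print(haggle(memory, "shopkeeper_anna", 100))  # 85.0 -- she remembers you now
    save_memory(memory)
```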
Kimyrielle Jun 21
Let's be honest, anyone saying that AI won't kill jobs is either a blatant liar or really ignorant. Of course it will. It will kill a whole lot of jobs. And in all honesty, there is nothing wrong with it, as long as we find ways to transform our society to adapt to a "work is mostly optional" world and let go of the silly notion that people aren't valuable unless they earn an income.

About the less distant future: I think in 10 years, there will be two kinds of game studios: the ones that have adopted AI in their tool-chain, and the bankrupt ones. I cannot recall any instance in history where businesses thrived for very long by refusing to adopt transformative technologies. And people should be very careful about dismissing AI as hype just because the current models produce buggy code or people with six fingers. They will not do that for too much longer.
Quoting: Kimyrielle
They will not do that for too much longer.
This is the assumption I'm not sure about. We tend to think of these AI things as a disruptive new technology just at the beginning of a climb to amazingness. But in reality, as far as I can tell they're actually a technology that has been around for some time, which over many years was developed to the point where someone with seriously deep pockets felt it was worth putting the muscle in to get a really huge data set shoved into one. So rather than a disruptive newborn technology, it's more like a fairly mature technology which has now been brought to the full industrial scale.

So now we have them with the really huge data sets, and I think it's shown us some impressive things about this kind of "AI" technology, but also some limitations. They got what they got currently mainly by getting a bigger (data) hammer, and there aren't a lot of bigger hammers left to get, so I'm not convinced they'll go that much further. It's like the way self-driving cars looked so promising and then their capabilities kind of plateaued, and now they don't get talked about that much. One of the pioneers of the field is now working on simplified self-driving for mining and other industrial site vehicles, because he's decided that full consumer self-driving isn't a problem solvable with the kind of technology current self-driving is based on.

I think they're also unusually difficult to improve. Normal technologies have known characteristics, which you can then tweak and build on. These "AI" technologies, including the self-driving cars, are about a kind of forced evolution process which results in black boxes. You can't really tweak and build on them because you don't understand what they do in the first place.

I'm not saying they're bad technology. Really, they'd be pretty cool if it weren't for all the relentless grift and hype and the likelihood that anything useful they do will be used by really rich people to make everyone else's lives worse. I just think they're a lot closer to a plateau in terms of their capabilities than a lot of people expect.


Kimyrielle Jun 21
Quoting: Purple Library Guy
But in reality, as far as I can tell they're actually a technology that has been around for some time, which over many years was developed to the point where someone with seriously deep pockets felt it was worth putting the muscle in to get a really huge data set shoved into one. So rather than a disruptive newborn technology, it's more like a fairly mature technology which has now been brought to the full industrial scale.

Yes, and no. The math has been there for a while, true. But only now do we have the serious computing power to actually train these models to the degree they're useful. Can't train GPT-4 on a cluster of 386s, I guess. ;)

Quote: They got what they got currently mainly by getting a bigger (data) hammer, and there aren't a lot of bigger hammers left to get, so I'm not convinced they'll go that much further.

I don't know about that? Looking at NVidia's AI chips, every new generation seems to be ridiculously more powerful than the last. I guess we will throw bigger hammers at the problem for a while longer, and we're already at a point where eliminating major issues with the tech is within reach. We will never be able to 100% eliminate hallucination, because that's not how the math works (see the sketch at the end of this post), but reducing it enough is sufficient. Humans hallucinate, too. The AI only needs to be as good as them, not perfect. We should be there soon enough.

Quote: I'm not saying they're bad technology. Really, they'd be pretty cool if it weren't for all the relentless grift and hype and the likelihood that anything useful they do will be used by really rich people to make everyone else's lives worse. I just think they're a lot closer to a plateau in terms of their capabilities than a lot of people expect.

I keep reading papers on improvements and new approaches for existing tech on a daily basis. We're not even close to plateauing. And while I share your concern about AI being monopolized by Big Evil Tech, so far it hasn't happened. There are a lot of open source models around that can compete fairly ok with the closed-source cloud services.
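To put the "that's not how the math works" point in concrete terms, here's a toy sketch (entirely made-up logits, nothing to do with any real model): any sampler run at a non-zero temperature leaves some probability mass on low-probability continuations. You can shrink that mass, but the only way to zero it out is greedy decoding, and even the greedy pick can be wrong if the model's top choice is wrong.

```python
import math
import random

# Toy next-token scores (made-up logits, purely for illustration).
logits = {"Paris": 6.0, "Lyon": 2.5, "Narnia": 0.5}  # "Narnia" stands in for a hallucination

def sample_next_token(logits, temperature=1.0):
    """Sample one token from softmax(logits / temperature)."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    token = random.choices(list(probs), weights=list(probs.values()))[0]
    return token, probs

if __name__ == "__main__":
    _, warm = sample_next_token(logits, temperature=1.0)
    print(warm)  # "Narnia" keeps a small but non-zero probability (~0.4%)
    _, cold = sample_next_token(logits, temperature=0.5)
    print(cold)  # lower temperature shrinks that mass, but never quite to zero
```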
Quoting: Kimyrielle
I don't know about that? Looking at NVidia's AI chips, every new generation seems to be ridiculously more powerful than the last. [...] We're not even close to plateauing.
Just to be clear, when I said a bigger (data) hammer, I meant the amount of data being used to train the model, not processing power, which doesn't seem to be the limitation. The top models have by now basically scraped the whole internet, so I'm not sure there's a whole lot more to scrape. If anything, going forward there may be less data, as copyright holders block such uses and privacy concerns get more attention, and worse data, as AI models increasingly scrape AI-generated content.

When I say plateau, I don't mean to say nothing whatsoever will happen; plateaus aren't entirely flat. But there's a difference between the progress in aviation from the Wright Brothers to the 747, and the progress in aviation since then (not to mention regression; 737MAX anyone?). I'm saying that "AI" bursting on the scene all of a sudden makes us feel like this is the Wright Brothers time for this kind of technology, when it may instead be the 747 moment and the earlier part was just quiet. There will probably in the future be other kinds of "AI" based on different ideas that give results that are intelligence-like in more important ways, but I think it's likely they won't be souped-up ChatGPT.
Salvatos Jun 21
Quoting: Kimyrielle
We will never be able to 100% eliminate hallucination, because that's not how the math works, but reducing it enough is sufficient. Humans hallucinate, too. The AI only needs to be as good as them, not perfect.
I'll do you one better: if my field is any indication, the AI doesn't need to be as good as humans, just faster. Then companies can pay a fraction of the humans a fraction of their former salary to review and "adjust as needed" and deliver more output in a fraction of the time. That's a lot of money at stake for the people not actually producing anything.


Pengling Jun 22
Quoting: JustinWood
There's an absolutely delightful blog post I read yesterday that I feel does a wonderful job of explaining why the AI hype is just grifters overpromising on what generative AI can do. Recommend folks take a look at it! https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/
That was a great read and a great laugh (I totally lost it at "Synergy Greg"!) - many thanks for linking to it! Lots of common sense in there, always nice to see (he even linked to this!).


dvd Jun 22
Quoting: DrMcCoy
The problem with Embracer is that they're not in the business of making games, making art. They're in the business of making money. They don't care how. They don't care about the games, apart from them being their current vehicle for making money.

That's every game company that is (or is owned by) a company on the stock market. They are happy with an inferior product as long as it improves the bottom line.
CatKiller Jun 22
Quoting: DrMcCoy
The problem with Embracer is that they're not in the business of making games, making art. They're in the business of making money. They don't care how. They don't care about the games, apart from them being their current vehicle for making money.
That isn't true. It is true of Activision post Bobby "I want to take all the fun out of making games" Kotick. It wasn't even true of EA when they were still Electronic Arts, although it is true now.

Embracer is the built-up business of someone that started out selling second-hand comics. You could say that they aren't in the business of making games, but that's because getting a game over the finish line is hard and they haven't demonstrated much efficiency at getting that done, rather than from any lack of will. And they're currently in the business of getting out of a $2 billion hole, rather than firing people for S&Gs after a super profitable period like, say, Microsoft, or Ubisoft, or Take-Two, or any of the others that have done that exact thing.
Caldathras Jun 22
Quoting: JustinWood
There's an absolutely delightful blog post I read yesterday that I feel does a wonderful job of explaining why the AI hype is just grifters overpromising on what generative AI can do. Recommend folks take a look at it! https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/
Thanks for this. I am in complete agreement with the author of this blog post. I wish people would stop using the term "AI" to describe this technology. An LLM (Large Language Model) may be many things, but it is not true artificial intelligence. As I see it, it is, at best, a large database parser, nothing more.