Latest 30 Comments
News - Replicat is basically the classic card game snap meets Balatro - check out the demo
By Linux_Rocks, 30 Oct 2025 at 8:12 pm UTC
Aww, snap! Aww, snap!
Come to our macaroni party then we'll take a nap!
News - The extraction shooter ARC Raiders is out and appears to work on Linux
By Purple Library Guy, 30 Oct 2025 at 7:44 pm UTC
Is text-to-speech actually the same technology as generative AI? I feel like it should be a different thing, but I don't know.
News - Merge dogs to make bigger dogs in the delightfully silly roguelike deckbuilder Dogpile
By Nagezahn, 30 Oct 2025 at 7:09 pm UTC
Looks like the demo is worth a try. Though I'm really wondering why what looks like a simple game (and a demo at that) requires 5 GB of space.
Quoting myself here. Checked it out again today, and the download was just shy of 300 MB now. What a change! Will try it out later.
News - Fedora Linux project agrees to allow AI-assisted contributions with a new policy
By Kimyrielle, 30 Oct 2025 at 7:05 pm UTC
I think the question is more "When you ask a Large Language Model to 'write' you some code, where did that code come from and whose copyrights is it infringing?"
Well, from a purely technical point of view, the question is easy to answer: It made the code up, based on knowledge it gained from looking at other people's code. That's really all there is to it.
The legality of doing that is murky, as of today, mostly because traditional copyright law wasn't designed with AI training in mind. Keep in mind that no trace of the source material is left in the trained model, which puts the model weights outside of copyright law's reach. Several lawsuits have been filed arguing that AI training on copyrighted material is illegal. Every single one of them so far has been tossed out by courts. In case you're wondering: yes, Meta was found guilty of copyright infringement, but that wasn't about the training, it was about them torrenting the books they used for the training.
Unless copyright law gets updated (I am not seeing anything in the pipeline in any relevant jurisdiction), that leaves ethical considerations. And as we know, those are very much subjective.
The same applies to the actual output. To be copyrightable, a work needs to be human-created (anyone remember the famous case about the monkey selfie?). AI output is clearly not human-made, so the output is not copyrightable - and thus cannot be affected or bound by any kind of license. It's legally public domain.
The one issue is when the model, accidentally or on purpose, produces full replicas of copyrighted/trademarked material. Queen Elsa doesn't stop being copyrighted just because an AI model drew her. That's what's behind the Disney vs Midjourney case - their model is trained on Disney's work and can reproduce it on prompt, which - since the outputs are technically distributed when the customer downloads them - could be a copyright violation. I do actually expect Disney to win this case, but let's see. In the end, it looks like a bigger issue than it is. People could make a replica of Disney IP by copy/pasting it, without the AI detour. The result will probably be API model providers having to block people from generating copyrighted/trademarked material. Most newer models I am aware of already aren't trained on specific artists, to prevent these issues.
News - Mesa 25.2.6 rolls out with more fixes for Intel GPUs, Zink and NVK
By Chinstrap, 30 Oct 2025 at 6:11 pm UTC
Anyone tested out how the NVK Driver updates work with this update on the 50 series Nvidia cards?
News - Bazzite using Fedora 43 is out now with full Xbox Ally / Xbox Ally X support
By dmacofalltrades, 30 Oct 2025 at 5:53 pm UTC
I adore Bazzite. I even switched to it on my non-gaming Dell XPS. Most of the setup I do on a new install is cooked into Bazzite out of the box. I've been very happy with this distro and I'm not switching anytime soon.
I'm also excited for the new donation options. I wanna support these developers as much as possible.
News - Fedora Linux project agrees to allow AI-assisted contributions with a new policy
By ivarhill, 30 Oct 2025 at 5:50 pm UTC
I am honestly not sure what about MIT-licensed (Deepseek) or Apache 2.0 (Qwen) isn't free enough. Even OpenAI has an OSS model now, if you absolutely insist on it being Western-made (it's garbage, though).
I completely agree, within the context of the models themselves and the licenses they use.
There's way more to this though, both in terms of free software ideals and in terms of how to define LLMs. I think it would be fair to compare this to Microsoft's recent efforts in advancing WSL and OSS more broadly (very intentionally leaving out the FL there!) - after all, Microsoft has a lot of projects out there that theoretically adhere to open licenses and in a purely practical sense support the free software community.
However, if anyone within said community says "I'm choosing not to engage with any Microsoft-developed projects" I think almost everyone would understand why and find that reasonable even if one can find some projects that technically adhere to certain standards.
Within the LLM space, OpenAI is a good example of this as well. Sure, they provide models that by a particular definition are "open", but engaging with these models ignores the bigger context of how they came to be developed, who is furthering their development and through what means, and whether they actively strive to maximize user freedom.
And they absolutely do not - which is fine, this is more or less representative of the distinction between open source and free/libre software - but that is the metric by which I'm arguing here. I don't think it's enough to see "open source" LLMs, since that definition is purely practical in nature and ignores the bigger picture. What is really necessary is:
- Technology that has been developed through free software standards from a foundational level. This includes not only where the technology comes from and how it is controlled, but also addressing environmental concerns! An 'open source' project can ignore these things, but an honestly libre LLM technology has to address this before anything else.
- Models that have been developed entirely on top of these foundations, and through fully consenting use of data. Like the point before, this last matter has to be resolved before moving forward.
- And finally, open distribution where anyone is free to adapt, use, develop on and further these technologies. This is the step that I believe you are addressing, and it is very important - but far from the whole picture.
I'm of course not trying to just reiterate FSF speaking points here - but in all honesty, this rise in LLMs and how they have been developed thus far I think really illustrates why it's important to draw a distinction between open source and free software, and why it matters to take a more holistic view.
By definition, a free/libre software approach implies caring about the user above the code, and there can be no free users if the code (directly or indirectly) contributes to a technocratic oligarchy or if there is no livable planet for users to live on. I get that this may seem a bit out of left field, but this has to be the main metric by which we look at LLMs or very soon it will be too late to even attempt any genuinely libre approaches to this entire category of technology. These are the points that companies such as OpenAI, Microsoft or Google could never make the top priority, and why even if they use open licenses, that well is poisoned by its very definition.
News - Fedora Linux project agrees to allow AI-assisted contributions with a new policy
By Purple Library Guy, 30 Oct 2025 at 5:46 pm UTC
I think the question is more "When you ask a Large Language Model to 'write' you some code, where did that code come from and whose copyrights is it infringing?"
News - Fedora Linux 43 has officially arrived
By Purple Library Guy, 30 Oct 2025 at 5:43 pm UTC
I really think you're misunderstanding the technology (specifically Large Language Model generative "AI") and what it can do. The thing is that while there's a real thing there, and it can do some interesting things, it cannot actually do most of the transformational things that are claimed about it, and some of the key stuff that it supposedly does, it actually kind of doesn't. And while proponents will say sure, but it's a technology in its infancy . . . it actually isn't, and it seems to have plateaued in its capabilities.
So like, if it could actually be used to vastly increase coding productivity, then it would be here to stay in that function and perhaps nobody would be able to do anything about it. Firms that use it would outcompete firms that didn't and so on. But if it doesn't increase coding productivity, and there is significant evidence that actually it does not and may even reduce it, then it's mostly just a fad in that space.
Similarly for jobs--if the job is something like making third rate rehashed articles for publication, then yes, AI is disrupting that space. But most jobs require making decisions--not necessarily important decisions, but all those little day to day decisions, often in chains where making one little decision leads to the next one. And the AI people are touting AI "agents" to do this. If those worked, then there are a lot of jobs generative AI would be able to disrupt. But they don't, they're complete and utter crap at that stuff. And we're not talking like with asking AI questions where sometimes it will hallucinate and give a nonsense answer. We're talking tiny percentages of success, pretty much random level. Agents just don't work.
The implication of that is that AI just can't replace most jobs. Companies jumping on the hype wagon and doing it anyway will have problems. So there again, it's probably not "here to stay" in the "replacing ordinary jobs in organizations that do real things" sector.
Again, as for its continuing to improve . . . the thing is that what they did to make the big LLMs is not really based on very new ideas. People have been studying this stuff for a long time, and as far as I can make out LLMs are based on a fairly normal line of research that computer scientists had been thinking about for a while. It's just the first time someone really threw the big money at them and made the Language Models really Large. So it's not as infant a technology as it seems. Further, it shares a difficulty with all these sorts of software-evolution approaches: You can't iteratively improve them in the normal way because nobody wrote the program and nobody understands it. So you can't just go "I'll clean up this bit of code so it runs better", "I'll add this feature, it will be fairly easy because it hooks into the API I made just so that I'd be able to add features", "I'll fix this error" or that sort of thing, because you don't know what's in there. All you can do is train again with slightly different parameters and hope something good comes out. Or scrape even more data to make the Language Model even Larger, or curate it a bit differently. But they're about at the limits of size already. And they also seem to have hit the limits of what kind of thing LLMs are willing to do. They have an elephant; it is fine at walking, but to get it to do the transformative things they want, they need it to be able to climb, and that's not what elephants do. Even the hallucinations seem to be kind of baked into what gives the LLMs the ability to say lots of different things in the first place. At this point I think LLMs are a surprisingly mature technology for their apparent age, one which has hit a development plateau.
So bottom line, I think you're just simply wrong. Whether I wanted generative AI to replace everyone's job or not, it is not going to, and it may well not be "here to stay" even in some roles it's being used for at the moment. It's being used in those roles not because it is good at them, but because of the hype; if and when the hype goes away, its tide will recede.
Secondarily, generative AI as done by the big hyped Western companies is a bubble. Its costs are far greater than its revenue, and nobody seems to have any plans to change that very much. The key AI companies seem to be run by a bunch of Sam Bankman-Frieds. Hucksters and grifters who keep changing their stories to whatever sounds impressive in the moment. So those companies will go under, and when they go under and stop paying their staff and utility bills, all their server farms will stop answering queries. And when their server farms stop answering queries, the companies that had been using them won't be able to make queries. At that point, for those companies, generative AI will not be here to stay even if it was actually working for them. So in that limited sense, generative AI will not be here to stay. Although the Chinese stuff will still be going.
I expect in the future some other AI technology will come along that does more things and impacts more areas. But it will be another technology, not this one.
Self-driving cars are also a different technology entirely. Yes, they're both called "AI" even though they aren't really, and they're both kind of black boxes that get "trained" rather than being actual programs that anyone really understands or can maintain, but beyond that I don't think there's a ton of similarity. Self-driving cars also seemed to have a lot of promise, also turn out to actually kind of suck and also seem to have run into a technological plateau, so the grand plans for them have also stalled out rather, but they're a separate technology doing that pattern for separate reasons.
News - The extraction shooter ARC Raiders is out and appears to work on Linux
By mr-victory, 30 Oct 2025 at 5:36 pm UTC
Tech Test 2 of ARC Raiders was broken on Linux when it started; 2 days later The Finals also broke in a similar fashion. The bug was fixed before the Tech Test ended, but I'd still expect this game to break no less often than The Finals. Both games use the same tech backend.
News - Fedora Linux 43 has officially arrived
By tuubi, 30 Oct 2025 at 4:49 pm UTC
Especially such stuff as "bureaucracy" is definitely handled much more efficiently in the future by an automated and versatile AI algorithm than by mentally tired, over-caffeinated office "workers".
Sure, and it'll happen as soon as we have the technology. LLM isn't it.
News - Fedora Linux project agrees to allow AI-assisted contributions with a new policy
By Kimyrielle, 30 Oct 2025 at 4:45 pm UTC
to advance new and free technologies around LLMs and generative AI that actually respects these ideals
I am honestly not sure what about MIT-licensed (Deepseek) or Apache 2.0 (Qwen) isn't free enough. Even OpenAI has an OSS model now, if you absolutely insist on it being Western-made (it's garbage, though).
News - Ubuntu getting optimisations for modern processors with architecture variants
By Tuxee, 30 Oct 2025 at 4:10 pm UTC
The best thing one can do for performance on Ubuntu is to get rid of snaps.
Why should this improve performance? It improves first-time launch time. That's all. I suppose your MS Edit application starts within milliseconds when started a second time on a running modern system. Extremely long startup times are a problem of the packager, not the technology itself. If MS Edit takes five seconds for the first start, that says more about the competence of the packager. Besides, I just gave it a try: the initial launch of version 1.2.1 took a fraction of a second on my system.
Once the Blender snap is up I still might be interested in faster code on my Zen 4 architecture.
News - The extraction shooter ARC Raiders is out and appears to work on Linux
By Xpander, 30 Oct 2025 at 4:05 pm UTC
Server Slam playtest worked flawlessly without any issues for me and with great performance (epic settings, static lighting, DLSS Quality, 2560x1440, 100+ FPS with a 5800X3D/RTX 3080). Great to hear it still works at launch. I will keep my eye on it, let the dust settle a bit, and then probably grab it too.
News - The big Crusader Kings III: All Under Heaven expansion is out
By Psyringe, 30 Oct 2025 at 3:34 pm UTC
Couldn't be happier! Good job Paradox!
News - The extraction shooter ARC Raiders is out and appears to work on Linux
By BigRob029, 30 Oct 2025 at 2:58 pm UTC
I have been LOVING The Finals on Linux, but I certainly worry about the future. I have been following ARC Raiders since the initial trailer so I can't help but pick it up. However, their unresponsiveness to your emails is also disappointing... a huge YouTube creators campaign, interviews at TwitchCon, but a popular press blog can't get any love?
Hopefully putting up some cash will help move the needle on some spreadsheet somewhere to show how powerful and passionate Linux gamers are/can be.
News - Ubuntu getting optimisations for modern processors with architecture variants
By The_Real_Bitterman, 30 Oct 2025 at 2:30 pm UTC
Nice to see more distros moving towards optimized builds for various x86 feature levels, so Windows can be left behind in the dust even quicker, considering how badly Windows already performs compared to Linux even without optimized CPU architecture packages.
Even though Linus hates this "abomination" of feature levels, as they are not really a thing from a CPU architecture point of view: some CPUs expose only some of the "v3" features, and some with hybrid cores even have "v3" only on one set of cores while the efficiency cores, for example, don't.
Anyway: I hope that in the future Ubuntu automatically installs those v3-optimized packages on eligible hardware as Tumbleweed does. (Yes, that is the whole reason I wrote this comment: to say TW already does it.)
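A minimal illustrative sketch in C (not from the comment above, and not Ubuntu's actual tooling) of how a program can probe at runtime a few of the instruction set extensions that the x86-64-v3 level requires, using the __builtin_cpu_supports builtin provided by GCC and Clang:

/* Illustrative sketch: check whether this CPU offers a few of the
 * instruction set extensions that the x86-64-v3 level requires.
 * Build with GCC or Clang on x86-64: cc -o check_v3 check_v3.c */
#include <stdio.h>

static void report(const char *name, int supported)
{
    printf("%-4s : %s\n", name, supported ? "yes" : "no");
}

int main(void)
{
    __builtin_cpu_init();  /* populate the runtime CPU feature data */

    /* A subset of the x86-64-v3 requirements; the full level also
     * includes BMI1, F16C, LZCNT, MOVBE and OSXSAVE on top of v2. */
    int avx  = __builtin_cpu_supports("avx");
    int avx2 = __builtin_cpu_supports("avx2");
    int bmi2 = __builtin_cpu_supports("bmi2");
    int fma  = __builtin_cpu_supports("fma");

    report("avx", avx);
    report("avx2", avx2);
    report("bmi2", bmi2);
    report("fma", fma);

    printf("x86-64-v3 (subset check): %s\n",
           (avx && avx2 && bmi2 && fma) ? "yes" : "no");
    return 0;
}

For reference, glibc performs a similar check when deciding whether to load libraries from its glibc-hwcaps/x86-64-v3 directories, which is the same general idea the package-level architecture variants build on.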
News - The extraction shooter ARC Raiders is out and appears to work on Linux
By Werner, 30 Oct 2025 at 2:07 pm UTC
I played the playtest and I really liked the atmosphere, but I will wait some months to see if they break it for Linux.
News - Ubuntu getting optimisations for modern processors with architecture variants
By emphy, 30 Oct 2025 at 1:48 pm UTC
For those wanting to squeeze every possible bit of performance out of their machines, this sounds like a nice upgrade for Canonical to be working on.
The best thing one can do for performance on Ubuntu is to get rid of snaps.
The snap version of Microsoft Edit was reported (by omgubuntu) to take 5 seconds to start on a modern system.
Let me repeat that: 5 seconds to start *edit* because of a flagship tech.
News - Valve fix some games on Linux / Steam Deck having incorrect Steam Play settings and added a chat warning
By Stella, 30 Oct 2025 at 1:25 pm UTC
I think I ran into the incorrect Steam Play thing when I purchased Forza Horizon 5. Right after purchase it would tell me 'only available on Windows' and would refuse to install it; this was fixed after a restart.
News - Minecraft Java modding is about to get a lot easier and more interesting
By walther von stolzing, 30 Oct 2025 at 1:18 pm UTC
Nice; Minecraft itself started as a mod on top of Zachtronics's Infiniminer.
News - Bazzite using Fedora 43 is out now with full Xbox Ally / Xbox Ally X support
By Stella, 30 Oct 2025 at 1:15 pm UTC
What an amazing update! I already updated everything to Bazzite 43 (AMD Desktop, Nvidia Laptop, Asus Ally X) and it works extremely well. Nothing seems broken and the new Bazaar is a joy, especially with those new pride progress bars
This is the best experience I've ever had with upgrading across OS versions, compared to something like Kubuntu which would always partially break with upgrades. I already donated to them on their new OpenCollective page because Bazzite has made my Linux gaming experience infinitely better and I'm grateful for that.
News - Resident Evil HD REMASTER and Resident Evil 0 now on GOG and in the Preservation Program
By mrdeathjr, 30 Oct 2025 at 1:02 pm UTC
oh my
https://i.postimg.cc/bNdVY5Kj/get.gif

News - The extraction shooter ARC Raiders is out and appears to work on Linux
By mZSq7Fq3qs, 30 Oct 2025 at 12:29 pm UTC
I would like to buy this but I am sure that they will anticheat it away...
News - Fedora Linux 43 has officially arrived
By dziadulewicz, 30 Oct 2025 at 12:05 pm UTC
Wild response or not, the whole concept of "work life" is about to change. GenAI, AI, whatever AI. We're talking about AI, yes.
It's obvious that AI as a tool frees up time already. Whether people turn it into pure free time is of course debatable. Inside many modern people there is this programmed (early in life) code: "gotta gotta gotta".
There won't be human bus and cab drivers for much longer in "developed countries". AI can be an extension of our own goals and even history. The modern brain can only take so much, and the information flood has made the masses quite exhausted mentally, whether noticed or not. For example, ancient hieroglyphs and forgotten languages have been deciphered by AI in moments, whereas the human way would have taken years or even decades to reach the same result (to make them readable for modern people). Factories? Certainly no need for "machine maintainers and button pushers" for much longer. It is "sad" that people lose these jobs, but it has happened before. They just have to find something else to make that buck (I suggested humanitarian work all around the world).
Also, my message got "freaked out" at right off the bat, as did ssj17vegeta's (though he talked about coding, same essence: human coding can be "replaced" (read: aided)). This freaking out happens because this is a sensitive subject, and scary to those who assume that the world will go on unchanged along its lines for eternity. The truth is that people are simply just not "needed" in many areas of mechanical society anymore. Simple things and "jobs" automate increasingly and "just work".
Also hey: we're not talking about the very now; this is just the beginning of "progress", if someone wants to call it that. It is here to stay and indeed is irreversible. We (or some of us) did this to ourselves, to affect the whole human collective. Now, at this very beginning, we can already see a huge impact:
Amazon is laying off approximately 14,000 corporate employees as part of organizational changes aimed at reducing bureaucracy and reallocating resources, particularly towards AI initiatives.
Especially such stuff as "bureaucracy" is definitely handled much more efficiently in the future by an automated and versatile AI algorithm than by mentally tired, over-caffeinated office "workers". Why are they sitting there all day every day, indeed wasting their life, in the first place? I do see this as a chance to get more free time available, if individuals even aim for that. The monetary system itself, then again, is a whole 'nother matter (problem). Natural resources (including those which are considered scarce but, after research, definitely are not) are there with or without our printed money or screen money, you know.
News - OpenRazer expands Razer device support with new hardware for Linux users
By AsciiWolf, 30 Oct 2025 at 11:28 am UTC
OpenRazer is great, but sadly still unusable in immutable systems because of the very custom udev rules and other system-wide changes needed.
News - Minecraft Java modding is about to get a lot easier and more interesting
By hardpenguin, 30 Oct 2025 at 11:13 am UTC
Very nice!
* continues to play Luanti with VoxeLibre *
News - Indiana Jones and the Great Circle is now on GOG
By emphy, 30 Oct 2025 at 8:11 am UTC
My interest in this game has plummeted like a cow's tail since the Steam release. Not sure if it's the excitement around it having died down or whether the recent Microsoft negativity caused it, but where I might have purchased it at the first sale opportunity if it had released on GOG back on launch day, I am simply meh-ing at the news now.
Think I will brush up on my GOG version of Last Crusade instead. I am astonished to find out it even exists, and still wondering how I missed it back in the day and how it got into my library.
News - The excellent city-builder Timberborn is approaching the 1.0 release
By emphy, 30 Oct 2025 at 8:00 am UTC
Oof, forgot I already have Timberborn in my library. Good opportunity to have a more extensive gander at it.
Though, and I know this is silly, them beavers walking on their hind legs when carrying loads is bugging me for some reason. I think I would have preferred some more leaning into quadruped aesthetics.
News - Minecraft Java modding is about to get a lot easier and more interesting
By tonitch, 30 Oct 2025 at 6:50 am UTC
I vaguely remember that Notch selling the game came with conditions about the game, and that's always how I understood why Java is not yet a pay-to-play game like Bedrock. And if I remember correctly there was something about Java staying the main game... I might be wrong though.
- Indiana Jones and the Great Circle will perform better on AMD GPUs with Mesa 26
- Cronos: The New Dawn now has a demo available on Steam
- From former Telltale Games veterans, Dispatch is out and Steam Deck Verified
- Inspired by 1930s cartoons, MOUSE: P.I. For Hire set for launch in 2026
- Roman city-builder Nova Roma from the devs of Kingdoms & Castles arrives in January
How to setup OpenMW for modern Morrowind on Linux / SteamOS and Steam Deck
How to install Hollow Knight: Silksong mods on Linux, SteamOS and Steam Deck