Latest 30 Comments
	News -  Fedora Linux project agrees to allow AI-assisted contributions with a new policy
By grigi, 31 Oct 2025 at 12:39 pm UTC
It needs to change and have the source of data available. Right now it's basically a huge data-anonymising machine that can verbatim memorise and spit out someone else's work, but can't tell you where it got it from.
The "references" they generate are just things that look plausible for whatever they made up.
Look at the Kurzgesagt video on AI slop for reference.
For me, it's that the data they feed into these open-source-licensed models is still suspect. How do we know the data was sourced in accordance with its license? Much "free" information is provided on a free-for-personal-use, but not business-use, basis. How much of their costs have they externalised and not cared about? Did they include Wikipedia content without citing Wikipedia as the source? That's against the Wikipedia license, for example.
It's not just a privacy issue, it's also an ethical issue.
	The "references" they generate are just things that look like whatever they made up.
Look at the Kurzgesagt video on AI slop for reference.
For me it's that the data they feed into these opensource licensed models is still suspect. How do we know the data was sourced in respect to their license? Much "free" information is provided on a free for personal use, but not business use, basis. How much of their costs have they externalised and not cared about? Did they include wikipedia content without citing wikipedia as the source? That's against the wikipedia license, for example.
It's not just a privacy issue, it's also an ethical issue.
	News -  DRAGON QUEST I & II HD-2D Remake out now and Steam Deck Playable
By scaine, 31 Oct 2025 at 11:32 am UTC
Yeah, they lost a sale here, thanks to Denuvo. I never played the originals, but would love to have given this a try.
	News -  DRAGON QUEST I & II HD-2D Remake out now and Steam Deck Playable
By robvv, 31 Oct 2025 at 11:09 am UTC
For those who are (understandably) anti-DRM, the store page says that this release has Denuvo Anti-Tamper. This can cause problems if switching between Proton versions too many times.
	News -  Ubuntu getting optimisations for modern processors with architecture variants
By Linuxer, 31 Oct 2025 at 11:05 am UTC
Huh, the first comment:
> snaps are great, more secure way of doing and distributing software. sandboxed too
	News -  Civilization VII set for a big change to allow you to play as one civ continuously
By Musang, 31 Oct 2025 at 9:26 am UTC
After getting into Old World (Linux native on Steam!), I can never go back to Civ... The careful thought put into its design just isn't comparable to Civ anymore. It's also updated almost monthly with balance, UI and modding-support changes. For flashy features like a sci-fi setting or magic and heroes, I would consider other 4Xs, but for the core Civ-like experience, this is it for me...
For the last few months they've even been organizing the 2025 community tournament, an open tournament in a 1v1 format that has been followed and commentated by community members on YouTube/Twitch.
Truly the best in this format of games right now...
	News -  Ubuntu getting optimisations for modern processors with architecture variants
By Brokatt, 31 Oct 2025 at 7:55 am UTC
> The best thing one can do for performance on Ubuntu is to get rid of snaps.
> The snap version of Microsoft Edit was reported (by omgubuntu) to take 5 seconds to start on a modern system.
This is simply not true anymore and you need to update your sources. It was true maybe 5 years ago, but today the difference between native packages, Flatpaks and snaps is negligible if we're talking about startup after the first-time launch. Canonical has made very [nice](https://ubuntu.com/blog/snap-speed-improvements-with-new-compression-algorithm) [changes](https://ubuntu.com/blog/snap-startup-time-improvements) to snaps over the years.
In reality this article has very little to do with snaps. I would be very pleased with this change if I were still on Ubuntu.
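If you want to check claims like that on your own machine, here's a minimal timing sketch, assuming snapd is installed. `hello-world` is just a placeholder name; substitute whichever snap you care about (ideally something that exits on its own):

```python
#!/usr/bin/env python3
# Compare a snap's cold (first) vs. warm (second) launch time.
# Assumes the `snap` CLI is available; SNAP_APP is a placeholder name.
import subprocess
import time

SNAP_APP = "hello-world"  # substitute the snap you actually want to measure

def timed_run(cmd):
    # Run a command and return its wall-clock duration in seconds.
    start = time.perf_counter()
    subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start

cold = timed_run(["snap", "run", SNAP_APP])  # first launch: caches cold
warm = timed_run(["snap", "run", SNAP_APP])  # second launch: caches warm
print(f"cold: {cold:.2f}s  warm: {warm:.2f}s")
```

The warm number is the one that matters day to day, which is the point being made above.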
	News -  Bazzite using Fedora 43 is out now with full Xbox Ally / Xbox Ally X support
By Persephone the Sheep, 31 Oct 2025 at 7:55 am UTC
> and the new Bazaar is a joy, especially with those new pride progress bars
Just updated and oh my god there are so many pride flags, I love this so much.
	News -  Bazzite using Fedora 43 is out now with full Xbox Ally / Xbox Ally X support
By Persephone the Sheep, 31 Oct 2025 at 7:48 am UTC
@chickenb00
It's been effectively no issues on the Linux side. There is a KDE bug where sometimes, when switching from tablet to laptop mode, the cursor doesn't come back and requires a reset; that may have been fixed in this update, as I can't trigger it anymore. I have scaling set to 125% and have had only one program not play nice with the scaling (fuzzy text and a large cursor). No other issues I can think of with the software.
As for the laptop itself, I have the i5-1334U version and separately got 48 GB of Crucial RAM; first boot took some time and scared me. The laptop is loud, so I almost always have it in powersave mode because of this. I wish the battery life was a bit longer, though I've not tried it out and about, as I've been stuck at home, so maybe when I go back to school or work it will be fine. I don't think the microphone issue is a Framework/Linux thing, as on other laptops I've used, on Windows or Linux, the microphone can't be set to more than 30-40% volume or it will always be static. Those are my only complaints, really. Compared to my last laptop, which I'd had since 2018, the screen is much better on the Framework and it's way more repairable. All the expansion cards I've got have no issues: Ethernet, SD card and HDMI just work.
Everyone I've shown my laptop to really likes it. My dad loves how repairable it is, as he and I used to run a computer repair store. My mom likes it since it reduces waste and the need to get a whole new laptop. My sister is jealous that I have a pink/blue laptop. And my uncle/godfather, who works on cars in his free time, loves it for how repairable and modular it is.
	News -  Bazzite using Fedora 43 is out now with full Xbox Ally / Xbox Ally X support
By chickenb00, 31 Oct 2025 at 3:20 am UTC
@Persephone The Sheep
How do you like your Framework 12? And has it been a good experience using Linux and Bazzite?
	News -  Bazzite using Fedora 43 is out now with full Xbox Ally / Xbox Ally X support
By Persephone the Sheep, 31 Oct 2025 at 2:25 am UTC
Been using Bazzite on my Framework 12 since I got it and it has caused no issues; it was just a little weird to get a VPN on it.
I've also installed it on an old office PC my aunt got rid of since it couldn't update to Windows 11, so I made a living room PC. I had some spare graphics cards and power supplies from me and my friends upgrading. I started with an RX 580 4GB, but the case would trap hot air in the bottom, so I just switched to my Vega 56, which has a flow-through cooler and doesn't trap the air. The only issue is that HDR colors in game mode are wrong, but the living room TV doesn't have HDR, so that's not an issue.
I'm surprised at how well Sandy Bridge holds up; the i5-2320 gets 60fps or close to it in a lot of games: Stellar Blade 50-60fps, Digimon Story Time Stranger 60fps, Monster Hunter Rise 50-60fps, Scarlet Nexus 60-100fps. The LAVD CPU scheduler also helps a lot on this CPU.
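A quick way to confirm that a sched_ext scheduler such as LAVD is actually the one in charge: a rough sketch, assuming the sysfs layout recent sched_ext kernels expose (paths may differ on your kernel):

```python
#!/usr/bin/env python3
# Report whether a sched_ext BPF scheduler (e.g. LAVD) is currently active.
# Assumes the sysfs paths below, as exposed by recent sched_ext kernels.
from pathlib import Path

state = Path("/sys/kernel/sched_ext/state")   # "enabled" when a BPF scheduler is loaded
ops = Path("/sys/kernel/sched_ext/root/ops")  # name of the loaded scheduler

if state.exists() and state.read_text().strip() == "enabled":
    name = ops.read_text().strip() if ops.exists() else "unknown"
    print(f"sched_ext scheduler active: {name}")
else:
    print("running the default kernel scheduler")
```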
	News -  Resident Evil HD REMASTER and Resident Evil 0 now on GOG and in the Preservation Program
By eev, 31 Oct 2025 at 2:08 am UTC
Hell, this is actually pretty notable in my eyes, as I think it'd be the first time we'd get one of the Resident Evil remakes bundled with its original counterpart, even though this one is also pretty different from the rest.
I gave these a spin on the Steam Deck, so I'll give you a heads up: VSync messed up cutscenes, so I turned it off and used the regular framerate cap. I also got the FPS Fix for 0 that's listed on PCGamingWiki to get it to stick to 60FPS (though you can also just play at 30 if you want); this requires a DLL override. These issues seem common for the port overall, so no Linux-specific problems as far as my half hour or so of initial play is concerned.
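For anyone who hasn't done a Proton DLL override before, the usual pattern is to drop the fix's DLL next to the game's executable and then tell Wine to prefer it via the game's Steam launch options. A minimal sketch, where `dinput8` is only a placeholder for whatever DLL the fix actually ships:

```
WINEDLLOVERRIDES="dinput8=n,b" %command%
```

The `n,b` order tells Wine to try the native (dropped-in) DLL first and fall back to its builtin copy.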
	News -  Bazzite using Fedora 43 is out now with full Xbox Ally / Xbox Ally X support
By melkemind, 31 Oct 2025 at 12:06 am UTC
This will be great for all the people who end up with buyer's remorse after getting this "Xbox", which will gradually begin to inject Copilot into every aspect of it once they've lured in their victims.
	News -  The extraction shooter ARC Raiders is out and appears to work on Linux
By sickgrinder, 30 Oct 2025 at 10:12 pm UTC
Runs great on Tumbleweed. Sadly, the game I have been waiting on is way too boring; refunding this one.
	News -  Replicat is basically the classic card game snap meets Balatro - check out the demo
By Linux_Rocks, 30 Oct 2025 at 8:12 pm UTC
Aww, snap! Aww, snap!
Come to our macaroni party then we'll take a nap!
	News -  The extraction shooter ARC Raiders is out and appears to work on Linux
By Purple Library Guy, 30 Oct 2025 at 7:44 pm UTC
Is text-to-speech actually the same technology as generative AI? I feel like it should be a different thing, but I don't know.
	News -  Merge dogs to make bigger dogs in the delightfully silly roguelike deckbuilder Dogpile
By Nagezahn, 30 Oct 2025 at 7:09 pm UTC
> Looks like the demo is worth a try. Though I'm really wondering why what looks like a simple game (and a demo at that) requires 5 GB of space.
Quoting myself here. Checked it out again today, and the download was just shy of 300 MB now. What a change! Will try it out later.
	News -  Fedora Linux project agrees to allow AI-assisted contributions with a new policy
By Kimyrielle, 30 Oct 2025 at 7:05 pm UTC
> I think the question is more "When you ask a Large Language Model to 'write' you some code, where did that code come from and whose copyrights is it infringing?"
Well, from a purely technical point of view, the question is easy to answer: it made the code up, based on knowledge it gained from looking at other people's code. That's really all there is to it.
The legality of doing that is murky, as of today, mostly because traditional copyright law wasn't designed with AI training in mind. Keep in mind that no trace of the source material is left in the trained model, which puts the model weights outside of copyright law's reach. Several lawsuits have been filed arguing that AI training with copyrighted material is illegal. Every single one of them so far has been tossed out by the courts. In case you wonder: yes, Meta was found guilty of copyright infringement, but that wasn't about the training, it was about them torrenting books they used for the training.
Unless copyright law gets updated (I am not seeing anything in the pipes in any relevant jurisdiction), that leaves ethical considerations. And as we know, those are very much subjective.
The same applies to the actual output. To be copyrightable, a work needs to be human-created (anyone remember the famous case about the monkey selfie?). AI output is clearly not human-made, so the output is not copyrightable - and thus cannot be affected or bound by any kind of license. It's legally public domain.
The one issue is if the model accidentally or on purpose produces full replicas of copyrighted/trademarked material. Queen Elsa doesn't stop being copyrighted just because an AI model drew her. That's what's behind the case of Disney vs Midjourney - their model is trained on Disney's work and can reproduce it on prompt, which - since the outputs are technically distributed when the customer downloads them - could be a copyright violation. I do actually expect Disney to win this case, but let's see. In the end, it looks like a bigger issue than it is: people could make a replica of Disney IP by copy/pasting it, without the AI detour. The result will probably be API model providers having to block people from generating copyrighted/trademarked material. Most newer models I am aware of already aren't trained on specific artists, to prevent these issues.
	News -  Mesa 25.2.6 rolls out with more fixes for Intel GPUs, Zink and NVK
By Chinstrap , 30 Oct 2025 at 6:11 pm UTC
Has anyone tested how the NVK driver updates in this release work on the 50 series NVIDIA cards?
	News -  Bazzite using Fedora 43 is out now with full Xbox Ally / Xbox Ally X support
By dmacofalltrades, 30 Oct 2025 at 5:53 pm UTC
I adore Bazzite. I even switched to it on my non-gaming Dell XPS. Most of the setup I do on a new install is cooked into Bazzite out of the box. I've been very happy with this distro and I'm not switching anytime soon.
I'm also excited for the new donation options. I wanna support these developers as much as possible.
	News -  Fedora Linux project agrees to allow AI-assisted contributions with a new policy
By ivarhill, 30 Oct 2025 at 5:50 pm UTC
> I am honestly not sure what about MIT-licensed (Deepseek) or Apache 2.0 (Qwen) isn't free enough. Even OpenAI has an OSS model now, if you absolutely insist on it being Western-made (it's garbage, though).
I completely agree, within the context of the models themselves and the licenses they use.
There's way more to this though, both in terms of free software ideals and in terms of how to define LLMs. I think it would be fair to compare this to Microsoft's recent efforts in advancing WSL and OSS more broadly (very intentionally leaving out the FL there!) - after all, Microsoft has a lot of projects out there that theoretically adhere to open licenses and, in a purely practical sense, support the free software community.
However, if anyone within said community says "I'm choosing not to engage with any Microsoft-developed projects" I think almost everyone would understand why and find that reasonable even if one can find some projects that technically adhere to certain standards.
Within the LLM space, OpenAI is a good example of this as well. Sure, they provide models that by a particular definition are "open", but engaging with these models ignores the bigger context of how they came to be developed, who is furthering their development and through what means, and whether they actively strive to maximize user freedom.
And they absolutely do not - which is fine, this is more or less representative of the distinction between open source and free/libre software - but that is the metric by which I'm arguing here. I don't think it's enough to see "open source" LLMs, since that definition is purely practical in nature and ignores the bigger picture. What is really necessary is:
- Technology that has been developed through free software standards from a foundational level. This includes not only where the technology comes from and how it is controlled, but also addressing environmental concerns! An 'open source' project can ignore these things, but an honestly libre LLM technology has to address this before anything else.
- Models that have been developed entirely on top of these foundations, and through fully consenting use of data. Like the point before, this last matter has to be resolved before moving forward.
- And finally, open distribution where anyone is free to adapt, use, develop on and further these technologies. This is the step that I believe you are addressing, and it is very important - but far from the whole picture.
I'm of course not trying to just reiterate FSF talking points here - but in all honesty, this rise in LLMs and how they have been developed thus far really illustrates why it's important to draw a distinction between open source and free software, and why it matters to take a more holistic view.
By definition, a free/libre software approach implies caring about the user above the code, and there can be no free users if the code (directly or indirectly) contributes to a technocratic oligarchy or if there is no livable planet for users to live on. I get that this may seem a bit out of left field, but this has to be the main metric by which we look at LLMs, or very soon it will be too late to even attempt any genuinely libre approaches to this entire category of technology. These are the points that companies such as OpenAI, Microsoft or Google could never make the top priority, and why, even if they use open licenses, that well is poisoned by its very definition.
	News -  Fedora Linux project agrees to allow AI-assisted contributions with a new policy
By Purple Library Guy, 30 Oct 2025 at 5:46 pm UTC
I think the question is more "When you ask a Large Language Model to 'write' you some code, where did that code come from and whose copyrights is it infringing?"
	News -  Fedora Linux 43 has officially arrived
By Purple Library Guy, 30 Oct 2025 at 5:43 pm UTC
I really think you're misunderstanding the technology (specifically Large Language Model, generative "AI") and what it can do. The thing is that while there's a real thing there, and it can do some interesting things, it cannot actually do most of the transformational things that are claimed about it, and some of the key stuff that it supposedly does, it actually kind of doesn't. And while proponents will say sure, but it's a technology in its infancy . . . it actually isn't, and it seems to have plateaued in its capabilities.
So like, if it could actually be used to vastly increase coding productivity, then it would be here to stay in that function and perhaps nobody would be able to do anything about it. Firms that use it would outcompete firms that didn't and so on. But if it doesn't increase coding productivity, and there is significant evidence that actually it does not and may even reduce it, then it's mostly just a fad in that space.
Similarly for jobs--if the job is something like making third rate rehashed articles for publication, then yes, AI is disrupting that space. But most jobs require making decisions--not necessarily important decisions, but all those little day to day decisions, often in chains where making one little decision leads to the next one. And the AI people are touting AI "agents" to do this. If those worked, then there are a lot of jobs generative AI would be able to disrupt. But they don't, they're complete and utter crap at that stuff. And we're not talking like with asking AI questions where sometimes it will hallucinate and give a nonsense answer. We're talking tiny percentages of success, pretty much random level. Agents just don't work.
The implication of that is that AI just can't replace most jobs. Companies jumping on the hype wagon and doing it anyway will have problems. So there again, it's probably not "here to stay" in the "replacing ordinary jobs in organizations that do real things" sector.
Again, as for its continuing to improve . . . the thing is that what they did to make the big LLMs is not really based on very new ideas. People have been studying this stuff for a long time, and as far as I can make out LLMs are based on a fairly normal line of research that computer scientists had been thinking about for a while. It's just the first time someone really threw the big money at them and made the Language Models really Large. So it's not as infant a technology as it seems. Further, it shares a difficulty with all these sorts of software-evolution approaches: you can't iteratively improve them in the normal way because nobody wrote the program and nobody understands it. So you can't just go "I'll clean up this bit of code so it runs better", "I'll add this feature, it will be fairly easy because it hooks into the API I made just so that I'd be able to add features", "I'll fix this error" or that sort of thing, because you don't know what's in there. All you can do is train again with slightly different parameters and hope something good comes out. Or scrape even more data to make the Language Model even Larger, or curate it a bit differently. But they're about at the limits of size already. And they also seem to have hit the limits of what kind of thing LLMs are willing to do. They have an elephant; it is fine at walking, but to get it to do the transformative things they want, they need it to be able to climb, and that's not what elephants do. Even the hallucinations seem to be kind of baked into what gives the LLMs the ability to say lots of different things in the first place. At this point I think LLMs are a surprisingly mature technology for their apparent age, one that has hit a development plateau.
So bottom line, I think you're just simply wrong. Whether I wanted generative AI to replace everyone's job or not, it is not going to, and it may well not be "here to stay" even in some roles it's being used for at the moment. It's being used in those roles not because it is good at them, but because of the hype; if and when the hype goes away, its tide will recede.
Secondarily, generative AI as done by the big hyped Western companies is a bubble. Its costs are far greater than its revenue, and nobody seems to have any plans to change that very much. The key AI companies seem to be run by a bunch of Sam Bankman-Frieds. Hucksters and grifters who keep changing their stories to whatever sounds impressive in the moment. So those companies will go under, and when they go under and stop paying their staff and utility bills, all their server farms will stop answering queries. And when their server farms stop answering queries, the companies that had been using them won't be able to make queries. At that point, for those companies, generative AI will not be here to stay even if it was actually working for them. So in that limited sense, generative AI will not be here to stay. Although the Chinese stuff will still be going.
I expect in the future some other AI technology will come along that does more things and impacts more areas. But it will be another technology, not this one.
Self-driving cars are also a different technology entirely. Yes, they're both called "AI" even though they aren't really, and they're both kind of black boxes that get "trained" rather than being actual programs that anyone really understands or can maintain, but beyond that I don't think there's a ton of similarity. Self-driving cars also seemed to have a lot of promise, also turn out to actually kind of suck and also seem to have run into a technological plateau, so the grand plans for them have also stalled out rather, but they're a separate technology doing that pattern for separate reasons.
	News -  The extraction shooter ARC Raiders is out and appears to work on Linux
By mr-victory, 30 Oct 2025 at 5:36 pm UTC
Tech Test 2 of ARC Raiders was broken on Linux when it started; 2 days later The Finals also broke in a similar fashion. The bug was fixed before the Tech Test ended, but I'd still expect this game to break no less often than The Finals. Both games use the same tech backend.
	News -  Fedora Linux 43 has officially arrived
By tuubi, 30 Oct 2025 at 4:49 pm UTC
> Especially such stuff as "bureaucracy" is definitely handled much more efficiently in the future by an automated and versatile AI algorithm than by mentally tired, over-caffeinated office "workers".
Sure, and it'll happen as soon as we have the technology. LLM isn't it.
	News -  Fedora Linux project agrees to allow AI-assisted contributions with a new policy
By Kimyrielle, 30 Oct 2025 at 4:45 pm UTC
> to advance new and free technologies around LLMs and generative AI that actually respects these ideals
I am honestly not sure what about MIT-licensed (Deepseek) or Apache 2.0 (Qwen) isn't free enough. Even OpenAI has an OSS model now, if you absolutely insist on it being Western-made (it's garbage, though).
	News -  Ubuntu getting optimisations for modern processors with architecture variants
By Tuxee, 30 Oct 2025 at 4:10 pm UTC
> The best thing one can do for performance on Ubuntu is to get rid of snaps.
Why should this improve performance? It improves first-time launch time; that's all. I suppose your MS Edit application starts within milliseconds when started a second time on a running modern system. Extremely long startup times are a problem of the packager, not the technology itself; if MS Edit takes five seconds on first start, that says more about the competence of the packager. Besides, I just gave it a try: the initial launch of version 1.2.1 took a fraction of a second on my system.
Once the Blender snap is up, I still might be interested in faster code on my Zen 4 architecture.
	News -  The extraction shooter ARC Raiders is out and appears to work on Linux
By Xpander, 30 Oct 2025 at 4:05 pm UTC
The Server Slam playtest worked flawlessly for me, with great performance (epic settings, static lighting, DLSS Quality, 2560x1440, 100+ FPS with a 5800X3D/RTX 3080). Great to hear it still works at launch. I'll keep my eye on it, let the dust settle a bit, and then probably grab it too.
	News -  The big Crusader Kings III: All Under Heaven expansion is out
By Psyringe, 30 Oct 2025 at 3:34 pm UTC
Couldn't be happier! Good job Paradox!
	News -  The extraction shooter ARC Raiders is out and appears to work on Linux
By BigRob029, 30 Oct 2025 at 2:58 pm UTC
I have been LOVING The Finals on Linux, but I certainly worry about the future. I have been following ARC Raiders since the initial trailer, so I can't help but pick it up. However, their unresponsiveness to your emails is also disappointing... a huge YouTube creator campaign, interviews at TwitchCon, but a popular press blog can't get any love?
Hopefully putting up some cash will help move the needle on some spreadsheet somewhere to show how powerful and passionate Linux gamers are/can be.
	News -  Ubuntu getting optimisations for modern processors with architecture variants
By The_Real_Bitterman, 30 Oct 2025 at 2:30 pm UTC
Nice to see more distros moving towards optimized builds for the various x86 feature levels, so Windows can be left behind in the dust even quicker, considering how badly Windows already performs compared to Linux even without optimized CPU architecture packages.
Even though Linus hates this "abomination" of feature levels, as they are not really a thing from a CPU architecture point of view: some CPUs expose only some "v3" features, and some with hybrid cores even have "v3" only on one set of cores while the efficiency cores, for example, don't.
Anyway: hope that in the future Ubuntu automatically installs the v3-optimized packages on eligible hardware, as Tumbleweed does. (Yes, that is the whole reason I wrote this comment: to say TW already does it.)
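If you're wondering which of these feature levels your own CPU qualifies for, glibc (2.33 and later) will tell you: its dynamic loader lists the supported x86-64 microarchitecture levels in its --help output. A small sketch; the loader path is the usual one on x86-64 distros but may differ on yours:

```python
#!/usr/bin/env python3
# Print which x86-64 microarchitecture levels (v2/v3/v4) this CPU supports,
# according to the glibc dynamic loader (glibc >= 2.33 lists them in --help).
import subprocess

LOADER = "/lib64/ld-linux-x86-64.so.2"  # common path; may differ per distro

out = subprocess.run([LOADER, "--help"], capture_output=True, text=True).stdout
for line in out.splitlines():
    if "x86-64-v" in line:
        print(line.strip())  # e.g. "x86-64-v3 (supported, searched)"
```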