
NVIDIA today released a big new stable driver for Linux, version 450.57, which pulls in a whole bunch of features from the recent 450.51 Beta.

Compared with the Beta, it looks mostly the same plus a few extra fixes. However, it's worth a reminder now that it's stable, because everyone should be able to upgrade knowing it's a supported driver version. NVIDIA 450.57 is exciting for a few reasons, one of which is the inclusion of support for NVIDIA NGX, which brings things like DLSS to their Linux drivers.

There's also Image Sharpening support for OpenGL and Vulkan, support for Vulkan direct-to-display on DisplayPort displays connected via DisplayPort Multi-Stream Transport (DP-MST), and various VDPAU improvements. On the PRIME side there are enhancements too, like support for PRIME Synchronization when using displays driven by the xf86-video-amdgpu driver as PRIME display offload sinks, along with "Reverse PRIME" support.
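
If you want to poke at the PRIME display offload side yourself, the usual tool is xrandr. Here's a minimal sketch, assuming an X session where the NVIDIA GPU does the rendering and the amdgpu-driven display acts as the sink (the provider names below are examples, so check what your system actually reports):

    # See which render/display providers the X server knows about and their names
    xrandr --listproviders

    # Use the amdgpu-driven provider as an output sink for the NVIDIA GPU
    # (replace <amdgpu-provider> with the name reported by --listproviders)
    xrandr --setprovideroutputsource <amdgpu-provider> NVIDIA-0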

On the bug fix side, one of the big ones is that it should be a smoother Wayland experience, as NVIDIA fixed a bug that could cause a KDE Plasma session to crash when running under Wayland. They also fixed a bug that prevented X11 EGL displays from being reinitialized. Another KDE issue was solved too: after some investigation, the NVIDIA team found that KDE panels freezing when compositing was disabled was a problem in their driver, so that was fixed as well.

See the release notes here.


damarrin Jul 10, 2020
  • Supporter Plus
The thing I’m missing from nvidia right now is async reprojection support so I can play HL Alyx at high settings.

The thing I’m missing from any and all AMD-gfx systems I have (2 of them) is being able to boot every time I turn on the computer and being able to keep using it after I’ve booted.

Decisions, decisions.


Last edited by damarrin on 10 July 2020 at 6:33 am UTC
jens Jul 10, 2020
  • Supporter
There is also a new version of the driver out from the Vulkan Development branch: version 450.56.01, see
https://developer.nvidia.com/vulkan-driver

Very tempting to install this one from the installer, though I'm afraid that removing it and going back to a driver from the repositories might fail. Does anyone have experience with that? Fedora 32 here.
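
For what it's worth, the .run installer does ship an uninstall mode, so going back to the packaged driver is usually doable. A rough sketch, assuming the RPM Fusion packaging on Fedora (no guarantee it leaves everything perfectly clean, so treat it as a starting point rather than gospel):

    # Remove the driver that the NVIDIA .run installer put in place
    sudo nvidia-installer --uninstall

    # Then reinstall the distribution-packaged driver, e.g. from RPM Fusion on Fedora
    sudo dnf install akmod-nvidia
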
Shmerl Jul 10, 2020
I'm quite excited about the future of DLSS because lots of other industries are noticing the huge benefits to such technology

I'm not. It's just a way to work around the lack of compute power that's needed for higher resolutions, and as usual it's overhyped by Nvidia. In practice, DLSS can't replace proper GPU compute units and if your card is lower end, your quality will be lower, DLSS or not.

To put it differently, DLSS is just another marketing gimmick, not a technology that can actually improve game quality.

Instead of overhyped gimmicks, GPU makers should work on improving compute power if they are so insistent that higher resolutions are necessary. Otherwise it makes sense to stick to resolutions on which image quality can be properly maintained.


Last edited by Shmerl on 10 July 2020 at 7:10 am UTC
TheRiddick Jul 10, 2020
To put it differently, DLSS is just another marketing gimmick, not a technology that can actually improve game quality.

Have you even tried DLSS 2.0? (and 3.0 down the pipe will be better)

Your comments strongly suggest you've only ever read about it or seen a very early version of it. It's NOT a gimmick and, like I said, many industries are moving towards using lower resolution images enhanced to higher resolutions, with impressive results.

And it does improve quality. Again, you're clearly someone who has not been keeping up to date with DLSS and other similar deep learning image enhancement techs (basically the same as DLSS but without the NVIDIA DRM).

Instead of over-hyped gimmicks (not), GPU makers should work on improving compute power.

Why not do both? MEME

Why waste compute power? games don't need loads of GPU compute power, ray tracing uses some of the compute potential, still room under the hood, use it.


Last edited by TheRiddick on 10 July 2020 at 7:40 am UTC
Eike Jul 10, 2020
  • Supporter Plus
In their own way Nvidia's Linux support is pretty good, and I've been mostly trouble free. There are however a few things which affect me and that I also dislike, namely having to re-install the driver after a kernel update (and I do those a lot), then there's the occasional issue with the driver refusing to install on the latest kernels (my Nvidia system is still at kernel 5.5.19 because of that).

The driver adapts automatically to a new kernel for me (thanks to DKMS I think).

*edit* I'm running kernel 5.6.14 on Debian Buster with backports.


Last edited by Eike on 10 July 2020 at 7:45 am UTC
dubigrasu Jul 10, 2020
So I've seen what NGX can do and the results are really stunning, mind-blowing even. Definitely a leap forward, at least from my amateur/consumer point of view.
But I wonder what this "support" means for us (Linux). AFAIK all the available software was Windows only. Granted, I didn't look too much into it.
Is there any functionality (as of right now) available for Linux, or at least planned? I mean, the support was implemented for a reason, no?
Luke_Nukem Jul 10, 2020
In their own way Nvidia's Linux support is pretty good, and I've been mostly trouble free. There are however a few things which affect me and that I also dislike, namely having to re-install the driver after a kernel update (and I do those a lot), then there's the occasional issue with the driver refusing to install on the latest kernels (my Nvidia system is still at kernel 5.5.19 because of that).

The driver adapts automatically to a new kernel for me (thanks to DKMS I think).

*edit* I'm running kernel 5.6.14 on Debian Buster with backports.

DKMS will rebuild the driver from module source when required. This means that on rebooting to a new kernel, the first boot of that kernel may take a little longer while the module is rebuilt. Although Ubuntu and derivatives seem to rebuild all DKMS modules when the new kernel is installed rather than at boot (I don't know about other distros).
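
If you want to see what DKMS is doing, something along these lines works on most distros. The module name/version string depends on how your distro packages the driver, so "nvidia/450.57" below is only an example:

    # Show which module versions DKMS has built, and for which kernels
    dkms status

    # Rebuild/install the nvidia module for the running kernel by hand if needed
    # ("nvidia/450.57" is an example, use whatever `dkms status` reports)
    sudo dkms install nvidia/450.57 -k "$(uname -r)"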

I've been running the Nvidia drivers on Ubuntu through manual install for a while now, and if there's one thing I can say it's that Nvidia has been painless to install since... I dunno, 2002? That's when I started running Linux and Nvidia anyway - ATi was absolute bollocks on Linux then.
Shmerl Jul 10, 2020
Have you even tried DLSS 2.0? (and 3.0 down the pipe will be better)

Your comments strongly suggest you've only ever read about it

I don't have Nvidia cards. And yes, I've read various reviews which confirm what is really self-explanatory. There is no magic replacement for compute power. If you increase resolution, the reconstructed image can only be an approximation, no matter how much machine learning you throw at it. That's just how it works by definition.

So it doesn't increase quality, it only tries to mask its degradation due to the compute power of the card not being adequate for a given resolution. That's a fake approach to quality and not something anyone should be cheering for. I'd stick to the approach of matching a given resolution with the required compute power, not gimmicks that degrade it.

Why waste compute power? games don't need loads of GPU compute power, ray tracing uses some of the compute potential, still room under the hood, use it.

Because I'm not buying some koolaid posing as quality.

Games that need more compute power can run at reasonable resolutions. Not in some overstretched mode with worse quality.


Last edited by Shmerl on 10 July 2020 at 8:09 am UTC
herbert Jul 10, 2020
Have you even tried DLSS 2.0? (and 3.0 down the pipe will be better)

Your comments strongly suggest you've only ever read about it

I don't have Nvidia cards. And yes, I've read various reviews which confirm what is really self-explanatory. There is no magic replacement for compute power. If you increase resolution, the reconstructed image can only be an approximation, no matter how much machine learning you throw at it. That's just how it works by definition.

So it doesn't increase quality, it only tries to mask its degradation due to the compute power of the card not being adequate for a given resolution. That's a fake approach to quality and not something anyone should be cheering for. I'd stick to the approach of matching a given resolution with the required compute power, not gimmicks that degrade it.

Why waste compute power? games don't need loads of GPU compute power, ray tracing uses some of the compute potential, still room under the hood, use it.

Because I'm not buying some koolaid posing as quality.

Games that need more compute power can run at reasonable resolutions. Not in some overstretched mode with worse quality.
As he said, you're just missing the point. It does increase quality if you set lower settings.

Machine learning here is just a kind of regression that predicts some pixels instead of calculating everything. And indeed it's not perfect, like any type of interpolation.

Why do you want to waste energy when you can have an almost as good render but with higher FPS? What NVIDIA achieved is quite impressive and I can't imagine how long their neural network training must have taken.
Eike Jul 10, 2020
  • Supporter Plus
There is no magic replacement for compute power. If you increase resolution, the reconstructed image can only be an approximation, no matter how much machine learning you throw at it. That's just how it works by definition.

Yes, obviously. Every computer graphic trying to mimic reality is an approximation, by the way.

From what I've seen, the technology is better at creating a higher resolution image than current techniques. If you want to buy a card that can compute an 8K 240Hz image with supersampling anti-aliasing, nobody is keeping you from doing so.

I don't understand your negativity. What are you afraid of?


Last edited by Eike on 10 July 2020 at 8:41 am UTC
jens Jul 10, 2020
  • Supporter
I don't understand your negativity. What are you afraid of?

I guess Nvidia taking over the world ;)

On a more serious note, I guess it is no secret that our @Shmerl has a very strong distaste for closed source technology coming from the likes of Nvidia or Microsoft. I guess he is either trying to use every argument to convince users to stay away from it, or his opinion is already so biased by knowing where it comes from that he can't reason objectively anymore. Could also be both.

@Shmerl, please don't take it personally, I value your knowledge on everything concerning Open Source technology. To quote another user, please "define yourself by the things you love and not by the thing you hate". I would recommend you stay away from topics with the green banner on them.


Last edited by jens on 10 July 2020 at 9:56 am UTC
dubigrasu Jul 10, 2020
"define yourself by the things you love and not by the thing you hate"
Nice, who said that? (I see it is a quote from something?)
TheRiddick Jul 10, 2020
So it doesn't increase quality,

I don't think you understand what's going on one bit. But 'when' everyone is doing similar things to what DLSS does, I'll watch you eat your own hat! :)

Also, as someone pointed out, native everything would be great, but let's face it, unless a magical fairy comes down from the silicon heavens and unleashes a compute power revolution, we aren't going to see consumer CPUs or GPUs handle future graphics very well without some way to 'optimize' performance at higher resolutions or fps.

That said, DLSS 2.0 has shown that in some areas a 1440p image can look better than 2160p. It's not universal, but it CAN look decently better in places.


Last edited by TheRiddick on 10 July 2020 at 10:36 am UTC
jens Jul 10, 2020
  • Supporter
"define yourself by the things you love and not by the thing you hate"
Nice, who said that? (I see it is a quote from something?)

Thanks for the response. Yeah, I guess some places could be a lot nicer if everyone (me included) had that quote in the back of their minds more often!

I've read it somewhere here on GOL in another discussion, though unfortunately I'm not sure anymore where exactly that was, and I have no idea where it originates from (I haven't looked for it though).
Projectile Vomit Jul 10, 2020
I'm still wrestling with this damn Nvidia/Intel hybrid thing. I recently switched to Manjaro (KDE, I love KDE. Leave me alone.) and had never seen this hybrid thing until now. I tried switching to just the Nvidia 440 drivers, and rebooting did not give me the desired results. I am the guy who simply decides to reformat (after backing up everything using a live USB) when I mess up the graphics. I have never been very good at recovering a system from a graphics issue. So I reformatted with Manjaro (I get better results with my music production than from other distros, which may have something to do with the hybrid video drivers, as Nvidia is known not to play nice with audio production, but I'm not entirely sure). I am back on the hybrid drivers and, for now, I'm leaving them. Music production is a bit more important to me than games, at least on this computer (my only computer at this time). I hope a switch that doesn't have me altering files and jumping through hoops comes along soon.

While Manjaro runs fine on my desktop (no hybrid), I had the same problem as you with my MSI laptop. I finally installed Linux Mint on it, and it works OK. I haven't tried making music on it though.

I was using Linux Mint until they dropped their KDE spin. Ever since the loss of Gnome 2.0, I have stuck with KDE.
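
For anyone wrestling with the same thing on Manjaro, driver profiles are normally switched with the mhwd tool rather than by editing files by hand. This is only a rough sketch, and the profile names vary by driver series, so check what mhwd itself lists before installing anything:

    # List installed and available graphics driver profiles
    mhwd -li
    mhwd -l

    # Install a hybrid Intel/NVIDIA profile
    # (the profile name below is an example, use one that `mhwd -l` actually shows)
    sudo mhwd -i pci video-hybrid-intel-nvidia-440xx-prime
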
melkemind Jul 10, 2020
  • Supporter
Have you even tried DLSS 2.0? (and 3.0 down the pipe will be better)

Your comments strongly suggest you've only ever read about it
If you increase resolution, the reconstructed image can only be an approximation, no matter how much machine learning you throw at it. That's just how it works by definition.

What you're saying might actually matter if we were talking about images of real things. These are computer-generated images in the first place. Using an A.I. to "approximate" them is meaningless to your eyes. If it looks right and runs faster on your machine, why would it matter?
Shmerl Jul 10, 2020
As he said, you're just missing the point. It does increase quality if you set lower settings.

I explained my point. It decreases quality in comparison with using high settings.

Why do you want to waste energy when you can have an almost as good render but with higher FPS?

Because "almost as good" is worse. This whole resolution race is just marketing. If the GPU isn't ready for a higher resolution, no amount of tricks can compensate for the lack of compute power. And instead of wasting die space on those trick ASICs, GPU makers could actually use it for proper compute units.
Shmerl Jul 10, 2020
I don't understand your negativity. What are you afraid of?

I don't like solutions that lower quality and waste GPU die space while at the same time being sold as some kind of super cool feature. That's just what Nvidia does. They know that the competition is already head to head with them in compute power, so they start resorting to tricks like "hey, look, we can bump resolution more than the competition and it still won't be so horrible". But for that, they stuff the card with ASICs just for those tricks. The alternative (the proper approach) is continuing to increase compute power.

Basically, to sum up: DLSS is a marketing and market manipulation tool, not a good technology.


Last edited by Shmerl on 10 July 2020 at 3:59 pm UTC
Eike Jul 10, 2020
  • Supporter Plus
As he said, you're just missing the point. It does increase quality if you set lower settings.

I explained my point. It decreases quality in comparison with using high settings.

Yeah, obviously.
And calculating only for the real resolution decreases quality in comparison with supersampling.
By the way, usual rendering decreases quality compared to raytracing.
So everything below raytracing is to be avoided.

Sure, the feature is a compromise, but every computer rendering is a compromise.


Last edited by Eike on 10 July 2020 at 3:59 pm UTC
Shmerl Jul 10, 2020
Sure, the feature is a compromise, but every computer rendering is a compromise.

Yes, but the way Nvidia does it is not good technologically. However, since they have more money and resources, they know that once they push this approach (adding more ASICs to the GPU), others will need to either follow it even if it's bad (a cheap and wrong way to address quality) or invest a lot more money in proper compute advancement. So for them it's a sneaky way to get an edge, but for the end user it's a bad deal.

You should think beyond the koolaid logic here. And it's not really about open source vs closed source, it's about technological progress. Nvidia has a lot of power over the market now, and they push garbage approaches because of that.


Last edited by Shmerl on 10 July 2020 at 4:06 pm UTC