
While DLSS has technically been available in the NVIDIA drivers for Linux for some time now, the missing piece was support for Proton, which will be landing tomorrow, June 22.

In one of their GeForce blog posts, they made it very clear:

Today we’re announcing DLSS is coming to Facepunch Studios’ massively popular multiplayer survival game, Rust, on July 1st, and is available now in Necromunda: Hired Gun and Chernobylite. Tomorrow, with a Linux graphics driver update, we’ll also be adding support for Vulkan API DLSS games on Proton.

This was originally revealed on June 1 along with the GeForce RTX 3080 Ti and GeForce RTX 3070 Ti announcements. At least now we have a date for part of this extra support for Linux and DLSS. As stated, it will be limited to games that natively use Vulkan as their graphics API, which is a short list including DOOM Eternal, No Man's Sky, and Wolfenstein: Youngblood. Support for running Windows games that use DirectX with DLSS in Proton will arrive "this Fall".

With that in mind, it's likely we'll see the 470 driver land tomorrow, unless NVIDIA has a smaller driver coming first with this added in. We're excited for the 470 driver as a whole, since it will also include support for async reprojection to help VR on Linux, plus hardware-accelerated OpenGL and Vulkan rendering with Xwayland.


damarrin Jun 22, 2021
Does RT run well on older GPUs? If it's just a tick on the box it's a pretty useless thing to get hung up on.
3zekiel Jun 22, 2021
Quoting: Guest
Quoting: 3zekiel
Quoting: Guest
i would be more excited if nvidia open sourced it. along with their drivers. rather than keeping everything behind proprietary, closed up source. especially considering they are not bothering adding support to older gpu's. which many of those older gpu's still offer amazing performance. like the 1080 ti., two generations old.
Supporting it on pre-RTX is not possible. DLSS heavily uses tensor cores, which are only present on RTX 2000+ GPUs. The reason FidelityFX can work on older GPUs is that it is a good old upscale filter. It is NOT an equivalent of DLSS, even though they market it as such... (We will see hands-on results, but I feel disappointment coming. Look also at the videos they gave during the presentation: on the moving ones you will see the blur and jaggies that come from "dumb" upscales.) Upscaling works well for static scenes, but as soon as you add movement there is only so much you can do, and it will look bad... Comparatively, NVIDIA's solution uses a neural network to infer lost information/pixels, thus reconstructing the image and movement much more precisely, with little to no blur and jaggies.

Quoting: Guest
its going to be interesting to see when AMD's alternative lands on linux. at least on windows their version will be cross compatible. their own demo was done on a 1060. software lockin's are extremely unethical.
Even if NVIDIA wanted to port it over, they cannot. AMD lacks the HW support for the feature. It is not a SW lock-in; it is just that they have an exclusive HW feature.
CUDA is a SW lock-in on the other hand, since it theoretically could run on other GPUs (albeit it would likely then lose the advantage of being slimmer than OpenCL).

For DLSS, they could emulate it on older/AMD GPUs, but it would most likely reduce performance instead of enhancing it (convolution and other inference methods are very heavy with no dedicated HW or customized ISA, and would occupy normal cores for naught), which would make no sense.
there's no reason to keep it under lock and key when they are specifically going out of their way to limit it not only to nvidia, but only to rtx cards. even though older generations of nvidia's own cards are still high performing. but they wanted to leverage tensor cores for it, which i can understand. its why i mentioned:
Quote
rather than keeping everything behind proprietary, closed up source. especially considering they are not bothering adding support to older gpu's.
it doesn't work on amd. it doesn't even work on pascal, maxwell, etc. what is nvidia afraid of? it's only going to work on nvidia cards still. it will still be limited to rtx cards. why keep it closed source?

there's a lot of stuff in mesa that's limited to amd only, but it's open source. there can exist a world where nvidia has nvidia-only stuff that is still open source. open source doesn't necessarily mean cross compatible or cross vendor. open source doesn't even prevent you from capitalizing on your software. you can still sell it if you want to. but the standards, the software, can still be open.

this is why i'm excited for amd's version, because over time it should be coming to mesa. with the added bonus of it not being limited to only amd or only to amd's navi architecture.

I did not follow it all, but NVIDIA did open source the API. So anyone is free to implement a compatible solution now. What else do you want?

For the neural network + training sets, that is never going to get open sourced, at least not for years. It has far too much value:
It is very complicated to obtain the training sets, and the training itself is done by virtuosos of neural network tweaking, and those people don't come cheap. These networks could in particular be exploited directly by Intel, which has XeSS incoming and the cash + servers + HW to exploit them. So potentially they would be saving Intel a billion or two in R&D (that's what the tech jump cost NVIDIA according to their presentation at the time)... I am sure you can understand that no CEO/CTO in their right mind would allow that. In fact, even AMD could just spin up some AI/tensor cores and use it. It is something that is often misunderstood in the silicon industry: the HW acts as an enabler, but the value ends up being in the SW. So for cutting-edge features, you cannot expect them to be open sourced right away.

For the AMD feature, just use vkBasalt + CAS/FXAA/SMAA... You will get essentially the same result. What I was trying to explain is that AMD is not doing something new, or something that enables anything new. Thus they are open sourcing something with little to no value... It is better than keeping it closed source, I guess, but you cannot compare open sourcing a low-value algorithm versus a high-value training set and NN. I know that AMD marketing made it look like some shiny new thing, but except for the integration straight into the engine (which is already done in some games), there really is nothing new. Upscale is upscale in the end, no magic.
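For anyone who wants to try that route, a minimal vkBasalt setup looks something along these lines (the values below are just illustrative; check vkBasalt's own documentation for the full option list):

    # ~/.config/vkBasalt/vkBasalt.conf
    # Chain AMD's CAS sharpening with SMAA anti-aliasing
    effects = cas:smaa
    # CAS sharpness, from 0.0 (subtle) up to 1.0 (strong)
    casSharpness = 0.4
    # Key that toggles the effects on/off in-game
    toggleKey = Home

Then launch the game with the layer enabled, for example via the Steam launch options: ENABLE_VKBASALT=1 %command%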

For tensor cores themselves, I am pretty sure NVIDIA could license them (even open source them if that was their thing), since it is indeed not something very exclusive nowadays (everyone can implement convolution engines and cores with AI/DSP-oriented ISAs). I guess the secret sauce lies more in the interconnect, and even more likely in the SW around it. Overall, tooling/specialized SW for HW is often very precious, and thus rarely shared; this is where most of the value in fact lies. That is standard in the silicon industry, and not specific to NVIDIA at all. AMD just has nothing of the sort to protect, hence they go open source. If they came to be a market leader and/or had a competing network that was better than NVIDIA's, you can be 100% sure they would not open source it either. Once again, too much value there.

Now, on the other hand, I think NVIDIA is making a big mistake by keeping the core driver itself closed source. That has very low value and only makes people cringe. But well... with the big announcement we have been teased with since forever, one can always hope.
3zekiel Jun 22, 2021
Quoting: x_wing
You can still create an open standard in order to implement it; it's not about what your competition can do with their current HW, but how you allow the industry to evolve with your technology. NVIDIA's strategy is simply anti-competitive: they don't want to be the best, they just want to keep you tied to their brand.

The saddest part is they have been doing the same stupid proprietary strategy time after time, and it always ends up in failure. Let's hope that once again they fail (and looking at how they have been pushing more titles and this support on Proton, they are definitely in fear).

CUDA did/does work, and PhysX did for a long time too. When you invest so much R&D in something cutting edge, you will try to monetize it to death, and if a method worked before, you will try it again.

Now, they did open source the APIs as far as I can tell, so everyone should be able to implement a source-compatible solution. I agree they could have made some standard APIs from the start though; the best would be to make it a Vulkan extension.
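If such an extension ever existed, games and engines could detect DLSS support the same way they probe any other device extension. A rough C sketch of that kind of check, where "VK_NV_dlss" is a made-up name used purely for illustration (no such extension is registered today; compile with -lvulkan):

    /* Sketch: enumerate each GPU's device extensions and look for a
       hypothetical "VK_NV_dlss" extension. The name is invented here;
       only the enumeration pattern itself is standard Vulkan. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <vulkan/vulkan.h>

    int main(void) {
        VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
        VkInstance instance;
        if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS) {
            fprintf(stderr, "failed to create a Vulkan instance\n");
            return 1;
        }

        uint32_t gpuCount = 0;
        vkEnumeratePhysicalDevices(instance, &gpuCount, NULL);
        VkPhysicalDevice *gpus = malloc(gpuCount * sizeof *gpus);
        vkEnumeratePhysicalDevices(instance, &gpuCount, gpus);

        for (uint32_t i = 0; i < gpuCount; ++i) {
            uint32_t extCount = 0;
            vkEnumerateDeviceExtensionProperties(gpus[i], NULL, &extCount, NULL);
            VkExtensionProperties *exts = malloc(extCount * sizeof *exts);
            vkEnumerateDeviceExtensionProperties(gpus[i], NULL, &extCount, exts);

            for (uint32_t j = 0; j < extCount; ++j) {
                if (strcmp(exts[j].extensionName, "VK_NV_dlss") == 0)
                    printf("GPU %u exposes the hypothetical DLSS extension\n", i);
            }
            free(exts);
        }

        free(gpus);
        vkDestroyInstance(instance, NULL);
        return 0;
    }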
3zekiel Jun 22, 2021
Quoting: damarrin
Does RT run well on older GPUs? If it's just a tick on the box it's a pretty useless thing to get hung up on.

NVIDIA used to allow RT to run on older GPUs; it did not run well (which is very predictable, lacking the HW acceleration needed to make it run decently), so they stopped. In fact, I am not 100% sure they stopped; maybe the enabling library is still out there and just no one cares anymore.
DLSS, obviously not, since the lack of HW here is more critical: a network tends to be fine-tuned for an architecture, here NVIDIA's tensor cores, so redoing the tuning to make it run on CUDA cores, only to see that it reduces performance, would be like throwing money out of the window, and bad PR, since people would try it and say that it sucks.
3zekiel Jun 22, 2021
Quoting: Shmerl
I don't think they can prevent anyone from implementing tensor flow in hardware? They didn't invent it.

TensorFlow is an AI framework from Google. And you don't write an accelerator for one toolkit only.

But yes, everyone is free to implement a convolution engine and other bits to accelerate inference and/or training. Now the main issue is that, with HW-accelerated inference, you tend to need to fine-tune the network for each accelerator architecture. So it is unlikely you will have a one-size-fits-all network you can deploy everywhere directly.
robredz Jun 22, 2021
Quoting: jgacas
Same day AMD is releasing FSR, interesting...

It's going to be interesting, especially if FSR benefits last-gen NVIDIA equally, or almost. DLSS is a fudge though, to hide the brute force needed to render 4K with full RT enabled. As Quake 2 RTX was, in the main, as quick on Linux as on Windows with similar FPS, we can hope. Can't wait to try Amid Evil with RT on in Linux.
robredz Jun 22, 2021
Well, if this works out, there's less reason to boot the Windows drive for some games. FSR seems to be a gaming equivalent of upscaling a DVD or a Blu-ray in a 4K player for a UHD panel: results will vary, but it won't likely look as good as DLSS. Wonder how Metro Exodus will run in Linux with RT and DLSS?
Ehvis Jun 22, 2021
Quoting: robredz
Wonder how Metro Exodus will run in Linux with RT and DLSS?

I don't think the Linux build supports DLSS. If it did, it could already have worked before 470.
rustybroomhandle Jun 22, 2021
Discussions about FOSS etc. aside, this represents an interesting view into how things have changed.

Most hardware manufacturers don't even seem to know that Linux exists as a platform, and here's a major GPU maker not only supporting their features at a driver level, but actually contributing code to Wine/Proton to support the feature there.

Wine has been around for a long time, but it was not taken quite as seriously as it is nowadays.
x_wing Jun 22, 2021
Quoting: 3zekiel
CUDA did/does work, and PhysX did for a long time too. When you invest so much R&D in something cutting edge, you will try to monetize it to death, and if a method worked before, you will try it again.

PhysX was on life support for many years; in the end it went the same way as their tessellation strategy. The discussion here is about bringing solutions and not gimmick features, which is what any user should look at. Unless you're a shareholder of NVIDIA, this strategy cannot be appreciated (mainly from a Linux user's PoV).

Quoting: 3zekiel
Now, they did open source the APIs as far as I can tell, so everyone should be able to implement a source-compatible solution. I agree they could have made some standard APIs from the start though; the best would be to make it a Vulkan extension.

Link? Unless you mean the SDK.


Last edited by x_wing on 22 June 2021 at 1:47 pm UTC