
With ray tracing becoming ever more popular, NVIDIA have written up a technical post on bringing DirectX Ray Tracing to Vulkan, to encourage more developers to port their ray tracing work across.

The blog post, titled "Bringing HLSL Ray Tracing to Vulkan", mentions that porting content requires translating both the API calls (DirectX to Vulkan) and the shaders (HLSL to SPIR-V). That's not so difficult now, thanks to the SPIR-V backend in Microsoft's open source DirectX Shader Compiler (DXC).

Last year, NVIDIA added ray tracing support to DXC's SPIR-V back-end using their SPV_NV_ray_tracing extension, and there are already titles shipping with it like Quake II RTX and Wolfenstein: Youngblood. While this is all NVIDIA-only for now, The Khronos Group is discussing a cross-vendor version of the Vulkan ray tracing extension, and NVIDIA expect the work already done to carry over to it, which does sound good.
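
For the curious, getting from HLSL to something Vulkan can consume is mostly a matter of switching DXC's output target. A compile line for a ray tracing shader library would look something like this (a sketch based on DXC's documented SPIR-V options; the file names are just examples):

    dxc -T lib_6_3 -spirv -fspv-target-env=vulkan1.1 -fspv-extension=SPV_NV_ray_tracing raytracing.hlsl -Fo raytracing.spv

Here lib_6_3 is the shader model profile that DXR shader libraries build against, and -spirv tells DXC to emit SPIR-V instead of the usual DXIL.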

NVIDIA go on to give an example and sum it all up with this:

The NVIDIA VKRay extension, with the DXC compiler and SPIR-V backend, provides the same level of ray tracing functionality in Vulkan through HLSL as is currently available in DXR. You can now develop ray-tracing applications using DXR or NVIDIA VKRay with minimized shader re-writing to deploy to either the DirectX or Vulkan APIs.

See the full post here.
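
To give a flavour of what that "minimized shader re-writing" means, below is a rough single-source ray generation shader. This is our own sketch rather than anything from NVIDIA's post, and the resource names and binding numbers are made up for illustration. The [[vk::binding]] attributes are only read by DXC's SPIR-V backend, while the register() bindings serve DXR, so one file can feed both APIs:

    // Minimal single-source HLSL ray generation shader (illustrative only).
    [[vk::binding(0, 0)]] RaytracingAccelerationStructure scene : register(t0);
    [[vk::binding(1, 0)]] RWTexture2D<float4> image : register(u0);

    struct Payload
    {
        float3 colour;
    };

    [shader("raygeneration")]
    void RayGen()
    {
        uint2 pixel = DispatchRaysIndex().xy;
        uint2 dims  = DispatchRaysDimensions().xy;

        // Fire a ray straight into the scene from the pixel's
        // position on the image plane.
        RayDesc ray;
        ray.Origin    = float3((pixel + 0.5f) / dims * 2.0f - 1.0f, -1.0f);
        ray.Direction = float3(0.0f, 0.0f, 1.0f);
        ray.TMin      = 0.0f;
        ray.TMax      = 1e38f;

        Payload payload = { float3(0.0f, 0.0f, 0.0f) };
        TraceRay(scene, RAY_FLAG_NONE, 0xFF, 0, 0, 0, ray, payload);

        image[pixel] = float4(payload.colour, 1.0f);
    }

The matching miss and closest hit shaders follow the same pattern, writing their result into the payload, and in theory none of it needs touching when you switch between DXR and Vulkan.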

Eventually, with efforts like this and once Vulkan has proper cross-vendor ray tracing support wired up, developers will have an easier job getting Vulkan ports looking just as good as their DirectX counterparts. It makes the future of the Vulkan API sound ever more exciting.

32 comments

appetrosyan Feb 22, 2020
Don't quite get what they gain from this. Still, this means that we could (in theory) have RTX accelerated Quake 2 on Linux.
Liam Dawe Feb 22, 2020
Don't quite get what they gain from this. Still, this means that we could (in theory) have RTX accelerated Quake 2 on Linux.

We already do. That's the point. Quake II RTX is out and supports Linux.
Ehvis Feb 23, 2020
I'm interested to see if someone will take this to put DXR -> Vulkan support in Wine. Probably not officially, since it uses a vendor extension, but I expect someone is itching to try.
Shmerl Feb 23, 2020
I wouldn't say ray tracing became more popular. It was always a known but not commonly used technique (due to how expensive it is). Nvidia are just trying to justify using a chunk of their die for ray tracing ASICs. Other GPU makers aren't sold on the need to do that, except for marketing purposes to match Nvidia.

I.e. for Nvidia, the more space on the die they use for ray tracing, the less is left for regular compute units, which requires them to go out of their way to convince everyone how useful ray tracing ASICs are.

It's not clear at all that the above trade-off is worth it. For some minor improvement in ray traced lighting (a big improvement can't be achieved even with such ASICs), you pay with reduced general GPU performance.


Eike Feb 23, 2020
I wouldn't say ray tracing became more popular.

It's just a matter of time.

That said, I avoided buying an RTX 2000 series card, because at the moment it feels more like an expensive gimmick.
Shmerl Feb 23, 2020
It's just a matter of time.

That said, I avoided buying an RTX 2000 series card, because at the moment it feels more like an expensive gimmick.

It is a gimmick. More of a marketing tool than a really useful feature. To achieve good quality real time ray tracing, you need really powerful hardware. And what can fit in a single GPU gives at best some minor enhancement to the lighting, and as I said above, it naturally comes at the cost of everything else.


Ehvis Feb 23, 2020
It is a gimmick.

No, it's a solution to the rendering problems that rasterisers can't solve. It's just the first generation of hardware that attempts to use it in real time graphics. It's as much a gimmick as 3D rendering was with the first 3Dfx card.
Shmerl Feb 23, 2020
No, it's a solution to the rendering problems that rasterisers can't solve. It's just the first generation of hardware that attempts to use it in real time graphics. It's as much a gimmick as 3D rendering was with the first 3Dfx card.

It can't solve them adequately this way; I explained why above. Unless you are proposing to have a whole dedicated device alongside your GPU, cramming more and more compute units into ray tracing ASICs to make it actually useful will cost more and more general GPU performance.

It's the same reason the GPU was separated from the CPU in the first place.


Eike Feb 23, 2020
No, it's a solution to the rendering problems that rasterisers can't solve. It's just the first generation of hardware that attempts to use it in real time graphics. It's as much a gimmick as 3D rendering was with the first 3Dfx card.

I'd say it's as much a gimmick as 3D rendering was before the first 3Dfx card.

Man, Descent on a 3Dfx was so amazing...
Eike Feb 23, 2020
It can't solve them adequately this way; I explained why above. Unless you are proposing to have a whole dedicated device alongside your GPU, cramming more and more compute units into ray tracing ASICs to make it actually useful will cost more and more general GPU performance.

To turn it around: do you know another promising way to make realtime-rendered graphics "photorealistic"(*)?

(*) A term abused for decades...
kaiman Feb 23, 2020
It is a gimmick. More of a marketing tool than a really useful feature. To achieve good quality real time ray tracing, you need really powerful hardware.

I remember viewing an impressive demonstration by SGI at CeBIT, ca. 20 years ago: the rotating Earth viewed from space, which then zoomed in down to street level. Back then it was inconceivable that consumer grade hardware would deliver that in the foreseeable future, if ever. Nowadays, every smartphone could do it, likely in better quality, too. So yeah, real time ray tracing might be a gimmick now, but give it some time and it will be ubiquitous.

Though I'll concede one thing: better graphics (and graphic effects) don't automatically make better games. I'd rather have great gameplay with mediocre visuals than great visuals with mediocre gameplay. So I am skeptical about the usefulness of ray tracing as it is implemented by NVIDIA today, as it's just a bit of extra eye candy. It certainly wouldn't be a decisive feature when shopping for a new GPU; on the contrary, I'd rather not have it if it makes the package cheaper.
ElectroDD Feb 23, 2020
What I see is that nvidia's marketing BS is not working as intended with developers and manufacturers.
Plus, the industry saw how badly nvidia manages its proprietary technologies...
You're locked in, it costs big money, and there are alternatives supported by Microsoft, Intel and AMD.
So if nvidia decides to scrap their technology for whatever reason, you have to change your entire ecosystem.
When you look at HairWorks (I think?), G-Sync, and also the demo of API-agnostic ray tracing, RTX technology looks like a gimmick.
Ray tracing is the holy grail, but RTX technology is a gimmick.
There are two ways to solve the issue, and from the little I know, AMD has already fixed part of the problem for one approach.
1) Develop and market a kind of daughter board. Like SLI, one card for regular 3D and the other one dedicated to ray tracing.
2) Make a more complex architecture with a chiplet design. This way, you can make a multi-chip GPU. AMD has already solved part of the chip communication problem. Looks like they are not ready for that yet... their APUs are still monolithic dies, but there are hints that they are going to go full chiplet even on the GPU side.

Nvidia has the performance crown, but it looks to me like they are getting "Inteled" by AMD more and more. Let's see how things go, but nvidia has been firing preemptive marketing BS all around since Vega, and they've seen AMD not making marketing BS lately on the CPU and GPU side. They deliver what they say, unlike nvidia and intel.
Shmerl Feb 23, 2020
To turn it around: do you know another promising way to make realtime-rendered graphics "photorealistic"(*)?

Make some kind of LPU (Lighting Processing Unit) that only has ray tracing ASICs and can work in parallel with everything else without hindering regular GPU performance.


Shmerl Feb 23, 2020
There's actually quite a lot of a video card that isn't used at any given time, so while adding some dedicated raytracing pathways may reduce area dedicated to other features, I don't think the impact is of the magnitude that you might be thinking.

If general GPU compute units can handle ray tracing - then fine, but apparently they aren't good enough for it (yet).
ElectroDD Feb 23, 2020
RTX itself is proprietary, sure, but nvidia are very keen to get the approach into core Vulkan. That would then make it cross-vendor, royalty free. No being locked into nvidia, though nvidia would definitely still have a competitive advantage (seeing as it would work how their graphics cards are designed, they should have better performance in theory).

That's a loss for nvidia.
From what I remember, nvidia was never really fond of the Vulkan API, or of anything open, even something as small as being cross-vendor.
The latest example is G-Sync... they went as far as manipulating monitor manufacturers' branding when they lost the battle against AMD.
1xok Feb 23, 2020
How is ray tracing actually implemented in the Linux version of Quake 2, or is it switched off there? Can anyone comment on this?

I cannot try it, I only have a GTX 970.
tuubi Feb 23, 2020
How is ray tracing actually implemented in the Linux version of Quake 2, or is it switched off there? Can anyone comment on this?

I cannot try it, I only have a GTX 970.
Here's some reading for you:
https://www.gamingonlinux.com/index.php?module=search&q=quake+II+rtx
Shmerl Feb 23, 2020
The latest example is G-Sync... they went as far as manipulating monitor manufacturers' branding when they lost the battle against AMD.

I'd say they failed overall. Example: https://www.lg.com/us/monitors/lg-27GL850-gaming-monitor

* NVIDIA® G-SYNC® Compatible
* Adaptive-Sync (FreeSync™)

Adaptive sync is mentioned.
appetrosyan Feb 23, 2020
Don't quite get what they gain from this. Still, this means that we could (in theory) have RTX accelerated Quake 2 on Linux.
We already do. That's the point. Quake II RTX is out and supports Linux.

Thanks! I had no idea. I would like to say that I'd give it a try, but I have an old RX 480.
appetrosyan Feb 23, 2020
There's actually quite a lot of a video card that isn't used at any given time, so while adding some dedicated raytracing pathways may reduce area dedicated to other features, I don't think the impact is of the magnitude that you might be thinking.

If general GPU compute units can handle ray tracing - then fine, but apparently they aren't good enough for it (yet).

Indeed, and different vendor approaches to their compute units will definitely be worth keeping an eye on.

I'm of the opinion myself that despite nvidia pushing their own RTX extensions, it will all collapse back into generic compute units in the end - maybe with some differences from current designs to make them more efficient for ray tracing style work, but compute units nonetheless.
That would make ray tracing just another software package, like Radeon Rays.

Another possibility is that the tensor cores become the new CUDA cores.