With ray tracing becoming ever more popular, NVIDIA have written up a technical post on bringing DirectX Ray Tracing to Vulkan to encourage more developers to do the same.
The blog post, titled "Bringing HLSL Ray Tracing to Vulkan", mentions that porting content requires converting both the API calls (DirectX to Vulkan) and the shaders (HLSL to SPIR-V), something that's not so difficult now thanks to the SPIR-V backend in Microsoft's open source DirectX Shader Compiler (DXC).
Last year, NVIDIA added ray tracing support to DXC's SPIR-V back-end using their SPV_NV_ray_tracing extension, and there are already titles shipping with it like Quake II RTX and Wolfenstein: Youngblood. While this is all NVIDIA-only for now, The Khronos Group is discussing a cross-vendor version of the Vulkan ray tracing extension, and NVIDIA expect the work already done can be carried over to it, which does sound good.
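As a rough sketch of what that toolchain looks like in practice, compiling an HLSL ray tracing shader library to SPIR-V with DXC is a single command along these lines (the file names here are placeholders, and exact flags can differ between DXC releases):

```shell
# Compile an HLSL ray tracing shader library to Vulkan-flavoured SPIR-V.
# - "-T lib_6_3": the shader library target profile used for ray tracing shaders
# - "-spirv": emit SPIR-V instead of the usual DXIL
# - "-fspv-target-env=vulkan1.1": target the Vulkan 1.1 environment
# - "-fspv-extension=SPV_NV_ray_tracing": allow NVIDIA's ray tracing extension
# "raygen.hlsl" and "raygen.spv" are made-up example file names.
dxc -T lib_6_3 -spirv \
    -fspv-target-env=vulkan1.1 \
    -fspv-extension=SPV_NV_ray_tracing \
    raygen.hlsl -Fo raygen.spv
```

The resulting .spv module can then be loaded by a Vulkan application just like any other SPIR-V shader.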
NVIDIA go on to give an example and sum it all up with this:
The NVIDIA VKRay extension, with the DXC compiler and SPIR-V backend, provides the same level of ray tracing functionality in Vulkan through HLSL as is currently available in DXR. You can now develop ray-tracing applications using DXR or NVIDIA VKRay with minimized shader re-writing to deploy to either the DirectX or Vulkan APIs.
See the full post here.
Eventually, with efforts like this and once Vulkan has proper cross-vendor ray tracing support all wired up, developers will have an easier time getting Vulkan ports looking as good as their DirectX counterparts. This makes the future of the Vulkan API sound ever more exciting.
Don't quite get what they gain from this. Still, this means that we could (in theory) have RTX accelerated Quake 2 on Linux.

We already do. That's the point. Quake II RTX is out and supports Linux.
I.e. for Nvidia, the more die space they use for ray tracing, the less is left for regular compute units, which requires them to go out of their way to convince everyone how useful ray tracing ASICs are.
It's not clear at all that the above trade-off is worth it. For a minor improvement in ray-traced lighting (a big improvement can't be achieved even with such ASICs), you pay with reduced general GPU performance.
I wouldn't say ray tracing has become more popular.
It's just a matter of time.
That said, I avoided buying an RTX 2000 series card, because at the moment, it feels more like an expensive gimmick.
It is a gimmick. More of a marketing tool than a really useful feature. To achieve good quality real time ray tracing, you need really powerful hardware. And what can fit in a single GPU gives at best some minor enhancement to the lighting, and as I said above, it naturally comes at the cost of everything else.
No, it's a solution to the rendering problems that rasterisers can't solve. It's just the first generation of hardware that attempts to use it in real time graphics. It's as much a gimmick as 3D rendering was with the first 3Dfx card.
It can't solve it adequately this way. I explained above why. Unless you are proposing to have a whole dedicated device alongside your GPU, cramming more and more compute units into ray tracing ASICs to make it actually useful will cost more and more general GPU performance.
It's the same reason the GPU was separated from the CPU in the first place.
No, it's a solution to the rendering problems that rasterisers can't solve. It's just the first generation of hardware that attempts to use it in real time graphics. It's as much a gimmick as 3D rendering was with the first 3Dfx card.
I'd say it's as much a gimmick as 3D rendering was before the first 3Dfx card.
Man, Descent on a 3Dfx was so amazing...
It can't solve it adequately this way. I explained above why. Unless you are proposing to have a whole dedicated device alongside your GPU, cramming more and more compute units into ray tracing ASICs to make it actually useful will cost more and more general GPU performance.
To turn it around: do you know another promising way to make graphics rendered in realtime "photorealistic"(*)?
(*) A term abused for decades...
It is a gimmick. More of a marketing tool than a really useful feature. To achieve good quality real time ray tracing, you need really powerful hardware.

I remember viewing an impressive demonstration by SGI at CeBIT, ca. 20 years ago: the rotating earth viewed from space, and then it zoomed in down to street level. Back then it was inconceivable that consumer grade hardware would deliver that in the foreseeable future, if ever. Nowadays, every smartphone could do it, likely in better quality, too. So yeah, real time ray tracing might be a gimmick now, but give it some time and it will be ubiquitous.
Though I'll concede one thing: better graphics (and graphic effects) don't automatically make better games. I'd rather have great gameplay with mediocre visuals than great visuals with mediocre gameplay. So I am skeptical about the usefulness of ray tracing as it is implemented by NVIDIA today, as it's just a bit of extra eye candy. It certainly wouldn't be a decisive feature when shopping for a new GPU; on the contrary, I'd rather not have it if it makes the package cheaper.
Plus, the industry saw how badly nvidia manages its proprietary technologies...
You're locked in, it costs big money, and there are alternatives supported by Microsoft, Intel and AMD.
So in case nvidia decides to scrap their technology for whatever reason, you must change the entire ecosystem.
When you look at HairWorks (I think?), G-Sync, and also the demo of API-agnostic ray tracing, RTX technology looks like a gimmick.
Ray-tracing is the holy grail, but RTX technology is a gimmick.
There are two ways to solve the issue, and from the little I know, AMD has already fixed part of the problem for one approach.
1) Develop and market a kind of daughter board. Like SLI, one card for regular 3D and the other one dedicated to ray tracing.
2) Make a more complex architecture with a chiplet design. This way, you can make a multi-chip GPU. AMD has already solved part of the chip communication problem. It looks like they are not ready yet for that... their APUs are still monolithic dies, but there are hints that they are going to go full chiplet on the GPU side too.
Nvidia has the performance crown, but it looks to me like they are getting Inteled by AMD more and more. Let's see how things go, but since Vega, nvidia has been firing preemptive marketing BS all around, while AMD hasn't been making marketing BS lately on their CPU and GPU side. They deliver what they say, unlike nvidia and intel.
To turn it around: do you know another promising way to make graphics rendered in realtime "photorealistic"(*)?
Make some kind of LPU (Lighting Processing Unit) that only has ray tracing ASICs and can work in parallel with everything else without hindering regular GPU performance.
There's actually quite a lot of a video card that isn't used at any given time, so while adding some dedicated raytracing pathways may reduce area dedicated to other features, I don't think the impact is of the magnitude that you might be thinking.
If general GPU compute units can handle ray tracing - then fine, but apparently they aren't good enough for it (yet).
RTX itself is proprietary, sure, but nvidia are very keen to get the approach into core Vulkan. That would then make it cross-vendor and royalty free. No being locked into nvidia, though nvidia would definitely still have a competitive advantage (seeing as it would match how their graphics cards are designed, they should have better performance in theory).
That's a loss for nvidia.
From what I remember, nvidia was never fond of the Vulkan API, and never really fond of anything open, even something as modest as just being cross-vendor.
Last example, G-Sync... They went as far as manipulating the branding from monitor manufacturers when they lost the battle against AMD.
How is raytracing actually implemented in the Linux version of Quake 2, or is it switched off there? Can anyone comment on this? I cannot try it, I only have a GTX 970.

Here's some reading for you:
https://www.gamingonlinux.com/index.php?module=search&q=quake+II+rtx
Last example, G-Sync... They went as far as manipulating the branding from monitor manufacturers when they lost the battle against AMD.
I'd say they failed overall. Example: https://www.lg.com/us/monitors/lg-27GL850-gaming-monitor
* NVIDIA® G-SYNC® Compatible
* Adaptive-Sync (FreeSync™)
Adaptive sync is mentioned.
Don't quite get what they gain from this. Still, this means that we could (in theory) have RTX accelerated Quake 2 on Linux.

We already do. That's the point. Quake II RTX is out and supports Linux.
Thanks! I had no idea. I would like to say that I'd give it a try, but I have an old RX 480.
There's actually quite a lot of a video card that isn't used at any given time, so while adding some dedicated raytracing pathways may reduce area dedicated to other features, I don't think the impact is of the magnitude that you might be thinking.
If general GPU compute units can handle ray tracing - then fine, but apparently they aren't good enough for it (yet).
Indeed, and different vendor approaches to their compute units will definitely be worth keeping an eye on.
I'm of the opinion myself that despite nvidia pushing their own rtx extensions, eventually it will all collapse back into generic compute units in the end - maybe some differences to current designs to make them more efficient for raytracing type work, but compute units nonetheless.
That would make ray tracing just another software package, like Radeon Rays.
Another possibility is that the tensor cores become the new CUDA cores.