Nvidia are talking about Vulkan a little more now, which is really good to see. Looks like they will have a little bit of support for it on "day-zero" too.

I hope people aren't expecting Vulkan to come along and instantly blow away OpenGL, even Nvidia are now keeping people's expectations in check.

They don't ease you into it - the blog post is very developer-orientated and not really meant for idiots like me to read over - but it's very interesting anyway.
Quote: NVIDIA believes strongly that Vulkan supplements OpenGL, and that both APIs have their own strengths.

Vulkan’s strengths lie in the explicit control and multi-threading capabilities that by design allow us to push more commands to the GPU in less CPU time and have finer-grained cost control. OpenGL, however, continues to provide easier to use access to the hardware. This is especially important for applications that are not CPU-limited. Current NVIDIA technologies such as “bindless”, NV_command_list, and the “AZDO” techniques for core OpenGL, can achieve excellent single-thread performance.


I see what they are saying here, but I have yet to see any game developer use AZDO on Linux with OpenGL. In fact, we have seen nothing but game developers complain about OpenGL. For AAA titles, or just heavy titles in general, Vulkan sounds like a good fit, but for smaller indie games OpenGL will probably remain king for being easier to use. That's what I am taking away from this.

Quote: There is a new level of complexity to Vulkan, that didn't really exist in OpenGL before.

Don't be scared by that quote, as with all new things it will take time to learn.

They are also making the transition to Vulkan easier with stuff like this:
Quote: Starting with a new API can involve a lot of work as common utilities may not yet be available. NVIDIA will therefore provide a few Vulkan extensions from day zero, so that you as developer can enjoy less obstacles on your path to Vulkan. We will support consuming GLSL shader strings directly and not having to use SPIR-V. Furthermore we leverage our industry leading OpenGL driver and allow you to run Vulkan inside an OpenGL context and presenting Vulkan Images within it. This allows you to use your favorite windowing and user-interface libraries and some of our samples will make use of it to compare OpenGL and Vulkan seamlessly.


To be clear, when they say "NVIDIA will therefore provide a few Vulkan extensions from day zero", they are talking specifically about using it inside OpenGL:

@gamingonlinux using Vulkan inside OpenGL and using GLSL directly inside Vulkan are the extensions I meant

— Christoph Kubisch (@pixeljetstream) January 15, 2016


@gamingonlinux So far NVIDIA has a really good track record on providing driver with OpenGL version release, intend to keep it for Vulkan :)

— Christoph Kubisch (@pixeljetstream) January 15, 2016
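For a more concrete picture of the GLSL side of that, here is a rough sketch of what creating a Vulkan shader module straight from a GLSL string could look like. The idea of passing GLSL text where SPIR-V words normally go follows NVIDIA's description, but the exact extension name (VK_NV_glsl_shader, as far as I can tell) and this exact usage are my assumptions, not something confirmed in the post:

#include <vulkan/vulkan.h>
#include <string.h>

/* Hedged sketch: consuming a GLSL string directly instead of SPIR-V.
   Assumes NVIDIA's day-zero extension accepts GLSL text in pCode. */
VkShaderModule make_glsl_module(VkDevice device, const char *glsl_source)
{
    VkShaderModuleCreateInfo info = { VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO };
    info.codeSize = strlen(glsl_source);            /* bytes of GLSL text   */
    info.pCode    = (const uint32_t *)glsl_source;  /* normally SPIR-V here */

    VkShaderModule module = VK_NULL_HANDLE;
    if (vkCreateShaderModule(device, &info, NULL, &module) != VK_SUCCESS)
        return VK_NULL_HANDLE;
    return module;
}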



There's a fair bit more to the post, so I do suggest giving it a look over. Go read the developer post here, and get interested.

I am more excited than I have ever been to be a Linux gamer, and I am getting more excited as the days go by. I can't wait to see Vulkan actually used in a game. Remember though, it's not going to be a silver bullet: developers still have to learn how to use it and get the best out of it. We could be looking at quite a while before we see the first game with it, and it will probably be Valve with something like Dota 2, which they already demoed with Vulkan a while ago.

SketchStick Jan 15, 2016
If they need something simpler for most developers, I don't see the advantage of using OpenGL instead of a Vulkan-based rendering library.

Vulkan is appealing because it has a good chance of solving the current driver situation. I guarantee you there will be plenty of libraries popping up in no time, and even if there weren't, developers could easily build an OpenGL-styled library of their own to bridge the gap and have it work far more consistently across different drivers.
kit89 Jan 15, 2016
OpenGL has a fair amount of hurdles that make it fundamentally slow. The two major ones, from my experience, are state baggage and its realtime processing of commands. When an OpenGL command is made, the CPU instantly goes off and starts communicating with the GPU. It's like a postman being given a letter and going straight to the house to deliver it.

Vulkan, on the other hand, batches: you issue a command and it waits; it waits until the buffer is full and only starts processing the commands when you've told it to. This is like giving the postman a large box of letters and telling them to go and deliver them all.

The two processing styles have advantages and disadvantages, but for games that issue a lot of commands, the round-trip approach of OpenGL starts to add up time-wise. To make things worse, OpenGL retains a lot of state: uploading new geometry will stop other commands from being processed until the upload is complete, since it's likely that the next command will be affected by the new geometry.

Vulkan gets round this by using command buffers. You can upload geometry in one buffer while issuing commands in another. If a command depends on the geometry, add it to the command buffer that is uploading the geometry. This approach allows the developer to inform the GPU what is dependent on what. Side note: this is usually where a driver vendor would step in and start taking shortcuts.
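To picture the "box of letters" in actual Vulkan terms, here is a minimal sketch of recording work into a command buffer and only handing it over at submit time. All the handles are assumed to have been created beforehand, and the synchronization between the copy and the draw is deliberately elided:

#include <vulkan/vulkan.h>

/* Hedged sketch of the batching described above: nothing reaches the
   GPU until vkQueueSubmit hands over the whole box of letters. */
void record_and_submit(VkCommandBuffer cmd, VkQueue queue, VkFence fence,
                       VkBuffer staging, VkBuffer vertices, VkBufferCopy region,
                       VkRenderPassBeginInfo rp_begin, VkPipeline pipeline,
                       uint32_t vertex_count)
{
    VkCommandBufferBeginInfo begin = { VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO };
    vkBeginCommandBuffer(cmd, &begin);

    /* Upload geometry: recorded now, executed later. */
    vkCmdCopyBuffer(cmd, staging, vertices, 1, &region);
    /* (A barrier making the copy visible to the draw is omitted for brevity.) */

    vkCmdBeginRenderPass(cmd, &rp_begin, VK_SUBPASS_CONTENTS_INLINE);
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdDraw(cmd, vertex_count, 1, 0, 0);   /* still only queued */
    vkCmdEndRenderPass(cmd);

    vkEndCommandBuffer(cmd);

    /* Only now does the postman leave with the box. */
    VkSubmitInfo submit = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
    submit.commandBufferCount = 1;
    submit.pCommandBuffers    = &cmd;
    vkQueueSubmit(queue, 1, &submit, fence);
}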

I look forward to hearing more. :)
TobiSGD Jan 15, 2016
Quoting: pete910: So basically, we will end up with devs coding to suit NV hardware again?

Wasn't Khronos making sure that all the tools are available out the door on release of the final spec?
I think you missed the point of these extensions. They are solely meant to make the transition from OpenGL to Vulkan easier, for example when porting an existing engine. They are not meant to be used in finished programs: it really wouldn't make much sense to have a clean Vulkan rendering path for AMD and Intel (and consoles, mobile devices, and whatever else supports Vulkan) and a separate rendering path for Nvidia, when there is no obvious benefit in performance or features and the Nvidia driver can use the clean Vulkan rendering path too. No sane developer would increase their workload without such benefits, and once the major engines have a clean Vulkan path there is absolutely no need for these extensions anyway, since their purpose, easing the porting, is fulfilled.


Last edited by TobiSGD on 15 January 2016 at 6:41 pm UTC
Shmerl Jan 15, 2016
Quoting: Guest: There may be multiple queues with today's hardware (one for graphics, one or two for compute, one for DMA transfers), but it's important to remember that command buffer creation is not command buffer submission.

And Vulkan can use multiple queues in parallel, while OpenGL can't, if I understood correctly. Can't there be more than one queue for graphics? Even if a single GPU doesn't have one, a multi-GPU scenario surely would, and Vulkan addresses that as well.
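You can actually ask Vulkan what each device offers. A quick sketch using the real enumeration calls (the printout and the cap of 16 families are just my own choices):

#include <vulkan/vulkan.h>
#include <stdio.h>

/* Sketch: list each queue family and how many parallel queues it exposes.
   Some hardware really does report more than one graphics-capable queue. */
void print_queue_families(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, NULL);

    VkQueueFamilyProperties props[16];
    if (count > 16) count = 16;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, props);

    for (uint32_t i = 0; i < count; ++i)
        printf("family %u: %u queue(s)%s%s%s\n", i, props[i].queueCount,
               (props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) ? " graphics" : "",
               (props[i].queueFlags & VK_QUEUE_COMPUTE_BIT)  ? " compute"  : "",
               (props[i].queueFlags & VK_QUEUE_TRANSFER_BIT) ? " transfer" : "");
}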


Last edited by Shmerl on 15 January 2016 at 6:55 pm UTC
Donkey Jan 15, 2016
The main benefit of Vulkan should come from the developer being in control of synchronization. This means a thread does not need to stall because a queue/buffer is locked, and can continue working on other things; later on, it can come back and check the lock again. This is a much more efficient approach than OpenGL, where most commands will completely stall the thread while claiming the synchronization lock, which often leads to a context switch between threads - and that is crazy expensive!
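As a hedged illustration of that "come back and check the lock later" pattern (vkGetFenceStatus is the real non-blocking query; the work loop is invented for the example):

#include <vulkan/vulkan.h>

void do_other_work(void);  /* hypothetical: AI, audio, next frame's prep... */

/* Sketch of non-stalling synchronization: poll the fence and keep the
   thread busy instead of blocking inside the driver. */
void wait_without_stalling(VkDevice device, VkFence fence)
{
    while (vkGetFenceStatus(device, fence) == VK_NOT_READY)
        do_other_work();
    /* Fence signalled: safe to reuse whatever it was guarding. */
}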

Looking forward to spending a few weekends playing around with Vulkan :)


Last edited by Donkey on 15 January 2016 at 8:50 pm UTC
sarmad Jan 15, 2016
nVidia is only saying that to assure current users of OpenGL that they won't be abandoned. But down the road there won't be a need for OpenGL, once there are stable libraries/engines built on top of Vulkan to provide simplicity.
STiAT Jan 15, 2016
OpenGL will still provide 3D acceleration for a lot of applications where performance isn't that critical - they state as much themselves with the window manager example, which is a direct hint. The benefit of a Vulkan backend for a window manager is questionable compared to the amount of work required. Even just having one OpenGL context is pretty much enough for a lot of use cases (the backend goes to two threads anyway, but okay, they left that fact out - it's still just one rendering thread, that's true).

So OpenGL is here to stay. Vulkan has just a different audience, and NVidia is here to say that OpenGL will live on.

What's really cool is this one:
Quote: Furthermore we leverage our industry leading OpenGL driver and allow you to run Vulkan inside an OpenGL context and presenting Vulkan Images within it. This allows you to use your favorite windowing and user-interface libraries and some of our samples will make use of it to compare OpenGL and Vulkan seamlessly.

It will be very interesting to see how this is implemented, since it hints at making the window manager believe it has a GL context that is properly updating, while it's actually Vulkan doing the work. No idea how they want to achieve that, but it sounds interesting.
Guest Jan 15, 2016
Quoting: kit89: OpenGL has a fair amount of hurdles that make it fundamentally slow. The two major ones, from my experience, are state baggage and its realtime processing of commands. When an OpenGL command is made, the CPU instantly goes off and starts communicating with the GPU. It's like a postman being given a letter and going straight to the house to deliver it.

Vulkan, on the other hand, batches: you issue a command and it waits; it waits until the buffer is full and only starts processing the commands when you've told it to. This is like giving the postman a large box of letters and telling them to go and deliver them all.

The two processing styles have advantages and disadvantages, but for games that issue a lot of commands, the round-trip approach of OpenGL starts to add up time-wise. To make things worse, OpenGL retains a lot of state: uploading new geometry will stop other commands from being processed until the upload is complete, since it's likely that the next command will be affected by the new geometry.

Vulkan gets round this by using command buffers. You can upload geometry in one buffer while issuing commands in another. If a command depends on the geometry, add it to the command buffer that is uploading the geometry. This approach allows the developer to inform the GPU what is dependent on what. Side note: this is usually where a driver vendor would step in and start taking shortcuts.

I look forward to hearing more. :)

Good explanation, thanks.

I suppose that for games with lots of individual items, like RTS games, issuing commands in batches rather than as a single stream could really speed things up - swarm games, or games with lots of individual point lights, for example.
Purple Library Guy Jan 15, 2016
Quoting: Samsai
Quoting: rune: If the game is rather demanding in the first place, then you will definitely notice a difference (if it's a DirectX game). Rewriting an engine and optimizing the code takes time, and time is money. I guess that they (Feral, Aspyr, etc.) cannot afford to spend that much time optimizing the code.

Unless you have optimized code, you cannot compare DirectX to OpenGL. I don't believe that the games we're getting now are 100% optimized, not even close.
In a purely theoretical world (read: perfect), OpenGL and DirectX might perform the same but, as we have seen, that is not typically the case in our practical world. Code is never 100% optimized. Currently, ports seem to choke especially when it comes to multithreading, due to technical differences between OpenGL and DirectX.

Simply put, you cannot write DirectX in OpenGL and expect it to perform well.
I suppose one issue is that we never see it working the other way. Lots of things written in DirectX get half-assed ports to OpenGL, but if something was written in OpenGL in the first place there is no point porting it to DirectX, because OpenGL is already cross-platform. So we never get to see what a half-assed port from OpenGL to DirectX would look like, performance-wise.
etonbears Jan 16, 2016
I think NVidia are following the same logic that Microsoft applied with D3D12. Microsoft allow you to write a hybrid D3D11/D3D12 application, where you take advantage of the D3D12 API to construct commands on multiple threads/cores of the CPU, getting over the draw-call limit that both D3D and OpenGL had been suffering from, while the rest of the application continues to be D3D11.

For a game whose performance suffers from draw-call limits, this is a lot less risk and a lot less work than redesigning your renderer, and possibly other parts of your engine to conform to a new development model.

But single-threaded draw-call preparation is far from the only reason why Linux games run slowly. The biggest problem is that a capable PCIe bus GPU is going to be stalled by almost ANY interaction with the CPU because the bus and CPU will both introduce latency. Submitting a draw call therefore has a cost, changing state or state blocks has a cost, and having to send or receive data to synchronise CPU and GPU memory has a much greater cost. These are costs that you can't really avoid if your engine splits processing between CPU and GPU as has normally been the case in the past.

The AZDO API calls added to OpenGL 4.x (particularly multi-draw indirect and bindless textures/buffers), along with compute shaders and OpenCL kernels, would in theory allow you to rewrite an engine such that, after initial setup, almost all of the work is internal to the GPU. Trouble is, making that change is a lot more work than you might think. There is only a small body of reference papers describing recent experiments in avoiding CPU interaction, so such a conversion is risky and slow to implement, which is why we have not really seen much benefit from AZDO yet.
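To give a flavour of the multi-draw indirect part: you write a whole array of draw descriptions into a GPU buffer up front and fire them with a single call, so the per-draw CPU cost nearly vanishes. A rough sketch with core GL 4.3 calls (buffer filling and a function loader such as GLEW or glad are assumed):

#include <GL/glew.h>  /* or any loader that provides GL 4.3 entry points */

/* Matches the per-draw layout glMultiDrawElementsIndirect expects. */
typedef struct {
    GLuint count;          /* indices in this draw   */
    GLuint instanceCount;  /* instances of this draw */
    GLuint firstIndex;
    GLint  baseVertex;
    GLuint baseInstance;
} DrawElementsIndirectCommand;

/* Sketch of AZDO-style submission: thousands of draws, one API call.
   The command array was written into indirect_buffer beforehand. */
void draw_everything(GLuint indirect_buffer, GLsizei draw_count)
{
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_buffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                (const void *)0, draw_count, 0);
}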

I'm guessing that we will have the same issue with Vulkan. It will improve command submission and allow almost complete control over the GPU, but at an even greater development risk and cost than AZDO. Beyond increased draw-call throughput, I think it will be quite a while before game developers really work out how to get the best out of Vulkan.