I'm mirv. I have been using GNU/Linux for quite a long time, and programming OpenGL since GL1.2. In my (limited) spare time, I help out on the xoreos project, and will one day hopefully release my very own game...maybe.
I'm writing this as a follow-up to the editorial by Liam here. The idea is to go into more detail about why performance is often worse with games ported to GNU/Linux, with an emphasis on graphics.
Firstly, I'll address the issue of porting. This is a big thing, and the term "port" is really the important part. It implies the game was adapted to run on GNU/Linux after running on another platform first, likely some variant of Windows. It's been optimised to run under that platform, and there's a good chance it wasn't using OpenGL. Leaving aside having to replace any proprietary middleware, and leaving aside any OS overhead from threading, process context switching, desktop environments and so on, graphics is where most of the difference will probably occur.
I should stress this point: the focus here is really on graphics APIs, but physics, networking, audio, etc. can all play their part - fortunately it's rare for any of those to be the issue. Even on Windows, physics can cause a good deal of difference in performance across platforms, and most games with heavy use of it will have fallback options, or the ability to reduce physics-based effects, to mitigate this problem.
I won't go into driver workarounds, but they can add a lot of performance to a game. And it's horrible to have to do that. It makes drivers more complex, and that can lead to higher maintenance, more difficult testing, and strange issues cropping up at weird times. Vulkan should help overcome this particular aspect, and will close performance-related gaps here.
One more thing before getting onto the meat of the subject: audio, networking, input handling, etc. can all take quite a bit of effort to port to different platforms, and can tie into a game's engine design. The more tightly coupled it is, the more difficult it can be to port across to a different system. In some cases, an exact port isn't even possible (e.g. networking with Company of Heroes 2 and Dawn of War 2).
To understand some of the graphics-based porting difficulties, it helps to understand where DirectX and OpenGL came from. OpenGL itself was never originally intended for games. It's still a 3D graphics rendering API - not a gaming API. DirectX has perhaps had more of a focus on entertainment, especially during its early years, so there's a bit of difference in design there. Leaving aside the politics in the history of each, OpenGL had its roots in CAD (Computer Aided Design) and in the hardware of the time: the focus was on speed for very dynamic data sets. Lots of triangles, changing very quickly. Originally OpenGL mapped very well to the hardware architectures of the day: state-based, fixed-pipeline rendering, with a dedicated rendering context and thread. Industrial software investment puts a lot of pressure on keeping backwards compatibility; OpenGL could not do a "clean-cut" new version, and that core design of a single context and thread has stuck with OpenGL since its creation.
Internally of course, drivers can do a lot of threaded tasks, but it's always presented to the user/developer as a single-threaded queue of commands. Commands are pushed in, and run in the order in which they're submitted (from the user perspective). There are ways to cheat here: use different OpenGL contexts and share backend data, but that can have all kinds of issues - some drivers handle it better than others (coincidentally, this is what The Witcher 2 originally did, which resulted in a lot of performance problems). DirectX, which could change quite a bit between versions, eventually allowed for more thread-friendly command submission. This means you can assemble graphical resources and get them ready in multiple threads (again, from a user perspective). When that's an assumption made during game engine design, it can be pretty difficult to turn it into a single-threaded design. Most ports will likely write some kind of thread-safe queue that is regularly processed in a "graphics thread" to mimic multi-threaded design when porting from DirectX to OpenGL. That helped VP (Virtual Programming) with The Witcher 2 - they saw quite impressive performance gains when they did this, but it's all still overhead on top of overhead when compared to the original Windows version. So performance may be a little less.
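To make that concrete, here's a minimal sketch of what such a queue might look like (purely hypothetical C++, not VP's or any particular engine's actual code): worker threads record rendering work as callbacks, and only the thread that owns the OpenGL context ever executes them.

```cpp
#include <functional>
#include <mutex>
#include <queue>

// Hypothetical sketch of a thread-safe render command queue.
// Worker threads push callbacks; the single "graphics thread"
// (which owns the GL context) drains the queue once per frame.
class RenderCommandQueue {
public:
    void push(std::function<void()> cmd) {
        std::lock_guard<std::mutex> lock(mutex_);
        commands_.push(std::move(cmd));
    }

    // Called from the thread that owns the OpenGL context.
    void flush() {
        std::queue<std::function<void()>> local;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            std::swap(local, commands_);
        }
        while (!local.empty()) {
            local.front()();   // issue the actual GL call(s)
            local.pop();
        }
    }

private:
    std::mutex mutex_;
    std::queue<std::function<void()>> commands_;
};
```

Every command still funnels through one thread in the end, which is exactly the extra layer of overhead described above compared to an API that accepts submissions from multiple threads natively.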
As mentioned before, OpenGL was originally state-based. It was basically one big state machine. Much of the recent work has been to remove that, or to otherwise present all state information up-front when describing data, so it's less of a problem with more recent OpenGL versions. The issue is really one of checking for correct state: lots of continuous overhead in making sure everything is ok for rendering, and of course it's all "single-threaded", so it must wait sequentially for commands and make sure everything is set up correctly in order to proceed. As mentioned, this is less of a problem with recent versions because they can align much closer to how (recent) DirectX versions do things: allow the state to be processed and validated when creating an object, thereby not needing to do it continuously for every single command on every single frame. This all comes down to engine design again though: if you need to change things, then state needs to be re-validated, and it's simply not feasible to rip apart an entire game engine and redesign it to minimise this kind of impact.
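As a rough illustration (assuming an OpenGL 3.x context and a loader such as GLEW; not taken from any particular engine), compare re-describing the vertex layout on every draw call with describing it once, up front, in a vertex array object:

```cpp
#include <GL/glew.h>

// Older state-machine style: this state has to be set (and re-validated
// by the driver) on every draw call, every frame.
void drawOldStyle(GLuint vbo) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glDrawArrays(GL_TRIANGLES, 0, 36);
}

// Up-front style: describe the layout once in a vertex array object,
// letting the driver validate it at creation time.
GLuint buildVAO(GLuint vbo) {
    GLuint vao = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glBindVertexArray(0);
    return vao;
}

// Drawing is then just bind-and-draw.
void drawNewStyle(GLuint vao) {
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 36);
}
```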
When it comes to data handling, or rather data manipulation, different APIs can perform it in different ways. In one, you might simply be able to modify some memory and all is ok. In another, you might have to point to a copy and say "use that when you can instead, and free the original then". This isn't a question of one way being better than the other - it's important only that they require different methods of handling. Actually, OpenGL offers a lot of different methods, and knowing the "best" way for a particular scenario takes some experience to get right. When porting a game across though, there may not be a lot of options: the engine does things a certain way, so that way has to be faked if there's no exact translation. Guess what? That can affect OpenGL state, and require re-validation of an entire rendering pipeline, stalling command submission to the GPU - a.k.a. less performance than the original game. It's again not really feasible to rip apart an entire game engine and redesign it just for that: take the performance hit and carry on.
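For a concrete (and again purely illustrative) example of those different methods in OpenGL, here are two common ways to update a vertex buffer each frame: write into it in place, or "orphan" the old storage and write into a fresh copy that the driver frees later.

```cpp
#include <GL/glew.h>
#include <cstring>

// In-place update: simple, but can stall if the GPU is still reading
// the old contents of the buffer.
void updateInPlace(GLuint vbo, const void* data, GLsizeiptr size) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, size, data);
}

// "Orphaning": ask the driver for fresh storage and write into that,
// letting it free the old copy once the GPU has finished with it.
// This is roughly the "use the copy instead, free the original later" model.
void updateByOrphaning(GLuint vbo, const void* data, GLsizeiptr size) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, size, nullptr, GL_STREAM_DRAW); // orphan old storage
    void* dst = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    if (dst) {
        std::memcpy(dst, data, size);
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}
```

Which one is faster depends on the driver and on how the engine produces its data; if the engine was built around one model, the port may be stuck faking the other.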
Note that some decisions are based around _porting_ a game. If one could design from the ground up with OpenGL, then OpenGL would likely give better performance...but it might also be more difficult to develop and test for. So there's a bit of a trade-off there, and most developers are probably going to be concerned with getting it running on Windows first, GNU/Linux second. This includes engine developers.
I haven't mentioned another problem that affects porting either: shaders. GPUs are programmable these days, and shader code & compilers can make quite a difference. There are tools to automatically convert HLSL -> GLSL (DirectX to OpenGL shaders), but they suffer from similar problems to those above: they convert behaviour, which doesn't necessarily produce the most optimal rendering path. That's before we take into account driver maturity in such matters (let's face it, Microsoft put in quite a good deal of effort in that arena). Fortunately things are improving here, but it's still another area that will mean ports generally give less performance.
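To give a feel for it (a made-up example, not the output of any real translator), mechanically translated GLSL often ends up full of generic temporaries, packed constant arrays and emulated HLSL intrinsics, where hand-written GLSL would be far more direct. Both shaders below do the same thing:

```cpp
// Hypothetical illustration: a "machine translated" fragment shader next to
// a hand-written equivalent, stored as C++ string literals for compilation
// at runtime. The translated version gives the driver's compiler more to
// chew on even though the end result is identical.
const char* translatedFrag = R"(#version 330 core
uniform vec4 ps_const[2];          // all constants packed into one array
in vec4 v_texcoord0;
out vec4 fragColor;
void main() {
    vec4 r0;
    r0 = v_texcoord0 * ps_const[0]; // generic register-style temporaries
    r0 = clamp(r0, 0.0, 1.0);       // emulated HLSL saturate()
    r0 = r0 + ps_const[1];
    fragColor = r0;
}
)";

const char* handWrittenFrag = R"(#version 330 core
uniform vec4 tint;
uniform vec4 bias;
in vec4 v_texcoord0;
out vec4 fragColor;
void main() {
    fragColor = clamp(v_texcoord0 * tint, 0.0, 1.0) + bias;
}
)";
```

The driver's shader compiler can often clean up the former, but not always, and it has to spend extra time trying.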
About Vulkan. No, it will not magically make games run better. It cannot even magically make porting easier. I'm just going to post this link. While it's about implementing OpenGL on top of Vulkan, the same rules apply for DirectX. An engine design may simply not be compatible with efficient ways of using Vulkan. That's really up to porting houses to look at and decide, but Vulkan does not necessarily mean better performance when porting a game. It likely will with recent games, as a lot of things can line up nicely then, but older titles may be better off using OpenGL.
As a final note, and this last paragraph doesn't apply so much to smaller indie teams, modern graphics APIs are much more complex than they were a decade ago. Getting experience with multiple graphics APIs across multiple platforms, to say nothing of testing on each one, can be very difficult where large, complex game engines are concerned. While I believe Vulkan will help bring comparable performance across platforms, it's simply not feasible to put in equal development time across all platforms. It's often much more economically viable to hire people experienced in the matter - that's why porting companies exist.
Finally! Concise, simple and to-the-point articles rich in information and not diluted with long, unnecessary, grandiose speech - and these days who has the time to mull over 60 articles a day from their RSS feed?
When it's not to the point I just stop reading the RSS source altogether, since the information sucks or is highly biased. (e.g. Phoronix is a much better read than LXER; LXER is next to trash/rubbish to me.)
Mostly. Vulkan is still catching up on multi-GPU support for example.
This can also apply to Wine + CSMT for some OpenGL games.
Yes, OGL was born out of, and is driven by, the CAD market, where accuracy mattered more than speed.
How are we even defining "performance"? Maximum frames per second? Minimum? Average? Compared to Windows? If a Windows version churns out 120 FPS and the Linux version gets 80, I'm fine with that. The problem is when the Linux version gets 100 but occasionally dips down to 40 (or even 30). Is that also part of OpenGL's limitations? Will Vulkan help at all with that?
In the major publications you often read that Linux has worse gaming performance. But games would run just as well under Linux if they were developed directly for it. For economic reasons that isn't what happens; we always need a port.
I think that with a game like Doom 2016 the differences would be hardly measurable, because it was developed for modern OpenGL/Vulkan. Bethesda could make an in-house port relatively easily. That this does not happen, however, shows how complicated the gaming world is. There are many things that play a role, technical and economic.
But it's not modern warfare level for them, so they don't care.
Funny, considering how they use AWS (Xen), Linux and Windows technologies extensively.
Actually I'm a software developer myself, but graphics is way out of my comfort zone. Despite this I spend hours reading technical articles and discussions on this stuff... go figure.
You are oversimplifying. Or maybe you just skimmed the article, because that's not what mirv says.
The discussion isn't that fun. I believe it could be ended by ignoring the bad behaviour.
Then I have some questions on the topic.
Will games made with Vulkan need any rewriting, or could they just be copied over to Linux, for example?
What if Microsoft started to cooperate with the rest of the world, and not only a part of it? What would the gaming world be like if they started taking part in a project like Vulkan, instead of holding on to the Windows-only DirectX?
EDIT: Or is the competition needed to improve the technologies?