I'm really excited to try Magicka 2, as I thought the first one was quite fun. The developer has been keeping everyone up to date on progress in their Steam forum. Sadly, for AMD GPU users, it looks like it won't run as well as it does on Nvidia hardware.

A developer wrote this on their official Steam forum:
Quote: We've had discussions on how to support non-XInput controllers, and we found that just using SDL's input subsystem would probably solve that. Since there are controller emulators already, we consider it a 'would be nice' feature.

I took the liberty of replacing the Nvidia GTX 660 in my Linux machine with our Radeon 270X and ran some tests. On Ubuntu I tested both the open source drivers and fglrx, and both worked fine. I think the open source drivers have slightly better performance. Switching drivers around kinda broke my setup though, so I installed Debian 8 and did some tests there, and only had issues with decals getting a slight rectangular outline.

Overall, the biggest issue in my tests with AMD cards is that the performance on Linux & Mac OS X feels like it's halved compared to a corresponding Nvidia card. I've added graphics settings to control how many decals/trees/foliage the game draws, which helps a bit, but it would've been better if this was not necessary.


It's fantastic to see them actually implement something to help with the issue though, and I'm sure many AMD GPU users will be pretty happy about that. It's not all doom and gloom either, since the developer mentioned it will even work on the open source driver for AMD users, which is great. They must be one of very few developers testing their port so thoroughly; it's quite refreshing to see.

I just want to get my hands on it already!
lucinos 15 Oct 2015
Developers should always try to support the open source drivers. That would give the best chance of good support for everyone. But if high performance is realistically possible only on closed source Nvidia, then that is what it is, and it's better than nothing.

Our hope for high-performance, open competition on all platforms (operating systems and hardware) is Vulkan. So for high-performance engines, Vulkan should soon be the right choice. (The only disadvantage is that Vulkan is not released yet, unfortunately, but it will be very soon.) Closed source drivers and system-specific APIs are not open competition.


Last edited by lucinos on 15 Oct 2015 at 7:27 pm UTC
Guest 15 Oct 2015
>> My understanding is that Nvidia hacks around a lot of common mistakes and bad practices in OpenGL code. <<

This quote is just waaaaay wrong. The driver cannot change the way your app uses OpenGL. If you do things in a bad style, then it will perform badly. That, the driver cannot change. You should use OpenGL in the "approaching zero driver overhead" way. And no, Nvidia doesn't break the spec; they have been described as not being "spec purists", but that's a completely different thing. This quote is just totally disconnected from reality in terms of what is done and where.
Guest 15 Oct 2015
So you consider not having bad performance a violation of the spec? Wow, that is just wrong. Of course some things are extremely modified by the driver, but it doesn't break the spec. Using glVertex3f etc., aka OpenGL 1.0, is the worst possible use of OpenGL, yet it performs pretty well on Nvidia. It's just insane to blame Nvidia for doing things faster.
Guest 15 Oct 2015
Yes, I work with OpenGL and OpenCL on AMD and Nvidia. I know "a bit of OpenGL".
Guest 15 Oct 2015
Can you be more specific? Details about function calls and references to the spec. I would want to test this out.
Imants 15 Oct 2015
Am I understanding this correctly? Nvidia makes bad OpenGL code perform better on their cards by violating the OpenGL specification? So if programmers knew OpenGL better and programmed as it should be done, the code would work as fast as it needs to on any driver? So in the end it's people's knowledge which is lacking, and Nvidia is just patching it up.


Last edited by Imants on 15 Oct 2015 at 8:10 pm UTC
tuubi 15 Oct 2015
  • Supporter Plus
>> My understanding is that Nvidia hacks around a lot of common mistakes and bad practices in OpenGL code. <<

This quote is just waaaaay wrong. The driver cannot change the way your app uses OpenGL. If you do things in a bad style then it will perform badly. This the driver cannot change. You should use OpenGL in the "approaching zero driver overhead" way.
Nvidia's driver can and does detect and work around OpenGL call patterns and outright errors that can severely affect performance. Sometimes this is game- or application-specific, and sometimes based on heuristics. There's tons of "magic" like this in their drivers (and to a lesser extent in AMD's as well), accumulated over the years. This is pretty much common knowledge. I won't do the research for you though, if this is the first time you've heard of it.

I am a bit curious. Why would you think it can't be done? OpenGL is a rather messy group of specs, at least up until the latest versions. AMD's and Nvidia's blobs implement these specs as they see fit, without official validation or compliance testing available, let alone enforced by Khronos or anyone else. The open source drivers all (?) share Mesa's implementation, but the proprietary drivers roll their own.

I'm not calling Nvidia evil or anything; drivers that magically make bad code work fine are just good business for them, even if I think it's bad for the spec, bad for developers and, in the end, bad for end users. This is similar to (but not the same as) how MS crippled the web until a few years back with IE's liberal and wilful perversion of the web standards.
alexThunder 15 Oct 2015
A game being "ported" isn't some magical milestone; it means it compiles and runs on Linux. That IS when the proper testing begins, and they can only test one GPU at a time. Unless they can do both at the same time, there is always going to be one that is first.

If they only have one machine available for development (and testing). Otherwise you just run the same build on different machines and see if it works or where errors occur.

Am I understanding this correctly? Nvidia makes bad OpenGL code perform better on their cards by violating the OpenGL specification? So if programmers knew OpenGL better and programmed as it should be done, the code would work as fast as it needs to on any driver? So in the end it's people's knowledge which is lacking, and Nvidia is just patching it up.

Yes.

[...]and when given the choice this vendor's driver devs choose sanity (to make things work) vs. absolute GL spec purity.

But what's the point then in having standards (especially open ones)?

Besides, if you're writing GLSL shaders and make mistakes (according to the Khronos specification), but Nvidia's compiler accepts your code (because it's sort of guessing what you meant), this may lead to unpredictable behaviour. Nvidia's driver tries to address suboptimal code - sometimes successfully, sometimes breaking the program/game.

There's a difference between trying to fix bad practice and fixing code that isn't supposed to work at all.


Last edited by alexThunder on 15 Oct 2015 at 8:28 pm UTC
Guest 15 Oct 2015
This is just insane, the way you explain how the driver "guesses" and "fixes" mistakes etc. It's so far from reality I can't even figure out where to start.

It
Doesn't
Work
Like
That.

We are talking about guessing major things, yet in reality, where I work, the compiler cannot even "guess" that I'm working with restrict pointers (which is true 99.99999999% of the time).

Consider this extremely simple for loop:

for (int i = 0; i < 10000; i++)
    A[i] += B[i];

The B vector is added to the A vector, element by element.

This loop can be fast or slow depending on how the pointers are marked. If you mark them as restrict, it will run maybe 8 times faster.

This is an extremely simple, low-level "guess" that the driver doesn't make. You are talking about these magical, incredible, hocus-pocus fixes the driver does, yet a simple thing like this is not done.

Why? Well, because it's impossible. That's why. Just like all this pseudo-programming bullshit.
alexThunder 15 Oct 2015
I wonder. Is there something specific you want to tell us?
tuubi 15 Oct 2015
  • Supporter Plus
Why? Well because it's impossible. That's why. Just like all this pseudo-programming bullshit
Obviously it is impossible as proven beyond any doubt by your most enlightening anecdote. If they don't detect every programming mistake or lost optimization opportunity imaginable, surely it is impossible to hack around anything at all. The rest of us are simply talking out of our asses. Glad that's settled then.
Guest 15 Oct 2015
At least I have some real examples and not just unconfirmed bullshit. You are just rehashing things you have heard from others, but since you don't understand the topic, you just spew out tons of bullshit. You might have some things more or less correct, but the way you explain it just marks you with this gigantic neon-lit sign: "incompetent".

The mirv example was well documented, and if it's true then yes, that's a good and specific example. But when you explain things in terms of "magic" and such, it's just completely obvious you don't know anything about software development.
alexThunder 15 Oct 2015
What "real" examples (you have) are you referring to?
tuubi 15 Oct 2015
  • Supporter Plus
The mirv example was well documented, and if it's true then yes, that's a good and specific example. But when you explain things in terms of "magic" and such, it's just completely obvious you don't know anything about software development.
Damn. Busted. I wonder how our software design business lasted for ten years before anyone found out I have no idea what I'm doing. Please don't tell our clients. :'(
alexThunder 15 Oct 2015
The mirv example was well documented, and if it's true then yes, that's a good and specific example. But when you explain things in terms of "magic" and such, it's just completely obvious you don't know anything about software development.
Damn. Busted. I wonder how our software design business lasted for ten years before anyone found out I have no idea what I'm doing. Please don't tell our clients. :'(

So you're working for Microsoft? ( ͡° ͜ʖ ͡°)
Guest 15 Oct 2015
The restrict pointer example is a real-world example of how limited optimization is. We are basically discussing compiler optimization passes now. And this is not something I'm going to do on a random forum filled with people who never wrote a single line of OpenGL yet are experts on the Nvidia driver.

If the driver were this magical optimizer that fixes bad behavior automagically, then why are people discussing "approaching zero driver overhead" so much now?

http://gdcvault.com/play/1020791/

See this talk; you will see that in order to fix bad behavior you don't simply use Nvidia - you rewrite the code! (Which, of course, would be blatantly clear if you knew anything about the limitations a compiler faces.)
Guest 15 Oct 2015
The mirv example was well documented and if this is true then yes thats a good and specific example. But when you explain in terms of " magic" and such it's just completely obvious you dont know anything about software development.
Damn. Busted. I wonder how our software design business lasted for ten years before anyone found out I have no idea what I'm doing. Please don't tell our clients. :'(

So you write OpenGL there? No? Right.

Language? Hello kitty script?
alexThunder 15 Oct 2015
But that wasn't the example "you have", was it? And why does it show how limited optimization is? We're talking about things ranging from simple type conversions to replacing whole shaders. You just provided a for-loop, which doesn't tell us anything.

In general, you're not doing much more than keep telling us how stupid we all are. Other than insults, you haven't contributed much so far.

See this talk, you will se that in order to fix bad behavior you simply dont use Nvidia - you rewrite the code! (Which ofc, would be blatantly clear if you knew anything about the limitations a compiler faces)

In how far is this different from what I suggested in the first place?


Last edited by alexThunder on 15 Oct 2015 at 10:28 pm UTC
Guest 15 Oct 2015
Whatever man, this whole thing is ridiculous. Impossible to get through to you when you don't know anything.

The loop was an extremely simple counter-argument showing that, given very simple code, the compiler cannot magically optimize things. You need to mark things manually. This is a counter-argument to this whole "the Nvidia compiler automagically fixes everything", "you can write shitty code and it gets optimized, boom bang yeah!" idea. Yet back in reality land, the compiler cannot even figure out that the for loop could run much faster if marked restrict.
alexThunder 15 Oct 2015
And how does this example show that compilers don't do any optimizations elsewhere?

(Yes, it's a trap)


Last edited by alexThunder on 15 Oct 2015 at 10:42 pm UTC