I'm really excited to try Magicka 2, as I thought the first one was quite fun. The developer has been keeping everyone up to date on progress in their Steam forum. Sadly, for AMD GPU users it looks like it won't run as well as it does on Nvidia hardware.

A developer wrote this on their official Steam forum:
We've had discussions on how to support non-XInput controllers and we found that just using SDL's input subsystem would probably solve that. Since there are controller emulators already we consider it a 'would be nice' feature.

I took the liberty of replacing the Nvidia 660GTX in my Linux machine for our Radeon 270X and ran some tests. On Ubuntu I tested both the open source drivers and fglrx and both worked fine. I think the open source drivers have slightly better performance. Switching drivers around kinda broke my setup though so I installed Debian 8 and did some tests there and only had issues with decals getting a slight rectangular outline.

Overall the biggest issue in my tests with AMD cards is that the performance on Linux & MacOSX feels like it's halved compared to a corresponding Nvidia card. I've added graphics settings to control how many decals/trees/foliage the game draws that helps a bit but it would've been better if this was not necessary.
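
For anyone wondering what "just using SDL's input subsystem" would look like in practice, here's a minimal sketch using SDL2's game controller API. This is purely my own illustration of the approach described, not code from the actual port:

#include <SDL.h>
#include <stdio.h>

int main(void)
{
    /* Initialise only the game controller subsystem. */
    if (SDL_Init(SDL_INIT_GAMECONTROLLER) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* Open the first attached device SDL recognises as a controller,
       whether it speaks XInput or not. */
    SDL_GameController *pad = NULL;
    for (int i = 0; i < SDL_NumJoysticks(); i++) {
        if (SDL_IsGameController(i)) {
            pad = SDL_GameControllerOpen(i);
            break;
        }
    }

    if (pad) {
        printf("Using controller: %s\n", SDL_GameControllerName(pad));
        SDL_GameControllerClose(pad);
    }

    SDL_Quit();
    return 0;
}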


It's fantastic to see them actually implement something to help with the issue though, and I'm sure many AMD GPU users will be pretty happy about that. It's not all doom and gloom either, since the developer mentioned it will even work on the open source driver for AMD users, so that's great too. They must be one of the very few developers testing their port this thoroughly; it's quite refreshing to see.

I just want to get my hands on it already!
The comments on this article are closed.
52 comments

lucinos Oct 15, 2015
Developers should always try to support the open source drivers; that would give users the best choice and the best support. But if high performance is realistically possible only on closed source Nvidia, then that is what it is, and it is better than nothing.

Our hope for high-performance, open competition on all platforms (operating systems and hardware) is Vulkan, so for high-performance engines Vulkan should soon be the right choice. (The only disadvantage is that Vulkan is not released yet, unfortunately, but it will be very soon.) Closed source drivers and system-specific APIs are not open competition.


Last edited by lucinos on 15 October 2015 at 7:27 pm UTC
Guest Oct 15, 2015
>> My understanding is that Nvidia hacks around a lot of common mistakes and bad practices in OpenGL code. <<

This quote is just waaaaay wrong. The driver cannot change the way your app uses OpenGL. If you do things in a bad style then it will perform badly. This the driver cannot change. You should use OpenGL in the "approaching zero driver overhead" way. And no, Nvidia doesn't break the spec; they have been described as not being "spec purists", but that's a completely different thing. This quote is just totally disconnected from reality in terms of what is done and where.
Guest Oct 15, 2015
So you consider not having bad performance a violation of the spec? Wow, that is just wrong. Of course some things are heavily modified by the driver, but it doesn't break the spec. Using glVertex3f etc. (i.e. OpenGL 1.0 style) is the worst possible use of OpenGL, yet it performs pretty well on Nvidia. It's just insane to blame Nvidia for doing things faster.
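
To make the contrast concrete, here's a rough sketch of the same triangle submitted the OpenGL 1.0 way and the buffered way. It's illustrative only and assumes a valid GL context with function pointers loaded (and, for the second path, a bound shader program):

/* Immediate mode (OpenGL 1.0 style): a driver call per vertex, every frame. */
glBegin(GL_TRIANGLES);
glVertex3f(-1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, 0.0f);
glVertex3f( 0.0f,  1.0f, 0.0f);
glEnd();

/* Buffered path: upload the vertices once, then one draw call per frame. */
static const GLfloat verts[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
glDrawArrays(GL_TRIANGLES, 0, 3);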
Guest Oct 15, 2015
Yes, I work with OpenGL and OpenCL on AMD and Nvidia. I know "a bit of OpenGL".
Guest Oct 15, 2015
Can you be more specific? Details about function calls and references to the spec. I'd like to test this out.
Imants Oct 15, 2015
Am I understanding this correctly? Nvidia is making bad OpenGL code perform better on their cards by violating the OpenGL specification? So if programmers knew OpenGL better and programmed it as it should be done, the code would work as fast as it needs to on any driver? So in the end it is people's knowledge which is lacking, and Nvidia is just patching it up.


Last edited by Imants on 15 October 2015 at 8:10 pm UTC
tuubi Oct 15, 2015
Quoting: alex: >> My understanding is that Nvidia hacks around a lot of common mistakes and bad practices in OpenGL code. <<

This quote is just waaaaay wrong. The driver cannot change the way your app uses OpenGL. If you do things in a bad style then it will perform badly. This the driver cannot change. You should use OpenGL in the "approaching zero driver overhead" way.
Nvidia's driver can and does detect and catch OpenGL call patterns and outright errors that can severely affect performance. Sometimes this is game or application specific and sometimes based on heuristics. There's tons of "magic" like this in their drivers (and to a lesser extent in AMD's as well), accumulated over the years. This is pretty much common knowledge. I won't do the research for you though if this is the first time you've heard of it.
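
As a made-up illustration of the sort of pattern a driver can cheaply paper over (nothing here is taken from any actual driver, just a sketch assuming a GL 3.x context with function pointers loaded): redundant state changes, which well-written applications filter out themselves instead of leaving to the driver:

/* Naive per-object code: rebinds the same program and texture every call,
   even when nothing has changed since the last draw. */
void draw_object(GLuint program, GLuint texture, GLuint vao, GLsizei count)
{
    glUseProgram(program);                  /* often the same program as before */
    glBindTexture(GL_TEXTURE_2D, texture);  /* often already bound */
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, count);
}

/* The "zero driver overhead" habit: filter redundant calls on the app side. */
static GLuint current_program = 0;

void use_program_cached(GLuint program)
{
    if (program != current_program) {
        glUseProgram(program);
        current_program = program;
    }
}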

I am a bit curious. Why would you think it can't be done? OpenGL is a rather messy group of specs, at least up until the latest versions. AMD's and Nvidia's blobs implement these specs as they see fit, without official validation or compliance testing available, let alone enforced by Khronos or anyone else. The open source drivers all (?) share Mesa's implementation, but the proprietary drivers roll their own.

I'm not calling Nvidia evil or anything; drivers that magically make bad code work fine are just good business for them, even if I think it's bad for the spec, bad for developers and, in the end, bad for end users. This is similar to (but not the same as) how MS crippled the web until a few years back with IE's liberal and wilful perversion of web standards.
alexThunder Oct 15, 2015
Quoting: liamdawe: A game being "ported" isn't some magical milestone, it means it compiles and runs on Linux; that IS when the proper testing begins, and they can only test one GPU at a time. Unless they can do both at the same time, there is always going to be one that is first.

Only if they have just one machine available for development (and testing). Otherwise you just run the same build on different machines and see whether it works or where errors occur.

Quoting: Imants: Am I understanding this correctly? Nvidia is making bad OpenGL code perform better on their cards by violating the OpenGL specification? So if programmers knew OpenGL better and programmed it as it should be done, the code would work as fast as it needs to on any driver? So in the end it is people's knowledge which is lacking, and Nvidia is just patching it up.

Yes.

Quoting: alex: [...] and when given the choice this vendor's driver devs choose sanity (to make things work) vs. absolute GL spec purity.

But what's the point then in having standards (especially open ones)?

Besides, if you're writing GLSL shaders and make mistakes (according to the Khronos specification), but Nvidia's compiler accepts your code (because it's sort of guessing what you meant), this may lead to unpredictable behaviour. Nvidia's driver tries to address suboptimal code, sometimes successfully, sometimes breaking the program/game.

There's a difference between trying to fix bad practice and accepting code that isn't supposed to work at all.
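
A classic illustration of the kind of leniency I mean (a generic example, nothing from Magicka's shaders): GLSL 1.10 has no implicit int-to-float conversion, yet Nvidia's compiler has been known to accept it, so the same shader can compile on Nvidia and fail on a stricter compiler:

/* GLSL 1.10 fragment shader source embedded in C. */
static const char *frag_src =
    "#version 110\n"
    "void main() {\n"
    "    float brightness = 1;   // strict 1.10 compilers: error, int is not float\n"
    "    gl_FragColor = vec4(brightness);\n"
    "}\n";

/* The portable fix is simply to write the literal as a float: */
/*     float brightness = 1.0; */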


Last edited by alexThunder on 15 October 2015 at 8:28 pm UTC
Guest Oct 15, 2015
This is just insane, the way you explain how the driver "guesses" and "fixes" mistakes etc. It's so far from reality I can't even figure out where to start.

It
Doesn't
Work
Like
That.

We are talking about guessing major things, yet in reality, where I work, the driver cannot even "guess" that I'm working with restrict pointers (which is true 99.99999999% of the time).

Consider this extremely simple for loop:

for (int i = 0; i < 10000; i++)
    A[i] += B[i];

Vector B is added to vector A.

This loop can be fast or slow depending on how the pointers are marked. If you mark them as restrict it will run maybe 8 times faster.
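
Concretely, the restrict-qualified form of that loop looks like this (plain C99; it's a compiler-level hint, nothing the GL driver sees):

#include <stddef.h>

/* Promising the compiler that A and B never alias lets it vectorise the
   loop freely; without restrict it has to assume the arrays may overlap. */
void add_vectors(float *restrict A, const float *restrict B, size_t n)
{
    for (size_t i = 0; i < n; i++)
        A[i] += B[i];
}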

This is an extremely simple, low-level "guess" that the driver doesn't make. You are talking about these magical, incredible, hocus-pocus fixes the driver does, yet a simple thing like this is not done.

Why? Well, because it's impossible. That's why. Just like all this pseudo-programming bullshit.
alexThunder Oct 15, 2015
I wonder. Is there something specific you want to tell us?