While doing some comparative benchmarks between my RX 470 and GTX 1060 on a Ryzen 1700 CPU and an i7-2700k CPU, I encountered odd behaviour with Shadow of Mordor.
On the 1080p high preset this benchmark is almost exclusively CPU-bound on both a Ryzen 1700 (3.75 GHz) and an i7-2700K (4.2 GHz). So when I got 30 to 40% better performance on the i7 than on the Ryzen with the GTX 1060, I was shocked and began to investigate what was causing such a performance drop on Ryzen.
Interestingly, on Ryzen, the performance of the GTX 1060 and the RX 470 was identical in the CPU-bound parts of the benchmark, even though AMD's open source driver (Mesa 17.2-git in this case) still has significantly higher CPU overhead than Nvidia's proprietary driver. This pointed to a driver-independent bottleneck in the game itself.
With that information, I started suspecting a thread allocation problem, either from the Linux kernel (4.12rc1) or from the game (if it forces the scheduling through CPU affinity).
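One quick way to check whether a process has restricted its own CPU affinity is to read its allowed-CPU list from /proc. A small sketch, using the current shell as a stand-in (for a game you would look up its pid; the binary name is whatever the game actually uses):

```shell
# Check which CPUs a process is allowed to run on.
# Here the current shell stands in for the game; for a real game you would
# use something like: PID=$(pidof <game_binary>)
PID=$$
grep Cpus_allowed_list /proc/"$PID"/status
```

An unrestricted process on an 8-core / 16-thread Ryzen should report `0-15`; anything narrower means something (the game or a wrapper) has already set an affinity mask.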
You see, Ryzen has a specific architecture, quite different from Intel's i5 and i7. Ryzen is a bit like CPU Lego, with the CCX as the base building block. A CCX (core complex) comprises 4 CPU cores with SMT (simultaneous multithreading) and the associated memory caches (levels 1 to 3). A mainstream Ryzen CPU is made of 2 CCXes linked by AMD's Infinity Fabric (a high-speed communication channel). Even the 4-core Ryzens are made this way (on these CPUs, two cores are disabled in each CCX).
If you’re interested in the subject, you can find more in-depth information here: Anandtech.com review of Ryzen
So how does this all relate to Shadow of Mordor? Well, AMD’s architecture is made to scale efficiently to high core numbers (up to 32), but it has a drawback: communication between CPU cores that are not on the same CCX is slower because it has to go through the Infinity Fabric.
On a lot of workloads this won’t be a problem because threads don’t need to communicate much (for example in video encoding, or serving web pages) but in games threads often need to synchronize with each other. So it’s better if threads that are interdependent are scheduled on the same CCX.
This is not happening with Shadow of Mordor, so performance takes a huge hit, as you can see in the graph below.
This graph shows the FPS observed on a Ryzen 1700 @ 3.75 GHz with an RX 470 during the automated benchmark of Shadow of Mordor. The blue line shows the FPS with the default scheduling, the red line with the game forced onto the first CCX. The yellow line shows the performance increase (in %) going from default to manual scheduling.
As you can see, manual scheduling yields roughly a 30% performance improvement in CPU-bound parts of the benchmark. Quite nice, eh?
So how does one manually schedule Shadow of Mordor on a Ryzen CPU?
It’s quite simple really. Just edit the launch options of the game in Steam like this:
taskset -c 0-7 %command%
This command will force the game onto logical cores 0-7, which are all located on the first CCX.
Note: due to SMT, there are twice as many logical cores as physical cores. SMT allows two threads to run simultaneously on each physical core (though not both at full speed).
The above command is for an 8-core / 16-thread Ryzen CPU (model 1700 and up).
On 6-core Ryzen (models 1600/1600X), the command would be
taskset -c 0-5 %command%
and on a 4-core Ryzen (models 1400/1500X): taskset -c 0-3 %command%
Caveat: on a 4-core Ryzen, limiting the game to the first CCX only gives it 2 cores / 4 threads to work with. This may prove insufficient and counter-productive compared to the default scheduling. You’ll have to try it yourself to see which option gives the best performance.
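If you're unsure how logical CPUs map to physical cores and CCXes on your machine, the kernel exposes the topology; a quick sketch (assumes Linux with util-linux's lscpu and sysfs available):

```shell
# One row per logical CPU, with its physical core and socket. On Ryzen,
# Linux typically numbers SMT siblings adjacently, so logical CPUs 0-7
# cover the four physical cores of the first CCX.
lscpu -e=CPU,CORE,SOCKET

# The SMT sibling mapping can also be read directly from sysfs:
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    printf '%s: siblings %s\n' "${cpu##*/}" "$(cat "$cpu"/topology/thread_siblings_list)"
done
```

If the sibling lists come out as 0-1, 2-3, and so on, the `0-7` mask above indeed corresponds to the first four physical cores.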
Due to its specific architecture, Ryzen needs special care in thread scheduling from the OS and from games. If you think a game is underperforming, you can try forcing it onto the first CCX and see if that improves things. In my (admittedly limited) experience though, Shadow of Mordor is the only game where manual scheduling mattered; the Linux scheduler usually does a pretty good job.
Could you give more information on the difference between PTHREAD_SCOPE_PROCESS and PTHREAD_SCOPE_SYSTEM? I've read the manual but still don't clearly understand what difference it makes in practice.
On this specific issue, I thought the solution would rather be to expose each CCX as a separate NUMA node, so the scheduler could take into account the additional cost of having interdependent threads on different CCXes?
In this case, I don't know for sure.
IIRC the Linux kernel was patched by AMD in version 4.9, so it should be aware of Ryzen's topology and schedule threads accordingly. And indeed, the scheduling generally works fine.
At some point I even tried all sorts of manual scheduling with Dirt3 on Wine, and every single time the result was anywhere from slightly to much worse than letting the scheduler do its job.
That's why I was surprised to see this performance hit with Shadow of Mordor. At this point I pretty much expected manual scheduling to be useless.
So it may be a problem with the game forcing scheduling in a manner that doesn't work well with Ryzen, but we would need confirmation from Feral to be sure.
I just did a quick test where I scheduled the game only on even logical cores (so at most one thread per physical core) and obtained results similar to the default scheduling. So it seems the issue is indeed with the CCXes and not SMT. This would make sense, as I have no such issue on the i7-2700K.
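For reference, the SMT-only test described above can be reproduced with a launch option that selects one logical CPU per physical core; a sketch assuming Linux's adjacent SMT-sibling numbering on Ryzen:

```shell
# Even logical CPUs only: one SMT thread per physical core, across both
# CCXes of an 8-core Ryzen (Steam launch options).
taskset -c 0,2,4,6,8,10,12,14 %command%
```

If this mask performs like the default scheduling while `0-7` performs better, the bottleneck is cross-CCX communication rather than SMT contention.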
My memory has an XMP profile of 3000 MHz at CL15; to run it at 2666 I need to set CL16, but then it runs perfectly fine. If you have 4 RAM modules you might need to use only 2 to reach higher speeds. With current BIOSes it's hard to reach higher frequencies with 4 dual-rank DIMMs.
Also try setting a higher SoC voltage; that can give you much better OC capabilities. Up to 1.10 V is fine. AMD engineers recommend it; here's an interesting video where AMD engineers explain how to overclock on Ryzen boards: https://youtu.be/vZgpHTaQ10k
Last edited by Egonaut on 28 May 2017 at 1:30 pm UTC
I did a quick run of benchmarks to simulate the performance of 4C and 6C Ryzen CPUs by restricting the cores the game has access to.
Keep in mind it's only a simulation, so a real 4C/6C Ryzen CPU could behave differently.
First Shadow of Mordor:
4C/6C performance is lower than 8C (to be honest, I was expecting 6C performance to be similar to 8C, so that's a surprise), while the worst by far is the default scheduling.
All in all the performance hit with a 4C is significant but not something that would make the game unplayable.
Now Hitman:
This is quite different from Shadow of Mordor:
- the default scheduling is fine and the CCXes don't seem to cause any problem
- there is very little difference between 4C, 6C and 8C, even though the benchmark is mostly CPU-bound.
I personally think what you see with Hitman will be closer to the average experience you'll get with a 4C Ryzen CPU, but I would need to do a lot more benchmarks to confirm this.
Also, don't forget that more and more titles will use Vulkan, which reduces CPU overhead, so things should get better too.
On the flip side, note that the Ryzen 1400 has half the level 3 cache of all the other Ryzen CPUs (8 MB vs 16 MB), which could adversely affect performance.
If you don't want to bother with all these things, you might indeed be better off with an Intel CPU, though Intel has no competition at the R5 1400 price point (the Core i3s are useless now that the Pentium G4560 exists). But with a Ryzen CPU you get a very good upgrade path down the road, just by changing the CPU alone.
Honestly, I think there are four possibilities:
- you're on a tight budget: get the Pentium G4560 and a good GPU. Try to grab a second-hand 7700 / 7700K in a couple of years.
- you have a medium budget: get the Ryzen 5 1600 and overclock it. You will be able to upgrade to more cores / better single-thread performance down the road (Ryzen 2).
- you have a high budget and want the best performance now: get a 7700K and overclock it.
- you have a high budget and want some future-proofing: get a Ryzen 7 1700 and overclock it.
If you opt for a Ryzen CPU, make sure to get good DDR4 (at least 2666 MHz, and single-rank if you can get that information).
I would also target higher frequencies if the budget allows, but 2666 MHz should be the lowest target.
On the other hand, process scope is not so much fun if you run other applications at the same time, since all the process-scope threads in your application will be seen as a single entity when competing for CPU time against the threads of the other applications. These days the inter-thread, intra-process priority is handled with cgroups, but that is not something I've tried.
This should mainly be a developer-side fix, because what is happening here is that you have an application where several different threads perform writes and reads to shared memory. At the same time there are lots of applications where none of the threads share memory for anything (say, a web server), so there is no way for the kernel to know which of your threads share memory and which don't; there is not much it can do. This behaviour should be apparent on any OS as well.
Last edited by F.Ultra on 29 May 2017 at 1:38 pm UTC
The Windows auto-boost of the interactive thread is perhaps nice for a game but creates havoc for system daemon writers like me (thankfully it can be disabled by the user, but only for the system as a whole), with latencies going completely haywire every time an end user moves a window around, and so on.
Yes, this particular scenario of a web server such as Apache works this way by forking, but it's not mandatory, nor is it always desirable. You can avoid heavyweight IPC (and note that such IPC has a much heavier overhead than the Infinity Fabric has on a Ryzen-type CPU architecture) by using threads instead of forked processes, even when most of the workload is not shared. So still, there is no way for the kernel/system to know what to do with the threads, and a CPU architecture such as Ryzen is all the more reason to give developers tools for managing this on their own, because only they know that thread A will often share data with thread B but not with thread C, and can thus pin threads A and B so they always run on the same CCX while thread C roams free wherever there is idle CPU time.
The problem here is not that the threads have access to a shared memory pool; the problem is that some of these threads actively work on the same memory locations, which, if they run on different CCXes, causes massive amounts of copying over the Infinity Fabric.
IF this were "solved" by the kernel, the consequence would be that all threads created by an application get pinned to the same CCX, leaving quite a few cores completely idle, all to handle one particular workload where the threads do heavy work on the same memory.
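For what it's worth, thread affinities can also be inspected (and set) from outside the process, since taskset accepts thread ids as well as pids; a sketch using the current shell as a stand-in for a game process:

```shell
# Each entry under /proc/<pid>/task is a thread id (tid).
PID=$$   # stand-in; for a game, find its pid with pidof or pgrep
for tid in /proc/"$PID"/task/*; do
    taskset -cp "${tid##*/}"    # print that thread's current affinity list
done
# Pinning a single thread to the first CCX would look like:
# taskset -cp 0-7 <tid>
```

That said, picking out *which* tid is the render thread versus a worker from the outside is guesswork, which is why per-thread pinning is really the developer's job.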
40% boost they've managed apparently
Last edited by pete910 on 30 May 2017 at 5:40 pm UTC