AMD came out with a bang at CES with a whole bunch of new products, including some impressive-looking X3D processors added to the Ryzen 7000 lineup.
Here's the list of the new X3D AM5 processors, which AMD claim are the "fastest gaming processors in the world" with "up to" 14% faster performance than the previous generation. These will release in February 2023:
| Model | Cores/Threads | Boost/Base Frequency | Total Cache | TDP |
| --- | --- | --- | --- | --- |
| AMD Ryzen 9 7950X3D | 16C/32T | Up to 5.7 GHz / 4.2 GHz | 144MB | 120W |
| AMD Ryzen 9 7900X3D | 12C/24T | Up to 5.6 GHz / 4.4 GHz | 140MB | 120W |
| AMD Ryzen 7 7800X3D | 8C/16T | Up to 5.0 GHz / TBD | 104MB | 120W |
AMD also expanded their standard Ryzen 7000 lineup with new 65W variants due to launch on January 10, 2023. These include:
| Model | Cores/Threads | Boost/Base Frequency | Total Cache | TDP | Cooler | SEP (USD) |
| --- | --- | --- | --- | --- | --- | --- |
| AMD Ryzen 9 7900 | 12C/24T | Up to 5.4 GHz / 3.7 GHz | 76MB | 65W | Wraith Prism | $429 |
| AMD Ryzen 7 7700 | 8C/16T | Up to 5.3 GHz / 3.8 GHz | 40MB | 65W | Wraith Prism | $329 |
| AMD Ryzen 5 7600 | 6C/12T | Up to 5.1 GHz / 3.8 GHz | 38MB | 65W | Wraith Stealth | $229 |
On top of that, there's also a whole bunch of new mobile processors due out in February 2023:
| Model | Cores/Threads | Boost/Base Frequency | Total Cache | TDP |
| --- | --- | --- | --- | --- |
| AMD Ryzen 9 7945HX | 16C/32T | Up to 5.4 GHz / 2.5 GHz | 80MB | 55-75W+ |
| AMD Ryzen 9 7845HX | 12C/24T | Up to 5.2 GHz / 3.0 GHz | 76MB | 45-75W+ |
| AMD Ryzen 7 7745HX | 8C/16T | Up to 5.1 GHz / 3.6 GHz | 40MB | 45-75W+ |
| AMD Ryzen 5 7645HX | 6C/12T | Up to 5.0 GHz / 4.0 GHz | 38MB | 45-75W+ |
More mobile processors, this time for thin-and-light laptops, due out in March 2023:
| Model | Cores/Threads | Boost/Base Frequency | Cache | TDP |
| --- | --- | --- | --- | --- |
| Ryzen 9 7940HS | 8C/16T | Up to 5.2 GHz / 4.0 GHz | 40MB | 35-45W |
| Ryzen 7 7840HS | 8C/16T | Up to 5.1 GHz / 3.8 GHz | 40MB | 35-45W |
| Ryzen 5 7640HS | 6C/12T | Up to 5.0 GHz / 4.3 GHz | 38MB | 35-45W |
The mobile announcements kept going with Zen 3+ chips, which AMD say are aimed at long battery life, due out during January 2023:
| Model | Cores/Threads | Boost/Base Frequency | Cache | TDP |
| --- | --- | --- | --- | --- |
| AMD Ryzen 7 7735HS | 8C/16T | Up to 4.75 GHz / 3.2 GHz | 20MB | 35W |
| AMD Ryzen 5 7535HS | 6C/12T | Up to 4.55 GHz / 3.3 GHz | 19MB | 35W |
| AMD Ryzen 7 7735U | 8C/16T | Up to 4.75 GHz / 2.7 GHz | 20MB | 15-28W |
| AMD Ryzen 5 7535U | 6C/12T | Up to 4.55 GHz / 2.9 GHz | 19MB | 15-28W |
| AMD Ryzen 3 7335U | 4C/8T | Up to 4.3 GHz / 3.0 GHz | 10MB | 15-28W |
Plus these plain Zen 3 mobile chips, also due in January 2023:
| Model | Cores/Threads | Boost/Base Frequency | Cache | TDP |
| --- | --- | --- | --- | --- |
| AMD Ryzen 7 7730U | 8C/16T | Up to 4.5 GHz / 2.0 GHz | 20MB | 15W |
| AMD Ryzen 5 7530U | 6C/12T | Up to 4.5 GHz / 2.0 GHz | 19MB | 15W |
| AMD Ryzen 3 7330U | 4C/8T | Up to 4.3 GHz / 2.3 GHz | 10MB | 15W |
Finally, their PRO mobile chips coming in February 2023:
| Model | Cores/Threads | Boost/Base Frequency | Cache | TDP |
| --- | --- | --- | --- | --- |
| AMD Ryzen 7 PRO 7730U | 8C/16T | Up to 4.5 GHz / 2.0 GHz | 20MB | 15W |
| AMD Ryzen 5 PRO 7530U | 6C/12T | Up to 4.5 GHz / 2.0 GHz | 19MB | 15W |
| AMD Ryzen 3 PRO 7330U | 4C/8T | Up to 4.3 GHz / 2.3 GHz | 10MB | 15W |
That's quite a lot to take in, and those new desktop X3D chips have my mouth watering a bit. I only upgraded to a 5800X in the last year, so I don't exactly need another upgrade, but gosh they sound great. I need a PSU and GPU upgrade next though…
Will you be looking to grab any of these? Do let me know in the comments.
You can see the full event video below:
Direct Link
This is also probably a good place to note that AMD have confirmed overheating and thermal throttling issues with their new Radeon RX 7900 XTX, which YouTuber der8auer has been detailing across two videos. The issue affects AMD's reference cards, not the ones designed by their partners. The statement AMD released:
We are working to determine the root cause of the unexpected throttling experienced by some while using the AMD Radeon RX 7900 XTX graphics cards made by AMD. Based on our observations to-date, we believe the issue relates to the thermal solution used in the AMD reference design and appears to be present in a limited number of the cards sold. We are committed to solving this issue for impacted cards.
"I wish they also included the max TDP, like Intel in the previous announcement"
But TDP numbers are nearly useless; all manufacturers use their own incomprehensible algorithms to conjure up their magical numbers. Gamers Nexus has addressed this many times over the last couple of years, if you want to dive into this particular swamp.
I think the same thing is happening with GPUs now; we are always beta testers. Maybe a year after launch is the time to buy.
"I wish they also included the max TDP, like Intel in the previous announcement"
But TDP numbers are nearly useless; all manufacturers use their own incomprehensible algorithms to conjure up their magical numbers. Gamers Nexus has addressed this many times over the last couple of years, if you want to dive into this particular swamp.
Thanks, I didn't know. Will have a look!
It was also about time they recognized the design error on the 7900 XTX and started accepting RMAs.
They have, and are doing so!
https://wccftech.com/amd-confirms-radeon-rx-7900-xtx-throttling-issue-related-to-thermal-solution-used-in-reference-design/
I feel no pressure to upgrade from my 5800X3D/6900XT for 1440p/144, but as always, I'm tempted.
"there will now be a scheduler problem that is worse than on Alder Lake"
Well, a potential problem at least. If it even is a problem at launch, I expect it to be addressed quickly, much like Alder Lake's was.
How? I cannot think of how a scheduler can know which thread needs larger L3 vs higher clock frequency.
I wonder if they still plan to make the long-rumoured AM4 5600X3D. It would sure be nice for a cost-efficient upgrade on AM4 systems.
It seems as though they've moved on from this idea. The 5800X3D was the last AM4 desktop chip we'll get; AM5 is all they care about going forward.
I feel no pressure to upgrade from my 5800X3D/6900XT for 1440p/144, but as always, I'm tempted.
With the generational stagnation in performance, you're probably good to run that system for another 5-7 years before you absolutely reach a point where games need too many settings turned down to be justifiable.
For context (although it's not exactly a like-for-like comparison), your card has 35 TFLOPS, while the Steam Deck and Xbox One (which is still a huge target platform) have less than 3 TFLOPS (1.8 for the Steam Deck), and they run games just well enough at 720p/1080p. Even an old 1660 Ti has 5.5 TFLOPS and can play most every game on medium/high at 60fps. Sure, if you want ultra settings pegged at a constant 144Hz at 1440p then you'll probably be looking at a yearly or bi-yearly upgrade. That seems excessive, but it's your money, and energy-wise cards are not getting lower TDPs: a 4090 is supposed to be 85 TFLOPS (not sure if that's accurate, but possibly), and it's a 450W card!
Edit: just checking, and the Xbox Series X and the PS5 are under 12 TFLOPS (again, not an exact marker for performance). But I'm not sure Sony or MS are going to put a 450W GPU plus a competing CPU in a small-form-factor console. So with your hardware you are definitely covered for a looong time.
Perhaps lower raytracing performance will be the issue with your hardware though.
Last edited by Lofty on 6 January 2023 at 5:08 pm UTC
How? I cannot think of how a scheduler can know which thread needs larger L3 vs higher clock frequency.
That's my thought too, but I think this so-called "big little" issue has existed for a while (did it start with ARM?), and maybe there was some prior work on that front for Linux in regards to asymmetric cache?
I also wonder what will happen if the scheduler is unaware of any of that and scheduling is effectively random. I.e. which will perform better, the 7950X or the 7950X3D? Some thorough benchmarks comparing them will surely be needed.
Last edited by Shmerl on 9 January 2023 at 7:07 am UTC
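The kind of comparison mentioned above can at least be sketched on Linux, where a process can be pinned to a subset of cores before timing a workload. This is only an illustration: the workload below is a stand-in, and the assumption that the first half of the visible cores would map to one CCD (e.g. the V-Cache CCD on a 7950X3D) has to be verified on real hardware.

```python
import os
import time

def workload():
    # Stand-in for a cache-sensitive task; a real test would run the
    # actual game or application.
    return sum(x * x for x in range(200_000))

def timed_on(cores):
    # Pin this process to the given cores, then time the workload.
    os.sched_setaffinity(0, cores)
    start = time.perf_counter()
    workload()
    return time.perf_counter() - start

all_cores = sorted(os.sched_getaffinity(0))
half = max(1, len(all_cores) // 2)
first = set(all_cores[:half])
# On a single-core machine the second half is empty; fall back to the first.
second = set(all_cores[half:]) or first

t_first = timed_on(first)
t_second = timed_on(second)
os.sched_setaffinity(0, set(all_cores))  # restore the original affinity
print(f"first half: {t_first:.4f}s, second half: {t_second:.4f}s")
```

On a dual-CCD X3D part, repeating this with a genuinely cache-hungry workload is roughly what reviewers would need to do to answer the 7950X vs 7950X3D question per application.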
How? I cannot think of how a scheduler can know which thread needs larger L3 vs higher clock frequency.
That's my thought too, but I think this so-called "big little" issue has existed for a while (did it start with ARM?), and maybe there was some prior work on that front for Linux in regards to asymmetric cache?
I also wonder what will happen if the scheduler is unaware of any of that and scheduling is effectively random. I.e. which will perform better, the 7950X or the 7950X3D? Some thorough benchmarks comparing them will surely be needed.
ARM big.LITTLE is an easy problem in this complex problem space. There the scheduler has the "simple" choice of whether a thread should run on a high-performing CPU or a low-performing one, and it makes it by collecting some metrics (which, after all these years, are still far from perfect). Alder Lake has a very similar design, but there they added metric collection in the hardware (Intel Thread Director), and AFAIK this is still far from perfect even in Windows 11, where compile jobs sometimes get scheduled to the E-cores and thus take 55 minutes to complete instead of 17.
The problem space here is that both kinds of cores are high performance, just in different ways. Trying to determine whether your thread/application would benefit more from a higher clock or a larger cache is something that takes endlessly long benchmarks over numerous runs for application developers today (to determine which CPU to recommend the enterprise run the system on).
My guess is that MS (at this point in time AMD have only talked to Microsoft, AFAIK) will simply (if they do anything at all) try to detect whether the application is a game and, if so, run it on the larger-cache cores while running everything else on the higher-boost cores.
To really benefit here, app/game developers would have to benchmark this individually and set the thread affinity themselves, but given the number of combinations, combined with this probably being a niche CPU, I have a hard time seeing that being done.
I think the most telling thing of all is that neither AMD nor Intel have any plans whatsoever to implement either of these strategies in the server market.
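For what it's worth, setting the thread affinity described above is the easy part on Linux; knowing which cores to pick is the hard part. A minimal sketch, assuming (purely for illustration) that the first half of the visible cores is the extra-cache CCD:

```python
import os

def pin_to_cache_ccd():
    """Restrict this process to the assumed V-Cache cores (first half)."""
    all_cores = sorted(os.sched_getaffinity(0))
    # Assumption for illustration only: the first half is the cache CCD.
    # On real hardware this must be confirmed, e.g. via the L3 sizes
    # reported under /sys/devices/system/cpu/cpu*/cache/.
    cache_cores = set(all_cores[: max(1, len(all_cores) // 2)])
    os.sched_setaffinity(0, cache_cores)
    return sorted(os.sched_getaffinity(0))

pinned = pin_to_cache_ccd()
print("pinned to cores:", pinned)
```

A game could do the equivalent at startup (or a user could launch it under `taskset`), but as the comment notes, picking the right half per title is exactly the benchmarking burden nobody wants.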
The problem space here is that both kinds of cores are high performance, just in different ways. Trying to determine whether your thread/application would benefit more from a higher clock or a larger cache is something that takes endlessly long benchmarks over numerous runs for application developers today (to determine which CPU to recommend the enterprise run the system on).
I wonder if AI can help the scheduler with that. It feels like a prediction problem based on some moving sample input.
AMD are now adding AI chips to some of their APUs, so maybe this could even be hardware accelerated in the future.
Last edited by Shmerl on 9 January 2023 at 8:12 pm UTC
The problem space here is that both kinds of cores are high performance, just in different ways. Trying to determine whether your thread/application would benefit more from a higher clock or a larger cache is something that takes endlessly long benchmarks over numerous runs for application developers today (to determine which CPU to recommend the enterprise run the system on).
I wonder if AI can help the scheduler with that. It feels like a prediction problem based on some moving sample input.
AMD are now adding AI chips to some of their APUs, so maybe this could even be hardware accelerated in the future.
Possibly
Edit: The 7900 XTX is lovely, other than the fact I can't see my cursor under X11. Still don't need a CPU upgrade though.
Good to know! Are games working well? And does the cursor work in KDE Wayland session?
Last edited by Shmerl on 24 January 2023 at 9:55 pm UTC
In every DX11/DX12/Vulkan game I've tried thus far, yes*. Before commenting back, I fired up the oldest game I currently have installed for shits and giggles: GTA IV.
[image: GTA4_borked]
I installed Plasma for you, but no, the cursor is not visible there either (enabling the software cursor works). I'm not the only one, so I'll double check whether a bug has been filed already.
*Even The Witcher 3 enhanced seemed to run fine under DX12, but appeared to have a floor texture bug. More investigation needed.
Edit: I really notice the GPU performance uplift in Valheim (Vulkan). Not in the typical open world, but in areas with lots of terraforming and buildings.
Edit 2: Until mutter supports VRR, I've switched to a Plasma Wayland session as my default. FreeSync is working as expected. I should also note this isn't exactly stock Fedora: it's Fedora 37 with Rawhide's (so currently Fedora 38) no-debug kernel (currently 6.2.0-0.rc4.31.fc38.x86_64).
Last edited by drlamb on 25 January 2023 at 8:51 pm UTC