Today at the AMD "together we advance_gaming" event, AMD revealed their new RDNA3 architecture along with the RX 7900 XTX and RX 7900 XT GPUs. Both of the new cards will be available on December 13th, and during the event AMD threw plenty of shade at NVIDIA over power use and connector issues, talking about how "easy" it is to upgrade to their new cards.
Specifications:
|  | AMD Radeon RX 7900 XT | AMD Radeon RX 7900 XTX |
| --- | --- | --- |
| Memory | 20 GB GDDR6, 80 MB Infinity Cache, 84 Ray Accelerators | 24 GB GDDR6, 96 MB Infinity Cache, 96 Ray Accelerators |
| Speed | Base 1500 MHz, Boost up to 2400 MHz, Game 2000 MHz | Base 1900 MHz, Boost up to 2500 MHz, Game 2300 MHz |
| Connections | DisplayPort 2.1, HDMI 2.1, USB Type-C | DisplayPort 2.1, HDMI 2.1, USB Type-C |
| Rendering | HDMI 4K support, 4K H264 decode/encode, H265/HEVC decode/encode, AV1 decode/encode | HDMI 4K support, 4K H264 decode/encode, H265/HEVC decode/encode, AV1 decode/encode |
| Power | Typical Board Power (Desktop) 300 W, minimum PSU 750 W | Typical Board Power (Desktop) 355 W, minimum PSU 800 W |
| Dimensions | Length 276 mm, 2.5 slots | Length 287 mm, 2.5 slots |
| Pricing | $899 | $999 |
They also teased FSR3, which is due out next year, but didn't go into much detail on it. According to AMD, FSR3 is "expected to deliver up to 2X more FPS compared to AMD FSR 2 in select games".
- AMD RDNA 3 Architecture – Featuring an advanced chiplet design, new compute units and second-generation AMD Infinity Cache technology, AMD RDNA 3 architecture delivers up to 54% more performance per watt than the previous-generation AMD RDNA 2 architecture. New compute units share resources between rendering, AI and raytracing to make the most effective use of each transistor for faster, more efficient performance than the previous generation.
- Chiplet Design – The world’s first gaming GPU with a chiplet design delivers up to 15% higher frequencies at up to 54% better power efficiency. It includes the new 5nm 306mm² Graphics Compute Die (GCD) with up to 96 compute units that provide the core GPU functionality. It also includes six of the new 6nm Memory Cache Dies (MCDs) at 37.5mm², each with up to 16MB of second-generation AMD Infinity Cache technology.
- Ultra-Fast Chiplet Interconnect – Unleashing the benefits of second-generation AMD Infinity Cache technology, the new chiplets leverage AMD Infinity Links and high-performance fanout packaging to deliver up to 5.3TB/s of bandwidth.
- Expanded Memory and Wider Memory Bus – To meet the growing requirements of today’s demanding titles, the new graphics cards feature up to 24GB of high-speed GDDR6 memory running at 20Gbps over a 384-bit memory bus.
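Putting a few of those press-release numbers together: the 96MB of Infinity Cache on the XTX is simply six MCDs at 16MB each, and the raw GDDR6 bandwidth follows directly from the 20Gbps memory speed and the 384-bit bus (the 5.3TB/s figure is the separate chiplet interconnect). Here's a quick back-of-the-envelope check in plain Python, using only the per-die figures from the list above:

```python
# Rough sanity check of AMD's published RDNA 3 numbers (figures from the press release above).

gcd_area_mm2 = 306            # 5nm Graphics Compute Die
mcd_area_mm2 = 37.5           # each 6nm Memory Cache Die
mcd_count = 6                 # the 7900 XTX uses six MCDs
infinity_cache_per_mcd_mb = 16

memory_speed_gbps = 20        # GDDR6 effective data rate per pin
bus_width_bits = 384          # 7900 XTX memory bus

total_die_area = gcd_area_mm2 + mcd_count * mcd_area_mm2
total_infinity_cache = mcd_count * infinity_cache_per_mcd_mb
gddr6_bandwidth_gbs = memory_speed_gbps * bus_width_bits / 8   # bits per pin -> bytes

print(f"Total die area:      {total_die_area:.0f} mm^2")        # ~531 mm^2
print(f"Infinity Cache:      {total_infinity_cache} MB")        # 96 MB
print(f"Raw GDDR6 bandwidth: {gddr6_bandwidth_gbs:.0f} GB/s")   # 960 GB/s
```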
Based on the pricing, they seem like pretty great value to me. Having a flagship under $1K is a very good move when compared to what NVIDIA are offering. If the performance is in any way comparable, it should sell quite well.
From the press release: “These new graphics cards are designed by gamers for gamers. As we were developing the new cards, we not only incorporated feedback from our customers, but we built in the features and capabilities we wanted to use,” said Scott Herkelman, senior vice president & general manager, Graphics Business Unit at AMD. “We also realized that we needed to do something different to continue pushing the envelope of the technology, and I’m proud of what the team has accomplished with AMD RDNA 3 and the Radeon RX 7900 Series graphics cards. I can’t wait for gamers to experience the powerhouse performance, incredibly vivid visuals and amazing new features these new graphics cards offer.”
Full event can be seen below:
Direct Link
Also, it's still fun to see the Steam Deck pictured at events like this. AMD made the APU, so it's only natural for them to highlight it, but it's nice to see it again for a device that's doing so much for Linux gaming as a whole.
Quoting: Shmerl
That's not how I remember it. AMD had asynchronous compute and a focus on parallelized workloads way before Nvidia, and their hardware is better at it from what I know. Maybe something changed in the last generation, but I doubt AMD is planning to not compete in that area, it wouldn't make sense.

Yes and no, ATI IS better at parallelized workloads, which would make them much faster cards... but the nvidia cards, while far less efficient, have far more cores. The 4090 for instance uses 128 streaming multiprocessors (similar to the 96 compute units on the 7900 XTX) and 16384 general purpose tensor cores, compared to the roughly equivalent 6144 stream processors on the 7900 XTX. Historically ATI has been more efficient per-core, by quite a bit actually; nvidia literally just throws more hardware at the problem.
Direct comparison
|  | NV | ATI |
| --- | --- | --- |
| Transistors | 76.3B | 58B |
| Cores | 128 | 96 |
| Compute cores | 16384 | 6528* |
| Processing power | 82.6 TF | 61 TF |

*ATI has multiple types; NV groups them all as CUDA cores, this is the combined figure.
So you can kinda see just from this that with around 40% of the compute cores, its cores are MUCH more efficient at single precision operations, pushing it to around ~75% of the raw power of the NV card for around half the price, definitely the "most bang for the buck" out of the options on the market right now. Yeah, it's not a direct comparison, there are more internal differences, but it shows why ATI gets such good performance at so much lower a price.
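For what it's worth, those TF figures line up with the usual peak-FP32 estimate (shader count × 2 ops per FMA × clock), if you assume boost clocks of roughly 2.5 GHz on both cards and count RDNA 3's dual-issue path as doubling the XTX's 6144 stream processors. A rough sketch of that arithmetic; the clock values here are my assumptions based on the listed boost specs, not measured figures:

```python
# Peak FP32 throughput = shader count * 2 ops per FMA * clock (GHz) -> GFLOPS, then /1000 for TFLOPS.
# Clocks are assumed boost figures.

def peak_fp32_tflops(cores: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
    """Theoretical peak single-precision TFLOPS."""
    return cores * ops_per_clock * clock_ghz / 1000

# RTX 4090: 16384 CUDA cores at ~2.52 GHz boost
print(f"4090:     {peak_fp32_tflops(16384, 2.52):.1f} TFLOPS")    # ~82.6

# RX 7900 XTX: 6144 stream processors, RDNA 3 dual-issue counted as 2x FP32 per clock, ~2.5 GHz boost
print(f"7900 XTX: {peak_fp32_tflops(6144 * 2, 2.5):.1f} TFLOPS")  # ~61.4
```

Which is also why peak TFLOPS alone doesn't settle the argument: the two cards reach similar headline numbers through very different core counts and scheduling.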
I'm not really arguing for or against (I own ATI, NV, and Intel cards myself; I like ATI for HTPCs in particular), just noting that since the stats line up basically the same as in previous card generations, I expect to see about the same results. I'm basically calling it right now: you'll see them pretty close in benchmarks at 1080p/1440p high settings on current or last-gen engine tech, possibly with ATI being faster; 4K high settings without any newer Vulkan 1.3 features I think will be a dead heat, but as you scale to 8K, VR, and newer features you'll start to see the drop-off where NV pulls ahead.
Last edited by raptor85 on 3 November 2022 at 10:48 pm UTC
I don't think 8K is a relevant use case yet; it's more of a red herring. VR on the other hand is where this can be interesting, but uptake of VR is still pretty low.
I doubt even 4K has become that relevant yet, at least if you care about higher refresh rates.
Last edited by Shmerl on 3 November 2022 at 10:45 pm UTC
Quoting: Shmerl
Not sure if comparing tensor cores in the same mix really is helpful. Those are only useful for specialized workloads. So it's like comparing apples and oranges. But sure, number of cores is one way to mitigate weaker cores. That's what Intel is doing now in CPUs.

Sorry, that was a mistype, it's cuda cores, I had tensor on the brain, been playing with AI lately.
(also, I wouldn't say the tensor cores go unused; anything using DLSS2 is using them, so pretty much any modern AAA game)
Last edited by raptor85 on 3 November 2022 at 10:55 pm UTC
Quoting: jordicoma
And I don't understand the AI accelerator part.

It's for people doing machine learning; they use GPUs to handle the vast amounts of data, so AMD, nVidia (and now Intel) are implementing new functions on their cards to improve performance in that area. Nothing that helps the rest of us who only use these for displaying graphics and playing games.
Quoting: CyborgZeta
Wait, Tesla? Tesla cars have AMD GPUs in them?

The revamped S and X models have RDNA2 GPUs in them, yes: link
Last edited by F.Ultra on 4 November 2022 at 12:18 am UTC
Quoting: F.Ultra
It's for people doing machine learning; they use GPUs to handle the vast amounts of data, so AMD, nVidia (and now Intel) are implementing new functions on their cards to improve performance in that area. Nothing that helps the rest of us who only use these for displaying graphics and playing games.
Nothing stops games from using AI you know, for more realistic behaviors and simulation, not for graphics :)
It feels like a lot of games are chasing better graphics, but almost no one is really trying to improve world simulation quality.
Quoting: Shmerl
Quoting: F.Ultra
It's for people doing machine learning; they use GPUs to handle the vast amounts of data, so AMD, nVidia (and now Intel) are implementing new functions on their cards to improve performance in that area. Nothing that helps the rest of us who only use these for displaying graphics and playing games.
Nothing stops games from using AI you know, for more realistic behaviors and simulation, not for graphics :)
It feels like a lot of games are chasing better graphics, but almost no one is really trying to improve world simulation quality.
True, I don't know if any of that game AI uses any of the ML functionality of the GPUs yet though.
edit: AFAIK those ML extensions also have to do with the learning (training) part of AI research and not the end result of said research, but then again I have basically zero knowledge of ML so I should just shut up :)
Last edited by F.Ultra on 4 November 2022 at 12:21 am UTC
Quoting: F.Ultra
edit: AFAIK those ML extensions also have to do with the learning (training) part of AI research and not the end result of said research, but then again I have basically zero knowledge of ML so I should just shut up :)
Well, imagine a game that adapts to the player in some way. So the learning part can be useful for that, I suppose.
Wondering if FSR3 will be exclusive to RDNA3.. seems like it should run just fine on RDNA2 as well, unsure about RDNA1 cards...
Last edited by TheRiddick on 4 November 2022 at 2:42 am UTC
Quoting: TheRiddick
$999USD TAKE MY MONEY!!!!!! GIVE IT TO ME!
https://www.youtube.com/watch?v=Yx1PCWkOb3Y&t=13s
:)
Quoting: TheRiddick
Wondering if FSR3 will be exclusive to RDNA3.. seems like it should run just fine on RDNA2 as well, unsure about RDNA1 cards...
I doubt it will be limited, but maybe it can use something from RDNA3 for better performance? Not sure if they use any AMD-specific Vulkan extensions.
Though for a $1000 card, I don't see any point in using FSR.
Last edited by Shmerl on 4 November 2022 at 2:48 am UTC