
Today at the AMD "together we advance_gaming" event, AMD revealed their new RDNA3 architecture along with the RX 7900 XTX and RX 7900 XT GPUs. Both of the new cards will be available on December 13th, and during the event AMD threw plenty of shade at NVIDIA over power use and connector issues, talking about how "easy" it is to upgrade and noting the lower power draw.


Pictured: RX 7900 XTX

Specifications:

                      AMD Radeon RX 7900 XT    AMD Radeon RX 7900 XTX
Memory                20 GB GDDR6              24 GB GDDR6
Infinity Cache        80 MB                    96 MB
Ray Accelerators      84                       96
Base Frequency        1500 MHz                 1900 MHz
Boost Frequency       Up to 2400 MHz           Up to 2500 MHz
Game Frequency        2000 MHz                 2300 MHz
Connections           DisplayPort 2.1, HDMI 2.1, USB Type-C (both cards)
Rendering             HDMI 4K Support; 4K H264 Decode/Encode; H265/HEVC Decode/Encode; AV1 Decode/Encode (both cards)
Typical Board Power   300 W                    355 W
Minimum PSU           750 W                    800 W
Length                276 mm                   287 mm
Slot Size             2.5 slots                2.5 slots
Pricing               $899                     $999

They also teased FSR3, which is due out next year, but didn't go into much detail on it. According to AMD, FSR3 is "expected to deliver up to 2X more FPS compared to AMD FSR 2 in select games".

  • AMD RDNA 3 Architecture – Featuring an advanced chiplet design, new compute units and second-generation AMD Infinity Cache technology, AMD RDNA 3 architecture delivers up to 54% more performance per watt than the previous-generation AMD RDNA 2 architecture. New compute units share resources between rendering, AI and raytracing to make the most effective use of each transistor for faster, more efficient performance than the previous generation.
  • Chiplet Design – The world’s first gaming GPU with a chiplet design delivers up to 15% higher frequencies at up to 54% better power efficiency. It includes the new 5nm 306mm² Graphics Compute Die (GCD) with up to 96 compute units that provide the core GPU functionality. It also includes six of the new 6nm Memory Cache Die (MCD) at 37.5mm² each, with up to 16MB of second-generation AMD Infinity Cache technology.
  • Ultra-Fast Chiplet Interconnect – Unleashing the benefits of second-generation AMD Infinity Cache technology, the new chiplets leverage AMD Infinity Links and high-performance fanout packaging to deliver up to 5.3TB/s of bandwidth.
  • Expanded Memory and Wider Memory Bus – To meet the growing requirements of today’s demanding titles, the new graphics cards feature up to 24GB of high-speed GDDR6 memory running at 20Gbps over a 384-bit memory bus.
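
As a quick sanity check on that last memory bullet, peak bandwidth falls straight out of the per-pin data rate and the bus width. A minimal sketch in Python (the 960 GB/s result is my own arithmetic from the press-release figures, not a number AMD quoted):

```python
# Peak memory bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

# RX 7900 XTX: 20 Gbps GDDR6 over a 384-bit bus, per the bullet above
print(peak_bandwidth_gbs(20, 384))  # 960.0 GB/s
```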

Based on the pricing, they seem like pretty great value to me. Having a flagship under $1K is a very good move when compared to what NVIDIA are offering. If the performance is in any way comparable, it should sell quite well.

From the press release: “These new graphics cards are designed by gamers for gamers. As we were developing the new cards, we not only incorporated feedback from our customers, but we built in the features and capabilities we wanted to use,” said Scott Herkelman, senior vice president & general manager, Graphics Business Unit at AMD. “We also realized that we needed to do something different to continue pushing the envelope of the technology, and I’m proud of what the team has accomplished with AMD RDNA 3 and the Radeon RX 7900 Series graphics cards. I can’t wait for gamers to experience the powerhouse performance, incredibly vivid visuals and amazing new features these new graphics cards offer.”

The full event can be watched on YouTube.


Also, it's still fun to see the Steam Deck pictured at such events. AMD made the APU, so it's only natural for them to highlight it, but it's nice to see it again like this for a device that's doing so much for Linux gaming as a whole.

Tags: AMD, Hardware, Misc
73 comments

raptor85 Nov 3, 2022
That's not how I remember it. AMD had asynchronous compute and a focus on parallelized workloads way before Nvidia, and their hardware is better at it from what I know. Maybe something changed in the last generation, but I doubt AMD is planning to not compete in that area, it wouldn't make sense.
Yes and no, ATI IS better at parallelized workloads, which would make them much faster cards...but the nvidia cards, while far less efficient, have far more cores. The 4090 for instance uses 128 stream processors (similar to the 96 compute units on the 7900xtx) and 16384 general purpose tensor cores, compared to the roughly equivalent 6144 stream processors on the 7900 XTX. Historically ATI has been more efficient per-core, by quite a bit actually; nvidia literally just throws more hardware at the problem.

Direct comparison

                 NV      |   ATI
Transistors      76.3B   |   58B
Cores            128     |   96
Compute cores    16384   |   6528*  (ATI has multiple types; NV groups them as CUDA cores, this is combined)
Processing power 82.6TF  |   61TF


So you can kinda see just from this that with around a third of the cores, its cores are MUCH more efficient at single precision operations, pushing it to around ~75% the power of the NV card for around half the price; definitely the "most bang for the buck" out of the options on the market right now. Yeah, it's not a direct comparison, there are more internal differences, but it shows why ATI gets such good performance for so much lower of a price.
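
For reference, the TF figures in that table are consistent with the usual peak-FP32 formula: shader ALU count × FLOPs per ALU per clock × boost clock. A rough sketch (the ~2.52 GHz 4090 boost clock and RDNA3's dual-issue FMA are my assumptions, not figures from the article):

```python
# Peak FP32 throughput (TFLOPS) = ALUs * FLOPs per ALU per clock * clock (GHz) / 1000
def peak_tflops(alus: int, flops_per_clock: int, clock_ghz: float) -> float:
    return alus * flops_per_clock * clock_ghz / 1000

# RTX 4090: 16384 CUDA cores, one FMA (2 FLOPs) per clock, ~2.52 GHz boost (assumed)
print(peak_tflops(16384, 2, 2.52))  # ~82.6 TF
# RX 7900 XTX: 6144 stream processors, dual-issue FMA (4 FLOPs) per clock, 2.5 GHz boost
print(peak_tflops(6144, 4, 2.5))    # ~61.4 TF
```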

I'm not really arguing for or against (I own ATI/NV/Intel cards myself; I like ATI for HTPCs in particular), just noting that since the stats basically align the same way as in previous card generations, I expect to see about the same results. I'm basically calling it right now: you'll see pretty close results in benchmarks at 1080p/1440p high settings on current or last-gen engine tech, possibly with ATI being faster; 4k/high settings without any newer vk1.3 features I think will be a dead heat; but as you scale to 8k/VR/newer features you'll start to see the drop-off where NV pulls ahead.


Last edited by raptor85 on 3 November 2022 at 10:48 pm UTC
Shmerl Nov 3, 2022
Not sure if comparing tensor cores in the same mix really is helpful. Those are only useful for specialized workloads. So it's like comparing apples and oranges. But sure, number of cores is one way to mitigate weaker cores. That's what Intel is doing now in CPUs.

I don't think 8K is a relevant use case yet; it's more of a red herring. VR on the other hand is where this can be interesting, but uptake of VR is still pretty low.

I doubt even 4K has become relevant yet, at least if you care about higher refresh rates.


Last edited by Shmerl on 3 November 2022 at 10:45 pm UTC
raptor85 Nov 3, 2022
Not sure if comparing tensor cores in the same mix really is helpful. Those are only useful for specialized workloads. So it's like comparing apples and oranges. But sure, number of cores is one way to mitigate weaker cores. That's what Intel is doing now in CPUs.
sorry, that was a mistype, it's cuda cores, I had tensor on the brain, been playing with AI lately.

(also I wouldn't say the tensor cores aren't really used, anything using DLSS2 is using them, so pretty much any modern AAA game)


Last edited by raptor85 on 3 November 2022 at 10:55 pm UTC
CyborgZeta Nov 3, 2022
Wait, Tesla? Tesla cars have AMD GPUs in them?
F.Ultra Nov 4, 2022
And I don't understand the AI accelerator part.
It's for people doing machine learning; they use GPUs to handle the vast amounts of data, so AMD, NVIDIA (and now Intel) are implementing new functions on their cards to improve performance in that area. Nothing that helps the rest of us who only use these for displaying graphics and playing games.

Wait, Tesla? Tesla cars have AMD GPUs in them?
The revamped S and X models have RDNA2 GPUs in them, yes: link


Last edited by F.Ultra on 4 November 2022 at 12:18 am UTC
Shmerl Nov 4, 2022
It's for people doing machine learning; they use GPUs to handle the vast amounts of data, so AMD, NVIDIA (and now Intel) are implementing new functions on their cards to improve performance in that area. Nothing that helps the rest of us who only use these for displaying graphics and playing games.

Nothing stops games from using AI you know, for more realistic behaviors and simulation, not for graphics :)

It feels like a lot of games are chasing better graphics, but almost no one is really trying to improve world simulation quality.
F.Ultra Nov 4, 2022
It's for people doing machine learning; they use GPUs to handle the vast amounts of data, so AMD, NVIDIA (and now Intel) are implementing new functions on their cards to improve performance in that area. Nothing that helps the rest of us who only use these for displaying graphics and playing games.

Nothing stops games from using AI you know, for more realistic behaviors and simulation, not for graphics :)

It feels like a lot of games are chasing better graphics, but almost no one is really trying to improve world simulation quality.

True, I don't know if any of that game AI uses any of the ML functionality of the GPUs yet though.

edit: AFAIK those ML extensions also have to do with the learning part of AI research and not the end result of said research, but then again I have basically zero knowledge of ML so I should just shut up :)


Last edited by F.Ultra on 4 November 2022 at 12:21 am UTC
Shmerl Nov 4, 2022
edit: AFAIK those ML extensions also have to do with the learning part of AI research and not the end result of said research, but then again I have basically zero knowledge of ML so I should just shut up :)

Well, imagine a game that adapts to the player in some way. So the learning part could be useful for that, I suppose.
TheRiddick Nov 4, 2022
$999USD TAKE MY MONEY!!!!!! GIVE IT TO ME!


Wondering if FSR3 will be exclusive to RDNA3.. seems like it should run just fine on RDNA2 as well, unsure about RDNA1 cards...


Last edited by TheRiddick on 4 November 2022 at 2:42 am UTC
Shmerl Nov 4, 2022
$999USD TAKE MY MONEY!!!!!! GIVE IT TO ME!


https://www.youtube.com/watch?v=Yx1PCWkOb3Y&t=13s

:)

Wondering if FSR3 will be exclusive to RDNA3.. seems like it should run just fine on RDNA2 as well, unsure about RDNA1 cards...

I doubt it should be limited, but maybe it can use something from it for better performance? Not sure if they use any AMD-specific Vulkan extensions.

Though for a $1000 card, I don't see any point in using FSR.


Last edited by Shmerl on 4 November 2022 at 2:48 am UTC
Mrowl Nov 4, 2022
Just happy to see Steam Deck up on these press conference stages. Really helps to make Destiny 2 and MW2 not being playable on Steam OS look ridiculous.
TheRiddick Nov 4, 2022
Though for a $1000 card, I don't see any point in using FSR.

FSR is a good option to keep native resolution while tweaking the internal render resolution.

Have you ever tried to run a 4k screen at 1440p or 1080p resolution? It's vomit-looking... MUCH better to just upscale the render resolution rather than rely on pure resolution switching!

4k+FSR on an 8k native screen will look a lot better than running 4k resolution on an 8k screen... LCDs etc. are just shit at resolutions outside native; they just can't compete with how well CRT tube screens could do it.
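
To put numbers on the render-scale point: FSR 2's published quality modes each define a per-axis scale factor, so the internal resolution for any output is easy to work out. A small sketch using the factors from AMD's public FSR 2 documentation (the function name is just illustrative):

```python
# Per-axis scale factors from AMD's FSR 2 documentation
FSR2_MODES = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Internal resolution FSR 2 renders at before upscaling to the output."""
    s = FSR2_MODES[mode]
    return round(out_w / s), round(out_h / s)

print(render_resolution(3840, 2160, "Performance"))  # (1920, 1080): 4K output from 1080p
print(render_resolution(7680, 4320, "Performance"))  # (3840, 2160): 8K output from 4K
```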


Last edited by TheRiddick on 4 November 2022 at 6:16 am UTC
Shmerl Nov 4, 2022
FSR is a good option to keep native resolution while tweaking the internal render resolution.

Have you ever tried to run a 4k screen at 1440p or 1080p resolution?

I see. But what's the point of having a 4K screen if you want to run something at lower resolution? It's like overpaying for something you aren't using.

And I agree, better to always run LCD at native resolution. But then I want the game to run at it. That's why I'm not buying 4K screens (yet). GPUs can't sustain that with high enough framerates for demanding games.


Last edited by Shmerl on 4 November 2022 at 6:32 am UTC
TheRiddick Nov 4, 2022
Some games just won't run at high fps at 4k, they just can't... while many will. If you're buying a monitor based on the shittiest-performing game you own, then you're forever going to be stuck at 1080p/1440p...

Fact is many games will run at above 60fps on my 6800xt at 4k, but some just can't, especially if I mod them.

Nonetheless I would only use FSR2 or similar; not a big fan of FSR1 but I know some people really like it. Especially if you can't hit 60fps even at native, it's a good temporary solution until card prices come down.

I bought this LG C1 OLED because it was good price and OLED is GOD!
Shmerl Nov 4, 2022
If you're buying a monitor based on the shittiest-performing game you own, then you're forever going to be stuck at 1080p/1440p...

I'd measure it by whether the games I care about work well enough. For instance, I wanted to play Cyberpunk 2077 with a decent framerate; that means I'm not going to play it on a 4K screen. I get around 80-90 fps at 2560x1440 with a 6800 XT, and I can imagine the framerate tanking way too much at 4K. And for these games I want good image quality, not upscaling. So this is a metric that shows 4K just isn't ready for me to buy into.

I agree though, it depends on the types of games. Some not very demanding games can have an acceptable framerate at 4K with such a GPU.
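
For what it's worth, the framerate-tanking intuition checks out on a napkin: 3840x2160 has 2.25x the pixels of 2560x1440, so a purely GPU-bound 90 fps would land around 40 fps. A back-of-the-envelope sketch (real games are only approximately GPU-bound, so treat this as a rough estimate):

```python
# Naive GPU-bound scaling: frame time grows roughly linearly with pixel count
def scaled_fps(fps: float, from_res: tuple[int, int], to_res: tuple[int, int]) -> float:
    return fps * (from_res[0] * from_res[1]) / (to_res[0] * to_res[1])

# 90 fps at 1440p -> ~40 fps at 4K, assuming a fully GPU-bound workload
print(scaled_fps(90, (2560, 1440), (3840, 2160)))  # 40.0
```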


Last edited by Shmerl on 4 November 2022 at 6:43 am UTC
TheRiddick Nov 4, 2022
CP2077 at 4k with max settings and RT struggles even on NVIDIA cards. Sometimes you just gotta be super conservative, plus CP has some serious optimization issues in many areas of the game.

I did play it all the way through with my 6800xt at 4k, but had things dialed down, and in certain busy city areas I hit 40fps at times; it was pretty bad. But 60-70% of the game was well over 60fps.

Even the 4090 can't get that great fps with CP at 4k and DLSS3; it's like 80-90fps on the outskirts... you know, the area of the game that isn't too taxing!

Good thing about CP is that it does have all the upscaling tech now. I plan to go back to it sometime, maybe when DLC releases.
ShadMessa Nov 4, 2022
I'm not to impressed. It's to expensive. Yes it's better than nvidia price, but still to expensive.
And I don't understand the AI accelerator part.
too impressed. too expensive. NOT (to) please make the difference.
Shmerl Nov 4, 2022
Haven't used ray tracing in it (besides periodically checking if it's working), since radv doesn't support it in CP2077 yet. Supposedly pipeline callables aren't properly implemented yet.

But that's exactly the point. I want to play it at native resolution and preferably best settings (not counting RT, at least). But I also want a high enough framerate, preferably something higher than 60. So it hitting 90 at 2560x1440 without any upscaling is pretty good; 4K just wouldn't work well enough for such a scenario.

My point is, if you go for 4K, it means you want better image quality. If you need to upscale because the GPU can't cope, that kind of undermines the whole idea, since it goes in the opposite direction image-quality-wise.


Last edited by Shmerl on 4 November 2022 at 6:50 am UTC
TheRiddick Nov 4, 2022
The AI part might be more exciting once FSR3.0 comes out.

I'd assume they will be using it for that. It can also be used to speed up RT performance, as shown by many custom RT projects on YT where they use software tricks to speed things up, which can be run on hardware and accelerated with AI algorithms...

Just going to take time for these great things to come to the market/games... NVIDIA sort of kicks things off, but then they go all proprietary on everyone, and it takes AMD and Intel to expand the tech possibilities.

But that's exactly the point. I want to play it at native resolution and preferably best settings (not counting RT, at least). But I also want a high enough framerate, preferably something higher than 60. So it hitting 90 at 2560x1440 without any upscaling is pretty good; 4K just wouldn't work well enough for such a scenario.

Then you should probably avoid running it through Linux/DXVK... just saying. Translation layers and Linux drivers are always going to be a big issue for next-gen gaming and reaching those performance demands.

If you run CP2077 on Windows it performs a lot better overall (I tried both and decided to keep playing it on Windows rather than DXVK under Linux, which was giving me too many woes!)

Another good example is KCD, Kingdom Come Deliverance.
I get +20fps under Windows vs Linux... so under Linux I get 50fps, but under Windows I get 70fps. If you're buying a card based on Linux performance, then you should just stay away from certain games for the time being... drivers are not there yet... I don't give two shits what anyone says, they're JUST NOT THERE yet...

(The fact that 10-bit colour is still a no-no, and ray tracing is still experimental in most cases, kind of says it all)


Last edited by TheRiddick on 4 November 2022 at 7:00 am UTC
Shmerl Nov 4, 2022
Then you should probably avoid running it through Linux/DXVK... just saying. Translation layers and Linux drivers are always going to be a big issue for next-gen gaming and reaching those performance demands.

Well, we are here gaming on Linux, aren't we? So Windows isn't my consideration :) I obviously evaluate it all from the perspective of Linux gaming and choose hardware with that in mind.


Last edited by Shmerl on 4 November 2022 at 7:02 am UTC