AMD came out of the gates swinging at Computex 2021 with new chips and plenty of new tech, including: AMD 3D chiplet technology, AMD Ryzen 5000 G-Series desktop APUs, next-gen gaming laptops with the new AMD Radeon RX 6000M Series Mobile Graphics, and FidelityFX Super Resolution, their DLSS competitor.
There's quite a lot to unpack here and we're still going through it, so we will update the article if we missed anything vital. The big one is no doubt FidelityFX Super Resolution (FSR), an open source spatial upscaling technology comparable to NVIDIA DLSS (which is coming to Proton!). The open source part is quite exciting, although it isn't available just yet: AMD said that "in due course" it will be published under the GPUOpen branch and under the MIT license.
AMD are betting big on FidelityFX Super Resolution, clearly firing a shot at NVIDIA by making it fully cross-platform: it supports DirectX 11 & 12 and Vulkan, and will even run on NVIDIA GPUs. AMD say that once it's released, "FSR can be ported onto multiple platforms without restriction."
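The full FSR algorithm isn't public yet, so to give a rough idea of what "spatial upscaling" means in general, here's a minimal sketch. To be clear, this is not AMD's actual code, just our own illustration in plain NumPy: a single frame rendered at a lower internal resolution gets resampled up to the display resolution and then sharpened, with no temporal history and no trained neural network involved.

```python
# Minimal, hypothetical sketch of single-frame ("spatial") upscaling:
# render low-res -> resample to display resolution -> sharpen.
# NOT AMD's FSR algorithm, just an illustration of the general idea.
import numpy as np

def bilinear_upscale(frame: np.ndarray, scale: float) -> np.ndarray:
    """Resample an (H, W, C) float image to (H*scale, W*scale, C) with bilinear filtering."""
    h, w, c = frame.shape
    out_h, out_w = int(h * scale), int(w * scale)
    ys = np.linspace(0, h - 1, out_h)          # source y coordinate for each output row
    xs = np.linspace(0, w - 1, out_w)          # source x coordinate for each output column
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = frame[y0][:, x0] * (1 - wx) + frame[y0][:, x1] * wx
    bot = frame[y1][:, x0] * (1 - wx) + frame[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def sharpen(image: np.ndarray, amount: float = 0.25) -> np.ndarray:
    """Simple unsharp-mask pass: boost the difference between a pixel and its neighbours."""
    blurred = (np.roll(image, 1, 0) + np.roll(image, -1, 0) +
               np.roll(image, 1, 1) + np.roll(image, -1, 1)) / 4.0
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)

# Render at 720p internally, present at 1080p: upscale by 1.5x and sharpen.
low_res_frame = np.random.rand(720, 1280, 3).astype(np.float32)  # stand-in for a rendered frame
display_frame = sharpen(bilinear_upscale(low_res_frame, 1.5))
print(display_frame.shape)  # (1080, 1920, 3)
```

Something in this spirit can run as an ordinary shader pass on more or less any GPU, which is what makes the cross-platform claims plausible.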
AMD continue pushing the boundaries of their processor tech with the introduction of AMD 3D chiplet technology. What could be a real breakthrough in packaging combines AMD's chiplet architecture with 3D stacking, developed in collaboration with TSMC, which they claim "provides over 200 times the interconnect density of 2D chiplets and more than 15 times the density compared to existing 3D packaging solutions". They showed it in a real-world application too, demonstrating the 3D bonding on a Ryzen 5000 Series processor prototype. AMD say production of these 3D chiplets will begin by the end of this year.
We're finally seeing AMD bring their next-generation APUs to the desktop for system builders too, with the AMD Ryzen 5000 G-Series desktop APUs. They've split them between consumer and business models; here are the consumer models that we care about:
The AMD Ryzen 5000 G-Series desktop APUs will be available "later this year".
On top of that, AMD also announced the new AMD Radeon RX 6000M Series Mobile Graphics. Based on RDNA2, AMD say it delivers "up to 1.5x" higher performance than the RDNA architecture, or "up to 43 percent" lower power at the same performance. It also brings their AMD Infinity Cache and Ray Tracing over to next-gen laptops.
Model | Compute Units & Ray Accelerators | GDDR6 | Game Clock | Memory Interface | Infinity Cache
AMD Radeon RX 6800M | 40 | 12 GB | 2300 MHz @ 145 W | 192-bit | 96 MB
AMD Radeon RX 6700M | 36 | 10 GB | 2300 MHz @ 135 W | 160-bit | 80 MB
AMD Radeon RX 6600M | 28 | 8 GB | 2177 MHz @ 100 W | 128-bit | 32 MB
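To make the performance-per-watt claims a bit more concrete, here's a purely hypothetical back-of-envelope calculation; the 60 fps at 145 W baseline is invented for the sake of the arithmetic and is not an AMD figure.

```python
# Hypothetical illustration of "up to 1.5x performance" and
# "up to 43% lower power at the same performance" for RDNA2 mobile vs RDNA.
# The 60 fps @ 145 W baseline below is made up purely for this example.
rdna_fps, rdna_watts = 60.0, 145.0

same_power_fps = rdna_fps * 1.5            # up to ~90 fps at the same 145 W
same_fps_watts = rdna_watts * (1 - 0.43)   # or ~83 W to hold the original 60 fps

print(f"~{same_power_fps:.0f} fps @ {rdna_watts:.0f} W, "
      f"or {rdna_fps:.0f} fps @ ~{same_fps_watts:.0f} W")
```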
"At Computex, we highlighted the growing adoption of our high-performance computing and graphics technologies as AMD continues setting the pace of innovation for the industry," said Dr. Su. "With the launches of our new Ryzen and Radeon processors and the first wave of AMD Advantage notebooks, we continue expanding the ecosystem of leadership AMD products and technologies for gamers and enthusiasts. The next frontier of innovation in our industry is taking chip design into the third dimension. Our first application of 3D chiplet technology at Computex demonstrates our commitment to continue pushing the envelope in high-performance computing to significantly enhance user experiences. We are proud of the deep partnerships we have cultivated across the ecosystem to power the products and services that are essential to our daily lives."
If you want to catch the whole thing, you can watch it in the below video:
Direct Link
But that's not how you game. You typically only have your one gaming PC and that's what you use. If you can play about with DLSS on/off and 4K vs 1080p, then you'll find your sweet spot.
So FSR is huge news for me. It's DLSS-like enough that I'm excited to try it. Because I went AMD last year, and I ain't going back to Nvidia any time soon. FSR is going to be open source for goodness sake. Absolutely love what AMD is doing right now.
If only GPUs were actually available...
Quoting: scaine
If only GPUs were actually available...
Supposedly the situation should get better later this year. Still waiting to get Sapphire Pulse RX 6800 XT myself.
Quoting: sub
But the point is, that with implementations like M1 more than competitive horse power is now available at high efficiency.
I still don't get why this is supposed to be a win for Nvidia? The Apple M1 is not a Cortex design, so Nvidia has no IP there and would still need to develop something of their own. The M1 is ARMv8 instruction compatible but is otherwise Apple's own design. It's like saying Intel is the go-to CPU supplier because AMD Ryzen has shown good performance. You could build a fast general purpose CPU with any modern instruction set.
Quoting: Guest
Quoting: Shmerl
I think for an anti-feature, it's good enough to end DLSS. Because it works everywhere and will also get better over time. The other side of it - it's general purpose. While AI/ML is more limited to specific use cases. So it is better in some cases and worse in others. Everything is a trade off.
This sounds like it was written by someone who doesn't understand either of these features. Hardware-assisted AI/ML being "limited" to specific use cases while an upscaling tech being "general purpose"? Give me a break...
Honestly, you sound like you don't know how Machine Learning and Artificial Intelligence works. It's typically incredibly focused. Using ML/AI in an implementation of anything doesn't magically make it versatile or flexible.
However, that said, the point of this technology is to enable fast frame rates of complex scenes (potentially with ray tracing thrown in) at high resolutions. That's a pretty focused target. So I'm not really sure why an upscaler like FSR is somehow more "general purpose" than DLSS.
But I agree with Shmerl that FSR can be used anywhere - mobiles, consoles, Windows, Linux, on AMD, Nvidia, ARM, Intel, you name it. So I sincerely hope it does well. DLSS can do well too... for all I care. But I don't care, since I doubt I'll ever buy Nvidia again, unless they start competing with AMD on the open source front. And DLSS doing well only benefits Nvidia.
As someone else has already pointed out, it's G-Sync vs Freesync all over again.
(as an aside, I actively avoided buying a monitor advertising g-sync support recently - it looked like a nice piece of hardware, but I knew that if I bought it, I'd be paying for a feature that would forever be locked from me. This sums up my view on Nvidia right now. Hopefully they change that view by following AMD's example in future)
Quoting: scaine
However, that said, the point of this technology is to enable fast frame rates of complex scenes (potentially with ray tracing thrown in) at high resolutions. That's a pretty focused target. So I'm not really sure why an upscaler like FSR is somehow more "general purpose" than DLSS
I think if I understood correctly FSR makes generic assumptions, while DLSS has to be trained on specific input (unless I misunderstood the idea). Generic assumption sounds like a broader approach to me. Trained neural network can produce better result for what it's trained for, but will be close to useless for other cases. Something that's more generic is probably in between. So both are a trade off I think.
Last edited by Shmerl on 2 June 2021 at 9:30 pm UTC
Quoting: Shmerl
Quoting: scaine
However, that said, the point of this technology is to enable fast frame rates of complex scenes (potentially with ray tracing thrown in) at high resolutions. That's a pretty focused target. So I'm not really sure why an upscaler like FSR is somehow more "general purpose" than DLSS
I think if I understood correctly FSR makes generic assumptions, while DLSS has to be trained on specific input (unless I misunderstood the idea). Generic assumption sounds like a broader approach to me. Trained neural network can produce better result for what it's trained for, but will be close to useless for other cases. Something that's more generic is probably in between. So both are a trade off I think.
Yeah, but I think in this case, DLSS is trained on the textures of the game itself, so in terms of what it does, I don't think it's necessarily any less useful than FSR? I might be wrong.
Just being Nvidia-only is good enough for me to fully get behind FSR. And the AMD showcase video for it was pretty impressive given how young the technology is (dunno what user "sub" was talking about above, claiming that FSR doesn't look as good - not only is there barely any difference, the whole point of these technologies is that they won't look as good, but you'll get 100%+ FPS out of them at high-res, and if you can only tell the difference in a side-by-side video, then that's clear "good enough").
Quoting: scaine
Yeah, but I think in this case, DLSS is trained on the textures of the game itself, so in terms of what it does, I don't think it's necessarily any less useful than FSR? I might be wrong.
What I mean is, what if you have a game or any kind of use case on which it wasn't trained? Will it still fare better than FSR?
Last edited by Shmerl on 2 June 2021 at 9:47 pm UTC
Quoting: Shmerl
Quoting: scaine
Yeah, but I think in this case, DLSS is trained on the textures of the game itself, so in terms of what it does, I don't think it's necessarily any less useful than FSR? I might be wrong.
What I mean is, what if you have a game or any kind of use case on which it wasn't trained? Will it still fare better than FSR?

Apparently the current iteration of DLSS (2.0) uses a generic algorithm and doesn't require per-game training anymore. So I guess any game could implement support, but it would still only work on a GPU with Nvidia's tensor cores or whatever they call them.
Quoting: tuubi
Apparently the current iteration of DLSS (2.0) uses a generic algorithm and doesn't require per-game training anymore. So I guess any game could implement support, but it would still only work on a GPU with Nvidia's tensor cores or whatever they call them.
Why do they need tensor cores if it's a generic algorithm and not some trained neural network? I thought those were focused on AI applications.
In that sense I don't see it being too different from FSR. They'll just keep competing on who nails the better algorithm. However, from a cross-GPU usage perspective, DLSS is already DOA, like another CUDA, so it's not even a choice, and I agree with what @scaine said above about it.
Last edited by Shmerl on 2 June 2021 at 10:41 pm UTC
Quoting: scaine
Quoting: Guest
Quoting: Shmerl
I think for an anti-feature, it's good enough to end DLSS. Because it works everywhere and will also get better over time. The other side of it - it's general purpose. While AI/ML is more limited to specific use cases. So it is better in some cases and worse in others. Everything is a trade off.
This sounds like it was written by someone who doesn't understand either of these features. Hardware-assisted AI/ML being "limited" to specific use cases while an upscaling tech being "general purpose"? Give me a break...
Honestly, you sound like you don't know how Machine Learning and Artificial Intelligence works. It's typically incredibly focused.

Personally, I think a more honest name for it would be Artificial Instinct. 'Cause like, instinct is this stuff animals do without actually being smart or thinking or figuring it out, where through evolution they became really good at some particular task they need to be good at to survive. Machine Learning AI seems to be this forced evolution thing where you let the algorithms survive that are good at some specific task, until you've evolved a black box that can do that specialized thing really well but has no general reasoning ability. So it's instinct, but for some (marketing) reason we call it "intelligence".