AMD came out of the gates swinging at Computex 2021 with new chips, new tech and plenty more, including: AMD 3D chiplet technology, AMD Ryzen 5000 G-Series desktop APUs, next-gen gaming laptops with the new AMD Radeon RX 6000M Series Mobile Graphics, and their DLSS competitor, FidelityFX Super Resolution.
There's quite a lot to unpack here and we're still going through it, so we will update the article if we missed anything vital. The big one is no doubt FidelityFX Super Resolution (FSR), an open source spatial upscaling technology that invites comparison with NVIDIA DLSS (which is coming to Proton!). The open source part is quite exciting, although it isn't available just yet: AMD said that "in due course" it will be released under GPUOpen with the MIT license.
AMD are betting big with FidelityFX Super Resolution, clearly firing shots at NVIDIA by making it fully cross-platform: it supports DirectX 11 & 12 and Vulkan, and it even runs on NVIDIA GPUs. AMD say that once it's released, "FSR can be ported onto multiple platforms without restriction".
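To give a rough idea of what "spatial" means here: unlike DLSS, which leans on motion vectors and previous frames, a spatial upscaler works from the single current frame. Below is our own minimal sketch of that idea, a plain bilinear upscale followed by a simple sharpening pass. To be clear, this is for illustration only and is not AMD's actual algorithm, which they haven't published yet:

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Resample a 2D grayscale image by `scale` using bilinear interpolation."""
    h, w = img.shape
    oh, ow = int(h * scale), int(w * scale)
    # Source coordinates for each output pixel (pixel centres).
    ys = (np.arange(oh) + 0.5) / scale - 0.5
    xs = (np.arange(ow) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    fy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    fx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    # Fetch the four neighbours and blend.
    tl = img[np.ix_(y0, x0)]
    tr = img[np.ix_(y0, x0 + 1)]
    bl = img[np.ix_(y0 + 1, x0)]
    br = img[np.ix_(y0 + 1, x0 + 1)]
    top = tl * (1 - fx) + tr * fx
    bot = bl * (1 - fx) + br * fx
    return top * (1 - fy) + bot * fy

def sharpen(img, amount=0.25):
    """Cheap unsharp mask against a 4-neighbour blur, to restore some crispness."""
    blur = img.copy()
    blur[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                        img[1:-1, :-2] + img[1:-1, 2:]) / 4.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

# One low-res frame in, one sharpened high-res frame out; no frame history.
frame = np.random.default_rng(1).random((540, 960))
output = sharpen(bilinear_upscale(frame, 2.0))
print(frame.shape, "->", output.shape)  # (540, 960) -> (1080, 1920)
```

The real thing will be a heavily optimised GPU shader rather than NumPy, but the contract is the same: one low resolution frame in, one sharpened high resolution frame out, with no history needed.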
AMD continue pushing the boundaries of their processor tech with the introduction of AMD 3D chiplet technology. What could be a real breakthrough in packaging combines AMD's chiplet architecture with 3D stacking that they claim "provides over 200 times the interconnect density of 2D chiplets and more than 15 times the density compared to existing 3D packaging solutions", developed in collaboration with TSMC. They showed it in a real-world application too, demonstrating the 3D bonding on a Ryzen 5000 Series processor prototype. AMD claim production of these 3D chiplets will begin by the end of this year.
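As an aside, reading those two density claims together gives a feel for the gap AMD are implying between today's packaging options. This is our arithmetic on their marketing numbers, not an official figure:

```python
# AMD claim >200x the interconnect density of 2D chiplets and >15x the
# density of existing 3D packaging. Dividing the two (our reading, not an
# AMD figure) implies existing 3D packaging is already roughly 13x denser
# than 2D chiplets, and AMD's approach leapfrogs both.
vs_2d_chiplets = 200
vs_existing_3d = 15
print(vs_2d_chiplets / vs_existing_3d)  # ~13.3
```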
We're finally seeing AMD bring their next-generation APUs to the desktop for system builders too with the AMD Ryzen 5000 G-Series desktop APUs. They've split them between consumer models and business models; here are the consumer models that we care about:
The AMD Ryzen 5000 G-Series desktop APUs will be available "later this year".
On top of that, AMD also announced the new AMD Radeon RX 6000M Series Mobile Graphics. Based on RDNA 2, they say it delivers "up to 1.5x" higher performance, or "up to 43 percent" lower power at the same performance, compared with the original RDNA architecture. It also brings AMD Infinity Cache and Ray Tracing over to next-gen laptops.
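Taken at face value, those two claims sketch the efficiency jump AMD are going for. Here's a quick back-of-the-envelope check, our arithmetic on their marketing numbers rather than anything official:

```python
# What AMD's two RDNA 2 mobile claims imply for performance per watt,
# relative to RDNA (our arithmetic on the marketing numbers).
baseline_perf, baseline_power = 1.0, 1.0

# "Up to 1.5x higher performance", read as: at the same power.
ppw_more_perf = (1.5 * baseline_perf) / baseline_power          # 1.50x

# "Up to 43 percent lower power at the same performance".
ppw_less_power = baseline_perf / (baseline_power * (1 - 0.43))  # ~1.75x

print(f"{ppw_more_perf:.2f}x to {ppw_less_power:.2f}x perf per watt")
```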
| Model | Compute Units & Ray Accelerators | GDDR6 | Game Clock | Memory Interface | Infinity Cache |
| --- | --- | --- | --- | --- | --- |
| AMD Radeon RX 6800M | 40 | 12 GB | 2300 MHz @ 145W | 192-bit | 96 MB |
| AMD Radeon RX 6700M | 36 | 10 GB | 2300 MHz @ 135W | 160-bit | 80 MB |
| AMD Radeon RX 6600M | 28 | 8 GB | 2177 MHz @ 100W | 128-bit | 32 MB |
"At Computex, we highlighted the growing adoption of our high-performance computing and graphics technologies as AMD continues setting the pace of innovation for the industry," said Dr. Su. "With the launches of our new Ryzen and Radeon processors and the first wave of AMD Advantage notebooks, we continue expanding the ecosystem of leadership AMD products and technologies for gamers and enthusiasts. The next frontier of innovation in our industry is taking chip design into the third dimension. Our first application of 3D chiplet technology at Computex demonstrates our commitment to continue pushing the envelope in high-performance computing to significantly enhance user experiences. We are proud of the deep partnerships we have cultivated across the ecosystem to power the products and services that are essential to our daily lives."
If you want to catch the whole thing, you can watch it in the below video:
Direct Link
You can't match DLSS without an AI/ML approach, and it unfortunately shows in these videos: the parts with FSR simply don't look as good.

That doesn't mean it's not good for anything, in particular as it's available across manufacturers and a large variety of GPU models. But it did not kill DLSS, imho.

Nvidia has two game changers for gaming platforms on their side: ARM (see Apple's M1 performance and efficiency) and DLSS. In the long run it might get much harder again for AMD when it comes to gaming and consoles. Good for AMD that the current-gen platforms have just been released. I'm 100% sure that, were they still in the concept phase, it would've been an instant design win for Nvidia.
I don't get why it's not on ALL Polaris cards? After all, an RX 480 is the same architecture as an RX 580. I suppose if they release the code, this has a great chance of being backported to older GPUs.

Well, Joshua Ashton has just implemented Vulkan ray tracing on cards that don't have the hardware for it, so who can say?
Get itch. Scratch itch.
Ah, Liam's got an article up about that, now.
Nvidia has two game changers for gaming platforms on their side: ARM (see Apple's M1 performance and efficiency) and DLSS.
DLSS, sure, but I don't see how ARM is going to help them with PC gaming performance.
Nvidia has two game changers for gaming platforms on their side: ARM (see Apple's M1 performance and efficiency) and DLSS.

Don't get it. Why would a game console developer pick Nvidia NOW? Because they own ARM? Why haven't they picked ARM earlier? Mali was always an option, or Tegra, with or without Nvidia owning the "specification" company. With a licence, everybody can design an ARM chip. If Nvidia changes this, they can throw ARM away again and everybody will switch to RISC-V.

DLSS could go the same way as ray tracing: if some vendor asks, I have no doubt any GPU manufacturer will deliver, no matter if it's AMD, Qualcomm, Apple or Intel.
Nvidia has two game changers for gaming platforms on their side: ARM (see Apple's M1 performance and efficiency) and DLSS.

Don't get it. Why would a game console developer pick Nvidia NOW? Because they own ARM? Why haven't they picked ARM earlier? Mali was always an option, or Tegra, with or without Nvidia owning the "specification" company. With a licence, everybody can design an ARM chip. If Nvidia changes this, they can throw ARM away again and everybody will switch to RISC-V.

DLSS could go the same way as ray tracing: if some vendor asks, I have no doubt any GPU manufacturer will deliver, no matter if it's AMD, Qualcomm, Apple or Intel.
As for ARM: indeed, they could've licensed it before. But the point is that with implementations like the M1, more than competitive horsepower is now available at high efficiency.

More importantly, DLSS looks like a real game changer to me. I think you're wrong in thinking this is easy for competitors to copy. Nvidia has aggregated a lot of know-how in their software department, plus they have silicon out that provides hardware acceleration for AI operations (Tensor Cores). Also, I don't know if and how much of the DLSS stuff is patented, but I guess it's a lot.

Stuff like upscaling is actually an ideal category for AI/DL that cannot simply be matched with a classical scaling and filtering algorithm.
It shows: FSR doesn't look that good in comparison, plus it's slower. There are in-game videos of games with DLSS upscaled from 1080p to 2160p running almost twice as fast and looking absolutely credible, as if they were rendered at the higher resolution.
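Some rough pixel arithmetic on why that kind of speedup is plausible. These are my own back-of-the-envelope numbers, nothing from Nvidia:

```python
# Rendering at 1080p and upscaling to 2160p shades roughly a quarter of
# the pixels per frame. "Almost twice as fast" is plausible once the
# non-pixel work (geometry, the upscale pass itself, CPU time) is
# factored back in.
native_4k = 3840 * 2160    # 8,294,400 pixels
internal = 1920 * 1080     # 2,073,600 pixels
print(native_4k / internal)  # 4.0x fewer pixels shaded per frame
```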
I bet AMD wanted to go the same way but could not (lacking the hardware support, and probably blocked by patents).

Rumour has it Nintendo will update their Switch (Switch Pro) to a version leveraging DLSS. It's clever. Exactly what I would expect Nintendo to do. And you can get it from just one vendor: Nvidia. Just like the AMD situation with the current (and previous) Sony and Microsoft consoles.
It will be "good enough" at upscaling the image and most people will not see the difference, like MP3 and FLAC. The difference is there, but most gamers will not care.
But that's not how you game. You typically only have your one gaming PC and that's what you use. If you can play about with DLSS on/off and 4K vs 1080p, then you'll find your sweet spot.
So FSR is huge news for me. It's DLSS-like enough that I'm excited to try it. Because I went AMD last year, and I ain't going back to Nvidia any time soon. FSR is going to be open source for goodness sake. Absolutely love what AMD is doing right now.
If only GPUs were actually available...
If only GPUs were actually available...
Supposedly the situation should get better later this year. Still waiting to get a Sapphire Pulse RX 6800 XT myself.
But the point is that with implementations like the M1, more than competitive horsepower is now available at high efficiency.
Still don't get why this is a win for Nvidia. The Apple M1 is not a Cortex design, so Nvidia has no IP there and would still need to develop something. The M1 is ARMv8 instruction compatible but is otherwise Apple's own design. It's like saying Intel is the go-to CPU supplier because AMD's Ryzen has shown good performance. You could build a fast general purpose CPU with any modern instruction set.
I think for an anti-feature, it's good enough to end DLSS, because it works everywhere and will also get better over time. The other side of it: it's general purpose, while AI/ML is more limited to specific use cases. So it is better in some cases and worse in others. Everything is a trade-off.
This sounds like it was written by someone who doesn't understand either of these features. Hardware-assisted AI/ML being "limited" to specific use cases while an upscaling tech being "general purpose"? Give me a break...
Honestly, you sound like you don't know how Machine Learning and Artificial Intelligence works. It's typically incredibly focused. Using ML/AI in an implementation of anything doesn't magically make it versatile or flexible.
However, that said, the point of this technology is to enable fast frame rates of complex scenes (potentially with ray tracing thrown in) at high resolutions. That's a pretty focused target. So I'm not really sure why an upscaler like FSR is somehow more "general purpose" than DLSS.
But I agree with Shmerl that FSR can be used anywhere - mobiles, consoles, Windows, Linux, on AMD, Nvidia, ARM, Intel, you name it. So I sincerely hope it does well. DLSS can do well too... for all I care. But I don't care, since I doubt I'll ever buy Nvidia again, unless they start competing with AMD on the open source front. And DLSS doing well only benefits Nvidia.
As someone else has already pointed out, it's G-Sync vs Freesync all over again.
(as an aside, I actively avoided buying a monitor advertising g-sync support recently - it looked like a nice piece of hardware, but I knew that if I bought it, I'd be paying for a feature that would forever be locked from me. This sums up my view on Nvidia right now. Hopefully they change that view by following AMD's example in future)
However, that said, the point of this technology is to enable fast frame rates of complex scenes (potentially with ray tracing thrown in) at high resolutions. That's a pretty focused target. So I'm not really sure why an upscaler like FSR is somehow more "general purpose" than DLSS.

I think, if I understood correctly, FSR makes generic assumptions, while DLSS has to be trained on specific input (unless I misunderstood the idea). Generic assumptions sound like a broader approach to me. A trained neural network can produce better results for what it's trained for, but will be close to useless for other cases. Something more generic is probably in between. So both are a trade-off, I think.
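To make that concrete, here's a toy sketch I put together. It has nothing to do with either FSR's or DLSS's actual internals; it just shows the shape of the trade-off: a tiny upscaling filter fitted by least squares to one family of signals beats a generic interpolator on that family, and loses on signals it wasn't trained for:

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(x):
    return x[::2]  # keep every even sample

def generic_upscale(lo):
    # Generic assumption: signals are smooth, so linearly interpolate.
    n = len(lo)
    return np.interp(np.arange(2 * n) / 2.0, np.arange(n), lo)

def make_steps(n):
    # "Training" family: step signals where every odd sample exactly
    # copies its left neighbour.
    return np.repeat(rng.normal(size=n // 2), 2)

def make_smooth(n):
    # Off-distribution family: random walks.
    return np.cumsum(rng.normal(size=n))

def train_filter(signals):
    # Learn weights (a, b) predicting each missing odd sample x[2i+1]
    # from its low-res neighbours lo[i] and lo[i+1], via least squares.
    A, y = [], []
    for x in signals:
        lo = downsample(x)
        for i in range(len(lo) - 1):
            A.append([lo[i], lo[i + 1]])
            y.append(x[2 * i + 1])
    w, *_ = np.linalg.lstsq(np.array(A), np.array(y), rcond=None)
    return w

def trained_upscale(lo, w):
    out = np.empty(2 * len(lo))
    out[::2] = lo
    out[1:-1:2] = w[0] * lo[:-1] + w[1] * lo[1:]
    out[-1] = lo[-1]  # last odd sample has no right neighbour
    return out

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

w = train_filter([make_steps(256) for _ in range(50)])  # learns ~(1, 0)

for name, make in [("steps (trained for) ", make_steps),
                   ("smooth (not trained)", make_smooth)]:
    x = make(256)
    lo = downsample(x)
    print(name,
          f"generic rmse={rmse(generic_upscale(lo), x):.3f}",
          f"trained rmse={rmse(trained_upscale(lo, w), x):.3f}")
```

Totally artificial, but the result is the pattern you'd expect: the trained filter is near-perfect on the family it saw, and worse than the dumb generic one everywhere else.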
However, that said, the point of this technology is to enable fast frame rates of complex scenes (potentially with ray tracing thrown in) at high resolutions. That's a pretty focused target. So I'm not really sure why an upscaler like FSR is somehow more "general purpose" than DLSS.

I think, if I understood correctly, FSR makes generic assumptions, while DLSS has to be trained on specific input (unless I misunderstood the idea). Generic assumptions sound like a broader approach to me. A trained neural network can produce better results for what it's trained for, but will be close to useless for other cases. Something more generic is probably in between. So both are a trade-off, I think.
Yeah, but I think in this case, DLSS is trained on the textures of the game itself, so in terms of what it does, I don't think it's necessarily any less useful than FSR? I might be wrong.
Just DLSS being Nvidia-only is good enough reason for me to fully get behind FSR. And the AMD showcase video for it was pretty impressive given how young the technology is (dunno what user "sub" was talking about above, claiming that FSR doesn't look as good: not only is there barely any difference, the whole point of these technologies is that they won't look quite as good, but you'll get 100%+ FPS out of them at high res, and if you can only tell the difference in a side-by-side video, then that's clearly "good enough").
Yeah, but I think in this case, DLSS is trained on the textures of the game itself, so in terms of what it does, I don't think it's necessarily any less useful than FSR? I might be wrong.
What I mean is, what if you have a game or any kind of use case on which it wasn't trained? Will it still fare better than FSR?
What I mean is, what if you have a game or any kind of use case on which it wasn't trained? Will it still fare better than FSR?
Apparently the current iteration of DLSS (2.0) uses a generic algorithm and doesn't require per-game training anymore. So I guess any game could implement support, but it would still only work on a GPU with Nvidia's tensor cores or whatever they call them.
Why do they need tensor cores if it's a generic algorithm and not some trained neural network? I thought those are focused on AI applications.

In that sense I don't see it being too different from FSR; they'll just keep competing on who nails the better algorithm. However, from a cross-GPU usage perspective, DLSS is already DOA, like another CUDA, so it's not even a choice, and I agree with what @scaine said above about it.
I think for an anti-feature, it's good enough to end DLSS, because it works everywhere and will also get better over time. The other side of it: it's general purpose, while AI/ML is more limited to specific use cases. So it is better in some cases and worse in others. Everything is a trade-off.

This sounds like it was written by someone who doesn't understand either of these features. Hardware-assisted AI/ML being "limited" to specific use cases while an upscaling tech being "general purpose"? Give me a break...

Honestly, you sound like you don't know how Machine Learning and Artificial Intelligence works. It's typically incredibly focused.

Personally, I think a more honest name for it would be Artificial Instinct. 'Cause like, instinct is this stuff animals do without actually being smart or thinking or figuring it out, where through evolution they became really good at some particular task they need to be good at to survive. Machine Learning AI seems to be this forced evolution thing where you let the algorithms survive that are good at some specific task, until you've evolved a black box that can do that specialized thing really well but has no general reasoning ability. So it's instinct, but for some (marketing) reason we call it "intelligence".