
AMD came out of the gates swinging at Computex 2021 with new chips, new tech and lots more, including: AMD 3D chiplet technology, AMD Ryzen 5000 G-Series desktop APUs, next-gen gaming laptops with the new AMD Radeon RX 6000M Series Mobile Graphics, and their DLSS competitor, FidelityFX Super Resolution.

There's quite a lot to unpack here and we're still going through it, so we will update the article if we missed anything vital. The big one is no doubt FidelityFX Super Resolution, an open source spatial upscaling technology comparable with NVIDIA DLSS (which is coming to Proton!). Being open source is quite exciting, although it isn't yet: AMD said it will be released "in due course" under GPUOpen and the MIT license.

With the FidelityFX Super Resolution tech AMD are betting big, clearly firing shots at NVIDIA by making it fully cross-platform: it supports DirectX 11 & 12 and Vulkan, and it even runs on NVIDIA GPUs too. AMD say that when it's released, "FSR can be ported onto multiple platforms without restriction".
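
FSR's actual algorithm hadn't been published at the time of the announcement, so purely as an illustration of the concept: a spatial upscaler works from a single rendered frame, with no motion vectors or frame history, which is part of what lets it slot into so many platforms. The Python sketch below uses plain bilinear filtering as a stand-in (far simpler than whatever AMD are doing) just to show the idea of rendering at a lower resolution and resampling up to the display resolution; the resolutions and the filter here are our own assumptions, not AMD's.

```python
# Minimal sketch of a *spatial* upscaler: one rendered frame in, a larger
# frame out, no motion vectors or history. Bilinear filtering is used here
# only to illustrate the concept; it is NOT AMD's FSR algorithm.
import numpy as np

def spatial_upscale(frame: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Upscale an (H, W, C) float image to (out_h, out_w, C) with bilinear filtering."""
    h, w, _ = frame.shape
    # Source coordinates for every output pixel (pixel-centre convention).
    ys = np.clip((np.arange(out_h) + 0.5) * h / out_h - 0.5, 0, h - 1)
    xs = np.clip((np.arange(out_w) + 0.5) * w / out_w - 0.5, 0, w - 1)
    y0 = np.minimum(np.floor(ys).astype(int), h - 2)
    x0 = np.minimum(np.floor(xs).astype(int), w - 2)
    wy = (ys - y0)[:, None, None]  # vertical blend weights
    wx = (xs - x0)[None, :, None]  # horizontal blend weights
    # Blend the four nearest source pixels.
    top = frame[y0][:, x0] * (1 - wx) + frame[y0][:, x0 + 1] * wx
    bottom = frame[y0 + 1][:, x0] * (1 - wx) + frame[y0 + 1][:, x0 + 1] * wx
    return top * (1 - wy) + bottom * wy

# Example: pretend the game rendered at 1280x720 internally,
# then upscale to a 1920x1080 output target.
rendered = np.random.rand(720, 1280, 3).astype(np.float32)  # placeholder for a game frame
output = spatial_upscale(rendered, 1080, 1920)
print(output.shape)  # (1080, 1920, 3)
```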


AMD continue pushing the boundaries of their processor tech with the introduction of AMD 3D chiplet technology. What could be a real breakthrough in packaging technology combines AMD's chiplet architecture with 3D stacking, developed in collaboration with TSMC, which AMD claim "provides over 200 times the interconnect density of 2D chiplets and more than 15 times the density compared to existing 3D packaging solutions". They showed it in a real-world application too, demonstrating the 3D bonding on a Ryzen 5000 Series processor prototype. AMD claim production of these 3D chiplets will begin by the end of this year.

We're finally seeing AMD bring their next-generation APUs to the desktop for system builders too with the AMD Ryzen 5000 G-Series desktop APUs. They've split them between consumer models and business models, with the consumer models being the ones we care about.

The AMD Ryzen 5000 G-Series desktop APUs will be available "later this year".

On top of that, AMD also announced the new AMD Radeon RX 6000M Series Mobile Graphics. Based on RDNA 2, they say it gives "up to 1.5x" higher performance, or "up to 43 percent" lower power at the same performance, compared with the original RDNA architecture. It also brings AMD Infinity Cache and ray tracing to next-gen laptops.

| Model | Compute Units & Ray Accelerators | GDDR6 | Game Clock (MHz) | Memory Interface | Infinity Cache |
| --- | --- | --- | --- | --- | --- |
| AMD Radeon RX 6800M | 40 | 12 GB | 2300 @ 145W | 192-bit | 96 MB |
| AMD Radeon RX 6700M | 36 | 10 GB | 2300 @ 135W | 160-bit | 80 MB |
| AMD Radeon RX 6600M | 28 | 8 GB | 2177 @ 100W | 128-bit | 32 MB |

"At Computex, we highlighted the growing adoption of our high-performance computing and graphics technologies as AMD continues setting the pace of innovation for the industry," said Dr. Su. "With the launches of our new Ryzen and Radeon processors and the first wave of AMD Advantage notebooks, we continue expanding the ecosystem of leadership AMD products and technologies for gamers and enthusiasts. The next frontier of innovation in our industry is taking chip design into the third dimension. Our first application of 3D chiplet technology at Computex demonstrates our commitment to continue pushing the envelope in high-performance computing to significantly enhance user experiences. We are proud of the deep partnerships we have cultivated across the ecosystem to power the products and services that are essential to our daily lives."

If you want to catch the whole thing, you can watch it in the below video:

(YouTube video embed of the full presentation.)
32 comments
dpanter Jun 1, 2021
DLSS is an important killer feature! Well, it just got killed.
sub Jun 1, 2021
AMD fan here. Yet, IMHO it turned out as expected.
You can't match DLSS without an AI/ML approach.
And it unfortunately shows in these videos.
The parts with FSR simply don't look as good.

This doesn't mean it's not good for anything.
In particular, as it is available across manufacturers
and a large variety of GPU models.

But it did not kill DLSS imho.

Nvidia has two game changers for gaming platforms on their side.
ARM (see Apple's M1 performance and efficiency) and DLSS.

In the long run it might get much harder again for AMD
when it comes to gaming and consoles.
Good for AMD that the current gen platforms have just been released.
I'm 100 % sure, were they still in the concept phase,
it would've been an instant design win for Nvidia.
CatKiller Jun 1, 2021
I don't get why it's not on ALL Polaris cards? After all, an RX 480 is the same architecture as an RX 580. I suppose if they release the code this has great chances of being backported to older GPUs.
Well, Joshua Ashton has just implemented Vulkan ray tracing on cards that don't have the hardware for it, so who can say?

Get itch. Scratch itch.

Ah, Liam's got an article up about that, now.


Last edited by CatKiller on 1 June 2021 at 10:44 am UTC
Shmerl Jun 1, 2021
I think for an anti-feature, it's good enough to end DLSS. Because it works everywhere and will also get better over time. The other side of it - it's general purpose. While AI/ML is more limited to specific use cases. So it is better in some cases and worse in others. Everything is a trade off.


Last edited by Shmerl on 1 June 2021 at 4:21 pm UTC
Mar2ck Jun 1, 2021
Nvidia has two game changers for gaming platforms on their side.
ARM (see Apple's M1 performance and efficiency) and DLSS.

DLSS sure but I don't see how ARM is going to help them with PC gaming performance
doomwarriorx Jun 1, 2021
Nvidia has two game changers for gaming platforms on their side.
ARM (see Apple's M1 performance and efficiency) and DLSS.

Don't get it. Why would a game console developer NOW pick Nvidia? Because they own ARM? Why haven't they picked ARM earlier? Mali was always an option, or Tegra, with or without Nvidia owning the "specification" company. With a licence everybody can design an ARM chip. If Nvidia changes this they can throw ARM away again; everybody will switch to RISC-V.

DLSS, though, could be the same as ray tracing. If some vendor asks, I have no doubt any GPU manufacturer will deliver, no matter if AMD, Qualcomm, Apple or Intel.
sub Jun 1, 2021
Nvidia has two game changers for gaming platforms on their side.
ARM (see Apple's M1 performance and efficiency) and DLSS.

Don't get it. Why would a game console developer NOW pick Nvidia? Because they own ARM? Why haven't they picked ARM earlier? Mali was always an option, or Tegra, with or without Nvidia owning the "specification" company. With a licence everybody can design an ARM chip. If Nvidia changes this they can throw ARM away again; everybody will switch to RISC-V.

DLSS, though, could be the same as ray tracing. If some vendor asks, I have no doubt any GPU manufacturer will deliver, no matter if AMD, Qualcomm, Apple or Intel.

As for ARM: indeed, they could've licensed it before.
But the point is that with implementations like the M1, more than competitive horsepower is now available at high efficiency.

More importantly, DLSS looks like a real game changer to me.
I think you're wrong in thinking this is easy for competitors to copy.
Nvidia has aggregated a lot of know-how in their software department, plus they have silicon out that provides hardware acceleration for AI operations (Tensor Cores).
Also I don't know if and how much of the DLSS stuff is patented - I guess it's a lot.

Stuff like upscaling is actually an ideal category for AI/DL, one that cannot simply be matched with a classical scaling and filtering algorithm.
It shows: FSR doesn't look that good in comparison, plus it's slower.

There are in-game videos from games with DLSS upscaled from 1080p to 2160p running almost twice as fast and looking absolutely credible, as if they were rendered at the higher resolution.

I bet AMD wanted to go the same way but could not (lacking hardware support and probably patenting).

Rumor has it Nintendo will update their Switch (Switch Pro) to a version leveraging DLSS.
It's clever. Exactly what I would expect Nintendo to do.
And you can get it from just one vendor: Nvidia.
Same as the AMD situation with the current (and previous) Sony and Microsoft consoles.
M@GOid Jun 1, 2021
I believe FSR will do to DLSS the same thing Freesync did to G-Sync. It will become the industry standard and eventually Nvidia will be forced to officially support it too.

It will be "good enough" at upscaling the image and most people will not see the difference, like MP3 versus FLAC. The difference is there, but most gamers will not care.
Shmerl Jun 1, 2021
I think DLSS is also a kind of a trade off idea. It's sold as some kind of improvement but in fact it also reduces quality in general like any upscaling does. I'd personally prefer lower resolution with higher quality to higher resolution with lower one. So I don't see a big appeal in upscaling just to put a checkmark that "I have higher resolution". GPUs are improving without upscaling tricks anyway, so what's the rush? For higher resolution there will be more powerful GPUs.


Last edited by Shmerl on 1 June 2021 at 8:04 pm UTC
Calinou Jun 1, 2021
It will be interesting to see how FSR fares in situations where DLSS (and temporal AA methods) struggle. According to the GPUOpen announcement, FSR is said not to require motion vectors, but maybe it requires the scene to have been antialiased with a temporal method beforehand to look good (which means it needs motion vectors indirectly).


Last edited by Calinou on 1 June 2021 at 9:40 pm UTC
scaine Jun 1, 2021
I've watched Linus Tech Tips do a video on Cyberpunk featuring DLSS and it was underwhelming. When you see it in action side-by-side with the original image, it always suffers in the quality department.

But that's not how you game. You typically only have your one gaming PC and that's what you use. If you can play about with DLSS on/off and 4K vs 1080p, then you'll find your sweet spot.

So FSR is huge news for me. It's DLSS-like enough that I'm excited to try it. Because I went AMD last year, and I ain't going back to Nvidia any time soon. FSR is going to be open source for goodness sake. Absolutely love what AMD is doing right now.

If only GPUs were actually available...
Shmerl Jun 1, 2021
If only GPUs were actually available...

Supposedly the situation should get better later this year. Still waiting to get Sapphire Pulse RX 6800 XT myself.
doomwarriorx Jun 2, 2021
But the point is that with implementations like the M1, more than competitive horsepower is now available at high efficiency.

Still don't get why this is a win for Nvidia. The Apple M1 is not a Cortex design, thus Nvidia has no property there and would still need to develop something. The M1 is ARMv8 instruction compatible but is otherwise Apple's own design. It is like saying Intel is the go-to CPU supplier because AMD Ryzen has shown good performance. You could do a fast general purpose CPU with any modern instruction set.
scaine Jun 2, 2021
I think for an anti-feature, it's good enough to end DLSS. Because it works everywhere and will also get better over time. The other side of it - it's general purpose. While AI/ML is more limited to specific use cases. So it is better in some cases and worse in others. Everything is a trade off.

This sounds like it was written by someone who doesn't understand either of these features. Hardware-assisted AI/ML being "limited" to specific use cases while an upscaling tech being "general purpose"? Give me a break...

Honestly, you sound like you don't know how Machine Learning and Artificial Intelligence works. It's typically incredibly focused. Using ML/AI in an implementation of anything doesn't magically make it versatile or flexible.

However, that said, the point of this technology is to enable fast frame rates of complex scenes (potentially with ray tracing thrown in) at high resolutions. That's a pretty focused target. So I'm not really sure why an upscaler like FSR is somehow more "general purpose" than DLSS.

But I agree with Shmerl that FSR can be used anywhere - mobiles, consoles, Windows, Linux, on AMD, Nvidia, ARM, Intel, you name it. So I sincerely hope it does well. DLSS can do well too... for all I care. But I don't care, since I doubt I'll ever buy Nvidia again, unless they start competing with AMD on the open source front. And DLSS doing well only benefits Nvidia.

As someone else has already pointed out, it's G-Sync vs Freesync all over again.

(as an aside, I actively avoided buying a monitor advertising g-sync support recently - it looked like a nice piece of hardware, but I knew that if I bought it, I'd be paying for a feature that would forever be locked from me. This sums up my view on Nvidia right now. Hopefully they change that view by following AMD's example in future)
Shmerl Jun 2, 2021
However, that said, the point of this technology is to enable fast frame rates of complex scenes (potentially with ray tracing thrown in) at high resolutions. That's a pretty focused target. So I'm not really sure why an upscaler like FSR is somehow more "general purpose" than DLSS

I think if I understood correctly FSR makes generic assumptions, while DLSS has to be trained on specific input (unless I misunderstood the idea). Generic assumption sounds like a broader approach to me. Trained neural network can produce better result for what it's trained for, but will be close to useless for other cases. Something that's more generic is probably in between. So both are a trade off I think.


Last edited by Shmerl on 2 June 2021 at 9:30 pm UTC
scaine Jun 2, 2021
However, that said, the point of this technology is to enable fast frame rates of complex scenes (potentially with ray tracing thrown in) at high resolutions. That's a pretty focused target. So I'm not really sure why an upscaler like FSR is somehow more "general purpose" than DLSS

I think if I understood correctly FSR makes generic assumptions, while DLSS has to be trained on specific input (unless I misunderstood the idea). Generic assumption sounds like a broader approach to me. Trained neural network can produce better result for what it's trained for, but will be close to useless for other cases. Something that's more generic is probably in between. So both are a trade off I think.

Yeah, but I think in this case, DLSS is trained on the textures of the game itself, so in terms of what it does, I don't think it's necessarily any less useful than FSR? I might be wrong.

DLSS being Nvidia-only is reason enough for me to fully get behind FSR. And the AMD showcase video for it was pretty impressive given how young the technology is (dunno what user "sub" was talking about above, claiming that FSR doesn't look as good - not only is there barely any difference, the whole point of these technologies is that they won't look as good, but you'll get 100%+ FPS out of them at high-res, and if you can only tell the difference in a side-by-side video, then that's clearly "good enough").
Shmerl Jun 2, 2021
Yeah, but I think in this case, DLSS is trained on the textures of the game itself, so in terms of what it does, I don't think it's necessarily any less useful than FSR? I might be wrong.

What I mean is, what if you have a game or any kind of use case on which it wasn't trained? Will it still fare better than FSR?


Last edited by Shmerl on 2 June 2021 at 9:47 pm UTC
tuubi Jun 2, 2021
Yeah, but I think in this case, DLSS is trained on the textures of the game itself, so in terms of what it does, I don't think it's necessarily any less useful than FSR? I might be wrong.

What I mean is, what if you have a game or any kind of use case on which it wasn't trained? Will it still fare better than FSR?
Apparently the current iteration of DLSS (2.0) uses a generic algorithm and doesn't require per-game training anymore. So I guess any game could implement support, but it would still only work on a GPU with Nvidia's tensor cores or whatever they call them.
Shmerl Jun 2, 2021
Apparently the current iteration of DLSS (2.0) uses a generic algorithm and doesn't require per-game training anymore. So I guess any game could implement support, but it would still only work on a GPU with Nvidia's tensor cores or whatever they call them.

Why do they need tensor cores if it's a generic algorithm and not some trained neural network? I thought those were focused on AI applications.

In that sense I don't see it being too different from FSR. They'll just keep competing on who nails the better algorithm. However, from a cross-GPU usage perspective DLSS is already DOA, like another CUDA, so it's not even a choice and I agree with what @scaine said above about it.


Last edited by Shmerl on 2 June 2021 at 10:41 pm UTC
Purple Library Guy Jun 3, 2021
I think for an anti-feature, it's good enough to end DLSS. Because it works everywhere and will also get better over time. The other side of it - it's general purpose. While AI/ML is more limited to specific use cases. So it is better in some cases and worse in others. Everything is a trade off.

This sounds like it was written by someone who doesn't understand either of these features. Hardware-assisted AI/ML being "limited" to specific use cases while an upscaling tech being "general purpose"? Give me a break...

Honestly, you sound like you don't know how Machine Learning and Artificial Intelligence works. It's typically incredibly focused.
Personally, I think a more honest name for it would be Artificial Instinct. 'Cause like, instinct is this stuff animals do without actually being smart or thinking or figuring it out, where through evolution they became really good at some particular task they need to be good at to survive. Machine Learning AI seems to be this forced evolution thing where you let the algorithms survive that are good at some specific task, until you've evolved a black box that can do that specialized thing really well but has no general reasoning ability. So it's instinct, but for some (marketing) reason we call it "intelligence".