AMD has today revealed AMD FidelityFX Super Resolution 2.0, the next generation of their impressive upscaling tech that can really help improve performance.
For those who don't use it and are confused: the whole idea is that it produces high-resolution output from lower-resolution input. It's one way to get good performance at 4K, for example, for games that are a bit too resource-intensive. It can work with many resolutions, and the Steam Deck has FSR built-in.
There are limitations of course, and AMD gave these examples for FSR 1.0:
- FSR 1.0 requires a high quality anti-aliased source image, which is not always available without making further changes to code and/or the engine.
- Upscaling quality is unavoidably a function of the source resolution input. So with a low resolution source, there is just not enough information with a spatial upscaler for thin detail.
Bring on FSR 2.0 then, which continues to be open source.
"FSR 2.0 is the result of years of research from AMD, and is developed from the ground up. It uses cutting-edge temporal algorithms to reconstruct fine geometric and texture detail in the upscaled image, along with high-quality anti-aliasing."
Some of what's new in FSR 2.0 includes:
- Delivers similar or better than native image quality using temporal data.
- Includes high-quality anti-aliasing.
- Higher image quality than FSR 1.0 at all quality presets/resolutions.
- Does not require dedicated Machine Learning (ML) hardware.
- Boosts framerates in supported games across a wide range of products and platforms, both AMD and select competitors.
It will continue to work across all vendors too, so NVIDIA and Intel users will also benefit from this. Since it's open source, any developer can just pick it up and use it.
FSR 2.0 temporal upscaling uses frame color, depth, and motion vectors in the rendering pipeline and leverages information from past frames to create very high-quality upscaled output and it also includes optimized high-quality anti-aliasing. Spatial upscaling solutions like FSR 1.0 use data from the current frame to create the upscaled output and rely on the separate anti-aliasing incorporated into a game’s rendering pipeline. Because of these differences, FidelityFX Super Resolution 2.0 delivers significantly higher image quality than FSR 1.0 at all quality mode presets and screen resolutions.
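To make the difference concrete, here is a minimal CPU-side sketch of the temporal accumulation idea (purely illustrative and not the actual FSR 2.0 API): each pixel uses its motion vector to find where it was in the previous frame's accumulated history, then blends that history with the current frame's colour. Real implementations also use depth and sub-pixel camera jitter to reject stale history and to recover detail above the render resolution.

```cpp
#include <cstddef>
#include <vector>

// Purely illustrative -- not the FSR 2.0 API. A temporal upscaler consumes the
// current frame's colour, depth and motion vectors plus an accumulated history
// buffer; this sketch only shows the history reprojection + blend step, on
// greyscale buffers, without depth-based history rejection or the resolution change.
struct FrameInputs {
    int width = 0, height = 0;    // render (input) resolution
    std::vector<float> color;     // current frame colour, width * height values
    std::vector<float> motion_x;  // per-pixel motion vectors in pixels,
    std::vector<float> motion_y;  // pointing towards the previous frame
};

void accumulate(const FrameInputs& in, std::vector<float>& history, float alpha = 0.1f)
{
    if (history.size() != in.color.size())
        history = in.color;  // first frame: nothing to reproject yet

    std::vector<float> result(in.color.size());
    for (int y = 0; y < in.height; ++y) {
        for (int x = 0; x < in.width; ++x) {
            const std::size_t i = static_cast<std::size_t>(y) * in.width + x;
            // Reproject: where was this pixel in the previous frame?
            const int px = x + static_cast<int>(in.motion_x[i]);
            const int py = y + static_cast<int>(in.motion_y[i]);
            float hist = in.color[i];  // disocclusion fallback: no valid history
            if (px >= 0 && px < in.width && py >= 0 && py < in.height)
                hist = history[static_cast<std::size_t>(py) * in.width + px];
            // Exponential blend: keep most of the history, add a bit of the new sample.
            result[i] = hist + alpha * (in.color[i] - hist);
        }
    }
    history = result;
}
```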
An example AMD included was DEATHLOOP, which is adding support for it.
When will it actually be available? They're not saying, other than a vague "Q2 2022". They will be attending GDC next week though, to give a talk on it.
How can I use them just for anti-aliasing (without upscaling)?
(Especially for games that have neither DLSS/FSR nor anti-aliasing built in.)
I'm not sure - I don't have an RTX card and have never tried FSR. But in theory, you could run your native resolution (for example 1080p), let it upscale to 1440p or 2160p, and then configure your display to downscale that to 1080p. That should result in a high-quality anti-aliasing effect.
But I never tried that.
You know what else will happen in Q2 2022? The second batch of Steam Decks...
I don't see how something like Gamescope could have access to that data.
Gamescope doesn't, but Proton can, I guess.
RTX cards simply execute the process that was the result of the machine learning that was done on different hardware - which also happened to be regular hardware.
Isn't that the definition of an ASIC?
An ASIC is much more efficient at what it does, so I don't see the issue here.
If you know you're going to need more multiplications than sums, subtractions and divisions, there's no reason to care as much about add/sub/div hardware as you do about multiplication.
I hope they release the source code soon so Godot 4 can add this (although it needs some TAA implementation first, for the motion vectors).
I don't think they have enough manpower for that, it would only delay Vulkan. I think they will focus on delivering Vulkan in 4.0, bringing back OpenGL in 4.1, and those extra features will land in 4.2.x or later, or in 5.0.x.
One thing I think they could implement though, that shouldn't require too much effort to add, is some form of rendering the text in a different render context, so FSR 1.0 could work on everything else in the game; then you render the text as the last step at a higher resolution instead of upscaling it.
(Something like a text overlay on top of the game render.)
I think it's doable.
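Roughly the idea, as a minimal CPU-side sketch (hypothetical greyscale buffers plus a UI alpha mask; a real engine would do this with two render targets): upscale the low-resolution 3D image, then blend the native-resolution text/UI layer over it as the last step.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical illustration: composite a native-resolution UI layer over a
// 3D image that was rendered at lower resolution and then upscaled.
std::vector<float> composite_ui_over_upscaled_3d(
    const std::vector<float>& scene_lowres, int low_w, int low_h,
    const std::vector<float>& ui, const std::vector<float>& ui_alpha,
    int out_w, int out_h)
{
    std::vector<float> out(static_cast<std::size_t>(out_w) * out_h);
    for (int y = 0; y < out_h; ++y) {
        for (int x = 0; x < out_w; ++x) {
            // Nearest-neighbour upscale of the 3D layer (FSR/bilinear in practice).
            const int sx = x * low_w / out_w;
            const int sy = y * low_h / out_h;
            const float scene = scene_lowres[static_cast<std::size_t>(sy) * low_w + sx];
            // Text/UI was rendered directly at output resolution, so it stays sharp.
            const std::size_t i = static_cast<std::size_t>(y) * out_w + x;
            out[i] = ui[i] * ui_alpha[i] + scene * (1.0f - ui_alpha[i]);
        }
    }
    return out;
}
```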
The meanings of the terms around "artificial intelligence" are weird
"Machine Learning" is a huge pile of buzzword/b*llsh*t bingo. That's just how it is.
Sigh, no it isn't.
Go learn how it works and what is possible with it, you have no idea what you're talking about.
On a side note, I can agree that the term "learn" is misleading; it's like calling an AI an intelligent system, as if the machine could "think" - that I agree is bullshit.
I'm strictly speaking about the rhetoric used around ML; to be fair, that wasn't clear in my comment. What's coming from that field in terms of results is impressive at times.
As you seem to be knowledgeable about the topic, how far has it come on "explainability"? Last time I dived deeper into ML (a couple of years ago) the "why" was pretty sketchy (coming from a more classic statistical modelling POV), and there was that whole issue about decisions made with these models that no one could really understand (like ML-based credit ratings). I know there have been some advancements, but I'm not following ML closely. I do watch the occasional academic talk about ML being used for some specific problem, and those didn't bode well for the answer.
How can I use them just for anti-aliasing (without upscaling)?
(Especially for games that have neither DLSS/FSR nor anti-aliasing built in.)
I'm not sure - I don't have an RTX card and have never tried FSR. But in theory, you could run your native resolution (for example 1080p), let it upscale to 1440p or 2160p, and then configure your display to downscale that to 1080p. That should result in a high-quality anti-aliasing effect.
But I never tried that.
I have tried that in the past, on Windows, on some older games that didn't have built-in AA.
Scaling up to 4K, then downscaling to 1080p (the max resolution supported by my monitor), the edges did look better, as well as some other details.
But it was a bit clunky to use (you had to enable it manually for each .exe program).
I managed to find the link that explains how it works (with the dot grids) again: https://www.nvidia.com/en-us/geforce/news/dynamic-super-resolution-instantly-improves-your-games-with-4k-quality-graphics/
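For the curious, the reason the downscale step smooths edges is just averaging: every final pixel is built from several rendered samples. A tiny sketch of the simplest possible 2x-per-axis box downscale (hypothetical greyscale buffer; the DSR article linked above describes a fancier filter, but the principle is the same):

```cpp
#include <cstddef>
#include <vector>

// Average each 2x2 block of the source into one destination pixel.
// Rendering at 4x the pixel count (e.g. 2160p for a 1080p display) and
// averaging like this is effectively 4-sample supersampling per pixel,
// which is where the smoother edges come from.
std::vector<float> box_downsample_2x(const std::vector<float>& src, int src_w, int src_h)
{
    const int dst_w = src_w / 2;
    const int dst_h = src_h / 2;
    std::vector<float> dst(static_cast<std::size_t>(dst_w) * dst_h);
    for (int y = 0; y < dst_h; ++y) {
        for (int x = 0; x < dst_w; ++x) {
            const std::size_t row0 = static_cast<std::size_t>(2 * y) * src_w;
            const std::size_t row1 = row0 + src_w;
            const float sum = src[row0 + 2 * x] + src[row0 + 2 * x + 1]
                            + src[row1 + 2 * x] + src[row1 + 2 * x + 1];
            dst[static_cast<std::size_t>(y) * dst_w + x] = sum * 0.25f;
        }
    }
    return dst;
}
```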
I'm curious to see how it will work with static frames.
I was playing with Unreal's temporal anti-aliasing some time ago. It indeed moves the camera very slightly every frame, using the FViewMatrices::HackAddTemporalAAProjectionJitter(const FVector2D& offset) method. The offset is taken from one of several hardcoded sample patterns, which can be selected using the r.TemporalAASamples setting.
I imagine that temporal AA induces some sort of subtle movement to the camera (?).
If this is true, then it would not be possible to implement such a thing on the compositor side, like with FSR today.
Somebody has some insight on TAA?
Yes. That's because the camera is moved sub-pixel distances between frames, giving you an effective resolution that can be above the original one. It's similar to Multisampling, but does not have the same performance overhead, as it re-uses images from previous frames, instead of doing multiple samples of the same frame.
Delivers similar or better than native image quality using temporal data
Wait, so it can make an image that's better than the original??
Since it's using previous images, it cannot account for objects that were not visible in those frames or that change their movement speed. This causes ghosting and is the main drawback of Temporal over regular Multisampling.
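For reference, the sub-pixel offsets are usually taken from a low-discrepancy sequence, so that over a handful of frames the samples cover the pixel area evenly. A small sketch using the commonly chosen Halton(2,3) sequence (just an illustration; Unreal's hardcoded sample patterns and other engines differ in the details):

```cpp
#include <cstdio>

// Radical inverse of `index` in the given base -- one axis of a Halton sequence.
static float halton(int index, int base)
{
    float result = 0.0f, f = 1.0f;
    while (index > 0) {
        f /= base;
        result += f * (index % base);
        index /= base;
    }
    return result;
}

int main()
{
    // Print 8 sub-pixel jitter offsets in [-0.5, 0.5), one per frame.
    // The camera's projection matrix is nudged by (x, y) pixels each frame,
    // so successive frames sample slightly different positions inside a pixel.
    for (int frame = 1; frame <= 8; ++frame) {
        const float x = halton(frame, 2) - 0.5f;
        const float y = halton(frame, 3) - 0.5f;
        std::printf("frame %d: jitter = (%+.3f, %+.3f)\n", frame, x, y);
    }
    return 0;
}
```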
Last edited by soulsource on 18 March 2022 at 10:34 am UTC
I'm strictly speaking about the rhetoric used around ML; to be fair, that wasn't clear in my comment. What's coming from that field in terms of results is impressive at times.
As you seem to be knowledgeable about the topic, how far has it come on "explainability"? Last time I dived deeper into ML (a couple of years ago) the "why" was pretty sketchy (coming from a more classic statistical modelling POV), and there was that whole issue about decisions made with these models that no one could really understand (like ML-based credit ratings). I know there have been some advancements, but I'm not following ML closely. I do watch the occasional academic talk about ML being used for some specific problem, and those didn't bode well for the answer.
I'm not an expert, I saw a few videos explaining the process, but I'm not a data scientist or anything and don't have a good video to recommend off the top of my head.
I was just a bit pissed off by what you said, but as you explained, it was just an unfortunate choice of words, so let's ignore that.
I agree that a LOT of companies are putting "machine learning" out there as a buzzword to market their tech to investors or end users, but the area as a whole isn't limited to that.
Unlike the metaverse...
Isn't that the definition of an ASIC?
I'm not sure what you mean. The Tensor cores?
Last edited by Doc Angelo on 18 March 2022 at 3:55 pm UTC
Isn't that the definition of an ASIC?
I'm not sure what you mean. The Tensor cores?
Yup.
I hope they release the source code soon so Godot 4 can add this (although it needs some TAA implementation first, for the motion vectors).
I don't think they have enough manpower for that, it would only delay Vulkan. I think they will focus on delivering Vulkan in 4.0, bringing back OpenGL in 4.1, and those extra features will land in 4.2.x or later, or in 5.0.x.
One thing I think they could implement though, that shouldn't require too much effort to add, is some form of rendering the text in a different render context, so FSR 1.0 could work on everything else in the game; then you render the text as the last step at a higher resolution instead of upscaling it.
(Something like a text overlay on top of the game render.)
I think it's doable.
HOLY GUACAMOLE! It's a bit strange to self-quote, but...
https://www.youtube.com/watch?v=5JBZj7u3U6k
That! If Godot can do that, FSR 1.0 will happen.
I hope they release the source code soon so Godot 4 can add this (although it needs some TAA implementation first, for the motion vectors).
I don't think they have enough manpower for that, it would only delay Vulkan. I think they will focus on delivering Vulkan in 4.0, bringing back OpenGL in 4.1, and those extra features will land in 4.2.x or later, or in 5.0.x.
TAA is being implemented in Godot 4.0 as we speak! There's not much to show yet, but this means motion vectors will be available for FSR 2.0 to use (and/or XeSS, depending on whether it goes open source in a timely fashion). These motion vectors will also be useful for a motion blur implementation in the long run.
MSAA and FXAA will remain available, with SMAA potentially being implemented in the future for high-quality post-processing spatial antialiasing.
One thing I think they could implement though, that shouldn't require too much effort to add, is some form of rendering the text in a different render context, so FSR 1.0 could work on everything else in the game; then you render the text as the last step at a higher resolution instead of upscaling it.
(Something like a text overlay on top of the game render.)
This is indeed what Godot 4.0 does if you reduce the `scaling_3d_scale` project setting/Viewport property below 1.0 :)
It can also be set above 1.0 for supersampling if you have the GPU power to spare.
This can be done in Godot 3.x too with a two-Viewport setup: https://github.com/godotengine/godot-demo-projects/tree/master/viewport/3d_scaling
Last edited by Calinou on 19 March 2022 at 4:17 pm UTC
Delivers similar or better than native image quality using temporal data
Wait, so it can make an image that's better than the original??
It depends which part we are talking about. They say it includes both up-scaling AND temporal anti-aliasing (TAA). The up-scaling obviously won't give you a better image than the original. TAA might, if the base game does not implement it (very unlikely for a game from the past 5-6 years, I'd say, but well).
So it's kind of a buzz-phrase here: not really a lie, but not really true either. At least not in the sense that most people will understand it.
Overall, temporal upscaling has existed for quite some time already, and 4A Games has been using it extensively for Metro Exodus on DLSS-less graphics cards. So once again, AMD did not really make a breakthrough in terms of algorithms, so what it will do is very predictable. It will be better than static upscaling, but it will introduce temporal artifacts, (reverse) ghosting, maybe shimmering and other fun issues in exchange. Fixing those issues is what (I suspect) the AI part of DLSS/XeSS is mostly for. That, and likely interpolating some small details. The fact that it comes in an open source toolkit however is very nice, and that it brings a TAA implementation with it will likely help smaller studios too. Also, once they add the AI part to fix the temporal artifacts, it will likely be just a small update for devs too. Now, XeSS might end up being more interesting on that front IF (and only if) it is open source.
The main inconvenience of the technique is that it requires more complicated plumbing in the game engine, so the integration complexity will be about the same as DLSS.
These motion vectors will also be useful for a motion blur implementation in the long run.
I hate it when good news comes together with bad news...
I mean, I didn't know Godot didn't have support for motion blur...
This can be done in Godot 3.x too with a two-Viewport setup:
OMG, I saw a video about this a few days ago: rendering the game in low-res to save processing power while rendering the text in high-res for readability.
Can this trick be used to apply FSR to one viewport (the 3D one) while ignoring it on the other?
[edit]
Actually, you just said it's possible, but have you tried it?
it will introduce temporal artifacts, (reverse) ghosting
God... read this phrase out of context and it sounds like sci-fi...
"Yeah yeah, our time machine will introduce some temporal artifacts, like reverse ghosting, but we can fix that with <lots of technical terms for a solution>."
Last edited by elmapul on 20 March 2022 at 9:00 am UTC
Can this trick be used to apply FSR to one viewport (the 3D one) while ignoring it on the other?
[edit]
Actually, you just said it's possible, but have you tried it?
This is already how it works – the `scaling_3d_scale` property will never affect 2D rendering, as its name implies.