AMD has today revealed AMD FidelityFX Super Resolution 2.0 (FSR 2.0), the next-generation version of their impressive upscaling tech that can really help improve performance.
For those who don't use it and are confused: the whole idea is that it produces high-resolution output from lower-resolution input. It's one way to get good performance at 4K, for example, in games that are a bit too resource intensive. It can work with many resolutions, and the Steam Deck has FSR built in.
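To make the idea concrete, here is a small sketch of how an upscaler's quality presets map a target display resolution down to a lower internal render resolution. The scale factors match AMD's published FSR presets (Quality 1.5x, Balanced 1.7x, Performance 2.0x, Ultra Performance 3.0x), but the function and dictionary names are purely illustrative, not part of any real API:

```python
# Illustrative sketch: quality presets and their render-scale factors.
# The factors below are AMD's published FSR presets; the names here
# are made up for this example.
PRESETS = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def render_resolution(display_w, display_h, preset):
    """Return the internal render resolution for a given preset."""
    scale = PRESETS[preset]
    return round(display_w / scale), round(display_h / scale)

# A 4K target rendered at the Performance preset: the game actually
# renders at 1080p and the upscaler fills in the rest.
print(render_resolution(3840, 2160, "Performance"))  # (1920, 1080)
```

This is why the performance win is so large: at the Performance preset the game is shading only a quarter of the pixels it would at native 4K.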
There are limitations of course, and AMD gave these examples for FSR 1.0:
- FSR 1.0 requires a high quality anti-aliased source image, which is not always available without making further changes to code and/or the engine.
- Upscaling quality is unavoidably a function of the source resolution input. So with a low resolution source, there is just not enough information with a spatial upscaler for thin detail.
Bring on FSR 2.0 then, which continues to be open source.
"FSR 2.0 is the result of years of research from AMD, and is developed from the ground up. It uses cutting-edge temporal algorithms to reconstruct fine geometric and texture detail in the upscaled image, along with high-quality anti-aliasing."
Some of what's new in FSR 2.0 includes:
- Delivers image quality similar to or better than native rendering, using temporal data.
- Includes high-quality anti-aliasing.
- Higher image quality than FSR 1.0 at all quality presets/resolutions.
- Does not require dedicated Machine Learning (ML) hardware.
- Boosts framerates in supported games across a wide range of products and platforms, both AMD and select competitors.
It will continue to work across all vendors too, so NVIDIA and Intel users will also benefit. Since it's open source, any developer can just pick it up and use it.
FSR 2.0 temporal upscaling uses frame color, depth, and motion vectors from the rendering pipeline, and leverages information from past frames, to create very high-quality upscaled output; it also includes optimized high-quality anti-aliasing. Spatial upscaling solutions like FSR 1.0 use data only from the current frame to create the upscaled output, and rely on the separate anti-aliasing incorporated into a game’s rendering pipeline. Because of these differences, FidelityFX Super Resolution 2.0 delivers significantly higher image quality than FSR 1.0 at all quality mode presets and screen resolutions.
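The "leverages information from past frames" part can be sketched very roughly. The toy below accumulates a 1-D "image" into a history buffer, using per-pixel motion offsets to find where each pixel came from in the previous frame. Real FSR 2.0 is far more involved (jittered sampling, depth-based rejection of invalid history, and so on); every name and number here is illustrative only:

```python
# Toy 1-D sketch of temporal accumulation: blend the current frame's
# samples with a history buffer reprojected via per-pixel motion.
def temporal_accumulate(current, history, motion, alpha=0.1):
    """current: this frame's samples (list of floats)
    history: accumulated samples from previous frames
    motion:  per-pixel integer offsets saying where each pixel
             was located in the previous frame
    alpha:   weight of the new sample (higher = less history)"""
    out = []
    n = len(current)
    for i, c in enumerate(current):
        src = i + motion[i]  # where this pixel came from last frame
        if 0 <= src < n:
            # Exponential blend of new sample and reprojected history.
            out.append(alpha * c + (1 - alpha) * history[src])
        else:
            # Disocclusion: no valid history, fall back to the new sample.
            out.append(c)
    return out

frame = [1.0, 1.0, 1.0, 1.0]
hist = [0.0, 0.0, 0.0, 0.0]
# With no motion, each output pixel is 0.1*1.0 + 0.9*0.0 = 0.1.
print(temporal_accumulate(frame, hist, motion=[0, 0, 0, 0]))
```

Accumulating many slightly-jittered low-resolution samples over time is what lets a temporal upscaler recover thin detail that a single low-resolution frame simply doesn't contain, which is exactly the FSR 1.0 limitation AMD described above.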
An example AMD included was DEATHLOOP, which is adding support for it.
When will it actually be available? They're not saying, beyond a vague "Q2 2022". They will be at GDC next week though, to give a talk on it.
The <meta name="description" content="Time. Get it? Never mind." /> on the announcement page made my day.
So cool to have this tech free for anyone to use. Hopefully Intel follows suit with XeSS, and both techs kill off the proprietary crap from the green side.
Quote: FSR 2.0 temporal upscaling uses frame color, depth, and motion vectors in the rendering pipeline
That doesn't sound promising for the chances of getting something like FSR 2.0 built into Steam OS.
FSR 1.0 was easy enough because it could just take a single frame image and scale it, but FSR 2.0 will need depth and motion vectors in addition to colour. I don't see how something like Gamescope could have access to that data.
> Does not require dedicated Machine Learning (ML) hardware
The terms used around "artificial intelligence" are weird. There's no machine learning hardware on RTX cards either. RTX cards simply execute a model that was the result of machine learning done on different hardware - which also happened to be regular hardware.
Quoting: gradyvuckovic
Quote: FSR 2.0 temporal upscaling uses frame color, depth, and motion vectors in the rendering pipeline
That doesn't sound promising for the chances of getting something like FSR 2.0 built into Steam OS.
FSR 1.0 was easy enough because it could just take a single frame image and scale it, but FSR 2.0 will need depth and motion vectors in addition to colour. I don't see how something like Gamescope could have access to that data.
Those "vectors" are most likely calculated from previous frames, and are not the vectors used in the game engine. So the data could be extracted by saving the images of previous frames.
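What this suggestion amounts to is estimating motion by comparing saved frames, rather than reading the engine's own vectors. A toy 1-D block-matching search of that kind might look like the following; note this is speculative and illustrative only, real optical-flow estimation is far more sophisticated, and the names here are made up:

```python
# Toy 1-D "block matching": for each pixel of the current frame, find
# the offset into the previous frame that best matches it.
def estimate_motion(prev, curr, search=2):
    """Return, for each pixel in curr, the offset (within +/- search)
    into prev that minimises the absolute difference."""
    n = len(curr)
    motion = []
    for i in range(n):
        best_off, best_err = 0, float("inf")
        for off in range(-search, search + 1):
            j = i + off
            if 0 <= j < n:
                err = abs(curr[i] - prev[j])
                if err < best_err:
                    best_off, best_err = off, err
        motion.append(best_off)
    return motion

prev = [0, 0, 9, 0, 0]   # bright pixel at index 2
curr = [0, 0, 0, 9, 0]   # it moved one pixel to the right
# The bright pixel at index 3 is found one index to the left in prev.
print(estimate_motion(prev, curr)[3])  # -1
```

Even this toy shows the weakness: flat regions give ambiguous matches, which is part of why engine-provided motion vectors are so much more reliable than vectors reconstructed from images alone.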
Quoting: Doc Angelo
> Does not require dedicated Machine Learning (ML) hardware
The meaning of the terms around "artificial intelligence" are weird. There's no machine learning hardware on RTX cards as well. RTX cards simply execute the process that was the result of the machine learning that was done on different hardware - which also happened to be regular hardware.
Actually, there is. Machine learning uses a very limited and specific set of instructions, and Nvidia does include hardware for this in their GPUs to speed things up. It is true that they use other computers to calculate the weights for the machine learning model, but without the dedicated parts, even the creation of the model itself would probably have been too expensive.
Quoting: Donkey
Actually there is. Machine learning requires a very limited and specific set of instructions. Nvidia does have hardware for this included in their GPU to speed things up. It is true that they use other computers to calculate the weights for the machine learning model, but without the dedicated part, even the creation of the model itself would probably have been too expensive.
I'm not sure what you're saying. As far as I know, there is no actual machine learning going on in RTX cards - just the execution of the models. Yes, they use hardware that is well suited to that task, but it's just execution, not the machine learning itself.
I'm not sure what you mean by "very limited and specific instructions" for ML. ML is possible on any hardware, as far as I know - maybe not efficiently, but possible. What kind of instructions are needed for ML, in your view?
Quoting: Donkey
Quoting: gradyvuckovic
Quote: FSR 2.0 temporal upscaling uses frame color, depth, and motion vectors in the rendering pipeline
That doesn't sound promising for the chances of getting something like FSR 2.0 built into Steam OS.
FSR 1.0 was easy enough because it could just take a single frame image and scale it, but FSR 2.0 will need depth and motion vectors in addition to colour. I don't see how something like Gamescope could have access to that data.
Those "vectors" are most likely calculated from previous frames and not the vectors used in the game engine. So the data can be extracted by saving the image of previous frames.
The diagram on the announcement page looks more like it's going to use the depth/motion buffers directly.