While DLSS has technically been available in the NVIDIA drivers for Linux for some time now, the missing piece was support for Proton, which will be landing tomorrow, June 22.
In one of their GeForce blog posts, NVIDIA made it very clear:
Today we’re announcing DLSS is coming to Facepunch Studios’ massively popular multiplayer survival game, Rust, on July 1st, and is available now in Necromunda: Hired Gun and Chernobylite. Tomorrow, with a Linux graphics driver update, we’ll also be adding support for Vulkan API DLSS games on Proton.
This was originally revealed on June 1 along with the GeForce RTX 3080 Ti and GeForce RTX 3070 Ti announcements. At least now we have a date for part of this extra Linux and DLSS support. As stated, this will be limited to games that natively use Vulkan as their graphics API, which is a short list including DOOM Eternal, No Man’s Sky, and Wolfenstein: Youngblood. Support for running Windows games that use DirectX with DLSS in Proton will arrive "this Fall".
With that in mind, it's likely we'll see the 470 driver land tomorrow, unless NVIDIA have a smaller driver coming first with this added in. We're excited for the 470 driver as a whole, since it will include support for async reprojection to help VR on Linux, plus hardware-accelerated GL and Vulkan rendering with Xwayland.
PhysX was on life support for many years; in the end it went the same way as their tessellation strategy. The discussion here is about bringing real solutions rather than gimmick features, which is what any user should look at. Unless you're an Nvidia shareholder, this strategy cannot be appreciated (mainly from a Linux user's point of view).
PhysX was a success for a long time, and tessellation also made a lot of noise for them and did give them an edge. OF COURSE it does not last forever, at least as long as there is competition (CUDA has lasted a very long time though). I am not appreciating it, I am just being realistic. I don't particularly like it, I don't encourage it as a consumer, but I do understand the rationale from their point of view. And wishing them to do otherwise in their position is, well, wishful thinking.
We can wish all we want, but R&D costs money, a lot of money, so companies want some payback for it. Nvidia is already making the effort of supporting most features faster and faster on Linux, and is now contributing directly to Proton too, so we get even more. So from our point of view, it is a clear win.
As for an open-standard DLSS, it would be useless as of now, and while it might help get more games with XeSS if Intel does make it good, it would not change much anyway as long as they do not open up what is behind it, which they won't, for the very obvious reasons I already pointed out in another message.
To this day, AMD still has no real support for RT on Linux (except in the proprietary driver that no one uses and no developers target). They also have a very bad track record in terms of day-one support for the GPUs themselves (yes, they tend to boot now, clap clap, well done, thanks for allowing us to boot your GPU, now also give us all the features and a stable driver). Nvidia has lagged behind on Wayland support (but honestly, from a user perspective this does not matter one bit).
Who even knows when/if FSR will have (good) support on Linux, and even more so on Proton. It might work in ReShade though, according to GN's video.
Link? Unless you mean the SDK.
They open sourced the headers (of NVAPI); it was in the news here multiple times. So I would guess you can do the plumbing behind that. Obviously there is no implementation behind them, just headers. I am not saying it became an "open standard" per se either. It has no frozen version for others to implement, etc. It might come, who knows.
Now, ignoring all that, FSR might still help a little with sub-par configs, and it is always nice to have. But support does not seem too hot either: the Metro developers said they wouldn't use it, and the games which do are not so hot either. Maybe ReShade will save it... And even on consoles, I doubt it does much better than checkerboard rendering.
Now the main issue is that, with HW-accelerated inference, you tend to need to fine-tune the network for each accelerator architecture. So it is unlikely you will have a one-size-fits-all network you can deploy everywhere directly.
That's why cramming AI ASICs into a GPU isn't necessarily a good idea. If you really need an ASIC, it's better to just add another card.
Hmmmm, yes and no. First, dedicated HW in ML can still bring performance improvements which are dramatic enough that you cannot say no to them, especially to keep power consumption in check (in a datacenter, you will not double the number of GPUs without a crazy electricity bill...).
Secondly, even without talking about full-blown ASICs, you will use specific AI instructions to get a better result (unless you really do not care about efficiency). So your quantization will not be the same, and maybe you will need to swap some layers for better performance/results too. No magic there.
No real alternative either, especially if you want something real-time. In time, things will likely standardize more too, as acceleration is easier with better hardware (fewer choices to be made if you have more transistors: you can just accelerate everything, or have higher frequencies at the same power consumption and accelerate less stuff anyway). But we are not quite there yet.
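To make the quantization point concrete, here is a minimal, purely illustrative sketch using PyTorch's post-training dynamic quantization. The toy model and the backend choice are assumptions made up for the example, not anything from this thread, but they show how the same network ends up being prepared differently depending on the target hardware.

```python
# Hypothetical sketch: post-training dynamic quantization in PyTorch,
# where the kernel backend is chosen per target architecture.
import torch
import torch.nn as nn

# A small stand-in network; a real model would go here.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# The backend must match the deployment hardware: "fbgemm" targets x86
# servers, "qnnpack" targets ARM. This is one concrete reason the same
# network is not automatically optimal on every accelerator.
torch.backends.quantized.engine = "fbgemm"

# Quantize the Linear layers' weights to int8 for faster CPU inference.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    print(quantized_model(torch.randn(1, 512)).shape)
```

Swapping a layer for a more hardware-friendly one, or redoing calibration for a different backend, is exactly the kind of per-accelerator tuning described above.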
What I mean is that it's better to have a separate ASIC card just for the AI part, instead of stuffing every new ASIC idea into the GPU, making it bloated and less useful for actual tasks like graphics.
Well, GPUs are an ASIC theme park anyway :)
More seriously, I guess die size is dominated by IO, so you have room to add some stuff. Also, more CUDA cores have diminishing returns, so why not give the buyer some other cool stuff, be it RT or Tensor cores (which you can use for more than DLSS and games if you would like to try your hand at the ML field). I think it is quite nice to give access to this kind of hardware in consumer products.
Even for data center GPUs, you get better locality that way if you need to run more general computation on your CUDA cores on the side, so I guess this is good for them too (otherwise they would probably have pushed for a change).
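As a purely illustrative example of using those consumer Tensor cores for something other than DLSS, here is a minimal mixed-precision training sketch with PyTorch's torch.cuda.amp. The toy model, sizes, and training loop are assumptions made up for the example.

```python
# Hypothetical sketch: mixed-precision training with torch.cuda.amp, which
# routes FP16 matmuls to Tensor cores on RTX-class GPUs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()  # loss scaling avoids FP16 underflow

data = torch.randn(64, 1024, device="cuda")
target = torch.randn(64, 1024, device="cuda")

for step in range(10):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # run eligible ops in FP16 on Tensor cores
        loss = nn.functional.mse_loss(model(data), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```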
https://en.wikipedia.org/wiki/List_of_games_with_hardware-accelerated_PhysX_support
40 games in ten years... I call that far from a success.
You don't encourage it but you see it as a win. Idk, for me it's clear that the best that can happen is that DLSS meets the same fate as PhysX, which is quite probable, as its implementation requires a lot of resources from Nvidia.
And somehow you end up with a rant against AMD, using arguments that apply to past releases of Nvidia hardware as well...
All I have to say is that any AMD problems of the past won't change the fact that Nvidia's practices are anti-competitive. You may like them from a corporate point of view, but as an end user you should definitely find them despicable.
Correct me if I'm wrong, but I always understood that DLSS was part of NGX, not NVAPI.
Yup, that's what I find amusing. Like two people who used to be best buds let a woman get between them or something. Though to be fair (to be fai-uh), it isn't like Nvidia is hurting for money because of it.
Granted, Nvidia doesn't support Mac at all, which I still find amusing.
They can't. On Windows and Linux, the GPU vendor provides the API implementation. On Macs, Apple do. Apple and Nvidia had a falling out, so no more Nvidia hardware in Macs, so no support from Apple for Nvidia hardware in Macs.
https://en.wikipedia.org/wiki/List_of_games_with_hardware-accelerated_PhysX_support
40 games in ten years... I call that far from a success.
This is kind of a false pretense. The PhysX engine has been built into the GPUs for years now, so special support for it is no longer a thing. So 40 sounds about right. New games for the most part just use the hardware if they need/want to.
40 games, and most of them (if not all of them) sponsored by Nvidia, in ten years. And as far as I know, most game physics nowadays still runs on the CPU. So the idea was to accelerate physics execution using the GPU, but their reluctance to make a standard made them fail, and 15 years after their first release of PhysX we are still using the CPU. IMO, that's a failure.
Not so sure which part DLSS is in; I just know they specifically open sourced the headers so it could be stubbed into Proton. It was clearly not done to be competition-friendly.
As for the rant on AMD, ok, I am still salty about the last card I bought in the previous generation (btw, it is really not that old, like one or two years?). I am back on Nvidia because that was a very painful experience; maybe I had my hopes too high, but well.
And Nvidia is anti-competitive, yes... I mean, I would do the same as them in their position, and frankly most sane people would, so I have a hard time criticizing them. At the same time, they are not a charity but a company; they are supposed to be making money, not give kisses and hugs to everyone.
My only wish is that they open source the core driver, for which it makes absolutely no sense from a business perspective to keep closed source. It would keep everyone happy too, as those who do not want proprietary features could ignore them.
As for PhysX, it has been embedded directly in engines for a long time already, and the point for them is not so much whether many games use it, but that at some point it was the cool thing that made you buy an Nvidia GPU. That's really all that matters to them, and it was a clear win on that point. It was also still used in Metro: Last Light at least, not sure about Redux. I'd say it is mostly phased out by newer tech; I remember it was used for some lighting, which as an example would be replaced by RT now. Once a feature like that is used up, you just do the next one. It also profits everyone eventually, since the competitors will implement an alternative, potentially cross-vendor and cross-platform. Or it will just become a de facto standard, depends.
The win I present for us is the subject of the news, that is, Nvidia cares enough about us to support its features here. And I did throw some salt at AMD for their (lack of) RT support, but also for OC support that came after a long time, etc. (yeah, I do not forgive easily, I know).
Nobody is expecting them to behave as a charity. But creating standards is not about charity; it is about creating a sustainable market environment and allowing it to evolve for the better.
As I answered slaapliedje, the innovation was to move physics calculations onto the GPU, but they completely failed, mostly because of their crappy proprietary API strategy.
The DLSS Proton support can also be seen as a marketing move. I mean, you didn't get it until AMD came up with FSR, and what a coincidence that we get the Nvidia "new Linux feature" on top of the AMD FSR article.
40 big games for such a feature is okay. Also, it is in fact open source: https://github.com/NVIDIAGameWorks/PhysX. It does indeed seem to be used a lot on the CPU, but that is likely not a question of openness; it just does not make much difference nowadays. From what I read, some games do gain from using the GPU (PUBG: https://www.reddit.com/r/PUBATTLEGROUNDS/comments/c17kol/is_it_safe_to_put_the_physx_settings_from_auto/) while others see no difference (Rocket League). No idea why, but well. It seems every Unreal Engine game can potentially use it, and you have an option for whether you put it on auto-GPU or CPU (I am mostly browsing Reddit and co, so don't quote me too much either).
So yeah, far from a loss, I would say? Once again, it did turn out to be a win in terms of image, so I doubt Nvidia sees it as a loss.
Overall, it is not a loss for us either, since the CPU implementation seems to be pretty fast now. So win-win?
It is the same story, in a way, as G-Sync vs FreeSync. Nvidia spearheaded the effort, did the R&D and marketing, then locked it in. That pushed competitors, who already had a reference so it was easier for them, to propose an alternative, and bam, we got FreeSync. Better yet, Nvidia graciously allowed the use of FreeSync on their GPUs (which means it is not so locked in and evil; they could really just have dropped the price on G-Sync and "sponsored" it to death), and turned G-Sync into a quality indicator: not supported means we did not test it and likely the panel quality is meh, expect flicker; "G-Sync Compatible" (or whatever the name) means we tested it and it's cool; G-Sync means we massively tested it and have very high quality standards. Now, whether you want to pay the difference between "Compatible" and full G-Sync is your own affair. There is some gain, but for me it is clearly not worth it; other people might think it is a vital difference. But in the end, everyone wins: we have a feature which probably would never have been implemented if not spearheaded by them, and a "seal of quality" now that they have adopted the new standard. That is also why I do not particularly hate them; they do spearhead a lot of the stuff we now have as standard.
They are also fixing the stupid stuff (the virtualization lock-out) they did before, so I am much "kinder" to them than a year or a year and a half ago.
I do appreciate that AMD is trying to catch up, and that they open up the result, thus participating in getting everyone together after the front runner opened the path. I appreciate their code-drop approach less, but many companies do that... so I can't completely blame them either.
So far, if you want real open source support (as in, working upstream and ahead of time), only Intel does that. Will they still do it with DG2? Time will tell; if so, and if XeSS is good and supported, then count me in.
Of course it is a marketing move, but it means we matter, which is by far the most important thing. If we did not, they would just not implement it. They also now support virtualization properly, and are coming along on multiple other features (Wayland, NvFBC when using G-Sync or G-Sync Compatible, etc.).
Also, they dropped the headers quite a while ago, so I don't think it was originally done with FSR in mind. Most likely, the date is because of FSR, not the feature itself. At that time there was no reaction from the community though, no implementation or anything that I could see at least, so it seems they lost patience and pushed it themselves. Maybe they just did not open it up enough, too.
What you are saying here is that competition is good and makes companies more consumer-friendly, which I 100% agree with. I am also not wishing death on AMD or anything; I just want them to push further forward. For now, I am still disappointed in their Linux support, and hope for them to do better. I also wish they would be cleaner in their marketing for FSR. And I want Intel to enter the market in full force too, and set another standard of good support. Hell, if a fourth one could enter, it would be even better. Strong competition will force differentiation, but at the same time it will accelerate the standardization of the most important differentiators (since every competitor will cooperate to take the crown and make a standard to share it).
So, the idea was to accelerate physics execution using the GPU, but their reluctance to make a standard made them fail, and 15 years after their first release of PhysX we are still using the CPU.
No, the idea was that you'd buy a separate card just for accelerating physics calculations. But that was silly: no one was going to buy a card just for that, and no one was going to put support into their game for something that no one had. So Nvidia bought the company and made it so that you could run those calculations on the GPU that you already had. Then they open sourced it some time later.
IIRC, the first sample of PhysX I saw was in 2005, and it was from the company that originally created the tech, using dedicated hardware that was at a very early stage (I'm almost sure their dedicated solution never got to market). The moment Nvidia bought that company, their strategy was to implement that solution on the GPU. So Nvidia wanted to move physics calculations onto the GPU as a use case for GPGPU. But they fucked up with that proprietary API, which only became open source long after the hype was gone. That's my point.
Time will tell what will win. But I'm confident Nvidia will fuck up once again.
The PPUs definitely existed. I doubt that many got sold, because the business case for them was rubbish, but you could get pre-built gaming machines with them in. The technology was also in a bunch of console games before Nvidia bought Ageia.
Of course they did. Buying an extra PPU was silly, but GPGPU is great. And of course they wanted it to be a market differentiator to make back the purchase price, particularly as Intel had just bought Havok at the time.
Yeah, the idea of offloading physics calculations to an extra GPU was awesome. Also not only a gaming feature, by the way. The first game I remember utilizing it was Ghost Recon: Advanced Warfighter. Excellent game, but I think it is to this day one where PhysX won't work under Wine :( That's one of those games that, for the longest time, you couldn't play at max detail unless you had specific hardware, or it was SLOW...
Nvidia has done pretty well for themselves, considering they bought 3dfx and many other technologies as they went along. Sure, their "OMG, ray tracing!" was a little stupid for those of us who have known ray tracing has been a thing for decades, but it is still "OMG, realtime ray tracing!", which is actually rather phenomenal for consumer-grade cards to have as a feature.
People in the Linux community are interesting, as there are some that are like "Awesome, they support us with a driver that actually covers all of the features!" and then there are those that are "OPEN SOURCE or GTFO!" I understand both, but for now I can't find an AMD graphics card, and I did finally find an Nvidia one, and outside of Optimus shenanigans, I've never had any issues with Nvidia's hardware/drivers.
The PPUs definitely existed. I doubt that many got sold, because the business case for them was rubbish, but you could get pre-built gaming machines with them in. The technology was also in a bunch of console games before Nvidia bought Ageia.
I googled and found some... on Geocities. (So that's the age we're talking about. :D )
http://www.geocities.ws/nagaty_h/hardware/asus_physx_p1.htm
Well, not quite Geocities' heyday, but it was a while ago. It was the PS3 era, and MySpace was the world's biggest social network. AMD had a really terrible open source driver and a really terrible proprietary driver (I won't say the name in case it triggers flashbacks), and were selling off their fabs because they'd run out of money. Intel was switching to the Core architecture after the failures of Netburst and Itanium. YouTube was full of videos showing off Compiz, and Ubuntu had released a "Long Term Support" version called Dapper Drake.
There are some dedicated PhysX cards on eBay. Weirdly PCIe, my memory through the years would have insisted that they were PCI!
https://www.ebay.com/itm/362488709628?hash=item546602c1fc:g:io4AAOSw8R9b7tcM