On Linux with AMD GPUs you can choose between the RADV and AMDVLK drivers for Vulkan API support, and it appears AMD want to make that choice a little easier for you.
It can get a little confusing, so here are the basics: AMDVLK is the "official" external Vulkan driver developed by AMD, whereas RADV is part of Mesa and ships with most distributions by default. Some games work better on one, some on the other. Additionally, AMD only directly support Ubuntu and Red Hat, whereas Mesa with RADV aims to cover everything it can.
With the latest AMDVLK 2021.Q1.1 release, AMD has made switching between the two a little easier. With this driver installed, you only need to set the environment variable "AMD_VULKAN_ICD" to either "AMDVLK" or "RADV" to tell whatever game or application you're running which driver to use. If it's not set, the default is of course AMDVLK.
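In practice that means prefixing the launch command. A quick sketch (a real launch would be something like `AMD_VULKAN_ICD=RADV vkcube`, where `vkcube` stands in for any Vulkan program; the echo lines below only demonstrate that the variable is scoped per process):

```shell
# The switchable layer reads AMD_VULKAN_ICD from each process's
# environment, so the driver can be chosen per launch.
# Here we echo the value back to show the per-process scoping.
AMD_VULKAN_ICD=RADV sh -c 'echo "child sees: $AMD_VULKAN_ICD"'

# The variable does not leak into the rest of the session:
echo "parent sees: ${AMD_VULKAN_ICD:-unset}"
```

Because the variable only applies to that one process, you can run one game on RADV and another on AMDVLK at the same time without any system-wide changes.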
Here are the highlights of this new driver release:
New feature and improvement
- Add AMD switchable graphics layer to switch AMD Vulkan driver between amdvlk and RADV
- Update Khronos Vulkan Headers to 1.2.164
- Navi21 performance tuning for game X-Plane, Madmax, Talos Principle, Rise of Tomb Raider, F12017
Issue fix
- RPCS3 Corruption observed on Game window on Navi10
See more on GitHub.
* Update due to argument IImage* retired from SignalNativeFence()
* Remove Mutex::Init(), RWLock::Init(), and ConditionVariable::Init() usage
* [Navi21] X-Plane: LLPC performance tuning
* Add scope to some settings
* Add AMD switchable graphics layer to switch AMD Vulkan driver between amdvlk and RADV
* [Navi21] Madmax LLPC performance tuning
* Fix memory alignment for memory dedicated allocation
* [Navi21] Talos Principle: LLPC performance tuning
* Update PAL Interface in Vulkan to 640
* Update Khronos Vulkan Headers to 1.2.164
* Remove DebugReportCallback::Message() and DebugUtilsMessenger::Message() since they are unused
* PhysicalDevice::m_memoryUsageTracker::trackerMutex corrupted
* [Navi21] Rise of Tomb Raider-LLPC performance tuning
* Move spirv-headers from XGL to LLPC
* flags cleanup - meaningless const on return types
* Enable NGG compactionless for GFX10.3+
* [Navi21] F12017 LLPC performance tuning
PAL update:
* Add Mesh shader support
* ImageAspect Removal (clean up IsFullSubresRange asserts)
* Bump version number to 288
* Reorder start of CMakeLists.txt in pal root so that TEST_BIG_ENDIAN works for stand alone builds
* Add declarative heap selection in GpuMemoryCreateInfo
* Fix warning (found in cmake build) that bltSyncToken is defined twice
* [Navi21] Meta equation of multiple layer image is incorrect.
* Add new interface function to query command feedback status from PAL Security Processor
* Remove Util::ConditionVariable::Init()
* Remove RWLock::Init() from PAL
* Ensure there is a fallback to local visible memory when requesting invisible memory for RGP traces
* Move initialization of Util::ConditionVariable to constructor
* Initialize Util::RWLock in constructor
* Fix several issues in error handling
* Fence style barrier signaling and waiting, part1
* Remove Mutex::Init()
* File::Rseek & File::FastForward Added
* [Navi21] Meta equation of 4/8xMsaa image is incorrect
* Update UserDataMapping enum
* Add Tonga back to null device tables
* Fix DRI3, Wayland and DRM traces
* Remove several dead settings
* Allow Util::Vector to qualify as a `range_expression` concept
* Generate different RPM shaders on diff milestones of a chip
* YV12 format update
* Inconsistent layout masks for ResolveSrc/ResolveDst
* Initialize Mutex in the constructor
* [GFX9/10] Remove RMW for DB_RENDER_OVERRIDE in most cases
* Remove ImageAspect from PAL interface (replaced with plane index), and add numPlanes to SubresRange
* Fix invalid SET_PREDICATION asserts
* [cmake] Created PalBuildParameters.cmake
* Add a nodiscard helper
* Minor mistake on handling exception in palElfProcessorImpl.h
* Missing DrawDispatchInfo in CmdBufferLogger output
* [Navi10] RPCS3 Corruption observed on Game window
I think this is the best possible approach for the driver.
Last edited by pageround on 8 January 2021 at 2:06 pm UTC
Quoting: strycore
This is a vendor-specific implementation of something that already exists and is not vendor-specific. We have supported switching Vulkan drivers with the VK_ICD_FILENAMES environment variable for months in Lutris so I don't really see what the big improvement is here.
Yep. In fact, you can use the same env var to switch to the closed source vulkan implementation as well. The only advantage I can see to this is that you can switch from one driver to another by just using a short name instead of a full path.
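For reference, the loader-level approach being described looks roughly like this (a sketch: the manifest paths below are common 64-bit defaults and may differ on your distribution, and a real launch would be e.g. `VK_ICD_FILENAMES="$RADV_ICD" vkcube`):

```shell
# Point the Vulkan loader directly at one driver's ICD manifest.
# Typical install locations; check /usr/share/vulkan/icd.d/ on your system.
RADV_ICD=/usr/share/vulkan/icd.d/radeon_icd.x86_64.json
AMDVLK_ICD=/usr/share/vulkan/icd.d/amd_icd64.json

# Show the variable being handed to a child process, as it would be
# to a game launched this way:
VK_ICD_FILENAMES="$RADV_ICD" sh -c 'echo "loader will use: $VK_ICD_FILENAMES"'
```

This works for any Vulkan driver with an installed manifest, which is why it is the more general mechanism; AMD_VULKAN_ICD is simply a shorter name for the two AMD choices.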
Quoting: pageround
I am happy AMD are contributing code, however I've been sticking with Mesa+RADV on my 5600 Fedora laptop as it's been working great. Plays everything I can throw at it. Maybe with this change I'll try out AMDVLK in the future.
Please don't. Messing with AMDGPU drivers is a risky operation for your OS and you shouldn't risk worsening the stability of your system if everything works great with Mesa.
For the record, last time I used AMDVLK Pro was for Doom Eternal when the Mesa driver wasn't ready yet. I haven't had a game that benefited from the AMDVLK (non Pro) driver. Currently, every game I play works fine with Mesa.
Quoting: pageround
I am happy AMD are contributing code, however I've been sticking with Mesa+RADV on my 5600 Fedora laptop as it's been working great. Plays everything I can throw at it. Maybe with this change I'll try out AMDVLK in the future. Thanks for the heads up, Liam!
FYI, you can try AMDVLK and the AMD closed source Vulkan implementation by just installing these libraries into a specific directory of your system and playing around with VK_ICD_FILENAMES in order to switch from one driver to another. You will probably end up using RADV for everything, but I think it's always welcome to have the option to test the other vulkan implementations (in my experience, it can help to detect driver bugs or game bugs).
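One hedged way to see which implementations are even available to the loader is to list the installed ICD manifests (the directory below is the common system-wide default and may differ per distro; `vulkaninfo` comes from the vulkan-tools package):

```shell
# Each *.json manifest under the ICD directory registers one Vulkan
# driver with the loader; multiple drivers can coexist side by side.
ls /usr/share/vulkan/icd.d/ 2>/dev/null || echo "no system-wide ICD manifests found"

# With a driver selected, 'vulkaninfo | grep -i driverName'
# reports which implementation the loader actually picked.
```

Comparing that report before and after setting VK_ICD_FILENAMES is a quick sanity check that the switch actually took effect.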
Quoting: torbido
AMDVLK is now used by default? I do not like that at all.

Presumably only if you go out of your way to actually install AMDVLK.