No longer just for the AMD camp, Linux GPU Configuration Tool 'LACT' has a fresh release out that brings in NVIDIA support.

This free and open source app gives you a fancy readout of various GPU stats and thermals, along with control over fans, power limits, overclocking, power states and more. Some features are specific to certain GPUs and vendors though.

In version 0.6.0, here are the main standout additions:

  • Nvidia support! LACT now works with Nvidia GPUs for all of the core functionality (monitoring, clocks configuration, power limits and fan control). It uses the NVML library, so unlike the Nvidia control panel it doesn't rely on X11 extensions and works under Wayland (see the quick example after this list).
  • Multiple profiles for configuration. Currently it is not possible to switch them automatically, but they are configurable through the UI or the unix socket.
  • Clocks configuration now works on AMD iGPUs (at least RDNA2). Previously it was not parsed properly due to the lack of VRAM settings.
  • Zero RPM mode settings on RDNA3. Currently this needs a linux-next kernel, with the functionality expected to land in kernel 6.13. This resolves a long-standing issue on RDNA3 where the fan was always disabled below a certain temperature, even when using a custom curve.
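
As a reference point for the NVML angle above: NVML is the same library NVIDIA's own nvidia-smi tool is built on, and it needs no display server at all. A quick stats query looks like this (these are standard nvidia-smi flags; LACT talks to NVML directly rather than shelling out like this):

    # NVML-backed query; works headless, under Wayland or X11:
    nvidia-smi --query-gpu=name,temperature.gpu,power.draw,clocks.gr,fan.speed --format=csv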

Plenty more listed in the changelog.

What NVIDIA GPUs does it support? According to the documentation, "Anything Maxwell or newer should work, but generation support has not yet been tested thoroughly".

Nice to see more apps like this for configuring all your hardware on Linux.

13 comments

tarcisiosurdi about 11 hours ago
This is amazing! I got to configure stuff on my GPU that I've wanted to ever since switching to Linux, but couldn't because of the NVIDIA Wayland situation…
fagnerln about 10 hours ago
Quote: Clocks configuration now works on AMD iGPUs (at least RDNA2). Previously it was not parsed properly due to the lack of VRAM settings.

What? Is it possible to handle the VRAM size of the iGPU? I'll try it later.
DamonLinuxPL about 2 hours ago
Quoting: nnohonsjnhtsylay: Sadly it's written in Rust, so it takes forever to compile on my computer, even with a 24-thread CPU

This is indeed a big problem. As a developer on one of the Linux distributions, I would like to highlight a few problems related to Rust. Of course, this does not mean Rust is bad; I myself think that while it may not be great, it is definitely good, although it has some big problems.

One of them is that there is no way to compile it without bootstrapping, or to put it simply: to compile Rust, we need a previously compiled Rust... So to compile Rust we have to use Rust, and how do we do that? Well, it is not possible, so we have to use precompiled Rust binaries from the developers and take their word for it that there is no danger in the code. Among other reasons, this is why several distributions did not want to include it in their repositories in Rust's early days. The Rust developers know about this and have supposedly announced work on solving the issue, but in practice no one is in a hurry, at least until someone injects something into those binaries and we get a repeat of the malicious code hidden in the xz release tarballs...

Another, more serious issue is the way libraries are distributed. Normally, when we build an application, we need several development packages, e.g. libdrm-devel etc., and we have these libraries in the system; if not, we add them to the system repository. In Rust the problem is more complex. If we want to compile a Rust package, e.g. LACT, we run the 'cargo build' command inside the source directory and it will indeed compile quickly... but this is cheating, not a real build, because it downloads the necessary modules on the fly during compilation. None of the distributions that focus on security, such as Debian, Ubuntu, Fedora, OpenMandriva or openSUSE, can allow compilation with network access; it is strictly forbidden. Why? Because that way someone could inject potentially dangerous code, or over time simply replace the library at its source. It is also protection against losing sources when, for example, a program or library becomes unavailable because the site that hosted it disappears from the network. (How the offline restriction is enforced is sketched below.)
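
To make the offline restriction concrete, here is a minimal sketch. The --offline flag and the net.offline config key are real cargo features; without local copies of every crate, they simply make the build fail instead of downloading anything:

    # Force cargo to error out rather than touch the network:
    cargo build --release --offline

    # Or enforce it for every cargo invocation, in .cargo/config.toml:
    [net]
    offline = true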

So what options do we have? Two: do it properly, or vendor it.

Do it properly: that is, open the archive and check what dependencies a given Rust package needs. And there are a lot of them here; e.g. LACT or Fractal require at least 100 dependencies! A lot, right? Considering that, for example, the CoreCtrl alternative needs only a few... OK, let's start: we'll add 10 packages a day to the repository and finish by the end of the month ;) We try the first crate, let's say it's called rust-libfoo. We add it, and when we try to build offline it turns out it needs 3-5 other Rust libraries... So we take the first of those, and it also has its own dependencies, which we also have to add... You can see how 100 packages turns into not 300 but 500 or even more. Even distributions that are paid for or sponsored by large companies can't handle it: neither Canonical with Ubuntu, nor Red Hat in RHEL and Fedora, nor SUSE in openSUSE, nor even the large Debian community... not to mention small distributions.

But let's assume some Mr. Mark from Canonical decides to do it right and tells 10-20 of his developers to sit down and package Rust properly in the repository for money: build a mechanism to automate it, use AI or whatever you want, but get the Rust crates in the repo cleaned up. OK, let's say some time passes and they've added all the dependencies of one application. Cool? Now give them a second application, e.g. Fractal or Shortwave. They start adding it and hit the same rust-libfoo dependency, but in a different version, because the first application required libfoo in version 1.1.0 and the second one 1.2.0... Another problem. So what, we add both and let them duplicate? There will be hundreds of these duplicates in the repo and they will create problems... Even in Python, hated by some, this is solved nicely: application A requires python-libfoo in a version equal to or greater than 1.0.0, and sometimes, as the API changes, they also add 'but not greater than 2.0.0'. But Rust had to mess it up as usual and requires this specific 1.1.0, nothing more and nothing less. MADNESS! Nobody can handle this with 500 packages at the moment... (The version requirement syntax is sketched below.)
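
As a rough illustration of the versioning point (libfoo and libbar are the commenter's hypothetical crates; the requirement syntax itself is real Cargo behaviour):

    # Cargo.toml of a hypothetical application A
    [dependencies]
    libfoo = "1.1.0"    # caret requirement: anything >=1.1.0 and <2.0.0
    libbar = "=1.1.0"   # exact pin: version 1.1.0 and nothing else

    # Applications also ship a Cargo.lock that pins every crate in the
    # graph to one exact version and checksum; reproducible distro builds
    # consume that lockfile, so in practice the exact pins are what count.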

Okay, so we abandon the idea of doing it properly and just want to make it work at all. We take all these Rust crates and vendor them: open the archive, run 'cargo vendor' on it, and after 10 minutes of downloading we create an archive from the 5 GB of downloaded data and upload it to our server... Then Rust is told to build offline using the vendored dependencies, roughly as sketched below. The build begins, and even on a Threadripper it takes forever.
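
A minimal sketch of that vendoring workflow (the archive name is hypothetical; 'cargo vendor' and the offline build are real cargo features):

    # Download every crate in the dependency graph into ./vendor and
    # append the config snippet that redirects crates.io to that directory:
    mkdir -p .cargo
    cargo vendor vendor/ >> .cargo/config.toml

    # Archive the vendored sources for the offline build server:
    tar -cJf lact-vendor.tar.xz vendor/

    # The actual build then runs with no network access at all:
    cargo build --release --offline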

What's wrong with this method, apart from the big size and the compile times? Well, if a vulnerability is detected in the rust-libfoo library, we have to manually fix every package containing libfoo... In the normal world, when a vulnerability is detected in, say, libpng, it is enough to update the one libpng package and every application that requires it is fixed! Simple, right? So let's start: we download the fixed rust-libfoo in version 1.2.1 and try to re-vendor our Rust packages. The problem is that each subsequent vendor run still downloads the old package (the one affected by the dangerous code), hmm. We have to make a patch and force it to require the 1.2.1 package now. OK, the vendoring succeeds and we start compiling. The compilation errors out; we look for a solution and waste time (while the code with the vulnerability is still sitting in the repository); we finally find it: it was a protection against patching... What id*ot could have come up with that? So we apply a patch to rust (the main package) to remove the patching protection and compile that first, which takes a few hours, and only then do we go back to our package with libfoo. (One way such an override can be expressed is sketched below.) And we have to re-vendor every package containing this vulnerability in the same way. It's dangerous and basically duplicates what Microsoft did in Windows; it was to avoid exactly this that we created system repositories, apt with .deb, urpmi, zypper or dnf with .rpm, and all of that just to come back after 30 years to what Microsoft got wrong ;) Madness.
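
For what it's worth, cargo does have a built-in override mechanism; here is a hedged sketch of using it to force the fixed crate (libfoo and the paths continue the commenter's hypothetical example, and this on its own does not bypass the checksum verification mentioned above):

    # Top-level Cargo.toml: override the vulnerable crate across the
    # whole dependency graph with a local, fixed copy:
    [patch.crates-io]
    libfoo = { path = "patched/libfoo-1.2.1" }

    # Note: each vendored crate ships a .cargo-checksum.json, so editing
    # a vendored crate in place fails cargo's checksum check - likely the
    # "protection against patching" described above.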

And this is just the beginning of the problems with Rust in a distribution. The longer we deal with it, the more problems we see, and we are not the only ones; we talk to the creators of other distributions and they face the same problems. But the biggest problem is convincing the community that Rust may be good from the perspective of an application developer, but not necessarily from the point of view of a distribution. The trouble is, if anyone speaks up on this matter, a group of trolls arrives: people who have never had anything to do with it, but for whom the fact that someone said Rust is great is enough to start hating and discrediting anyone who tries to draw attention to its flaws. Nobody is saying 'let's not use Rust because it has problems'; we are saying 'hey Rust developers, it has issues, please fix them before we implement it'. But as you could see, someone argued against Rust in the Linux kernel, and the trolls practically sent a SWAT team after him (remember that drama from the last few months?).

The problem is that Rust is already implemented everywhere: Firefox, Chromium, Mesa with its OpenCL implementation (Rusticl), Arm drivers written in Rust (Apple M1, now Panfrost), the open source NVIDIA Vulkan driver in Rust, and even the kernel itself... This causes problems for us, and they are getting bigger and bigger. For example, when Mesa released the NVIDIA Vulkan driver, everyone wondered how distributions would handle the Rust in it. Look at Debian: they already shipped the new Mesa, so they must have found a way, great job guys ;). We checked, and they had simply disabled Vulkan for NVIDIA because they could not handle it in time...

Ps. Sorry for the long post (and all the mistakes in it), but this is the most ignored issue, and the most important one for distributions.