Optimus-manager is a nice piece of software that lets you configure the dual-GPU setups usually found on laptops, where both GPUs share the same built-in screen and there are a lot of nuances to deal with:
- Is there a mux switch available?
- Are the HDMI or DP output ports hardwired to the dGPU? (PS: one of my current unanswered questions for ASUS.)
- Is the integrated GPU Intel or AMD?
- Is the dedicated GPU able to handle PCI resets when you want to disconnect it?
- Are you able to disable the iGPU in the BIOS and leave only the dGPU on? (In my case, no.)
- The list goes on...
I started using optimus-manager after I noticed performance improvements of nearly 10% more frames per second when using my dedicated GPU alone from boot, rather than using a GPU delegation tool like PRIME render offload. But that is something to be discussed in another article. This software basically works by probing the well-known PCI outputs for GPUs using lspci, then auto-generating a Xorg configuration for one of the three following modes, depending on your setup, before loading your login manager:
- Hybrid: iGPU is the primary, offload to dGPU when needed
- Dedicated: X and applications will always run on dGPU
- Integrated: iGPU will be the only GPU available (an alias for Intel mode).
It also comes with a nice Qt-based system tray applet for those who want to change the configuration on the go and re-login to X to apply it. My current configuration is: Hybrid mode when booting on battery, Dedicated mode when booting with the power cord attached.
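That battery/AC split can also be expressed directly in optimus-manager's configuration file, so the right mode is picked at boot without touching the tray applet. A sketch, assuming the `startup_auto_battery_mode` / `startup_auto_extpower_mode` keys present in recent optimus-manager releases (check the shipped example config for your version):

```ini
# /etc/optimus-manager/optimus-manager.conf (excerpt)
[optimus]
# Fallback mode when no auto rule applies
startup_mode=hybrid
# Pick the mode based on the power source detected at boot
startup_auto_battery_mode=hybrid
startup_auto_extpower_mode=nvidia
```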
The first setup of my current laptop took me more time than I expected, because optimus-manager has a lingering open issue about dealing with Domain IDs on PCI: Wrong BusID in generated xorg.conf.
[nwildner@sandworm ~]$ lspci -m | grep -i 'RTX\|UHD'
00:02.0 "VGA compatible controller" "Intel Corporation" "Alder Lake-P GT1 [UHD Graphics]" -r0c -p00 "ASUSTeK Computer Inc." "Device 136d"
01:00.0 "VGA compatible controller" "NVIDIA Corporation" "GA104M [GeForce RTX 3070 Mobile / Max-Q]" -ra1 -p00 "ASUSTeK Computer Inc." "Device 136d"
[nwildner@sandworm ~]$ lspci | grep -i 'RTX\|UHD'
0000:00:02.0 VGA compatible controller: Intel Corporation Alder Lake-P GT1 [UHD Graphics] (rev 0c)
0000:01:00.0 VGA compatible controller: NVIDIA Corporation GA104M [GeForce RTX 3070 Mobile / Max-Q] (rev a1)
To make things harder, Xorg has some arcane configuration where the Domain ID and Bus ID need to be swapped in order (the first two columns of the lspci output), and running lspci in compatibility mode to suppress the Domain ID will generate a broken Xorg configuration that crashes X.
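To make the pitfall concrete: Xorg's BusID wants a decimal "PCI:bus:device:function" string, while lspci prints hexadecimal "domain:bus:device.function". A minimal conversion sketch (it drops the domain, as Xorg's plain BusID form does; the slot value is my RTX 3070 from the lspci output above):

```shell
# lspci reports hex "domain:bus:device.function"; Xorg wants decimal
# "PCI:bus:device:function" in the Device section of xorg.conf.
slot="0000:01:00.0"
IFS=':.' read -r domain bus dev func <<< "$slot"
printf 'PCI:%d:%d:%d\n' "0x$bus" "0x$dev" "0x$func"
# -> PCI:1:0:0
```

The hex-to-decimal step matters on machines where the bus number goes above 9 (e.g. bus 0a must become 10, not stay "a").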
While this isn't a deal breaker, because I already have a workaround in place, I was wondering if this issue was ever going to be fixed. I tried to download the code and submit some patches myself to handle the PCI Domain ID stuff, but to my surprise I'm not the only one who isn't really familiar with the software. The lead developer is also struggling with this one.
Since there hadn't been much activity on the optimus-manager repository lately, I decided to google keywords like "is optimus manager active?", "is optimus manager abandoned" and "state of optimus manager", and unfortunately I found an announcement from the lead developer of this tool that is a little saddening: [Discussion] The state of optimus-manager. Basically, the developer stated that he does not have the energy, the hardware, nor the knowledge to keep up with such a big project, which looked simpler at first when it was forked from Ubuntu. And that is OK.
While it is sad that optimus-manager is no longer actively maintained, it is nice to see the official feedback from the lead developer, admitting that he might have fallen into the trap of a project that seemed simpler than it really was.
Some lessons to be learned here are:
- For those who want to use optimus-manager, there are alternatives like PRIME, staying in hybrid mode at all times. Don't take my word for granted on performance; try it for yourself.
- While the project is in a "limbo", it might be stable enough for your daily use.
- Don't expect updates to the software, but also try not to blame the lead developer, who took the first step into the obscure world of dealing with multi-GPU setups on Linux laptops.
- The project needs more hands and eyeballs, so if you have the knowledge, just send some patches. Or, if you are really knowledgeable about laptop GPU setups, the lead developer can hand the project over to you.
Quoting: nwildner[...]
My current laptop does not have the BIOS option to set the dGPU only, but I know the Lenovo Legion allows it.
The part I disagree with is the claim that dGPUs don't scale down power on laptops; that is untrue. Here are some tests with optimus-manager set to "nvidia" (power cord plugged in)
[...]
With the Legion 5 Pro the only way to actually make it work (for me at least) is to run it with the dGPU only; any other configuration results in black screens :|
And at least with Ubuntu 20.2 there is no way to enable nvidia-powerd, so I'm stuck at a max of 80W of the 130W the card can provide :| (*)
Linux has come a long way in terms of ease of use in the last 30 years, but laptop graphics and Wi-Fi are areas that are still kind of iffy
Edit:
*) Immediately after posting this I thought of something and found a way to make it work, sort of. It now has a max power draw of 115W, which IIRC is below what it should be able to do, but it's (hopefully) better performance than 80W
Last edited by Guppy on 4 September 2023 at 6:54 am UTC
Quoting: Guppy
And at least with Ubuntu 20.2 there is no way to enable nvidia-powerd, so I'm stuck at a max of 80W of the 130W the card can provide :| (*)
Linux has come a long way in terms of ease of use in the last 30 years, but laptop graphics and Wi-Fi are areas that are still kind of iffy
Yeah, I had problems in the past with nvidia-powerd as well and got stuck on that bug where it was consuming a high amount of CPU (like 4 cores at 100%). The problem is that disabling nvidia-powerd also removes the power boost feature, which on your laptop is 130W and on mine is 115W under high load. But it was solved, and now NVIDIA is handling the higher and lower power delivery features pretty well.
As for the Wi-Fi, my rule of thumb is to get laptops with Intel or Atheros Wi-Fi (and maybe Ralink). Those work out of the box and are usually feature complete. The last time I got a Broadcom Wi-Fi subnotebook, back in 2013, I had to use ndiswrapper with all those nasty bridges to run the Windows .INF driver wrapped as a Linux driver.
So while I would not generalize that all Wi-Fi and GPU support is iffy, there are some areas that require extra care.
Last edited by nwildner on 4 September 2023 at 9:33 am UTC
Quoting: nwildner
Quoting: sarmad
I have always said it: hybrid graphics is a bad design to begin with and should have never been adopted by anyone. It complicates the hardware, complicates the software, and it also wastes space on the chipset. Complexity results in bugs, and anyone who has used hybrid graphics knows how clunky the concept is.
A fraction of the effort spent by all parties on getting hybrid graphics to work could instead have been used to actually enable dGPUs to scale down their performance when it's not needed. We could've had dGPUs that scale their power consumption down to a point close to that of iGPUs when no game is running (lower the frequency even more, power down some of the cores, etc.).
Agreed on laptops going full dGPU here, and ditching the iGPU.
When I was using a muxless laptop 3 years ago, I made some simple tests configuring Reverse PRIME to start all of X on the dedicated GPU entirely, and I was astonished by the results. Setting the dGPU as the provider from the ground up made my system perform better and drain less battery than using the iGPU, or the dGPU through prime-run, even without a mux switch in place.
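For reference, running X entirely on the dGPU on a muxless machine boils down to a Device section pinned to the NVIDIA card plus routing the iGPU-wired outputs through it. A sketch, assuming my BusID (PCI:1:0:0) and the stock provider names (modesetting / NVIDIA-0), both of which may differ on your machine:

```
# /etc/X11/xorg.conf.d/10-nvidia-primary.conf (sketch; adjust BusID)
Section "Device"
    Identifier "nvidia"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"
    Option     "AllowEmptyInitialConfiguration"
EndSection

# Then, once X is up, route the iGPU-wired panel/outputs through the dGPU:
#   xrandr --setprovideroutputsource modesetting NVIDIA-0
#   xrandr --auto
```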
My current laptop does not have the BIOS option to set the dGPU only, but I know the Lenovo Legion allows it.
The part I disagree with is the claim that dGPUs don't scale down power on laptops; that is untrue. Here are some tests with optimus-manager set to "nvidia" (power cord plugged in):
1. With Firefox(10 tabs), Telegram Desktop and Steam running:
[nwildner@sandworm ~]$ nvidia-smi
Sun Sep 3 23:26:29 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3070 ... Off | 00000000:01:00.0 On | N/A |
| N/A 40C P8 14W / 80W | 674MiB / 8192MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 2721 G /usr/lib/Xorg 328MiB |
| 0 N/A N/A 5175 G /usr/lib/firefox/firefox 308MiB |
| 0 N/A N/A 5183 G telegram-desktop 3MiB |
| 0 N/A N/A 6474 G ...local/share/Steam/ubuntu12_32/steam 3MiB |
| 0 N/A N/A 6499 G ...re/Steam/ubuntu12_64/steamwebhelper 7MiB |
+---------------------------------------------------------------------------------------+
2. With Xorg only running and a text editor opened(buffer for this answer):
[nwildner@sandworm ~]$ nvidia-smi
Sun Sep 3 23:28:55 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3070 ... Off | 00000000:01:00.0 On | N/A |
| N/A 39C P8 8W / 80W | 166MiB / 8192MiB | 18% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 2721 G /usr/lib/Xorg 160MiB |
+---------------------------------------------------------------------------------------+
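The full nvidia-smi table is noisy for this kind of comparison; the same draw-versus-cap numbers can be pulled with a compact query. The query flags are real nvidia-smi options, but since no GPU can be assumed here, the sketch parses a captured sample line instead of calling the tool:

```shell
# Compact power readout; on a machine with the driver installed, run:
#   nvidia-smi --query-gpu=power.draw,power.limit --format=csv,noheader
# Sample output captured from the idle run above:
sample='8.00 W, 80.00 W'
draw=${sample%%,*}; limit=${sample#*, }
echo "drawing $draw of $limit cap"
# -> drawing 8.00 W of 80.00 W cap
```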
nvidia-powerd needs to be enabled for power scaling to work.
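If you want to try it, the daemon ships with recent NVIDIA driver packages as a regular systemd unit. A hedged sketch, assuming the unit name nvidia-powerd.service as packaged with driver 510+ (check your distro's packaging):

```shell
# Enable NVIDIA's Dynamic Boost daemon (requires a supported laptop;
# run the enable step as root):
#   systemctl enable --now nvidia-powerd.service
# Query its state without erroring out when the unit (or systemd) is absent:
state=$(systemctl is-active nvidia-powerd.service 2>/dev/null || true)
echo "nvidia-powerd: ${state:-not installed}"
```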
I know that dGPUs do scale down, but they don't scale down enough to make iGPUs totally unneeded. Even at their lowest performance they still draw considerably more power than iGPUs. As far as I know, scaling down is currently limited to lowering the clock speed, and that's it. If they also powered down some cores (dGPUs have more cores than iGPUs), I believe they could bring the power cost down to a level close to that of iGPUs. They will still draw more because of the extra memory (the vRAM), but that won't be a lot, and it's a cost totally worth it.