
Optimus-manager is a nice piece of software that lets you configure the dual GPU setups commonly found on laptops, where both GPUs share the same built-in screen and there are a lot of nuances to deal with:

  • Is there a mux switch available?
  • Which GPU are the HDMI and DP output ports hardwired to? (PS: one of my still-unanswered questions for ASUS.)
  • Is the integrated GPU Intel or AMD?
  • Is the dedicated GPU able to handle PCI resets when you want to disconnect it?
  • Are you able to disable the iGPU in the BIOS and leave only the dGPU on? (In my case, no.)
  • The list goes on...

I started using optimus-manager after noticing performance improvements of nearly 10% more frames per second when running on my dedicated GPU from boot, rather than through a GPU delegation tool like PRIME render offload. But that is something to be discussed in another article. The software basically works by probing the PCI bus for well-known GPU devices using lspci, then auto-generating an Xorg configuration for one of the following three modes, depending on your setup, before your login manager loads (a quick CLI example follows the list):

  • Hybrid: iGPU is the primary, offload to dGPU when needed
  • Dedicated: X and applications will always run on dGPU
  • Integrated: iGPU will be the only GPU available (an alias for Intel mode)
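
For the curious, switching between these modes from a terminal looks roughly like this. This is a sketch from memory of the optimus-manager CLI (double-check optimus-manager --help on your install, as flags may differ between releases); note that Dedicated mode is literally called "nvidia":

optimus-manager --print-mode      # show the mode currently in use
optimus-manager --switch hybrid   # switch to Hybrid mode (requires a relogin)
optimus-manager --switch nvidia   # switch to Dedicated mode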

It also comes with a nice Qt-based system tray applet for those who want to change the configuration on the go and log back in to X to apply it. My current configuration is: Hybrid mode when booting on battery, Dedicated mode when booting with the power cord attached.
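
That battery/AC split is configured in /etc/optimus-manager/optimus-manager.conf. A sketch based on my own notes; treat the exact option names as assumptions and check the example config shipped with the package:

[optimus]
# pick the startup mode based on the power source detected at boot
startup_mode=auto
startup_auto_battery_mode=hybrid
startup_auto_extpower_mode=nvidia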

The first setup of my current laptop took me more time than I expected, because optimus-manager has a long-standing open issue about handling PCI Domain IDs: "Wrong BusID in generated xorg.conf".

[nwildner@sandworm ~]$ lspci -m | grep -i 'RTX\|UHD'
00:02.0 "VGA compatible controller" "Intel Corporation" "Alder Lake-P GT1 [UHD Graphics]" -r0c -p00 "ASUSTeK Computer Inc." "Device 136d"
01:00.0 "VGA compatible controller" "NVIDIA Corporation" "GA104M [GeForce RTX 3070 Mobile / Max-Q]" -ra1 -p00 "ASUSTeK Computer Inc." "Device 136d"
[nwildner@sandworm ~]$ lspci | grep -i 'RTX\|UHD'
0000:00:02.0 VGA compatible controller: Intel Corporation Alder Lake-P GT1 [UHD Graphics] (rev 0c)
0000:01:00.0 VGA compatible controller: NVIDIA Corporation GA104M [GeForce RTX 3070 Mobile / Max-Q] (rev a1)

To make things harder, Xorg uses an arcane BusID syntax in which the Domain ID and Bus ID are swapped relative to lspci's output (its first two columns), and running lspci in compatibility mode, which suppresses the Domain ID, will generate a broken Xorg configuration that crashes X.
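
To make the swap concrete, here is a hand-written Device section for the dGPU listed above. This is a sketch based on my reading of xorg.conf(5), so verify against the man page on your system: lspci prints domain:bus:device.function in hexadecimal, while Xorg expects PCI:bus@domain:device:function in decimal:

Section "Device"
    Identifier "nvidia"
    Driver     "nvidia"
    # lspci reports 0000:01:00.0 -> domain 0, bus 1, device 0, function 0
    BusID      "PCI:1@0:0:0"
EndSection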

While this isn't a deal breaker, because I already have a workaround in place, I was wondering if this issue was ever going to be fixed. I tried checking out the code to submit some patches myself to handle the PCI Domain ID problem, but to my surprise, I'm not the only one not really familiar with the codebase. The lead developer is also struggling with this one.

Since there hasn't been much activity on the optimus-manager repository lately, I decided to google keywords like "is optimus manager active?", "is optimus manager abandoned" and "state of optimus manager", and unfortunately I found an announcement from the tool's lead developer that is a little saddening: [Discussion] The state of optimus-manager. Basically, the developer stated that he has neither the energy, the hardware, nor the knowledge to keep up with such a big project, one that looked simpler when it was first forked from Ubuntu. And that is OK.

While it is sad that optimus-manager is no longer actively maintained, it is nice to see official word from the lead developer, admitting that he may have fallen into the trap of a project that seemed simpler than it turned out to be.

Some lessons to be learned here are:

  • For those who want to use optimus-manager, there are other alternatives, like using PRIME and staying in hybrid mode all the time (see the offload example after this list). Don't take my word for granted on performance; try it for yourself.
  • While the project is in "limbo", it might be stable enough for your daily use.
  • Don't expect updates to the software, but also try not to blame the lead developer, who took the first step into the obscure world of multi-GPU setups on Linux laptops.
  • The project needs more hands and eyeballs, so if you have the knowledge, send some patches. Or, if you are really knowledgeable about laptop GPU setups, the lead developer may hand the project over to you.
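
For reference, offloading a single application to the dGPU while in hybrid mode is just a matter of environment variables. These are NVIDIA's documented PRIME render offload variables, and Arch's prime-run is a thin wrapper around them:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"
# or, on Arch and derivatives:
prime-run glxinfo | grep "OpenGL renderer"
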
Tags: Misc, Open Source
About the author
I'm an enthusiast of Linux on laptops and Secure Boot related stuff. Playing exclusively on Linux since 2013, and played on Wine as far back as 2008 (Diablo 2, Lineage 2...). A troubleshooter who used to work with strace and now works with Kubernetes...

Purple Library Guy Sep 2, 2023
I still say they should have called it "Optimus-Prime"
Pikolo Sep 2, 2023
That's unfortunate. Have you talked to the people behind asus-linux? They're not ASUS, but they might know what's behind the odd DisplayPort behaviour on ASUS laptops


Last edited by Pikolo on 2 September 2023 at 6:27 pm UTC
Luke_Nukem Sep 2, 2023
I maintain the alternative supergfxctl, which leans towards ASUS but can be used on other laptops: https://gitlab.com/asus-linux/supergfxctl/


Last edited by Luke_Nukem on 2 September 2023 at 7:09 pm UTC
setzer22 Sep 2, 2023
Yes, I know, I could also shut up but... You know which GPU brand doesn't require jumping through hoops and just works on Linux?
nwildner Sep 2, 2023
Quoting: Purple Library Guy: I still say they should have called it "Optimus-Prime"

I think that this would cause major confusion with the already established Prime technology...

https://wiki.archlinux.org/title/PRIME

Quoting: Pikolo: That's unfortunate. Have you talked to the people behind asus-linux? They're not ASUS, but they might know what's behind the odd DisplayPort behaviour on ASUS laptops

I kinda solved that by buying a DP-to-USB-C cable and using it on the Thunderbolt port, which, funnily enough, is not advertised as DisplayPort compatible, unlike the other USB-C port on this laptop.

But it is weird how plugging into the port that is supposed to be dedicated to USB and DP connections instantly limits the refresh rate of the HDMI output. Maybe bus sharing or some other engineering trick I'm not aware of.

Quoting: Luke_Nukem: I maintain the alternative supergfxctl, which leans towards ASUS but can be used on other laptops: https://gitlab.com/asus-linux/supergfxctl/

I can see that supergfxctl works with Hybrid, Integrated, Vfio and AsusMuxDgpu. Is AsusMuxDgpu the equivalent of setting full NVIDIA in the BIOS, since it manipulates some ACPI data?

I had a black screen problem in the past with this TUF model when setting the GPU mode to "Ultimate" using the ROG control center, and had to do some touch typing to get the system back to normal...

That is why I was holding back on using supergfxctl.

Also, picking different modes for battery and AC is pretty convenient in optimus-manager. Is this possible with supergfxctl? (Full NVIDIA when booting on AC, hybrid when booting on battery.)

Quoting: setzer22: Yes, I know, I could also shut up but... You know which GPU brand doesn't require jumping through hoops and just works on Linux?

Are you suuuuuuuuure?

The experience is quite the same when you have an Intel processor and an AMD GPU on a laptop.

https://bbs.archlinux.org/viewtopic.php?id=279808

I don't know how it works when you have an AMD processor, whether it comes with a "built-in Vega" when paired with an AMD GPU as well...
edo Sep 3, 2023
I haven't used my NVIDIA/Intel laptop since I bought a Steam Deck
sarmad Sep 3, 2023
I have always said it: hybrid graphics is a bad design to begin with and should never have been adopted by anyone. It complicates the hardware, complicates the software, and it also wastes space on the chipset. Complexity results in bugs, and anyone who has used hybrid graphics knows how clunky the concept is.
A fraction of the effort spent by all parties on getting hybrid graphics to work could instead have been used to enable dGPUs to scale down their performance when it's not needed. We could've had dGPUs that scale their power consumption down to a point close to that of iGPUs when no game is running (lowering the frequency even more, powering down some of the cores, etc.).
Grogan Sep 3, 2023
Quoting: setzer22: Yes, I know, I could also shut up but... You know which GPU brand doesn't require jumping through hoops and just works on Linux?

I don't shut up about things I dislike either, but in this case the hoops are the hybrid GPU switching, not so much a problem with a vendor. That's going to be a pain in the ass either way.

Here's an example of the hoops a person would jump through with AMD switchable graphics:
https://wiki.archlinux.org/title/PRIME
nwildner Sep 3, 2023
Quoting: sarmad: I have always said it: hybrid graphics is a bad design to begin with and should never have been adopted by anyone. It complicates the hardware, complicates the software, and it also wastes space on the chipset. Complexity results in bugs, and anyone who has used hybrid graphics knows how clunky the concept is.
A fraction of the effort spent by all parties on getting hybrid graphics to work could instead have been used to enable dGPUs to scale down their performance when it's not needed. We could've had dGPUs that scale their power consumption down to a point close to that of iGPUs when no game is running (lowering the frequency even more, powering down some of the cores, etc.).

Agreed on laptops going full dGPU here and ditching the iGPU.

When I was using a muxless laptop 3 years ago, I did some simple tests configuring Reverse PRIME to start all of X entirely on the dedicated GPU, and I was astonished by the results. Setting the dGPU as the primary provider from the ground up made my system perform better and drain less battery than using the iGPU, or the dGPU through "prime-run", even without a mux switch in place.
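
For context, the usual way to wire this up, per the Arch wiki's PRIME page, is an xorg.conf that makes the NVIDIA card the primary device, plus two xrandr calls at session start. A sketch; provider names vary by driver, so check xrandr --listproviders first:

xrandr --listproviders                                  # confirm the provider names
xrandr --setprovideroutputsource modesetting NVIDIA-0   # let the iGPU's outputs display what the dGPU renders
xrandr --auto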

My current laptop does not have a BIOS option to run the dGPU only, but I know Lenovo Legion laptops allow it.

The part I disagree with is the claim that dGPUs don't scale down their power on laptops; that is untrue. Here are some tests with optimus-manager set to "Nvidia" (power cord plugged in):

1. With Firefox (10 tabs), Telegram Desktop and Steam running:

[nwildner@sandworm ~]$ nvidia-smi 
Sun Sep  3 23:26:29 2023       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3070 ...    Off | 00000000:01:00.0  On |                  N/A |
| N/A   40C    P8              14W /  80W |    674MiB /  8192MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      2721      G   /usr/lib/Xorg                               328MiB |
|    0   N/A  N/A      5175      G   /usr/lib/firefox/firefox                    308MiB |
|    0   N/A  N/A      5183      G   telegram-desktop                              3MiB |
|    0   N/A  N/A      6474      G   ...local/share/Steam/ubuntu12_32/steam        3MiB |
|    0   N/A  N/A      6499      G   ...re/Steam/ubuntu12_64/steamwebhelper        7MiB |
+---------------------------------------------------------------------------------------+


2. With only Xorg running and a text editor open (the buffer for this reply):

[nwildner@sandworm ~]$ nvidia-smi 
Sun Sep  3 23:28:55 2023       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3070 ...    Off | 00000000:01:00.0  On |                  N/A |
| N/A   39C    P8               8W /  80W |    166MiB /  8192MiB |     18%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      2721      G   /usr/lib/Xorg                               160MiB |
+---------------------------------------------------------------------------------------+


nvidia-powerd needs to be enabled for power scaling to work.
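
On systemd distributions that should just be a matter of enabling the unit that ships with the NVIDIA driver package (assuming your package includes it; mine does on Arch):

sudo systemctl enable --now nvidia-powerd.service
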
rustigsmed Sep 3, 2023
Yes, this is the reason I decided to put Pop!_OS on my newish laptop (MSI Stealth 14", 13700H / RTX 4060): it's so easy to switch between iGPU, hybrid and dGPU.