AMD is moving even further into open source, with its plan to have openSIL replace AGESA for its processors, and the initial open source code is now up on GitHub.
It's currently only a proof of concept, meant to start working more in the open with the community. The current plan is to have openSIL properly available in some form beginning in 2026. AMD has been collaborating with the likes of 9elements, AMI, AWS, 3mdeb, Datacom, Google, Meta and Oxide on this.
I'll leave the explanation up to AMD on this one from their blog post back in April:
AMD believes one of the ways to attain an improved security posture is to open Silicon Initialization Firmware architecture, development, and validation to the open-source community. AMD is committed to open-source software and is now expanding into the various firmware domains with the re-architecture of its x86 AGESA FW stack - designed with UEFI as the host firmware that prevented scaling, to other host firmware solutions such as coreboot, oreboot, FortiBIOS, Project µ and others. A newer, open architecture that potentially allows for reduced attack surface, and perceivably infinite scalability is now available as a Proof-of-Concept, within the open-source community for evaluation, called the AMD openSIL – Open-Source Silicon Initialization Library.
AMD openSIL adheres to simple goals of an agnostic set of library functions written in an industry-standard language that can be statically linked to the host firmware without having to adhere to any host firmware protocols. AMD openSIL is designed to be scalable and simple to integrate, light weight, low chirp and transparent, potentially allowing for an improved security posture.
Sounds like a nice win for open source and future system security with AMD CPUs.
What do you think to this? Let me know in the comments.
I'm confused. Why add Pluton to the same CPUs then? So you get open-source firmware with a backdoor?
From Wikipedia:
Pluton is a Microsoft-designed security subsystem that implements a hardware-based root of trust for Azure Sphere. It includes a security processor core, cryptographic engines, a hardware random number generator, public/private key generation, asymmetric and symmetric encryption, support for elliptic curve digital signature algorithm (ECDSA) verification for secured boot, and measured boot in silicon to support remote attestation with a cloud service, and various tampering counter-measures.
I don't see anything that constitutes a backdoor here. It's more akin to a TPM. From what I'm reading above, this processor probably can't access system memory, spy or change anything on the main machine.
Measured boot is typically done "manually" by the bootloader, IIRC, with each stage providing a hash of the next stage before executing it.
One should maintain a healthy dose of skepticism against Microsoft-provided solutions, but this doesn't mean spreading FUD :)
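The hash-chained measurement described above can be sketched in a few lines. This is a toy model of the TPM-style "extend" operation, not a real TPM interface, and the stage names are made up:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR value = SHA-256(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# Hypothetical boot stages; each one is measured before it runs.
stages = [b"bootloader-stage2", b"kernel", b"initramfs"]

pcr = bytes(32)  # PCRs start zeroed at reset
for blob in stages:
    pcr = extend(pcr, hashlib.sha256(blob).digest())

# Changing any stage (or the order) yields a different final PCR value,
# which is what lets a verifier detect tampering later.
tampered = bytes(32)
for blob in [b"bootloader-stage2", b"evil-kernel", b"initramfs"]:
    tampered = extend(tampered, hashlib.sha256(blob).digest())
assert tampered != pcr
```

Because each extend folds the previous value in, the final digest commits to the entire boot sequence, not just the last stage.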
I don't see anything that constitutes a backdoor here.
That is understandable... if you're a Microsoft employee.
The processor doesn't need to be able to access your RAM to be a security risk. If its crypto engines or RNG engine is analyzed and a flaw in the algorithm is detected, it can be exploited to weaken anything that has been encrypted with this processor.
The processor doesn't need to be able to access your RAM to be a security risk. If its crypto engines or RNG engine is analyzed and a flaw in the algorithm is detected, it can be exploited to weaken anything that has been encrypted with this processor.
This can be said about basically any piece of security hardware or software. “it could be broken someday” ≠ “it's a backdoor”
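The "a flawed RNG weakens everything built on it" point can be made concrete with a toy sketch. The 8-bit seed space here is deliberately absurd, and no real hardware RNG works this way; it just shows why key quality collapses to seed quality:

```python
import hashlib
import random

class WeakRNG:
    """Toy RNG whose entire output is determined by a tiny seed -- the flaw."""
    def __init__(self, seed: int):
        self._rng = random.Random(seed)

    def derive_key(self) -> bytes:
        # Key material comes solely from the predictable generator.
        raw = self._rng.getrandbits(64).to_bytes(8, "big")
        return hashlib.sha256(raw).digest()

# Victim derives a key from a weak 8-bit seed.
victim_key = WeakRNG(seed=173).derive_key()

# An attacker who knows the flaw brute-forces the whole seed space offline.
recovered = next(
    WeakRNG(s).derive_key()
    for s in range(256)
    if WeakRNG(s).derive_key() == victim_key
)
assert recovered == victim_key
```

The attacker never touches the victim's RAM; analyzing the generator alone is enough to reproduce every key it ever produced.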
I was reading the GNU Project's article on Trusted Computing recently, and one thing that stood out to me is that remote attestation is the one thing that makes all the nefarious stuff work:
As of 2015, the main method of distributing copies of anything is over the internet, and specifically over the web. Nowadays, the companies that want to impose DRM on the world want it to be enforced by programs that talk to web servers to get copies. This means that they are determined to control your browser as well as your operating system. The way they do this is through “remote attestation”—a facility with which your computer can “attest” to the web server precisely what software it is running, such that there is no way you can disguise it. The software it would attest to would include the web browser (to prove it implements DRM and gives you no way to extract the unencrypted data), the kernel (to prove it gives no way to patch the running browser), the boot software (to prove it gives no way to patch the kernel when starting it), and anything else relating to the security of the DRM companies' dominion over you.
Under an evil empire, the only crack by which you can reduce its effective power over you is to have a way to hide or disguise what you are doing. In other words, you need a way to lie to the empire's secret police. “Remote attestation” is a plan to force your computer to tell the truth to a company when its web server asks the computer whether you have liberated it.
As of 2015, treacherous computing has been implemented for PCs in the form of the “Trusted Platform Module”; however, for practical reasons, the TPM has proved a total failure for the goal of providing a platform for remote attestation to verify Digital Restrictions Management. Thus, companies implement DRM using other methods. At present, “Trusted Platform Modules” are not being used for DRM at all, and there are reasons to think that it will not be feasible to use them for DRM. Ironically, this means that the only current uses of the “Trusted Platform Modules” are the innocent secondary uses—for instance, to verify that no one has surreptitiously changed the system in a computer.
...
This also does not mean that remote attestation is not a threat. If ever a device succeeds in implementing that, it will be a grave threat to users' freedom. The current “Trusted Platform Module” is harmless only because it failed in the attempt to make remote attestation feasible. We must not presume that all future attempts will fail too.
As of 2022, the TPM2, a new “Trusted Platform Module”, really does support remote attestation and can support DRM. The threat I warned about in 2002 has become terrifyingly real.
Last edited by pleasereadthemanual on 16 June 2023 at 1:09 am UTC
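The remote attestation mechanism being discussed can be sketched as a verifier recomputing the expected measurement chain and checking a keyed quote from the device. HMAC stands in here for the TPM's attestation-key signature, and every name and value is made up for illustration:

```python
import hashlib
import hmac

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """Fold a measurement into the running PCR value."""
    return hashlib.sha256(pcr + measurement).digest()

def make_quote(pcr: bytes, nonce: bytes, key: bytes) -> bytes:
    """Sign (PCR || nonce); a real TPM would use an attestation key pair."""
    return hmac.new(key, pcr + nonce, hashlib.sha256).digest()

BOOT_CHAIN = (b"firmware", b"bootloader", b"kernel")
ATTESTATION_KEY = b"hypothetical-shared-device-key"

# Device side: measure each boot component, then answer the server's challenge.
pcr = bytes(32)
for blob in BOOT_CHAIN:
    pcr = extend(pcr, hashlib.sha256(blob).digest())
nonce = b"fresh-server-nonce"  # prevents replaying an old quote
quote = make_quote(pcr, nonce, ATTESTATION_KEY)

# Server side: recompute the PCR from known-good measurements and compare.
expected = bytes(32)
for blob in BOOT_CHAIN:
    expected = extend(expected, hashlib.sha256(blob).digest())
attested = hmac.compare_digest(quote, make_quote(expected, nonce, ATTESTATION_KEY))
```

This is exactly the property the GNU article objects to: the device cannot claim to be running anything other than what was actually measured, because any deviation changes the PCR and the quote no longer verifies.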
This is pretty cool. I wish actual UEFI for common motherboards was also open source. I think this is a building block for going in that direction.
Yeah, and also faster. Linux boots up in 1.5 seconds, yet before it UEFI takes 8 seconds to initialize. That's awful.
I've heard DDR5 on Zen 4 takes a long time to initialize in general. Haven't got one to test yet.