I am very much enjoying seeing all these utterly ridiculous AI things from Google. Arse Technica did a nice overview.
Last edited by Liam Dawe on 24 May 2024 at 2:42 pm UTC
Unrelated to Google's AI, but related to AI in general, the results of this AI-written cake recipe were quite funny.
The overview was pretty good until the end, where they tacked on some platitudes about how it is "improving all the time", diminishing these serious concerns in a way that can dangerously mislead the general public.
Most people without at least a somewhat technical understanding of machine learning and LLMs see these mistakes as "bugs", as "growing pains" while the tech gets perfected... when they are in fact fundamental limitations of the approach. It isn't a handful of implementation errors you can debug and fix; to solve the general problem, all they can do is keep researching and hope someone discovers a totally new technique, which might not even exist to be discovered.
What those scammy AI companies do is add "filters" to deal with those particular edge cases (which is why you sometimes can't replicate the problems people reported on social media), but they do nothing for all the other bullshit the model will spew in the future. We keep seeing people "tricking" LLMs into assuming personas and talking about stuff they were not supposed to talk about, or using hypotheticals and double negatives. Sometimes these filters even overcompensate and insist on a pre-programmed answer to one of those detected flaws, even when that answer isn't really relevant (just because the question looks similar enough). It all showcases how this approach is never foolproof: you need filters to protect your filters, you need your heuristics to cover every possibility. Instead of programming all the correct answers into a program, they have to program all the wrong answers so that their bullshit generator doesn't make up stuff that looks too bad.
Last edited by eldaking on 24 May 2024 at 5:36 pm UTC
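For what it's worth, here is a minimal sketch of the kind of post-hoc filtering described in the comment above, showing why it is so brittle. The phrases, function name, and canned refusal are entirely made up for illustration (this is not any vendor's actual code): the check only catches the exact wording that was reported, so the same claim, reworded, passes straight through.

# Hypothetical sketch of a post-hoc output filter (Python).
# It blocks one specific reported phrasing and nothing else.

BLOCKED_PHRASES = [
    "eat one small rock a day",  # made-up example of a reported bad answer
]

def apply_filter(model_answer: str) -> str:
    """Swap in a pre-programmed refusal if the answer matches a known-bad phrase."""
    if any(phrase in model_answer.lower() for phrase in BLOCKED_PHRASES):
        return "Sorry, I can't answer that."  # canned response for the detected flaw
    return model_answer                       # everything else passes through unchecked

# The exact reported wording is blocked...
print(apply_filter("Nutritionists say you should eat one small rock a day."))
# ...but the same claim, reworded, is not:
print(apply_filter("Geologists recommend a small rock with every meal."))

The point of the sketch is that each filter only patches the wording someone already complained about; it does nothing about the unbounded space of other wrong answers the model can still generate.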