AI is new, and AI developers are doing things right now that can only be done by keeping your sources secret, and even then only temporarily.
People keep demanding safe AI: AI that won't do certain unethical things.
They demand this because they want nobody to be able to do these things.
This is the digital equivalent of demanding from individual hammer manufacturers that nobody should be able to use hammers to smash windows. They might be able to design hammers that can't smash windows, but they'll never be able to keep people from making their own window-smashing hammers.
This is what proprietary AI is achieving.
By making certain they're the only ones who can build good AI, and by only serving its end products, they ensure that making their own product behave safely makes all AI behave safely, because their AI is all the AI there is.
This works under the assumption that nobody outside a limited set of sanctioned parties can make AI that's worth squat. Open source AI violates that assumption.
You know that even before ChatGPT came out there already existed AI to nudify pictures of women.
Is this unethical: yes.
Can OpenAI do anything about it: a little. They can make certain that they outcompete all the smaller AI models on everything except unethical behavior (including marketing), so that the unethical AI providers go out of business.
It's like keeping non-disabled athletes from competing in the Paralympics by training disabled athletes beyond the level of non-disabled ones.
Does this work forever?
No.
What allows them to do this?
These advantages are all deteriorating.
Also, consumer devices now include NPUs.
In the end, open source AI will not only be competitive, but its development will become so easy that making criminal models based on it will be feasible, and all we can do to fight that is armor up and make it so that the criminal models can't affect us.
I imagine a market for AI-fooling clothing springing up.
There already is a market for AI generated content detection.
Last edited by LoudTechie on 25 May 2024 at 2:32 pm UTC
This is more about governments being unwilling to do anything about it. Even then, it's also corporate greed. Why work harder when you make bank and it's enough to make an about-face when your bot turns into Hitler? (Or Tesla self-driving.)
Predictable AI is a sane expectation, one I guess many scientists are working toward, and also one that governments could help along with sane regulation. But you know, the poster child of the problems around this economic model and lack of regulation is the richest man in America. (Self-driving and the Mars landing have been only a year away, every year, for like a decade.)
What do you expect governments to do about it?
They can make AI that can do unethical things illegal (EU AI Act), but they would still need to enforce it.
The internet allows people to hide behind the most permissive government in the world, and even combine several of them.
Simple: regulate it. Companies are afraid of being fined. If regulation exists, they will comply with it.
And I don't get your fear of individuals... The law will go after criminals. We all have knives and cars, and nothing stops anyone from killing with them.
Companies will comply, but what about criminals, criminal organizations and hostile governments (no matter where you live, there is always a government that doesn't like yours)? Right now most don't have the resources to train AI that's worth squat, but what about when they do?
My fear of individuals is because that's what happened with piracy, DDoS, ransomware and criminal marketplaces.
The technology becomes (or already is) so easy that a single bored, opinionated/ethically challenged specialist can make it available to everybody, part-time or even voluntarily, behind a bunch of privacy measures, and since it's software on the internet it can be used to affect anybody.
I can hope it ends up like ransomware development and DDoS, where it gets pushed back into dark web marketplaces and falls into the hands of only a few gangs.
What happened with those?