Quote:
Releasing new LLMs as open source is a very bad idea. There are plenty of idiots on this planet who will use AI destructively. This recent tweet captures the problem:
Meta wants to open-source a GPT-5-level model and seems dead-set on open-sourcing right up until AGI. I want to be clear about what this means:
There is no kill-switch. If something goes wrong--an agent gets out of control or a bad actor weaponizes it--there's no easy way to turn it off. It could be running on any small cluster. There will be no security.
Safety research becomes meaningless. All the work people have done on making AI systems honest, aligned, ethical, etc. becomes (mostly) moot. The population of AIs out in the world will evolve towards whichever systems produce the most economic output, irrespective of what values or motives they have. There will be no guardrails. Anyone can change their AI's values or capabilities as they want, for good or bad.
If Meta continues to open-source as we get much smarter AI, it's pretty clear to me that things will become a shitshow. The arrival of these alien intelligences in the world is already going to be chaotic, but much, much more so if we just fling away what little levers of human control we have.
As far as I can tell, Meta's wish to open-source stems mostly from some software industry dogma that "open-source good". And as far as I can tell, they weren't so in favor of open source until their first model, Llama, was accidentally leaked, after which they pretended they had been in favor of it the whole time.
https://twitter.com/mezaoptimizer/st...UeLrNLkBA&s=19