Holy shit are you telling me…
Garbage In…
= Garbage Out?
No, that can’t be it. Better to keep throwing billions and billions of dollars at this instead of, I don’t know, housing the homeless.
You realize that those “billions of dollars” have actually resulted in a solution to this? “Model collapse” has been known about for a long time and further research figured out how to avoid it. Modern LLMs actually turn out better when they’re trained on well-crafted and well-curated synthetic data.
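The basic trick is boring: don’t train on raw model output, filter it first. Here’s a toy sketch of the idea (`generate` and `quality_score` are stand-ins I made up for a sampling step and a reward-model filter, not anyone’s actual pipeline):

```python
# Toy sketch of synthetic-data curation, not any lab's real pipeline.
import random

def generate() -> str:
    # Stand-in for sampling from an LLM; here just noisy toy strings.
    words = ["the", "cat", "sat", "on", "mat", "xyzzy", "asdf"]
    return " ".join(random.choice(words) for _ in range(8))

def quality_score(text: str) -> float:
    # Stand-in for a reward model; here: fraction of non-gibberish tokens.
    gibberish = {"xyzzy", "asdf"}
    toks = text.split()
    return sum(t not in gibberish for t in toks) / len(toks)

# The collapse-avoidance idea: raw generations never go straight back
# into training. Only samples that clear a quality bar get kept,
# ideally blended with fresh human-written data.
candidates = [generate() for _ in range(1000)]
curated = [t for t in candidates if quality_score(t) >= 0.9]
print(f"kept {len(curated)}/{len(candidates)} samples for the next round")
```

The photocopier analogy breaks down exactly here: nobody’s photocopying blindly, they’re throwing out the bad copies.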
Honestly, everyone seems to assume that machine learning researchers are simpletons who’ve never used a photocopier before.
This has been obvious for a while to those of us using GitHub Copilot for programming. Start a function, and then just keep hitting tab to let it autotype based on what it already wrote. It quickly devolves into strange and random bullshit. You gotta babysit it.
Same thing with Stable Diffusion if you’ve ever used a generated image as an input and repeated the same prompt. You basically get a deep-fried copy.
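If you want to see it for yourself, the loop is a few lines with Hugging Face diffusers (the model id, starting image, and settings here are just illustrative; any img2img pipeline drifts the same way):

```python
# Sketch of the "deep-fried copy" loop: feed each output back in
# as the next input with the same prompt.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a cat on a couch"
# "start.png" is a placeholder for whatever image you begin with.
image = Image.open("start.png").convert("RGB").resize((512, 512))

for step in range(10):
    # Artifacts the model likes get re-amplified on every pass.
    image = pipe(prompt=prompt, image=image, strength=0.6).images[0]
    image.save(f"gen_{step}.png")
```

By about the tenth pass it’s all over-saturated mush.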
No shit. People have known about the perils of feeding simulator output back in as input for eons. The variance collapses, so you get zero new insights, and estimation errors compound with every pass.
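The toy version of that collapse fits in a dozen lines: fit a Gaussian to samples drawn from your last fit, repeat, and watch the spread ratchet toward zero (the numbers are arbitrary; the trend is the point):

```python
# Toy simulator feedback loop: each "generation" trains only on
# samples from the previous generation's fitted model.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0  # the "real world", observed only at generation 0

for gen in range(30):
    data = rng.normal(mu, sigma, size=20)  # train on last model's output
    mu, sigma = data.mean(), data.std()    # refit, become the new source
    print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
# std drifts toward zero: every refit locks in sampling noise, the
# tails get lost first, and the distribution narrows until nothing
# new can come out of it.
```

Same mechanism, whether the model is a Gaussian or a trillion-parameter transformer.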
Can’t wait
So do humans, if I’m being honest. Look at the RNC.
Yep. It leads to a feedback loop: each generation just reinforces whatever the previous one spat out.
And with increasing amounts of the internet being polluted with AI text output…
To be fair this doesn’t sound much different than your average human using the internet.
… AI inbreeding.
hapsburgGPT
We call it the GRRM model.
In the USA, they call it the AlaLlama model.
GPTargaryen
You don’t say, Sherlock
No shit.
So it’s basically an AI prion disease?
No.
(Horshack voice:)
Oh! Oh! Oh! Mr. Kotter!
YOU MEAN FILTER-BUBBLES DO THE SAME THING TO BOTH HUMANS AND AIs??
How Very Incredibly Surprising™, Oh, My!
/s
Garbage in, garbage out
It’s an old expression, but it still checks out
Eventually an AI will be developed that can learn from much less data. After all, we don’t need to read the entire internet to get through our education. But that’s not going to be an LLM. No matter how much you tweak LLMs, they won’t get there. It’s like trying to tune a coal-fired, steam-powered car until it can compete in a Formula 1 race.
Yeah, it’s entirely plausible that LLMs are a small part of the answer, something like the language center of the brain, but the brain is a hell of a lot more complex than that. The language center isn’t your whole brain; it’s only loosely connected to actual decision-making, and it confabulates a lot.
OpenAI stumbled on something that worked and ran with it, and people started proclaiming it the answer to everything. The same thing happened with deep learning and every AI breakthrough before it. It’s all just another stepping stone on the way.