‘It works fine now, but what about after years of this very recent development?’ is absolutely imagined.
You wanna argue for it? Argue. Don’t posture.
It’s a whole new kind of software.
A pile of examples can become a working program. Neural networks are universal approximators, and anyone with a video card can now make them. The work they do feels like hard science fiction written by comedians.
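That "pile of examples" claim can be made concrete. Below is a toy sketch, in plain numpy, of a one-hidden-layer network learning sin(x) purely from sampled examples; every name and hyperparameter here is made up for illustration, not taken from any real product or training recipe.

```python
import numpy as np

# Toy universal-approximation demo: learn sin(x) from examples alone.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))   # the "pile of examples"
y = np.sin(X)

H = 32                                  # hidden units (arbitrary choice)
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    return h, h @ W2 + b2               # prediction

for step in range(2000):
    h, pred = forward(X)
    err = pred - y                      # gradient of squared error w.r.t. pred
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)      # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))   # small after training
```

Nothing here was told what sine is; gradient descent over examples turns the random weights into a working approximation of it.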
For some reason we’ve only seen two models taken seriously: spicy autocomplete and a denoiser. One is a chatbot that’s just smart enough to get in trouble. The other is CGI for dummies that could make movies as cheap as pen and paper.
The problem in full is the world’s most obvious bubble forcing these technologies on people. On everybody. The folks who choose this for themselves don’t need worrying about. Where it doesn’t work out, they’ll pretend it never happened. Where it works, neat. Again: the problem is the force and the scale.
So yes, an artificial tornado beside your house is intolerable, but it’s obviously not a fundamental problem with the technology. Even an identical quantity of GPUs could simply be spread out, so many buildings merely hum.
And vegan local models will arise, made from only bespoke licensed data, trained by distributed amateurs. But the big boys shove fancier models into your hands so often that it’d be archaic before it begins… and most people loudly complaining would just keep complaining.
The identitarian performance has to stop. Even folks mumbling ‘it’s awful, you should never,’ usually end with ‘but anyway here’s how I use it.’ The tech is fine. It doesn’t belong in your browser. It doesn’t belong on your keyboard. It doesn’t belong in your goddamn e-mail, before you’ve even read it. But curmudgeons and iconoclasts alike have found utility in this Yes Man improv partner who kinda knows C++. And animators will get real quiet when some product magically in-betweens their drawings.
Sam Altman is a fraud. Facebook can burn. CUDA must become open-source after Nvidia craters. But five years from now, this wave of AI will still be so commonplace that it’s boring. We will take for granted that computers perform dubious witchcraft.
Yet it’s old enough to declare so worthless that any inclusion damns the whole project.
They know people spit slop slop slop slop like a thirsty dog. Every public defense is protesting too much, every quiet effort is consciousness of guilt. The nature of bad faith is that there is no right answer.
We each need private vigilance against participating in public harassment campaigns. Is there any reason these people’s behavior changed, or that they were keeping things quiet, besides the fear of dealing with you?
> entire product loudly denigrated because of new tool used
Yeah can’t imagine why they’d remove the ‘come have an argument at me’ label.
I want the bubble to burst so this moral panic will end. Programs can code, now. That’s not going away. Make your peace. We can either leverage this new ability to describe code into existence, and improve all the ways where it demonstrably works okay - or we can pretend that wasn’t the goal of compilers and high-level languages the whole time.
Oh but this new thing is different; yeah it’s always different, that’s what new means. Neural networks sounded great for decades but had a hard time existing. We finally accepted the bitter lesson that power scales better than cleverness - and hey presto, ‘what’s the next symbol?’ is as smart as a junior developer.
Even if you think these fumbling efforts are the best this tool will ever be, we can still extract useful work from it. It’s already a punchline in videos that build some crazy thing the hard way, then have an LLM effortlessly switch languages for speed. Or fight integration hell on their behalf. We’re not doing anyone favors by pretending the problem is the tech. Or by harassing people who work for free on things you like.


And it’s not like servers have gotten harder to run! Pirates serve terabytes of data that’s straight-up illegal! Your fuckin’ commercial connection should be plenty for any damn thing you want.


I don’t give a shit what children see.
They’ll live.
Stop spying on adults.


Is there a point explaining what the N in NP-Complete means, when you’re just gonna ignore two-thirds of a much simpler comment?
If you demand determinism, it’s just matrix algebra. Randomness is optional. It makes them work better. They run on your normal-ass computer, a deterministic Turing machine.
I categorically do not claim determinism is necessary for consciousness or intelligence. I ask you, again: are you deterministic?
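The "randomness is optional" point is easy to demonstrate. Here is a toy sketch of next-token selection — made-up logits, not any real model's code: greedy argmax is pure deterministic matrix algebra, and temperature sampling is an optional layer that is itself only pseudorandom (seeded, hence reproducible on that deterministic Turing machine).

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.5, -1.0])   # invented scores for 4 tokens

def greedy(logits):
    # Deterministic decoding: identical logits always pick the same token.
    return int(np.argmax(logits))

def sample(logits, temperature, rng):
    # Softmax with temperature, then draw: randomness is added by choice.
    p = np.exp(logits / temperature)
    p /= p.sum()
    return int(rng.choice(len(logits), p=p))

print(greedy(logits))                       # always 0

# Even the "random" path is reproducible given the same seed:
s1 = [sample(logits, 1.0, np.random.default_rng(7)) for _ in range(5)]
s2 = [sample(logits, 1.0, np.random.default_rng(7)) for _ in range(5)]
print(s1 == s2)                             # True
```

Set temperature low or drop the sampler entirely and the whole pipeline is reproducible end to end.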


Slide rule.


Argumentum ad webster is shite philosophy. Only an explanation of consciousness in terms of unconscious events could explain consciousness.
LLMs could obviously be deterministic - they add randomness because it’s useful. Matrix algebra is not intrinsically stochastic.
What other intelligent entity can you name, that’s purely deterministic? Why is that a precondition? Why is it even relevant?


Okay. So what’s the difference between a model of thinking and literally doing it?
You can say it’s different from how people do it. But a calculator doesn’t multiply the way students do. In mathematics and Turing machines, any process that gets the right answer is the same.


Right, because nothing important in life is ambiguous or approximate.


Does that razor let you say anything at all about intelligence or consciousness, given that neither has a rigid, formal, or universal definition?
If the metric is ‘see, it does the thing,’ then a model which demonstrates thought would not be pretending to think.


Fuck no. It is only because of the Turing test that we can say they’re not conscious. You get someone questioning a bot and a person at the same time, they’re gonna figure out who’s who in short order. See: how many Rs in strawberry, name states without an E, should I walk to the car wash.
If a program was indistinguishable from a person, what basis would we have to say the person is intelligent but the program is not?


Any woman can make a whole new consciousness all by herself, with just a little help from a friend.


… and this wasn’t made by accident, it was deliberately engineered to develop emergent behavior. Quite a lot of money has been spent hiring a variety of experts to make it do this thing.
Hasn’t worked. Almost certainly will never work, with this particular kind of network. But we would not have known that, just by looking at diagrams and going ‘naaahhh.’


Does a calculator simulate math?


Careful down that road. Thought is a process, and we don’t understand it well enough to explain it. So we cannot confidently declare it couldn’t happen by tumbling text through layers of fake neurons.
LLMs definitely aren’t conscious, because they’re dumb as hell. But we had to check. When GPT-2 was novel and closely guarded, we had no idea how well backpropagation could abstract all text ever published - and pessimists were mostly pushing Chinese Room nonsense. We have to bully that denialist thought experiment off the internet. It starts from a demonstrably intelligent subject - as real to you as I am now - then interrogates some unrelated interchangeable hardware. As if the conversations with your short-range pen-pal were not real unless the guy in the box knows why he’s blindly following instructions. It’s p-zombie dualism, except instead of a soul, you need Steve to pay attention.
Only an explanation in terms of unconscious events could explain consciousness.


I miss Everything from Windows.
It’s a file search tool from voidtools, which has instant as-you-type results. It is never out of date because it tracks filesystem events. FSearch simply does not compare.
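The reason event tracking beats rescanning is simple to model. Below is a toy in-memory version of the idea — the `FileIndex` class and paths are invented for illustration; the real Everything works at the filesystem level (on NTFS it reads the change journal, as I understand it), but the principle is the same: apply create/delete events as they happen and the index is never stale.

```python
class FileIndex:
    """Toy event-driven file index: never rescans, never goes stale."""

    def __init__(self, paths):
        self.paths = set(paths)          # one initial scan

    def on_created(self, path):
        self.paths.add(path)             # filesystem event keeps index fresh

    def on_deleted(self, path):
        self.paths.discard(path)

    def search(self, term):
        # Instant as-you-type: substring match over the in-memory index.
        term = term.lower()
        return sorted(p for p in self.paths if term in p.lower())

idx = FileIndex([r"C:\docs\notes.txt", r"C:\src\main.c"])
idx.on_created(r"C:\docs\todo.md")
idx.on_deleted(r"C:\src\main.c")
print(idx.search("docs"))   # ['C:\\docs\\notes.txt', 'C:\\docs\\todo.md']
```

Every query is a lookup against current state, so results reflect the event that happened a millisecond ago, with no crawl in between.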