

I will literally starve to death living in the wilderness before someone puts a Neuralink in me. If you forced me to choose, I would choose starvation.
Hello
Even as a white, male, US-born citizen, I don’t see myself flying back any time soon either. I’m nobody, but I’ve run my mouth against fascists online enough that I’ve probably triggered some flag in the system.
So you’re describing a reasoning model, which is 1) still based on statistical token sequences and 2) trained on another tool (logic and discourse) that it uses to arrive at the truth. It’s a very fallible process. I can’t even begin to count the number of times that a reasoning model has given me a completely false conclusion. Research shows that even the most advanced LLMs give incorrect answers as much as 40% of the time, IIRC. Which reminds me of a really common way that humans arrive at truth, which LLMs aren’t capable of:
Fuck around and find out. Also known as the scientific method.
What you’re describing is not an LLM, it’s tools that an LLM is programmed to use.
This doesn’t sound like a nonprofit.
You don’t understand what an LLM is, or how it works. They do not think, they are not intelligent, they do not evaluate truth. It doesn’t matter how smart you think you are. In fact, thinking you’re so smart that you can get an LLM to tell you the truth is downright dangerous naïveté.
I’m not saying humans are infallible at recognizing truth either. That’s why so many of us fall for the untruths that AI tells us. But we have access to many tools that help us evaluate truth. AI is emphatically NOT the right tool for that job. Period.
That’s very interesting. I’ve been trying to use ChatGPT to turn my photos into illustrations. I’ve been noticing that it tends to echo elements from past photos in new chats. It sometimes leads to interesting results, but it’s definitely not the intended outcome.
I’m not saying these prompts won’t help, they probably will. But the notion that ChatGPT has any concept of “truth” is misleading. ChatGPT is a statistical language machine. It cannot evaluate truth. Period.
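To make “statistical language machine” concrete, here’s a toy sketch in Python. A hand-made bigram table stands in for a trained model’s learned probabilities (everything here is illustrative, not ChatGPT’s actual internals). The point is that the sampler emits whatever continuation is statistically likely, and truth is never consulted:

```python
import random

# Assumption: a tiny hand-made probability table standing in for the
# distribution a real model learns from its training data.
bigram_probs = {
    "the sky is": {"blue": 0.7, "falling": 0.2, "green": 0.1},
}

def next_token(context: str, rng: random.Random) -> str:
    """Sample the next token from the distribution for `context`."""
    dist = bigram_probs[context]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [next_token("the sky is", rng) for _ in range(10)]
print(samples)  # tokens drawn in proportion to the weights; no truth check anywhere
```

Nothing in that loop asks whether the sky is actually blue. Scale the table up to billions of parameters and you get fluent text, but the mechanism is still the same: pick the likely token, not the true one.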
Pretty sure I’m not gonna hire you to do any professional work for me.
My first reaction was “who gives a fuck?” Then I got to the part of the article that says:
His website, which also features the purple dragon and a bunch of busted links in the footer, says that the firm “integrates AI to lower the cost of legal services.”
Which is honestly a thousand times more concerning than how he chooses to display his silly logo. Dude is writing legal documents with AI. At least his lack of professionalism is obvious.
Anyone who understands that it’s a statistical language algorithm will understand that it’s not an honesty machine, nor intelligent. So yes, it’s relevant.
If I could get a laptop with a screen like this, I could finally sit outside in a park and code like nature intended.
Strange headline to say that credit card payments as age gates are trending again. Reddit has nothing to do with it.
I’m sure that the “consent” is part of the terms and conditions when you sign up for a line on a family plan. Not that it’s genuinely informed consent, or that people know what they agreed to, but technically…
Out of curiosity, what is illegal about it, exactly?
Has anyone tried a pencil and paper?
The two are more connected than you may think at first.
Musk is out to delete all laws that don’t benefit him, and replace them with harsh private rules that are not accountable to the people.
You think that’s exclusive to the UK? I live in rural Asia and police patrol the countryside with drones.