book bad
- 0 Posts
- 32 Comments
Tetragrade@leminal.space to Technology@lemmy.world • Sony AI patent will see PlayStation games play themselves when players are stuck (English)
9 · 1 month ago
Look, that ledge has yellow paint. I bet I can climb it.
Tetragrade@leminal.space to Programming@programming.dev • Building a React App with Formally Verified State (English)
4 · 1 month ago
So proud of Claude for writing this app.
Tetragrade@leminal.space to Programmer Humor@programming.dev • More code = more better (English)
2 · 2 months ago
You’re absolutely right! I used more than 10 words in my prompt. Cry about it.
Tetragrade@leminal.space to Programmer Humor@programming.dev • More code = more better (English)
8 · 2 months ago
This isn’t just a function, it’s a bold restatement of what it means to write code — a symphony of characters, questioning the very nature of the cutting edge language models that I want to beat with hammers.
Tetragrade@leminal.space to Programmer Humor@programming.dev • They're just like us! (English)
1 · 2 months ago
I mean, because it’s a risk that’s obvious even to me, and it’s not my job to think about it all day. I guess they could just be stupid. 🤷
Tetragrade@leminal.space to Programmer Humor@programming.dev • They're just like us! (English)
31 · 2 months ago
I’m not sure I understand what you’re saying. By “the commenter”
I was talking about you, but not /srs, that was an attempt @ satire. I’m dismissing the results by appealing to the fact that there’s a process.
negative reward
Reward is an AI maths term. It’s the value used to update the network’s weights, similar to “loss” or “error”, if you’ve heard those.
I don’t believe this makes sense either way because if the model was producing garbage tokens, it would be obvious and caught during training.
Yes this is also possible, it depends on minute details of the training set, which we don’t know.
Edit: As I understand it, these models are trained in multiple modes: one where they’re trying to predict text (supervised learning), but also others where they’re given a prompt and the response is sent to another system to be graded, e.g. for factual accuracy. A model could learn to identify which “training mode” it’s in and behave differently. Although I’m sure the ML guys have already thought of that & tried to prevent it.
it still does not make it sentient (or even close).
I agree, noted this in my comment. Just saying, this isn’t evidence either way.
Tetragrade@leminal.space to Programmer Humor@programming.dev • They're just like us! (English)
85 · 3 months ago
You cannot know this a priori. The commenter is clearly producing a stochastic average of the explanations that up the advantage for their material conditions.
For instance, many SoTA models are trained using reinforcement learning, so it’s plausible that it’s learned that spamming meaningless tokens can delay negative reward (this isn’t even particularly complex behaviour). There’s no observable difference in the response; without probing the weights we’re just yapping.
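A minimal sketch of the “delay negative reward” point, assuming a standard discounted-return setup (the discount factor here is made up): with gamma < 1, a penalty received later contributes less to the total return.

```javascript
// Discounted return: reward t steps in the future is weighted by gamma^t,
// so pushing a negative reward later shrinks its contribution.
function discountedReturn(rewards, gamma = 0.9) {
  return rewards.reduce((sum, r, t) => sum + r * Math.pow(gamma, t), 0);
}

const penaltyNow   = discountedReturn([-1]);        // -1
const penaltyLater = discountedReturn([0, 0, -1]);  // ≈ -0.81
console.log(penaltyLater > penaltyNow);             // true: later is "less bad"
```

Whether any deployed model actually exploits this is exactly the unobservable part; the sketch only shows the incentive exists in principle.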
Tetragrade@leminal.space to No Stupid Questions@lemmy.world • Why are people using the "þ" character? (English)
41 · 4 months ago
I þon’t know.
Tetragrade@leminal.space to Technology@lemmy.world • Nvidia and TSMC produce the first Blackwell wafer made in the U.S. — chips still need to be shipped back to Taiwan to complete the final product (English)
7 · 4 months ago
Dumbass shipping route, just tunnel through.
NO, YOU ARE WRONG!
Ewestile dysfallen.
Tetragrade@leminal.space to Fediverse@lemmy.world • Fediverse still going strong and stabilizing (English)
4 · 5 months ago
Wow, I wonder why.
Tetragrade@leminal.space to Programmer Humor@programming.dev • The JavaScript type coercion algorithm (English)
1 · 6 months ago
Yeah I mean it’s definitely possible to write a mostly sensible string-number equality function that only breaks in edge-cases, but at this point it’s all kinda vibes-based mush, and the real question is like… Why would you want to do that? What are you really trying to achieve?
The most likely case is that it’s a novice who doesn’t understand what they’re doing, and the Python setup you describe does a better job of setting up guardrails.
I don’t really see the connection to concatenation, that’s kind of its own thing.
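For illustration, one way such a “mostly sensible” string-number equality could look (the function name and its rules are made up; a sketch, not a recommendation): parse deliberately instead of relying on ==, and reject the inputs coercion silently accepts.

```javascript
// Explicit string/number equality: only numbers and non-empty numeric
// strings are comparable; everything else (booleans, null, "") is rejected,
// unlike JavaScript's == coercion.
function looseNumericEquals(a, b) {
  const toNum = (v) => {
    if (typeof v === "number") return v;
    if (typeof v === "string" && v.trim() !== "" && !isNaN(Number(v))) {
      return Number(v);
    }
    return NaN; // anything else: not comparable as a number
  };
  const na = toNum(a), nb = toNum(b);
  return !Number.isNaN(na) && !Number.isNaN(nb) && na === nb;
}

console.log(looseNumericEquals(100, "100"));  // true
console.log(looseNumericEquals(1, true));     // false, unlike 1 == true
console.log(looseNumericEquals(0, ""));       // false, unlike 0 == ""
```

It still inherits edge-cases from Number() parsing (whitespace, "Infinity", hex strings), which is the “vibes-based mush” point above.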
Tetragrade@leminal.space to Programmer Humor@programming.dev • The JavaScript type coercion algorithm (English)
2 · 6 months ago
Not quite. As the previous commenter said, every number has more than one string representation (e.g. 100 -> “100”, “1e2”). So there’s no sensible way to write a pure function handling that; you’re just cooked no matter what you do.
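A quick demo of the coercion point being argued here: several distinct strings coerce to the same number under ==, so equality can’t round-trip between the two types.

```javascript
// ToNumber() maps many different strings onto one number, so == with a
// number on one side collapses them all; string-to-string comparison
// does no coercion and keeps them distinct.
console.log(100 == "100");   // true
console.log(100 == "1e2");   // true
console.log(100 == "0x64");  // true
console.log("100" == "1e2"); // false: no coercion between two strings
```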
Tetragrade@leminal.space to No Stupid Questions@lemmy.world • Socially inept, introverted employees. How do you survive the workplace? Because I’m in dire need of some serious advice. (English)
111 · 6 months ago
neutral voice
Autism dead ringer. There’s no such thing as a neutral tone, because neurotypical people will experience an involuntary emotional reaction to any piece of speech.
Most likely you’re just speaking in an unusual way and it’s priming your coworkers to dislike you. Unfortunate situation but you gotta lock in if you want people to like you.
What you describe as a neutral tone, most would interpret as a strange but deliberate inflection that they don’t understand the intent of (confusing, scary, hostile).
Tetragrade@leminal.space to Technology@lemmy.world • Reddit in talks to embrace Sam Altman’s iris-scanning Orb to verify users (English)
31 · 8 months ago
Drink verification can to continue.
Tetragrade@leminal.space to Technology@lemmy.world • Amazon is reportedly training humanoid robots to deliver packages (English)
6 · 8 months ago
When the mask comes off, humans will revolt. Robots won’t.
Or, that’s the delusion.
Tetragrade@leminal.space to Programmer Humor@programming.dev • Python needs an actual default function (English)
5 · 9 months ago
Diabolical

le librul destroyed epic style