

The notion that AI is half-ready is a really apt observation, actually. It’s ready for select applications only, but it’s being advertised as if it were idiot-proof and ready for general use.
AI may well be a Gell-Mann amnesia simulator when used improperly.
In the situation outlined, it can be pretty effective.
yeah.
Hitler liked to paint; that doesn’t make painting wrong. The fact that big tech is pushing AI isn’t evidence against the utility of AI.
That common parlance calls machine learning “AI” these days doesn’t matter to me in the slightest. Do you have a definition of “intelligence”? Do you object when pathfinding is called AI? Or STRIPS? Or bots in a video game? Dare I say it, the main difference between those AIs and LLMs is their generality, so why not just call it general AI at this point, tbh. This is a question of semantics, so it doesn’t bear on the deeper question: whether you call them AI or not, LLMs work the same way either way.
I’m impressed you can make strides with Rust with AI. I am in a similar boat, except I’ve found LLMs are terrible with Rust.
The problem is that they are not i.i.d., so this doesn’t fully work. It works a bit, which in my opinion is why chain-of-thought is effective (it gives the LLM a chance to posit a couple of answers first). However, we’re already looking at “agents,” so they’re probably already doing chain-of-thought.
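To put rough numbers on “works a bit”: if repeated attempts really were i.i.d. with per-attempt success rate p, the chance of at least one correct answer in n tries would be 1 − (1 − p)^n. A quick sketch, using the 30% figure from this thread (real LLM samples are correlated, so the actual gains are smaller than this):

```python
# Chance of at least one success in n attempts, assuming
# (unrealistically) i.i.d. attempts with success rate p.
def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(n, round(p_at_least_one(0.3, n), 3))
# 1 0.3
# 3 0.657
# 5 0.832
# 10 0.972
```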
obviously
It really depends on the context. Some domains require solving problems in NP, but it turns out that most of the instances that actually come up are not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, yet humans can do it. Often this means there’s a better algorithm that can exploit commonalities in the data. But a brute-force approach might just be to give the problem to an LLM and then verify its answer. Verifying solutions to NP problems is easy; that’s essentially the definition of NP.
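To make “verifying is easy” concrete: checking a proposed SAT assignment is linear in the size of the formula, even when finding one is hard. A minimal sketch (the formula and assignments are made-up toy examples):

```python
# Check a proposed assignment against a CNF formula.
# A clause is a list of literals: a positive int means the variable,
# a negative int means its negation.
def satisfies(cnf: list[list[int]], assignment: dict[int, bool]) -> bool:
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

cnf = [[1, -2], [2, 3]]  # (x1 OR NOT x2) AND (x2 OR x3)
print(satisfies(cnf, {1: True, 2: True, 3: False}))   # True
print(satisfies(cnf, {1: False, 2: True, 3: False}))  # False
```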
(This is speculation.)
semantics.
I think everyone in the universe is aware of how LLMs work by now; you don’t need to explain it to someone just because they think LLMs are more useful than you do.
IDK what you mean by glazing, but if by “glaze” you mean “understanding the potential threat of AI to society instead of hiding under a rock and pretending it’s as useless as a plastic radio,” then no, I won’t stop.
Are you just trolling, or do you seriously not understand how something that can do a task correctly with 30% reliability can be made useful if the result can be automatically verified?
Right, so this is really only useful in cases where it’s vastly easier to verify an answer than to produce one, or where a conventional program can verify the result of the AI’s output.
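A minimal sketch of that loop; generate here is a hypothetical stand-in for whatever wraps the model call (not a real API), and verify is the cheap, trusted check:

```python
import random

def propose_and_verify(generate, verify, max_attempts=10):
    """Retry an unreliable generator until an answer passes verification."""
    for attempt in range(1, max_attempts + 1):
        candidate = generate()
        if verify(candidate):
            return candidate, attempt
    raise RuntimeError("no verified answer within the attempt budget")

random.seed(0)  # seeded so the toy demo below is reproducible
# Toy stand-in for a "model" that is right about 30% of the time.
answer, tries = propose_and_verify(
    generate=lambda: random.random() < 0.3,
    verify=lambda candidate: candidate is True,
)
print(f"verified after {tries} attempt(s)")  # verified after 4 attempt(s)
```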
yes, that’s generally useless, and it shouldn’t be shoved down people’s throats. But 30% accuracy still has its uses, especially if the result can be programmatically verified.
I meant the latter, not “it can do 30% of tasks correctly 100% of the time.”
I’m not claiming that the use of AI is ethical. If you want to fight back you have to take it seriously though.
I’d just like to point out that, from the perspective of somebody who has watched AI develop for the past 10 years, completing 30% of automated tasks successfully is pretty good! Ten years ago they could not do this at all. Setting aside all the other issues with AI, I think we are all irritated with the AI hype people for claiming near-perfect reliability (Amazon’s new CEO actually said they would be able to achieve 100% accuracy this year, lmao). But being able to do 30% of tasks successfully is already useful.
Absolutely, this matches my experience. I think this is also the experience of most coders who willingly use AI. I feel bad for the people who are forced to use it by their companies, and for those who are laid off because C-levels think AI is capable of replacing an experienced coder.
AI as it exists today is only effective if used sparingly and cautiously by someone with domain knowledge who can identify the tasks (usually menial ones) that don’t need a human touch.
yeah, this is why I’m #fuck-ai to be honest.