• 0 Posts
  • 360 Comments
Joined 2 years ago
Cake day: June 16th, 2023





  • To be fair, they made a lot of strides, to the point where config file wrangling went from mandatory to almost never needed.

    But yes, Nvidia had quirks that drove people back to wrangling config files, though they got better too.

    Though I’m not particularly interested in X11. The biggest thing it had going for it was trivial application forwarding, but the architecture didn’t scale well to modern resolutions and UI design that amounts to pushing bitmaps, and it didn’t handle higher-latency networks well either.


  • I’d say that the details that vary tend not to vary within a language and ecosystem, so a fairly dumb correlative relationship is generally enough to be fine. There’s no way to infer from logic alone that in language X you need to do mylist.join(string) but in language Y you need to do string.join(mylist), yet it’s super easy to recognize tokens that suggest those things and correlate them with the vocabulary that matches the context (there’s a small sketch of this after this comment).

    Rinse and repeat for things like: do I need to specify a type, and what’s the vocabulary for the best type for a numeric value; this variable that makes sense in context is missing a declaration; does this look like a genuinely new, distinct variable or just a typo of one that was already declared.

    But again, I’m mostly describing what can sort of work. My personal experience is that it’s wrong often enough to be annoying and to get in the way of more traditional completion behaviors that play it safe, even if those offer less help, particularly for languages like Python or JavaScript.
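
    As a small illustration of that per-language variance, here’s the same join operation sketched in Go, with the Python and JavaScript shapes noted alongside (just an example of the vocabulary shifting, not any particular completion engine’s behavior):

    ```go
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        parts := []string{"a", "b", "c"}

        // Go: a package-level function that takes the slice first.
        // Python would be ", ".join(parts) (method on the separator string);
        // JavaScript would be parts.join(", ") (method on the array).
        // Same operation, three different shapes to recall per language.
        fmt.Println(strings.Join(parts, ", ")) // a, b, c
    }
    ```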





  • GPTs which claim to use a stockfish API

    Then the actual chess isn’t the LLM. If you are going to Stockfish, the LLM doesn’t add anything; Stockfish is doing everything (a rough sketch of what that delegation looks like follows this comment).

    The whole point of the marketing rage is that LLMs can do all kinds of stuff, doubled down on by branding some approaches as “reasoning” models, which are roughly “similar to ‘pre-reasoning’ models, but forced to spend more tokens on disposable intermediate generation steps”. With this facet of LLM marketing, the promise would be that the LLM can “reason” its way through a chess game without particular enablement. In practice, people trying to feed gobs of chess data into an LLM end up with an LLM that doesn’t even comply with the rules of the game, let alone provide reasonably competitive responses to an opponent.
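
    For what it’s worth, here’s a rough sketch (in Go, with illustrative names, not any particular product’s API) of what that delegation amounts to: the engine gets the position over the standard UCI protocol and returns the move, so whatever front-end relays it, LLM or otherwise, contributes nothing to the play. It assumes a local stockfish binary on the PATH.

    ```go
    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
    )

    // bestMove asks a local Stockfish process for a move via plain UCI text commands.
    func bestMove(fen string) (string, error) {
        cmd := exec.Command("stockfish") // assumes stockfish is installed and on PATH
        stdin, err := cmd.StdinPipe()
        if err != nil {
            return "", err
        }
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            return "", err
        }
        if err := cmd.Start(); err != nil {
            return "", err
        }
        defer cmd.Process.Kill()

        // Standard UCI: set the position, search for one second.
        fmt.Fprintf(stdin, "position fen %s\ngo movetime 1000\n", fen)

        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            if line := scanner.Text(); strings.HasPrefix(line, "bestmove ") {
                return strings.Fields(line)[1], nil
            }
        }
        return "", fmt.Errorf("engine produced no bestmove")
    }

    func main() {
        move, err := bestMove("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("engine move:", move) // all of the chess happened in Stockfish
    }
    ```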





  • incorrect behavior that doesn’t even have the courtesy to throw an actual error.

    To be fair, this can be said of C. A C executable only really forces a crash when you royally screw up beyond the bounds of your memory. Otherwise functions just return a negative value, and calling code that never bothers to check just keeps on going.

    Golang is similar, slightly mitigated by the fact that if you are assigning any return value from a function, you must also explicitly receive the error, and you know full well that you are being lazy if you don’t handle it. Well, unless you use a panic/recover scheme, but the Golang community will skewer you alive for casually suggesting that, and third-party libraries certainly aren’t going to do it that way. There’s a small sketch of the contrast after this comment.
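
    A minimal Go sketch of that contrast: receiving the error is forced on you, handling it is not (the file names are just placeholders):

    ```go
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Assigning the value means you must receive the error too, so the
        // lazy path at least has to be written down explicitly.
        f, err := os.Open("missing.txt")
        if err != nil {
            fmt.Println("open failed:", err) // handled
        } else {
            f.Close()
        }

        // But nothing stops you from consciously discarding it. On failure this
        // just prints 0 and keeps going, much like unchecked C return codes.
        data, _ := os.ReadFile("also-missing.txt")
        fmt.Println("bytes read:", len(data))
    }
    ```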


  • Could I write a compiler in C that does this check on a piece of Rust code?

    Well yes, but that code has to be written in Rust. The human has to follow rules to give the compiler a chance to check things.

    C is so simplistic that if I can write a piece of functionality in C, I must understand its inner workings fully. Not just how to use the feature, but how the feature works under the hood.

    I don’t think that’s particularly more true of C than of Rust or even Golang. In C you are frequently making function calls anyway for the real fun stuff. If you ever compile a “simplistic” chunk of C code whose translation to assembly you think is obvious and then open up the assembly output, you are likely to be very surprised by what the compiler chose to do. I’ve seen professional C developers who never actually had a reason to fully understand how the stack works, since C abstracts that away and the implications of the stack don’t matter until you exceed some limitation.



  • Yes, as long as you were on the side that benefits from success, it was better to leave things “simple” and not challenge the incorrect stuff out loud. You aren’t going to “well actually…” the “expert” if it risks your job and/or the wrong stuff isn’t too important or too hard to work around when the rubber meets the road.

    Still, sitting in a room or otherwise being a party to a conversation where an executive is constantly being confidently incorrect and still praised as a smart expert likely making 7 figures is maddening.


  • While I have not reviewed a lot of Musk speak, let alone enough to credibly critique his commentary, I can speak to my own field and the “respected technical leaders” who interview with customers and the press, with broad acknowledgement that they really know their stuff…

    Most of the ones I’ve known can sound very confident and credible while saying completely incorrect stuff. No one tries to correct them, because them actually being correct doesn’t add value, and trying to fix it is more trouble than it’s worth much of the time. The people paying attention don’t know enough to recognize they are wrong… usually…

    On occasion my company throws one of these “geniuses” at a customer that actually knows what they are doing. Then I get to see our executive basically try to gaslight the audience when they challenge his competency. The sales people have to pull in the actual technical people at the last minute to try to repair our image after the customer has interacted with the executive…

    Now one would think that, after such an embarrassment, surely the company would learn to field the actual technical experts for technical questions… But no: for every smart customer that is turned off by that executive, there are 10 more clients that don’t know any better and respond far better to his baseless confidence than to actual competent discussion. Those 10 suckers will also get roped into more high-margin stuff, whereas the smart customer will be really good at getting the most cost-effective products, with low margin and skipping the pointless add-ons.



  • Unfortunately, the ecosystem around GitHub has evolved so that most folks centralize their testing and deployment code to run on GitHub infrastructure. Frankly, a perversion of the decentralized design of git.

    Fortunately for my team, it doesn’t matter, because our process requires stuff that can’t be done from GitHub infrastructure anyway, so we have kept the automated testing and deployment on premises even as GitHub remains the “canonical” place for the code to live.