• amemorablename@lemmygrad.ml · 9 days ago

> The reason I posted this is that it’s good to try and hold demoncorps like Google accountable even though it won’t likely make a dent.

Agreed. I have no love for Google or for how they and others like them are going about this. Personally, it’s a subject I spend a lot of time around, so I tend to use what opportunities I have to drop some basics about it, in case there are people around who think it’s more… magical than it is, for lack of a better word.

> So now you get bombastic claims about what LLMs will be able to do five years from now alongside disclaimers that it currently makes shit up so please double check the responses.

Lol yeah, that stuff is… something. AGI (Artificial General Intelligence) seems to be the go-to buzzword to fuel the hype machine, but as far as I can tell, actually achieving it is far beyond what an LLM is, at least within the current transformer architecture. One of the things I’ve picked up along the way is just how important the data that goes into training an LLM is. It’s something that makes intuitive sense when you think about it, but can get lost in the black box “AI so clever” hype: a model can’t know something it has never been presented with. To put it one way, if you trained an LLM on a story of binary good and a story of binary evil, it’s not necessarily going to extrapolate from that how to write a mundane story about shades of gray. It might instead combine the two flavors, creating a blend of the extremes. I can’t claim with confidence it’s exactly this straightforward in practice, but I’m trying to get at the general idea; there’s a crude sketch of it below.
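
To make that concrete with a deliberately crude toy (nothing like a real transformer, and the two “stories” are made up for illustration): a word-level bigram model trained only on a “good” story and an “evil” story can only ever stitch together word transitions it actually saw, so its samples come out as blends of the two extremes, never a third style neither corpus contains.

```python
# Toy illustration of "it can't produce what it never saw":
# a word-level bigram model trained on two tiny made-up stories.
import random
from collections import defaultdict

good_story = "the hero was kind and brave and saved the village".split()
evil_story = "the villain was cruel and ruthless and burned the village".split()

# Count word -> next-word transitions across both training stories.
transitions = defaultdict(list)
for story in (good_story, evil_story):
    for a, b in zip(story, story[1:]):
        transitions[a].append(b)

def sample(start="the", max_words=12, seed=0):
    """Generate text by only ever picking a next word seen in training."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words:
        options = transitions.get(out[-1])
        if not options:  # "village" ends both stories, so the chain stops
            break
        out.append(rng.choice(options))
    return " ".join(out)

for s in range(3):
    print(sample(seed=s))
# Outputs like "the villain was kind and burned the village":
# blends of the two extremes, assembled entirely from observed
# transitions. A shades-of-gray story that appears in neither
# corpus is simply not in the model's vocabulary of moves.
```

A real LLM is vastly more sophisticated than this, of course, but the underlying point scales: what comes out is bounded by the distribution of what went in.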