• 2 Posts
  • 123 Comments
Joined 9 months ago
Cake day: March 2nd, 2024

  • I’d like to point out that the dialect-language-family distinction is really a continuum. As dialects drift apart from each other, there is no point where God comes in and declares a dialect has graduated into its own language. Mutual intelligibility simply decreases continuously.

    For instance, Portuguese and Spanish are widely considered to be different languages, although they are partially mutually intelligible, particularly in written form. Cantonese and Mandarin are less so, but still a bit. My uncle-in-law speaks Canto but can still understand my Mandarin (however, he can’t respond). I won’t deny that there is a political reason to want to refer to the Chinese/中文 languages as a single “language,” but the classification is honestly quite arbitrary. My understanding is that linguists generally place the category of “Chinese” somewhere between “language” and “family.”

    Is Scots a different language from English? I don’t think I could understand someone speaking Scots without intense concentration. (However, it’s still considered a “linguistic variety” descended from Middle English.)

  • LLMs are basically just good pattern matchers. But just as A* search can find a better path than a human by breaking the problem down into simple steps, an LLM can make progress on an unsolved problem if it’s used properly and combined with a formal reasoning engine.

    I’m going to be real with you: the big insight behind almost all new mathematical ideas is based on the math that came before. Nothing is truly original the way AI detractors seem to believe.

    By “does some reasoning steps,” OpenAI presumably just means invoking the LLM iteratively so that it can review its own output before producing a final answer. It’s not a new idea.
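The A* analogy above can be made concrete. Here is a minimal, self-contained sketch (mine, not from the comment) of A* finding a shortest path on a toy grid by expanding one simple step at a time:

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on a 0/1 grid (1 = wall), Manhattan-distance heuristic."""
    def h(p):  # admissible heuristic: never overestimates remaining cost
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    best_g = {start: 0}                          # cheapest known cost to each node
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, node, path)
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable
```

For example, on a 3×3 grid with a wall across the middle row, `a_star(grid, (0, 0), (2, 0))` routes around the wall in six moves — each individual expansion is trivial, but the search still beats eyeballing it on larger grids.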


  • I do agree that grad students don’t exactly live in luxury, and frequently develop mental health crises. But their contributions and insight are what power their labs. Profs often have to spend so much time teaching and chasing grants that they can’t do much real research. Academia overall is in a sad state.

    But Tao is a superstar, and a charismatic blogger. I’d be disappointed to learn he mistreats his grad students. (I don’t know if he even has any tbh)

  • jsomae@lemmy.ml (OP) to Privacy@lemmy.ml, “I'm losing faith” · edited 4 months ago

    Where did you get the idea that GPT-4 is capable of this? These are concerns for 10+ years from now, assuming AI makes the same strides it has in the past 10 years, which is not guaranteed at all.

    I think there are probably 3-5 big leaps still required, on the order of the invention of transformer models, deep learning, etc., before we have superintelligence.

    Btw, humans are also bad at arithmetic. That’s why we have calculators. If you don’t know that LLMs can be combined with tools like RAG, LangChain, and so on, you don’t understand the scope of the problem. Superintelligence doesn’t need access to anything in particular to destroy the world except, say, email or chat.
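To illustrate what retrieval augmentation buys, here is a toy sketch of the RAG pattern the comment mentions. Keyword overlap stands in for the embedding similarity search a real pipeline would use, and `model` is a hypothetical completion function, not a real API:

```python
def retrieve(query, documents, k=2):
    """Rank documents by crude keyword overlap with the query (a toy
    stand-in for the vector similarity search a real RAG system uses)."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_answer(model, query, documents):
    """Fetch relevant context first, then let the model answer with it in view."""
    context = "\n".join(retrieve(query, documents))
    return model(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
```

The point is the shape of the loop: the model isn’t asked to recall facts (or do arithmetic) unaided; relevant material is fetched and placed in front of it, the same way we hand a human a calculator.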