Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

  • 🐝bownage [they/he]@beehaw.org · 1 year ago

    By now, most of us have heard about the survey that asked AI researchers and developers to estimate the probability that advanced AI systems will cause “human extinction or similarly permanent and severe disempowerment of the human species”. Chillingly, the median response was that there was a 10% chance.

    How does one rationalize going to work and pushing out tools that carry such existential risks? Often, the reason given is that these systems also carry huge potential upsides – except that these upsides are, for the most part, hallucinatory.

    Ummm, how about the obvious answer: most AI researchers don't think they're the ones working on the tools that carry existential risks. Good luck overthrowing human governance with ChatGPT.

    • alexdoom@beehaw.org · 1 year ago

      Fossil fuels carry a much higher chance of causing human extinction, yet the news cycle is saturated with fears that a predictive language model is going to make calculators crave human flesh. Wtf is happening

      • exohuman@kbin.social (OP) · 1 year ago

        I agree that climate change should be our main concern. The real existential risk of AI is that it will leave millions of people out of work or underemployed, greatly swelling the already huge lower class. With that many people unable to take care of themselves and their families, conditions will be ripe for all the worst parts of humanity to take over, unless we make a major shift away from the current model of capitalism. AI would be the initial spark, but it will be human behavior that dooms (or elevates) humans as a result.

        The AI apocalypse won't look like Terminator; it will look like the collapse of an empire, and it will happen everywhere there isn't sufficient social and political change all at once.

        • alexdoom@beehaw.org · 1 year ago

          I don't disagree with you, but this is a big issue with technological advancements in general. Whether AI replaces workers or automated factories do, the effects are the same. We don't need to make a boogeyman of AI to drive policy changes that protect the majority of the population. I'm just frustrated with AI scares dominating the news cycle while completely missing the bigger picture.

          • cnnrduncan@beehaw.org · 1 year ago

            Yeah, green energy puts coal miners and oil drillers out of work (as the right likes to constantly remind us), but that doesn't make green energy evil or not worth pursuing; it just means we need stronger social programs. Same with AI, in my opinion: the potential benefits far outweigh the harm if we actually adequately support those whose jobs are replaced by new tech.

    • fsniper@kbin.social · 1 year ago

      I think the results are as "high" as 10 percent because the researchers do not want to downplay how "intelligent" their new technology is. But it's not that intelligent, as we and they all know. There is currently zero chance any "AI" could cause this kind of event.

      • aksdb@feddit.de · 1 year ago

        Not directly, no. But the tools we already have that can imitate voices and faces in video streams in real time can certainly be used by bad actors to manipulate elections, or worse. Things like that, especially if further refined, could figuratively pour oil onto already burning political fires.

      • Spzi@lemm.ee · 1 year ago

        the results are "high" as much as 10 percent because the researcher do not want to downplay how "intelligent" their new technology is. But it's not that intelligent as we and they all know it. There is currently 0 chance any "AI" can cause this kind of event.

        Yes, the current state is not that intelligent. But that's also not what the experts' estimate is about.

        The estimates and worries concern a potential future, if we keep improving AI, which we do.

        This is similar to being in the 1990s and saying climate change is of no concern because the current CO2 levels are no big deal. Yeah, right, but they won't stay at that level, and then they could very well become a threat.