• bitfucker@programming.dev

      So does OSM data. Anyone can download the whole earth, but serving it and providing routing/path planning at scale takes a whole other set of skills and resources. It’s a good thing that they’re willing to open-source their model in the first place.

    • chiisana@lemmy.chiisana.net

      What are the resource requirements for the 405B model? I did some digging but couldn’t find any documentation in my cursory search.

      • modeler@lemmy.world

        Typically you need about 1GB of graphics RAM for each billion parameters (i.e. one byte per parameter, which corresponds to 8-bit weights). This is a 405B-parameter model, so that’s roughly 405GB. Ouch.

        Edit: you can try quantizing it. That reduces the memory required per parameter to 4 bits, 2 bits, or even 1 bit. As you shrink the weights, the model’s quality can suffer. So in the extreme case you might be able to run this in under 64GB of graphics RAM.
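
        A quick back-of-envelope sketch of that math (weights only; this ignores the KV cache and other runtime overhead, so treat these as lower bounds):

        ```python
        # Rough VRAM needed just to hold the weights of a 405B-parameter model
        # at various quantization levels (ignores KV cache and runtime overhead).

        PARAMS = 405e9  # Llama 3.1 405B

        for bits in (16, 8, 4, 2, 1):
            gigabytes = PARAMS * bits / 8 / 1e9  # parameters -> bytes -> GB
            print(f"{bits:>2}-bit weights: ~{gigabytes:,.0f} GB")
        ```

        At 16 bits that’s ~810GB, at 8 bits ~405GB, and at 1 bit ~51GB, which is where the “under 64GB” figure comes from.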

  • hperrin@lemmy.world

    Yo, this is big. Both in that it’s momentous, and holy shit, that’s a lot of parameters. How many GB is this model?? I’d be able to run it if I had a few extra $10k bills lying around to buy the required hardware.

  • abcdqfr@lemmy.world

    Wake me up when it works offline.

    > “The Llama 3.1 models are available for download through Meta’s own website and on Hugging Face. They both require providing contact information and agreeing to a license and an acceptable use policy, which means that Meta can technically legally pull the rug out from under your use of Llama 3.1 or its outputs at any time.”

    • just another dev@lemmy.my-box.dev

      WAKE UP!

      It works offline. When you use it with ollama, you don’t have to register or agree to anything.

      Once you’ve downloaded it, it will keep working; Meta can’t shut it down.
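
      As a minimal sketch (assuming you’ve already fetched the weights with `ollama pull llama3.1`, and using the ollama Python client):

      ```python
      # Chat with a locally hosted Llama 3.1 via ollama (pip install ollama).
      # Once `ollama pull llama3.1` has downloaded the weights, this runs
      # entirely against the local server; no registration, no calls to Meta.
      import ollama

      response = ollama.chat(
          model="llama3.1",  # 8B tag; "llama3.1:70b" and "llama3.1:405b" also exist
          messages=[{"role": "user", "content": "Say hello, fully offline."}],
      )
      print(response["message"]["content"])
      ```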

        • just another dev@lemmy.my-box.dev

          Oh, sure. The 405B model is absolutely infeasible to host yourself. But the smaller models (70B and 8B) can work.

          I was mostly replying to the part where they claimed Meta can take it away from you at any point, which is simply not true.