I currently have a hodgepodge of solutions for my hosting needs. I play TTRPGs online, so I have two FoundryVTT servers hosted on a Pi. Then I have a second Pi hosting Home Assistant, and I also have a Synology device that is my NAS and hosts my Plex server.

I’m looking to build a home server with some leftover parts from a recent system upgrade that will be my one unified server, doing all of the above on the same machine: a NAS, a couple of Foundry instances, Home Assistant, and Plex/Jellyfin.

My initial research has me considering Unraid. I understand that it’s a paid option, and I’m okay with paying for convenience and a good product. I’m open to other suggestions from this community.

The real advice I’m hoping to get here is a kind of order of operations. Assume I have decided on the OS I want to use for my needs and my system is built. What would you say is the best way to go about migrating all these services over to the new server and making sure that they are all reachable over the web?

  • BearOfaTime@lemm.ee · 5 months ago

    Can Proxmox with some containers/VMs address your needs?

    It’s what I’m running for a media server (a VM), plus some containers for things like Pi-hole and Syncthing.

    • iAmTheTot@sh.itjust.works (OP) · 5 months ago

      I don’t know, that’s why I’m here for advice lol. I’ve never had to tackle “which OS?” before.

      • lemming741@lemmy.world · 5 months ago

        Proxmox was the answer for me. OpenMediaVault in a VM for NAS, LXC containers for things that need GPU access (Plex and Frigate). Hell, I even virtualized my router. One thing I probably should have done was set up a single Docker host and learn podman or something similar. I ended up with 8 or 9 VMs that run 8 or 9 dockers. It works great, but it’s more to manage.
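
        If it helps, giving an LXC container access to the host GPU usually comes down to a couple of lines in that container’s config on the Proxmox host. A rough sketch for a privileged container — the container ID (101) and device minor numbers are just examples, check ls -l /dev/dri on your host:

          # /etc/pve/lxc/101.conf (example ID) -- append to the container's config
          # let the container use the host's DRI devices (char major 226)
          lxc.cgroup2.devices.allow: c 226:0 rwm
          lxc.cgroup2.devices.allow: c 226:128 rwm
          # bind-mount /dev/dri from the host into the container
          lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

        Unprivileged containers also need the render/video group permissions mapped through, so treat this as a starting point rather than a recipe.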

        You’ll want two network cards/interfaces: one for the VMs and another for the host. Power usage is not great with old gaming parts; discrete graphics seem to add 40 watts no matter what. A 5600G, or an Intel chip with Quick Sync, will get the job done and save you a few bucks a month. I recently moved to a 7700X and transcode performance is great. Expect 100-150 watts 24/7, which costs me $10-15 a month. But I can compile ESPHome binaries in a few seconds 🤣

        • AbidanYre@lemmy.world · 5 months ago

          I ended up with 8 or 9 VMs that run 8 or 9 dockers. It works great, but it’s more to manage.

          It’s more overhead on the CPU, but it’s so easy.

          • lemming741@lemmy.world · 5 months ago

            It really was easy. And it works so well I didn’t have to learn the names of stuff haha

            For anyone following along, I meant Portainer to manage Dockers. Podman is a different container technology, it seems.

    • floridaman@lemmy.blahaj.zone · 5 months ago

      Proxmox sounds like it fits their use case: it’s a useful and tweakable solution, and because it’s based on KVM you can pass hardware through with IOMMU. Personally, I run Proxmox on my (admittedly not very good) home server with like 12 gigs of RAM and a processor from the early 2010s; it handles a few VMs just fine, with hardware passthrough to a TrueNAS VM. I do run a lot of my microservices (DNS mainly) on some cheap thin clients for redundancy; as I mentioned, they were cheap. Home Assistant OS is happy on Proxmox, as is Jellyfin with hardware acceleration.
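
      For anyone who hasn’t done passthrough before, the host-side prep on Proxmox is roughly this — a sketch for an Intel CPU booting with GRUB; AMD uses amd_iommu=on instead, and the VM ID and PCI address below are placeholders:

        # 1) enable IOMMU on the kernel command line: in /etc/default/grub set
        #    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
        update-grub
        # 2) load the VFIO modules at boot
        printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' >> /etc/modules
        # 3) reboot, then check that IOMMU is active
        dmesg | grep -e DMAR -e IOMMU
        # 4) hand a device (e.g. an HBA for the TrueNAS VM) to a guest,
        #    via the GUI (VM -> Hardware -> Add -> PCI Device) or the CLI:
        qm set 100 -hostpci0 0000:01:00.0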

    • Pika@sh.itjust.works · 5 months ago

      Seconding this. I took the plunge a month or two back myself, using Proxmox for my home lab. Fair warning: if you have never operated anything virtualized outside of VirtualBox or Docker, like me, you are in for an ice plunge, so if you do go this route, prepare for a shock. It is so nice once everything is up and running properly, though, and it’s really nice being able to delegate which resources each thing uses and how much. But getting used to the entire system is a very big jump, and it’s definitely going to be a “back up the existing drive, migrate data over to a new drive” style migration; it is not a fun project to attempt without a spare drive to use as a transfer drive.
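
      On the data-shuffling side, once the old and new drives are both mounted on the same machine, the copy itself can be as boring as an rsync pass per share (just a sketch; the mount points are made up):

        # copy a share to the new pool, preserving permissions, ownership,
        # hard links, ACLs and xattrs; re-run it before the final cutover
        # to pick up anything that changed in the meantime
        rsync -aHAX --info=progress2 /mnt/old-drive/media/ /mnt/new-pool/media/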

  • april@lemmy.world · 5 months ago

    TrueNAS is pretty good, and they have a Linux version (Scale) which will have better compatibility with your game servers.

  • unrushed233@lemmings.world · 5 months ago

    Unraid would be my first suggestion as well. But if you prefer something FOSS, check out TrueNAS Scale. (It is important that you go with TrueNAS Scale, not Core. TrueNAS Core is the continuation of the former FreeNAS, which is based on FreeBSD; since it’s not a Linux system, it doesn’t support Docker. TrueNAS Scale is based on Debian Linux and is much closer to Unraid: it has full support for KVM virtualization and Docker containers.)

    • iAmTheTot@sh.itjust.works (OP) · 5 months ago

      Scale was probably my number two so far, but I read a lot of good things about Unraid. I think I might try both and see which one I like working with more.

      • unrushed233@lemmings.world · 5 months ago

        Both are great. Unraid makes things really easy with their Community Apps feature. On the technical side, I prefer TrueNAS Scale because it’s based on Debian, whereas Unraid is based on Slackware Linux. TrueNAS Scale is fully FOSS, whereas big parts of Unraid are proprietary. But there are more guides and tutorials for Unraid, as it seems to be the more popular option. If you’re going to install Unraid, definitely check out Spaceinvader One on YouTube, he’s got some awesome videos on the topic.

  • AA5B@lemmy.world · 5 months ago

    Step 0) Decide if there’s anything you don’t want on a common server.

    I realized long ago that my projects sometimes stall out partway through. However, some things need to just work, regardless of where I am in a project. HA is a great example of something that manages itself (so there’s less advantage to running it in a VM) and that I want always available. So even if I decide to go down a route like yours, HA stays independent and stays available.

    • null@slrpnk.net · 5 months ago
      5 months ago

      Yup, same experience. I started out hosting everything on a single box, but have slowly moved things like HA and Pi-hole to their own machines, so they don’t all go down when that one box goes down.

  • fruitycoder@sh.itjust.works · 5 months ago
    5 months ago

    K3s! You could even reuse your Pis in the cluster.

    I would deploy it to your new server, set up your CSI (e.g. Longhorn, it’s pretty simple), find a Helm chart for one of the apps, and try deploying it.
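
    A minimal sketch of what that looks like end to end on a single node (assumes helm is installed; the app chart at the end is a placeholder, pick whichever community chart you trust):

      # install single-node k3s (the node is both control plane and worker)
      curl -sfL https://get.k3s.io | sh -
      export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
      # install Longhorn as the CSI / persistent storage layer
      helm repo add longhorn https://charts.longhorn.io && helm repo update
      helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
      # then try one app from a community Helm chart, e.g.
      # helm install <app> <repo>/<chart>   # values depend on the chart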

    • iAmTheTot@sh.itjust.works (OP) · 5 months ago

      I understood like four of the words in your comment so I’m going to go ahead and assume that solution is too advanced for me.

      • MigratingtoLemmy@lemmy.world · 5 months ago
        5 months ago

        K3s is an embedded Kubernetes distribution by a Californian company called Rancher, which is owned by the enterprise Linux giant SUSE.

        Kubernetes works on the idea of masters and workers, i.e. you usually cannot bring up (“schedule”) containers (pods) on the master nodes (control nodes, for brevity). K3s does away with such limitations, meaning you can just run one VM with K3s and run containers on top of it.

        Although if Kubernetes is too hard I would push you towards Podman.

        I do not know what CSI expands to, but Longhorn is a storage backend for Kubernetes that provides persistent storage across nodes.

  • Decronym@lemmy.decronym.xyz (bot) · 5 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    DNS             Domain Name Service/System
    HA              Home Assistant automation software
    ~               High Availability
    LXC             Linux Containers
    NAS             Network-Attached Storage
    Plex            Brand of media server package

    5 acronyms in this thread; the most compressed thread commented on today has 15 acronyms.


  • pdavis@lemmy.world · 5 months ago

    I run just one Windows machine, with some VMs for various services if needed. Less to maintain and tinker with.

  • Illecors@lemmy.cafe · 5 months ago

    If you can dedicate some time to constantly keeping up, pick a rolling distro. Major version upgrades have never been problem-free for me, and every major distro has them.

    My choice is Gentoo, but I’m weird like that. Having said that, my email server has been running happily on Arch for just over 5 years now.

    The Lemmy instance I host is on Debian testing (Gentoo was not available on DO); no issues so far.

    Even when it’s mostly containers, why waste time every n years doing the big upgrade? Small changes are always safer.

  • Ebby@lemmy.ssba.com · 5 months ago

    Why not dockerize Foundry and run it all on the Synology?
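
    For what it’s worth, there’s a popular community image (felddy/foundryvtt, if I remember the name right) that makes this fairly painless; a rough sketch, with the paths, port mapping and credentials as placeholders — it downloads your licensed copy from foundryvtt.com on first start:

      # one container per Foundry instance; a second game is just a second
      # container with a different name, host port and data folder
      docker run -d --name foundry-game1 \
        -p 30000:30000 \
        -v /volume1/docker/foundry-game1:/data \
        -e FOUNDRY_USERNAME='your-foundryvtt-login' \
        -e FOUNDRY_PASSWORD='your-foundryvtt-password' \
        felddy/foundryvtt:release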

    Though I did convert my Home Assistant Docker setup to HAOS on a Pi for extra features way back in the day. Not sure you have to now.

    • iAmTheTot@sh.itjust.works (OP) · 5 months ago

      I don’t want to use the Synology anymore; I’m interested in building my own system from leftover parts, with better performance. My current Synology isn’t very good, and I also want more drive space.