But I want it so badly! All I need to figure out is:

reverse proxies (I stumbled through getting one Caddy instance set up so far, but gosh, I struggle with that too; Nginx Proxy Manager seems like my next step)

a rock-solid backup/restore setup (but first I need to figure out where the Vaultwarden Alpine files live, then be able to get those off of the Proxmox VM)

This is more of a vent than a request for someone to spell it all out for me. But I wouldn’t be upset if anyone had the time to point me in the right direction.

Would it just be easier to run a KeePassXC and Syncthing setup?

  • towerful@programming.dev · ↑42 · 3 months ago

    Bitwarden is cheap enough, and I trust them as a company enough that I have no interest in self hosting vaultwarden.

    However, all these hoops you have had to jump through are excellent learning experiences that will benefit the rest of your self-hosted setup.

    Reverse proxies are the backbone of hosting and services these days.
    Learning how to inspect docker containers, source code, config files and documentation to find where critical files are stored is extremely useful.
    Learning how to set up more useful/granular backups beyond a basic VM snapshot in proxmox can be applied to any install anywhere.

    The most annoying thing about a lot of these is that tutorials are “minimal viable setup” sorta things.
    Like “now you have it setup, make sure you tune it for production” and it just ends.
    And the other tutorials you find that talk about the next step, getting things production-ready, often reference outdated versions or have different core setups, so they don’t quite apply.

    I understand your frustrations.

    • model_tar_gz@lemmy.world · ↑7 · 3 months ago

      The most annoying thing about a lot of these is that tutorials are “minimal viable setup” sorta things. Like “now you have it setup, make sure you tune it for production”

      Dude I’m already in pain from trying to serve these models and you just have to go rub salt into my eyes. “Simplify your stack with <Tech>” they said. “Share your resources effectively and easily with <Tech>” they said. “Here’s your fuckin’ ‘Hello, World’ now GRTFM and buzz off” they said.

      Working close to the metal do be like that.

      • towerful@programming.dev · ↑10 · 3 months ago

        At the homelab scale, proxmox is great.
        Create a VM, install docker and use docker compose for various services.
        Create additional VMs when you feel the need. You might never feel the need, and that’s fine. Or you might want a VM per service for isolation purposes.
        Have Proxmox take regular backups of the VMs.
        Every now and then, copy those backups onto an external USB hard drive.
        Take snapshots before, during and after tinkering so you have checkpoints to roll back to. Copy the latest backup onto an external USB drive once you are happy with the tinkering.

        Create a private git repository (on GitHub or whatever), and use it to store your docker-compose files, related config files, and little readmes describing how to get that compose file to work.
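
        For example, a minimal Vaultwarden compose file can be as small as this (just a sketch; the host port and volume path are placeholders, so check the Vaultwarden docs for the options you actually want):

        ```yaml
        # docker-compose.yml - hypothetical minimal Vaultwarden service
        services:
          vaultwarden:
            image: vaultwarden/server:latest
            restart: unless-stopped
            ports:
              - "8080:80"       # or drop this and let a reverse proxy reach the container directly
            volumes:
              - ./vw-data:/data # everything worth backing up lives in /data
        ```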

        Proxmox solves a lot of headaches. Docker solves a lot of headaches. Both are widely used, so plenty of examples and documentation about them.

        That’s all you really need to do.
        At some point, you will run into an issue or limitation. Then you have to solve for that problem, update your VMs, compose files, config files, readmes and git repo.
        Until you hit those limitations, what’s the point in over-engineering it? It’s just going to overcomplicate things. I’m guilty of this.

        The need to automate any of the above will become apparent when tinkering stops being fun.

        The best thing to do to learn all these services is to comb the documentation, read GitHub issues, browse the source a bit.

        • ChapulinColorado@lemmy.world · ↑2 · 2 months ago

          Great points. As someone who is very happy with their current home automation and services, checking the config files into a git repo was the critical step. Also back up volumes, since many containers tend to store state in some binary file or internal DB. At the very least, try restoring the config to verify you have what’s needed. The containers should start even if they have no media in them.

          In terms of tinkering not being fun anymore: that’s okay, sometimes you need a break.

          A point that, in my opinion, isn’t brought up often enough is to plan for losses. What can you afford to lose if you can’t back up everything (due to price, etc.)? Config files, photos, and personal data are relatively small (compared to something like a media library) and should be prioritized.

    • InvertedParallax@lemm.ee · ↑2 · 3 months ago

      and I trust them as a company enough that I have no interest in self hosting vaultwarden.

      I pay the subscription, but I trust no company that much.

  • Lem453@lemmy.ca · ↑24 ↓1 · 3 months ago

    Vaultwarden itself is actually one of the easiest Docker apps to deploy…if you already have the foundation of your home lab set up correctly.

    The foundation has a steep learning curve.

    Domain name, dynamic DNS updates, port forwarding, reverse proxy. It’s not easy to get all of this working perfectly, but once it does, you can use the same foundation to install any app. Once you have the foundation working, additional apps take only a few minutes.
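
    As a concrete example of the dynamic DNS piece (a sketch only; DuckDNS is just one free provider, and the subdomain and token here are placeholders), a cron entry can keep your record pointed at your home IP:

    ```sh
    # /etc/cron.d/duckdns - hypothetical: refresh the DDNS record every 5 minutes
    # leaving ip= empty tells DuckDNS to use the caller's public address
    */5 * * * * root curl -fsS "https://www.duckdns.org/update?domains=myhomelab&token=YOUR_TOKEN&ip=" >/dev/null
    ```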

    Want ebooks? Calibre takes 10 mins. Want link archiving? Linkwarden takes 10 mins.

    And on and on

    The foundation of your server makes a huge difference. Well worth getting it right at the start and then building on it.

    I use this setup: https://youtu.be/liV3c9m_OX8

    Local-only websites that use HTTPS (Vaultwarden) and external websites that also use HTTPS (Jellyfin).

  • EmoPolarbear@lemmy.ca · ↑18 · 3 months ago

    Honestly, these things are really vital to learn if you want to be self-hosting. However, if you’re unfamiliar with them, I would not start with your password vault: you’re almost certainly going to make mistakes and risk losing the vault. I would learn on something less vital, then add Vaultwarden once you’re feeling more comfortable.

  • fmstrat@lemmy.nowsci.com · ↑8 · 2 months ago

    Really? Have you set up services with Docker before? I found it super easy compared to other systems. Curious what specifically threw you, as I barely did anything except spin it up.

  • Max-P@lemmy.max-p.me · ↑8 · 3 months ago

    To be fair, that’s more of a general DevOps/server-admin learning curve than anything specific to Vaultwarden.

    It looks a bit complicated at first, as Docker isn’t a trivial abstraction, but it’s well worth it once it’s all set up and going. Each container is always the same, and always independent. Vaultwarden per se isn’t too bad to run without a container, but the same Docker setup can be used for, say, Jitsi, which is an absolute mess of components to install and make work, some Java stuff, and all. But with Docker? Just docker compose up -d, wait a minute or two, and it’s good to go; you just need to point your reverse proxy at it.

    Why do you need a reverse proxy? Because it’s a centralized location where everything comes in, and instead of having 10 different apps with their own certificates and ports, you have one proxy, one port, and a handful of certificates all managed together, so you don’t have to figure out how to make all those apps play together nicely. Caddy is fine, you don’t need NGINX if you use Caddy. There’s also Traefik, which lands in between Caddy and NGINX in ease of use, and there’s HAProxy. They all do the same fundamental thing: traffic comes in as HTTPS, the proxy reads the Host header from the request and sends it to the right container as plain HTTP. It doesn’t have to work that way specifically, but that’s the most common use case in self-hosting.
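
    To make that concrete, a Caddyfile entry for something like Vaultwarden is only a few lines (a sketch; the hostname and upstream are placeholders, and Caddy handles the HTTPS certificate for you):

    ```
    # Caddyfile
    vault.example.com {
        reverse_proxy vaultwarden:80    # or 127.0.0.1:8080 if it's not on a shared Docker network
    }
    ```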

    As for your backups, if you used a Docker compose file, the volume data should be in the same directory. But it’s probably using some sort of database, so you might want to look into periodic data exports instead: databases don’t like to be backed up live, because the file is always being updated, so you can’t really get a consistent snapshot of it in one go.
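
    With Vaultwarden’s default SQLite database, that periodic export can be a small cron script along these lines (paths are placeholders; sqlite3’s .backup command gives you a consistent copy even while the server is running):

    ```sh
    #!/bin/sh
    # hypothetical paths - point DATA at wherever the /data volume actually lives
    DATA=/srv/vaultwarden/vw-data
    DEST=/srv/backups/vaultwarden
    mkdir -p "$DEST"

    # consistent copy of the live database
    sqlite3 "$DATA/db.sqlite3" ".backup '$DEST/db-$(date +%F).sqlite3'"

    # the rest (attachments, keys, config) are plain files, so tar is fine
    tar czf "$DEST/files-$(date +%F).tar.gz" --exclude='db.sqlite3*' -C "$DATA" .
    ```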

    But yeah, try to think of it as an infrastructure investment that makes deploying more apps in the future a breeze. Want to add a NextCloud? Add another docker compose file and start it, Caddy picks it up automagically and boom, it’s live and good to go!

    Moving services to a new server is pretty easy as well. Copy over your configs and composes, and volumes if applicable. Start them all, and they should come back up in exactly the same state they were in on the other box. No services to install and configure, no repos to add, no distro to maintain. All built into the container by someone else so you don’t have to worry about any of it. Each update of the app brings with it the whole matching updated OS with the right packages in the right versions.
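
    A sketch of what that move can look like (the hostnames and paths here are made up):

    ```sh
    # on the old box: ship compose files, configs and volume data to the new one
    rsync -avz ~/stacks/ newbox:~/stacks/

    # on the new box: bring a stack back up in the same state it was in
    cd ~/stacks/vaultwarden && docker compose up -d
    ```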

    As a DevOps engineer, I love the whole thing because I can have a Kubernetes cluster running on a whole rack and be like “here are the apps I want you to run”, and it just figures itself out: it automatically balances the load, and if a server goes down the containers respawn on another one and keep going as if nothing happened. We don’t have to manually log into any of those servers to install services to run an app. More upfront work for minimal work afterwards.

  • minnix@lemux.minnix.dev · ↑7 · 3 months ago

    I use Bitwarden and the setup was fairly standard with the helper script. I use my own isolated proxy for all my services, so that was already built. I haven’t used Vaultwarden, but if anyone who has used both can tell me the differences, I could maybe help out.

    • gray@pawb.social · ↑4 · 3 months ago

      Vaultwarden is pretty much the same setup, the big difference being that it doesn’t take like 4 GB of RAM.

      I switched over years ago because Bitwarden server is chunky for like no reason.

      • minnix@lemux.minnix.dev · ↑3 · 3 months ago

        If it’s the same, then after installing Docker, creating a vaultwarden user, adding said user to the docker group, and creating your vaultwarden directories, all that’s left is to curl the install script and answer the questions it asks.
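
        Roughly like this, as a sketch (the directory is just an example, and grab the install script itself from Bitwarden’s official docs rather than from a random URL):

        ```sh
        # dedicated user that is allowed to talk to the Docker daemon
        sudo useradd -m -s /bin/bash vaultwarden
        sudo usermod -aG docker vaultwarden

        # directories for the script to populate
        sudo mkdir -p /opt/bitwarden
        sudo chown vaultwarden:vaultwarden /opt/bitwarden

        # then, as that user, download bitwarden.sh per the official docs and run:
        #   ./bitwarden.sh install
        # and answer the domain/certificate questions it asks
        ```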

  • InvertedParallax@lemm.ee · ↑4 · 3 months ago

    I have nginx for all my reverse proxies. It wasn’t trivial, but I use it for a lot of other things, so it’s fine.

    I back it up manually to encrypted JSON. It’s not the right way, but I’ve never had much of a proper backup system other than ZFS snapshots and occasionally mirroring to another ZFS pool.
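
    For reference, that manual export with the Bitwarden CLI is roughly this (a sketch; it assumes the bw CLI is installed and logged in, and the output path is a placeholder):

    ```sh
    # unlock once and reuse the session key for the commands below
    export BW_SESSION="$(bw unlock --raw)"

    bw sync
    bw export --format encrypted_json --output ~/backups/vault-$(date +%F).json
    ```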

    It’s not a lot of extra work once you have the rest of your apps running; it’s fairly low maintenance and mostly just works, but again, I haven’t really bothered with backups.

    Edit: I’m running most if not all of my services on FreeBSD as jails; that might have made it easier.

  • grimer@lemmy.world · ↑5 ↓1 · 3 months ago

    Not sure if this is a path you’d want to follow but I use it with Cloudflare tunnels.

    • schizo@forum.uncomfortable.business · ↑6 ↓3 · 3 months ago

      Don’t do that, please: there’s less than no reason to make your entire password vault accessible on the public internet.

      Vaultwarden is probably secure, and the vault data is probably encrypted in a way that’s not vulnerable, but I mean, why add the attack surface?

      Yeah yeah, exceptions, but if you legitimately have an exception you already know it and I’d bet that the vast majority of people don’t, or would be much better served by a VPN tunnel than just rawdogging an argo tunnel.

    • Lemongrab@lemmy.one · ↑9 · 3 months ago

      Self hosting has the advantage of keeping your encrypted vault local and under your control.

    • Schlemmy@lemmy.ml · ↑3 · 3 months ago

      Dirt cheap actually. But still I’m setting up a self hosted version. I suppose that’s why we’re here.

    • Moonrise2473@feddit.it · ↑3 · 3 months ago

      Technically, if it weren’t for the unofficial server component, you would have to pay for a subscription even if you self-host.

      • just_another_person@lemmy.world · ↑2 ↓1 · 2 months ago

        It is literally the product OP is struggling to host and understand. Nothing wrong with saving yourself the struggle and recovery time by just buying the official product.

    • seang96@spgrn.com · ↑2 · 3 months ago

      At least one great thing about Bitwarden: the passwords are stored on each device, so you kind of already have backups. That being said, backups for Vaultwarden are still beneficial.

      • Chewy@discuss.tchncs.de · ↑2 · 3 months ago

        Yeah, I’m not sure whether Bitwarden always had support for exporting the vault on mobile, but it’s an awesome feature.

  • Matt The Horwood@lemmy.horwood.cloud · ↑3 · 3 months ago

    Maybe it’s just me, but self-hosting is more about learning to run and then simplify my setup. That’s why I read the documentation for the project I want to deploy, then see if I have anything that looks similar. But I’ve been doing self-hosting for almost 20 years, plus working at a SaaS company, so I have done a lot of things with a lot of different tech.

    All my Docker stuff has a very common look to it, and I have tried a lot of things. See my Git repo for some examples -> https://github.com/mhzawadi/docker-stash

  • Starfighter@discuss.tchncs.de · ↑3 · 3 months ago

    Why not set up backups for the Proxmox VM and be done with it?

    Also makes it easy to add offsite backups via the Proxmox Backup Server in the future.
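
    Scheduled backups live under Datacenter -> Backup in the web UI, and the same thing from the Proxmox shell is roughly this (the VM ID and storage name are placeholders):

    ```sh
    # one-off backup of VM 101 to the storage named 'backups'
    vzdump 101 --storage backups --mode snapshot --compress zstd
    ```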

  • Tinkerer@lemmy.ca · ↑2 · 3 months ago

    I have Vaultwarden in Docker, but I don’t expose my instance externally, as you really don’t need to. Put the Bitwarden app on your phone and sign into the instance, and it will keep working even if your instance is borked. You can’t add items, but it works.

    My suggestion: run it in Docker and just back up the entire docker compose file and folder structure, as that includes the database as well.
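
    Something like this covers it (paths are placeholders; stopping the container first keeps the SQLite file consistent):

    ```sh
    cd /srv/vaultwarden                 # wherever the compose file and data folder live
    docker compose stop
    tar czf ~/backups/vaultwarden-$(date +%F).tar.gz .
    docker compose start
    ```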

    If you want to expose it, use Nginx Proxy Manager; it’s dead simple and awesome.

  • Decronym@lemmy.decronym.xyz (bot) · ↑3 ↓1 · 2 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    DNS             Domain Name Service/System
    Git             Popular version control system, primarily for code
    HTTP            Hypertext Transfer Protocol, the Web
    HTTPS           HTTP over SSL
    IP              Internet Protocol
    SSL             Secure Sockets Layer, for transparent encryption
    VPN             Virtual Private Network
    nginx           Popular HTTP server

    [Thread #966 for this sub, first seen 11th Sep 2024, 17:45]