I know for many of us every day is selfhosting day, but I liked the alliteration. Or do you have fixed dates for maintenance and tinkering?

Let us know what you've set up lately, what kinds of problems you're currently thinking about or running into, what new device you've added to your homelab, or what interesting service or article you've found.

This post is proudly sent from my very own Lemmy instance, which has been running on my home server for about ten days now. So far, it's been a very nice endeavor.

  • sugoidogo@discuss.online · 3 hours ago

    I wrote myself a new Python script for a Palworld server I run. I wanted to figure out a generic way to track active connections without running something in front of the daemon. That's easy to do for TCP, but since UDP has no concept of an established connection, the regular tools wouldn't work. Then I realized I could use conntrack to get the Linux firewall's connection tracking data, which works outside of TCP/UDP concepts and maintains its own active connection state based on timeouts, which is what I was going to do anyway. Now I can issue SIGSTOP/SIGCONT to keep buildings from degrading on the server when nobody's online to deal with it, along with saving the CPU resources of an empty game server. Rather niche project, but I figured I'd publish it anyway: https://github.com/sugoidogo/pausepal
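
    In case it helps picture the approach, here is a minimal sketch of the conntrack-polling idea (not the actual pausepal code; the UDP port, server PID and polling interval below are placeholder values):

    ```python
    #!/usr/bin/env python3
    """Sketch only, not the actual pausepal script.

    Assumes conntrack-tools is installed and the script has enough privileges
    to read the conntrack table and signal the game server process.
    """
    import os
    import signal
    import subprocess
    import time

    GAME_PORT = 8211      # placeholder UDP port, adjust to your server
    SERVER_PID = 12345    # placeholder PID of the game server process
    POLL_SECONDS = 30     # placeholder polling interval

    def has_active_players() -> bool:
        """True if conntrack still tracks any UDP flow to the game port."""
        out = subprocess.run(
            ["conntrack", "-L", "-p", "udp"],
            capture_output=True, text=True, check=False,
        ).stdout
        return f"dport={GAME_PORT}" in out

    paused = False
    while True:
        active = has_active_players()
        if active and paused:
            os.kill(SERVER_PID, signal.SIGCONT)   # players are back, resume
            paused = False
        elif not active and not paused:
            os.kill(SERVER_PID, signal.SIGSTOP)   # server is empty, freeze it
            paused = True
        time.sleep(POLL_SECONDS)
    ```

    Because conntrack expires UDP entries on its own timeouts, "no entries for the game port" is a reasonable proxy for "nobody has talked to the server recently".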

  • DarkSpectrum@lemmy.world · 2 days ago

    Looking to install Immich, BitDefender Password Manager and YouTube downloader on the NAS this week.

  • metaStatic@kbin.earth · 4 days ago

    what’s maintenance? is that when an auto-update breaks everything and you spend an entire weeknight looking up tutorials because you forgot what you did to get this mess working in the first place?

    • daddycool@lemmy.world · 4 days ago

      I know you’re half joking. But nevertheless, I’m not missing this opportunity to share a little selfhosting wisdom.

      Never use auto update. Always schedule to do it manually.

      Virtualize as many services as possible and take a snapshot or backup before updating.

      And last, documentation, documentation, documentation!

      Happy selfhosting Sunday.

      • tofu@lemmy.nocturnal.garden (OP) · 4 days ago

        I think auto update is perfectly fine, just check what kind of versioning the devs are using and pin the version component that would introduce breaking changes (usually the major version).

        • daddycool@lemmy.world · 4 days ago

          I just like it when things break during scheduled maintenance and I have time to fix them or the possibility to roll back with minimal data loss, instead of an auto update forcing me to spend a weeknight fixing it or running a broken system till I have the time.

          • tofu@lemmy.nocturnal.garden (OP) · 4 days ago

            You can have the best of both worlds: scheduled auto updates at a time that usually works for you.

            With growing complexity there are so many components to update that it's too easy to miss some, in my experience. I don't have everything automated yet (in fact, most updates aren't), but I definitely strive towards it.

            • daddycool@lemmy.world · 4 days ago

              In my experience, the more complex a system is, the more auto updates can mess things up and make troubleshooting a nightmare. I’m not saying auto updates can’t be a good solution in some cases, but in general I think it’s a liability. Maybe I’m just at the point where I want my setup to work without the risk of it breaking unexpectedly and having to tinker with it when I’m not in the mood. :)

              • iggy@lemmy.world · 3 days ago

                There’s a fine line between “auto-updates are bad” and “welp, the horribly outdated and security hole riddled CI tool or CMS is how they got in”. I tend to lean toward using something like renovate to queue up the updates and then approve them all at once. I’ve been seriously considering building out a staging and prod env for my homelab. I’m just not sure how to test stuff in staging to the point that I’d feel comfortable auto promoting to prod.

    • IronKrill@lemmy.ca · 3 days ago

      I’ve had this happen twice in two weeks since installing Watchtower and have since scheduled it to only run on Friday evening…

      • Appoxo@lemmy.dbzer0.com · 3 days ago

        Nothing greater than crashing your weekend evening just trying to watch a movie on a broken jellyfin server :'D

  • TheFANUM @lemmy.world · 3 days ago

    Finally upgrading my Plex server from Ubuntu 22.04 to 24.04! I’ve been putting it off out of habit, as I always wait for the *.1 releases but I’ve done several of these for clients and every single one went flawlessly. But I still waited it out.

    Also thinking about switching my Ext4 mirrored softRAID to ZFS… since Ubuntu has the only acceptable ZFS implementation outside of UNIX proper (Ubuntu's is in-kernel, everyone else uses kernel modules, which I hate). But that's going to be extra work I may not be in the mood for. But damn, would compression and deduplication be nice! So still a maybe.

    • Estebiu@lemmy.dbzer0.com · 2 days ago

      Wait, you mean you host Plex servers for clients? Or that you work with Ubuntu in general? And for the ZFS thing, it doesn't really matter if it's in-kernel or something else; at the end of the day, they all work the same. I'm using ZFS on my Arch machine, for example, and everything works just fine (DKMS). And ZFS is super easy in general, you should definitely try it.

    • faethon@lemmy.world · 3 days ago

      That is one thing I still need to do, upgrade my Ubuntu server from 22.04 to 24.04. Last time I tried this I noticed many Python packages were missing or failing, so I reverted to the backup. Maybe now is the time to do the switch and iron out whatever kinks are left afterwards.

  • Appoxo@lemmy.dbzer0.com · 3 days ago

    For the first time I configured SSH with pubkey auth: auth between Windows (agent) and Alpine (host), to use as a helper/backup proxy in Veeam (the helper is used to mount the file-level restore assistant).
    Took me 3 hours to find out that:

    • Windows didn't know the private key
    • pubkey auth wasn't active
    • I had fucked up the pubkey auth
    • Alpine isn't supported by Veeam, so it didn't work
    • I needed to install a small Debian VM instead

    :|
    At least I did my first pubkey auth setup.

  • Domi@lemmy.secnd.me · 3 days ago

    I finally got IPv6 working in Docker Swarm…by moving from Docker Swarm to regular Docker.

    Traefik now properly gets IPv6 addresses and forwards them to the backend.

    • AustralianSimon@lemmy.world · 3 days ago

      What's the big benefit of moving to IPv6 for a LAN? Just wondering if there are any other benefits besides more addresses. My UniFi kit can convert us to IPv6, but I'm hesitant without knowing what devices it will break.

      • Domi@lemmy.secnd.me · 3 days ago

        Copying from an older comment of mine:

        IPv6 is pretty much identical to IPv4 in terms of functionality.

        The biggest difference is that there is no more need for NAT with IPv6 because of the sheer amount of IPv6 addresses available. Every device in an IPv6 network gets their own public IP.

        For example: I get 1 public IPv4 address from my ISP but 4,722,366,482,869,645,213,696 IPv6 addresses. That’s a number I can’t even pronounce and it’s just for me.
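
        Where that number comes from, as a quick sketch (the /56 prefix length is inferred from the count; ISPs delegate different sizes):

        ```python
        # A /56 delegation leaves 128 - 56 = 72 bits for subnets and hosts.
        prefix_len = 56                      # inferred from the count, not universal
        addresses = 2 ** (128 - prefix_len)
        print(f"{addresses:,}")              # 4,722,366,482,869,645,213,696
        ```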

        There are a few advantages that this brings:

        • Any client in the network can get a fresh IP every day to reduce tracking
        • It is pretty much impossible to run a full network scan on this amount of IP addresses
        • Every device can expose their own service on their own IP (For example: You can run multiple web servers on the same port without a reverse proxy or multiple people can host their own game server on the same port)

        There are a few more small changes that improve performance compared to IPv4, but the effect is minimal.

        My unifi kit can convert us to IPv6 but I’m hesitant without knowing what devices it will break.

        You don’t usually “convert” to IPv6 but run in dual stack, with both IPv4 and IPv6 working simultaneously. Make sure your ISP supports IPv6 first, there is little use to only run IPv6 internally.

  • 4grams@awful.systems · 3 days ago

    I'm building services out for my family as things enshittify. I moved the family over to an Immich instance, run a family blog on WordPress (working on rolling my own, since it's overcomplicated, and with all the WordPress shenanigans…), and Plex (lifetime account, works for now). I have a number of self-built projects as well: a “momboard”-like system that is integrated with my WordPress blog for access and control, a Pi-based backup server that lives at my friend's house and keeps a VPN connection to my router, and I'm playing with Meshtastic as an offline communication system for my kids' scout troop when we're camping without cell signal. Lots of home automation with Home Assistant as well.

    I host it all on Debian servers, Raspberry Pis and ESP32 devices (Meshtastic and home automation). I used to run kubernoodles, but it was more complicated than needed; for my use case, Docker, Ansible and bash scripts manage it all just fine.

    • eodur@lemmy.world · 3 days ago

      How’s your experience with meshtastic been? I’ve just started experimenting with it. There are very few nodes in my area, so my potential use cases seem limited.

      • 4grams@awful.systems · 3 days ago

        Very limited so far. I don't have much near me, but there has been enough sporadic connectivity that I pick up the occasional chatter in the default channel, and it's aware of about 145 nodes.

        Mostly been my son and I playing around. He wants to get his neighborhood friends involved :).

  • Little8Lost@lemmy.world · 3 days ago

    Yesterday I managed to successfully and safely host a simple HTML page (it's more of a network test).
    The path is nginx -> OpenWrt -> router -> internet. Now I only need to:

    • set up backups
    • set up a domain (managed via Cloudflare)
    • set up certificates
    • properly document the setup, plus some guides on stuff that I will repeat

    and then I can throw everything I want on it :D

  • refreeze@lemmy.world · 3 days ago

    I just set up wanderer and workout-tracker. Along with installing Gadgetbridge on my phone, I now have a completely self-hosted fitness/workout stack with routes, equipment tracking, heatmaps, and general health metrics like HRV and heart rate through my Garmin watch, without having Garmin Connect installed. Awesome!

    • bluegandalf@lemmy.ml · 3 days ago

      Wait, is that possible? I thought Gadgetbridge didn't work with Garmin! Need to check this out. Thanks for the inspiration!

    • warmaster@lemmy.world · 3 days ago

      Holy shit! I didn’t know about GadgetBridge. Is there a way to connect it to Home Assistant?

    • tofu@lemmy.nocturnal.garden (OP) · 3 days ago

      That sounds so cool! I'm not using any tracking/nav devices other than my phone, but currently my routes just stay local, without any kind of management for them.

  • eodur@lemmy.world · 3 days ago

    I recently set up Music Assistant and have been trying to make it work in my VLANs with my ESP32 devices. It has been slow going. Nothing has the level of logging required to easily debug the issues I've encountered, but I'm slowly working through it all.

  • cmc@lemmy.cmc.pub · 3 days ago

    I also finally set up Lemmy on my home lab, as well as moving Authelia from Docker to bare metal.

    Other than that, I’ve been struggling to find any other self-hosted apps that would actually be useful to me.

  • SmokeyDope@lemmy.world · 3 days ago

    I just spent a good few hours optimizing my LLM rig: disabling the graphical interface to squeeze 150 MB of VRAM back from Xorg, setting programs' CPU niceness to the highest priority, and tweaking settings to find memory limits.
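
    The niceness part could be scripted along these lines; a rough sketch using psutil (the process name is hypothetical, and plain renice does the same job):

    ```python
    # Bump the LLM server to the highest scheduling priority (needs root).
    import psutil

    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == "llama-server":   # hypothetical process name
            proc.nice(-20)                        # -20 = highest priority on Linux
    ```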

    I was able to increase the token speed by half a second while doubling context size. I don't have the budget for any big VRAM upgrade, so I'm trying to make the most of what I've got.

    I have two desktop computers. One has better RAM + CPU + overclocking but a worse GPU. The other has a better GPU but worse RAM and CPU, and no overclocking. I'm contemplating whether it's worth swapping GPUs to really make the most of the available hardware. It's been years since I took apart a PC and I'm scared of doing something wrong and damaging everything. I dunno if it's worth the time, effort, and risk for the squeeze.

    Otherwise I'm loving my self-hosted LLM hobby. I've been very into learning computers and ML for the past year. Crazy advancements, exciting stuff.

  • AustralianSimon@lemmy.world · 3 days ago

    Finally set up Synology Surveillance Station and got my local cameras all hooked in with motion events. Very swish.

    Attempted and failed to set up some sort of fail2ban between my Cloudflared container and my website I host at home.

  • non_burglar@lemmy.world · 3 days ago

    Migrating from proxmox to incus, continued.

    • got a manually-built wireguard instance rolling and tested, it’s now “production”
    • setting up and testing backups now
    • going to export some NFS and iscsi to host video files to test playback over the network from jellyfin
    • building ansible playbooks to rebuild instances
    • looking into ansible to add system monitoring, should be easy enough

    Lots of fun, actually!

    • tofu@lemmy.nocturnal.garden (OP) · 3 days ago

      What’s your motivation for the switch? Second time in a short while I’ve heard about people migrating to incus.

      • non_burglar@lemmy.world · 3 days ago

        I’ve moved to all containers and I’m gradually automating everything. The metaphor for orchestration and provisioning is much clearer in incus than it was in lxd, and makes way more sense than proxmox.

        Proxmox is fine, I've used it for going on 8 years now; I'm still using it, in fact. But it's geared toward a “safe” view of abstraction that makes LXC containers seem like virtual machines, and they absolutely aren't; they are much, much more flexible and powerful than VMs.

        There are also really annoying deficiencies in proxmox that I’ve taken for granted for a long time as well:

        • horrible builtin resource usage metrics. And I’m happy to run my influxdb/grafana stack to monitor, but users should be able to access those metrics locally and natively, especially if they’re going to be exported by the default metrics export anyway.
        • weird hangovers from early proxmox versions on io delay. Proxmox is still making users go chase down iostat rabbit holes to figure out why io_wait and “io delay” are not the same metric, and why the root cause is almost always disk, yet proxmox shows the io_wait stat as if it could be “anything”
        • integration of pass through devices is a solved problem, even for lxc, yet the bulk of questions for noobs is about just that. Pass through is solved for so many platforms, why proxmox just doesn’t have that as a GUI option for lxc is baffling.
        • no install choices for zfs on root on single disk (why???)
        • etc

        Ultimately, I have more flexibility with a vanilla bookworm install with incus.

        • tofu@lemmy.nocturnal.garden (OP) · 3 days ago

          Thanks a lot for your response! I too was a bit misled by the way Proxmox presents LXCs, but I'm mostly on VMs and haven't explored LXCs further so far.

          • non_burglar@lemmy.world · 3 days ago

            No worries. And don't misunderstand: I think Proxmox is great, I've simply moved on to a different way of doing things.