• 7 Posts
  • 62 Comments
Joined 1 year ago
Cake day: June 12th, 2023



  • deepdive@lemmy.world to Linux@lemmy.ml · Just moved to Linux: a follow up
    6 months ago

    Heyha ! I read about dd on MakeUseOf after seeing your post, to understand how it works.

    Restoring from an image seems to be exactly what I was looking for as a full backup restore.

    However, this kind of one-command backup isn’t going to work on running databases (mariadb, mysql…). How should I proceed with my home directory, where all my containers live and most of them have running databases?

    Does it work with logical volumes? Is it possible to copy everything except the /home logical volume?
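
    Something like this is what I have in mind, just as a sketch (the container name, volume names and paths are made up), so please correct me if the approach is wrong:

```bash
# Sketch only: container name, LV names and paths are placeholders.
# 1) Dump the databases first, so the image doesn't capture half-written files:
docker exec mariadb sh -c 'mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' \
  > /backup/all-databases.sql

# 2) Then image only the root logical volume, leaving the /home LV out:
dd if=/dev/mapper/vg0-root of=/backup/root.img bs=4M status=progress
```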


  • Thank you for your insights and personal experiences :) I love Debian stable as a server, never had any issues on an old Asus laptop ! I only have 2 years of “experience” and started with Ubuntu. It was a good introduction to Linux, but I switched to Debian (<3)

    That’s why I’m asking around, I don’t want to have a bad experience with Debian as my main personal PC !

    Thank you for your personal blog post and the wiki link :) I will surely read through them before making my final choice !





  • Do you consider testing a better choice than sid for a desktop/gaming environment?

    I’m really not sure which one I should use. I only have experience with bare-bones Debian stable as a server, and I’m trying to find the best choice for switching from windaube to Debian :)

    Thanks for your insights and personal experiences !


  • Thank you !!

    I’m currently looking into Xfce vs KDE Plasma. Something I need to pay attention to is picking a DE with X11, because Nvidia doesn’t fully support Wayland yet?

    Am I right to consider it that way? Or do both support the Nvidia drivers?

    I’m sorry, I only use Debian bare-bones on my server, and I’m currently considering switching my main desktop from windaube to Linux; a lot of the information on the web seems contradictory or incomplete :/
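
    For what it’s worth, I found I can at least check which session type I end up with after logging in (just a quick sanity check, nothing more):

```bash
# Prints "x11" or "wayland" for the current desktop session
echo "$XDG_SESSION_TYPE"
```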



  • You probably have your reasons to run Debian testing, but I read somewhere that testing is supposedly a bad idea as a desktop environment !

    If a package is stuck in sid because bugs are still being worked out, you could be waiting for months before the fixed version migrates to testing.

    Sorry if it’s not clear, but I read it somewhere in the official Debian documentation.





  • Strangely enough, Ed25519 certificates are still barely supported across the TLS 1.3 ecosystem :| The NIST P-256, P-384 and P-521 curves are suspected by some of being “backdoored” or of having deliberately chosen mathematical weaknesses. I’m not an expert, just a noob security/self-hosting enthusiast, but I don’t want to depend on curves made by the NSA or other spy agencies !

    I’m also wondering whether the EU isn’t going to implement something similar with all the new surveillance laws currently being discussed…
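
    If you want to see how far your own stack gets with it, generating an Ed25519 key and a test certificate only takes two commands (file names and the CN are arbitrary), and you can then check which of your clients actually accept it:

```bash
# Generate an Ed25519 key and a self-signed test certificate with it,
# e.g. to check which of your clients/browsers actually accept it.
openssl genpkey -algorithm ed25519 -out test-key.pem
openssl req -new -x509 -key test-key.pem -out test-cert.pem -days 365 -subj "/CN=test.home.lab"
```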


  • > Certificate chain of trust: I assume you’re talking about PKI infrastructure and using root CAs + Derivative CAs? If yes, then I must note that I’m not planning to run derivative CAs because it’s just for my lab and I don’t need that much of infrastructure.

    An intermediate CA could potentially be useful, even if it isn’t strictly needed with a self-signed CA. But if you ever have to revoke your root CA, you have to replace that certificate on all your devices, which can become a lot of hassle if you share that trusted root CA with family/friends. By having an intermediate CA and keeping your root CA’s private key somewhere offline, you take away that overhead: you just revoke the intermediate CA, sign a new one, and serve the new intermediate + server certificate bundle through the proxy. (Hope that makes sense? :|)

    > I do not know what X.509 extensions are and why I need them. Could you tell me more?

    This will probably give you a better explanation than I could :| I have everything written down in a markdown file, and reading through my notes I remember I had to set basicConstraints to TRUE in my certificates to make them work in my Android root store ! Some extensions are necessary for your root CA to work properly (like CA:TRUE), and if you want SAN (multi-domain) certificates, the extra names also go in your X.509 extensions.
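
    As a rough illustration of what I mean (an OpenSSL 1.1.1+ sketch; the CN and key usages are just what I use in my own lab):

```bash
# Self-signed root CA with the extensions Android insists on (CA:TRUE).
openssl req -x509 -new -key root-ca.key -out root-ca.crt -days 3650 \
  -subj "/CN=Home Lab Root CA" \
  -addext "basicConstraints=critical,CA:TRUE" \
  -addext "keyUsage=critical,keyCertSign,cRLSign"

# The SAN (multi-domain) names then go into the extensions of the server
# certificate, e.g. subjectAltName=DNS:home.lab,DNS:*.home.lab
```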

    > I’m also considering client certificates as an alternative to SSO, am I right in considering them this way?

    Ohhh, I don’t know… I haven’t installed or used any SSO service yet, and I’m thinking about MFA/SSO with Authelia in the future ! My guess would be that those are two different technologies that could work together? A self-signed CA combined with 2FA could possibly work in a homelab, but I have no idea how, because I haven’t tested it out. One thing to consider if you want client certificates for your family/friends is to have an intermediate CA: in case of revocation, they don’t have to replace the certificate in their root store every time you sign a new intermediate CA.

    > I’ll mention that I plan to run an instance of HAProxy per podman pod so that I terminate my encrypted traffic inside the pod and exclusively route unencrypted traffic through localhost inside the pod.

    I have no idea about HAProxy and podman, or how they work to encrypt traffic. All my traffic passes through a WireGuard tunnel to my docker containers/proxy, which I consider safe enough? Listening to my traffic with Wireshark seemed to confirm it does exactly what I expect, but I’m not an expert :L So I cannot help you further on that topic. But I will keep your idea in my notes, to see if HAProxy and podman could improve my setup compared to docker and traefik through a WireGuard tunnel.

    > Of course, that means that every pod on my network (hosting an HAProxy instance) will be given a distinct subdomain, and I will be producing certificates for specific subdomains, instead of using a wildcard.

    OpenSSL SAN certificates are going to be a life/time saver in your setup ! One certificate, multiple domains !
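
    Just to illustrate, a single CSR can carry all of your pod subdomains at once (the subdomains below are made-up examples):

```bash
# One CSR covering several subdomains through the subjectAltName extension.
openssl req -new -newkey rsa:4096 -nodes \
  -keyout services.key -out services.csr \
  -subj "/CN=home.lab" \
  -addext "subjectAltName=DNS:home.lab,DNS:pod1.home.lab,DNS:pod2.home.lab"

# Note: when signing the CSR with your CA, pass the SANs again via -extfile
# (or use -copy_extensions copy on OpenSSL 3.x), otherwise they get dropped.
```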


    I’m just a hobby homelabber/tinkerer, so take everything with caution and always double-check with other sources ! :) Hope it helps !


    Edit

    Thinking of your use case, I would personally create a root CA and an intermediate CA + certificate bundle. Put the root CA in the trusted store on all your devices and serve the intermediate CA/certificate bundle with your proxy of choice, signing the server certificate with a SAN X.509 extension covering all your domains. Save your root CA’s key somewhere offline to keep it safe !
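
    Roughly, the whole chain looks like this (a sketch only; file names, lifetimes and subjects are placeholders):

```bash
# 1) Root CA (store root.key offline once this is done)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out root.key
openssl req -x509 -new -key root.key -out root.crt -days 3650 \
  -subj "/CN=Home Lab Root CA" -addext "basicConstraints=critical,CA:TRUE"

# 2) Intermediate CA, signed by the root
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out intermediate.key
openssl req -new -key intermediate.key -out intermediate.csr \
  -subj "/CN=Home Lab Intermediate CA"
openssl x509 -req -in intermediate.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -out intermediate.crt -days 1825 \
  -extfile <(printf "basicConstraints=critical,CA:TRUE,pathlen:0")

# 3) Sign your server CSR (with its SANs) against the intermediate the same
#    way, then bundle it for the proxy:
cat server.crt intermediate.crt > fullchain.pem
```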

    The links I gave you are very useful, but the information is a bit scattered and you have to piece it together yourself; still, it’s a gold mine !



  • If you want to run your own PKI with self-signed certificates in your homelab, I really encourage you to read through this tutorial. There is a lot to process and read, and it will take you some time to set everything up and understand all the terminology, but after that you get:

    • Own self-signed certificate with SAN wildcards (https://*.home.lab)
    • Certificate chain of trust
    • CSR with your own configuration
    • CRL and certificate revocation
    • X.509 extensions

    After everything is in place, you can write your own script that revokes, writes and generates your certificates, but that is another story !
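
    Something along these lines is what I mean; a minimal sketch, where the directory layout, the openssl.cnf and its server_cert extension section are assumptions based on my own setup (with copy_extensions = copy so the SAN survives signing):

```bash
#!/usr/bin/env bash
# Minimal issue/revoke helper against a local OpenSSL CA (sketch only).
set -euo pipefail

CA_DIR=/srv/pki                          # placeholder for your CA directory
CMD=${1:?usage: cert.sh issue|revoke <name>}
NAME=${2:?missing certificate name}

case "$CMD" in
  issue)
    openssl req -new -newkey rsa:4096 -nodes \
      -keyout "$CA_DIR/private/$NAME.key" -out "$CA_DIR/csr/$NAME.csr" \
      -subj "/CN=$NAME.home.lab" \
      -addext "subjectAltName=DNS:$NAME.home.lab"
    openssl ca -config "$CA_DIR/openssl.cnf" -extensions server_cert \
      -in "$CA_DIR/csr/$NAME.csr" -out "$CA_DIR/certs/$NAME.crt" -batch
    ;;
  revoke)
    openssl ca -config "$CA_DIR/openssl.cnf" -revoke "$CA_DIR/certs/$NAME.crt"
    openssl ca -config "$CA_DIR/openssl.cnf" -gencrl -out "$CA_DIR/crl/ca.crl"
    ;;
  *)
    echo "unknown command: $CMD" >&2; exit 1 ;;
esac
```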

    Put everything behind your reverse proxy of choice (traefik in my case) and serve all your docker services with your own self-signed wildcard certificates ! It’s complex but if you have spare time and are willing to learn something new, it’s worth the effort !

    Keep in mind never to expose such certificates to the wild wild west ! Keep those certificates in a closed homelab that you access through a secure tunnel on your LAN !

    edit

    Always take notes to keep track of what you did and how you solved issues, and make some visuals to get a better understanding of how things work !



  • > Then, I tried ownCloud for the first time. Wow, it was fast! Uploading an 8GB folder took just 3 minutes compared to the 25 minutes it took with Nextcloud. Plus, everything was lightning quick on the same machine. I really loved using it. Unfortunately, there’s currently a vulnerability affecting it, which led me to uninstall it.

    I have no idea how you access your self-hosted services, but WireGuard could help you reach all your services from all your devices, with fewer security risks and only one exposed entry point (the WireGuard port). This also takes away most of the vulnerabilities you could be exposed to, because you access all your home services through a secure tunnel without directly exposing the application ports on your router !
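
    For reference, the client side boils down to one small config; the keys, addresses and endpoint below are placeholders:

```bash
# Write a WireGuard client config and bring the tunnel up (values are placeholders).
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
PrivateKey = <client-private-key>
Address = 10.10.0.2/32
DNS = 10.10.0.1            # so *.home.lab resolves through the tunnel

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.org:51820
AllowedIPs = 10.10.0.0/24  # only route the homelab subnet through the tunnel
EOF

wg-quick up wg0
```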

    I personally run all my services with docker-compose + traefik + self-signed CA certificates + AdGuard Home DNS rewrites, and access all my services through https://service.home.lab on all my devices ! It took me some time to set everything up nicely, but right now I’m pretty happy with how everything works !
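
    The traefik part boils down to pointing its file provider at the self-signed bundle; the paths below are just how I happen to lay things out:

```bash
# Dynamic configuration telling traefik to serve the self-signed certificates
# (assumes the file provider is enabled and watching this directory).
mkdir -p /srv/traefik/dynamic
cat > /srv/traefik/dynamic/tls.yml <<'EOF'
tls:
  certificates:
    - certFile: /certs/fullchain.pem
      keyFile: /certs/home.lab.key
  stores:
    default:
      defaultCertificate:
        certFile: /certs/fullchain.pem
        keyFile: /certs/home.lab.key
EOF
```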

    About the current ownCloud vulnerability, they have already taken some measures and the new docker image ships the phpinfo fix (uhhg). Also, while I wouldn’t just take their word for it:

    “The importance of ownCloud’s open source in the enterprise and public-sector markets is embraced by both organizations.”