I dunno when it happened but I swear SBCs were the new best thing in the universe for a while and everyone was building cool little servers with their RockPis and OrangePis.

Now it’s all gone x86 and Proxmox with everyone shitting on Arm. What happened? What gives?

Is my small army of xPis pointless? What about my 2 Edge routers?

I’ve got about 6 xPis scattered round my flat - is there anything worth doing with them or should I just bin them?

All thoughts, feelings and information welcome. Thank you.

  • jkrtn@lemmy.ml · 5 months ago

    Do you think the used server market is worth the cost? It looks like I could have a giant chunk of DDR3 for not so much.

    • TCB13@lemmy.world · 5 months ago

      I don’t (especially for DDR3-era stuff), because old server hardware is usually more expensive, offers no particular advantage for a homelab and, compared to newer hardware, will use a LOT of power.

      Instead, use regular desktop/laptop machines - they’ll probably be more than enough for a homelab. You can get a good 9th/10th-gen Intel CPU and motherboard that is perfect for running servers (very high performance) but that people don’t want because it isn’t good enough for the latest games. Modern hardware = less power consumption, cheaper, more performance.

      If you go really low end, let’s say an i5-6500, it will probably cost around 80€ second hand with RAM. If you’re interested, you can use https://www.cpubenchmark.net/compare/ to compare the CPUs in the server hardware you’re looking at against modern hardware.
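
      If you want to sanity-check that comparison, here’s a rough Python sketch - the scores, prices and wattages below are placeholder numbers, not real figures, so plug in whatever you find on cpubenchmark.net and local listings:

      ```python
      # Rough price/performance and performance-per-watt comparison.
      # All numbers are PLACEHOLDERS - look up real PassMark scores on
      # https://www.cpubenchmark.net/ and real second-hand prices yourself.
      candidates = {
          # name: (passmark_score, price_eur, typical_power_w)
          "old DDR3-era Xeon box": (7000, 150, 180),
          "used i5-6500 desktop": (6000, 80, 50),
      }

      for name, (score, price, watts) in candidates.items():
          print(f"{name:25s} {score / price:6.1f} pts/€   {score / watts:6.1f} pts/W")
      ```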

      Most DDR3-era server hardware comes with RAID controllers/cards and other things that nobody uses anymore; people have moved on to software RAID, be it BTRFS or ZFS, and you will want to do the same. Servers also make a lot of noise - impractical for a home - and a CPU from that era will draw around 150-200W, while a recent i5 with more performance runs at around 50W.
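
      To put that power difference in numbers, here’s a quick back-of-the-envelope calculation for a machine running 24/7 - the 0.30€/kWh electricity price is just an example, use your own tariff:

      ```python
      # Rough yearly running cost of a 24/7 homelab box.
      # 0.30 €/kWh is an assumed example price, not a real tariff.
      HOURS_PER_YEAR = 24 * 365
      PRICE_PER_KWH = 0.30  # €

      def yearly_cost(avg_watts: float) -> float:
          return avg_watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

      for label, watts in [("DDR3-era server CPU (~175 W)", 175),
                           ("recent i5 (~50 W)", 50)]:
          print(f"{label}: ~{yearly_cost(watts):.0f} € per year")
      ```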

      Another thing to consider: if you’re trying to build a NAS, get a basic motherboard with 4 SATA ports and then add a PCIe card with 5 more SATA ports - it will be much cheaper than whatever server hardware. Use BTRFS as your filesystem, with its built-in RAID if needed. Now you may be thinking something like “I want a faster CPU in order to have fast SMB” - just don’t: your gigabit network will saturate before an i5-6500 or any mechanical drive does, and when that happens you’ll be at something like 10-20% CPU usage. Don’t waste your money.
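
      If you want to see why the network is the bottleneck, here’s a small sketch with rough, typical throughput figures (approximations, not measurements):

      ```python
      # Why a faster CPU won't make SMB faster on a gigabit LAN:
      # the network ceiling sits below what a single HDD can deliver.
      GIGABIT_THEORETICAL = 1000 / 8               # ~125 MB/s
      GIGABIT_USABLE = GIGABIT_THEORETICAL * 0.94  # ~117 MB/s after protocol overhead (approx.)
      HDD_SEQUENTIAL = 180                         # MB/s, typical 7200 rpm drive (approx.)

      print(f"Gigabit ceiling:   ~{GIGABIT_USABLE:.0f} MB/s")
      print(f"Single HDD (seq.): ~{HDD_SEQUENTIAL} MB/s")
      print("Bottleneck:", "network" if GIGABIT_USABLE < HDD_SEQUENTIAL else "disk")
      ```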

      • jkrtn@lemmy.ml · 5 months ago

        Thank you, I really appreciate your advice. I was just struggling to install Proxmox on a new machine, and you made me take a step back. The kernel is messed up - do I really want this? Why am I jumping through hoops for this when Debian installs with zero issues? I’ll be trying the container software you mentioned instead.

        • 1371113@lemmy.world · 5 months ago

          I’ve done the same thing the person you replied to is suggesting for around 10 years now. It works very well for a home user because parts etc. are readily available. Most hypervisors will run on x86/amd64 hardware without issue. Check out something other than Proxmox - LXC is one suggestion. If you’re going to stick with Debian, look into Samba with BIND to ensure ease of sharing and cross-platform integration.

          Another reason not to get an old server is power, noise and thermals. They’re designed to live in an air-conditioned room, and anyone who has worked in server rooms for any length of time will tell you to wear ear protection.