Hello,

I am going to upgrade my server, and since I will be able to fit more hard disks in it, I want to take the opportunity to give my data a little more protection against loss.

Currently I have two hard drives formatted as ext4 with data on them. I want to buy a third (all three the same capacity) and put them in RAID 5, so that in the future I can add more drives and grow the capacity.

For budget reasons, right now I can only afford that third disk, so it is impossible for me to back up the data I currently have.

The data itself is not irreplaceable; if any file got corrupted, I could download it again. However, there are enough terabytes (20) that re-downloading everything would be madness.

My initial plan was to install DietPi (a trimmed-down Debian) on this server (a PC) and build the RAID with mdadm. I have seen tutorials on how to do it (this one, for example: https://ruan.dev/blog/2022/06/29/create-a-raid5-array-with-mdadm-on-linux ).
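
For reference, the commands in tutorials like that one boil down to something like the sketch below (device names such as /dev/sdb1 are placeholders for my setup, and --create wipes the listed partitions, which is exactly my problem):

```bash
# Create a 3-disk RAID 5 array (WARNING: destroys existing data on the partitions)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Watch the initial sync progress
cat /proc/mdstat

# Put a filesystem on the array and persist its configuration
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```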

The question is: is there any way to do this without having to format the hard drives that already have data on them?

Thank you, and sorry for any mistakes I may make; English is not my mother tongue.

EDIT:

Thanks for your answers!! I have several paths to investigate.

  • LoboAureo@lemm.eeOP · 12 days ago

    That is a clever approach, and it is exactly my use case: two 12 TB drives, about 19 TB used.

    And it's for a personal project, so I'm not in any hurry.

    Just to clarify: could “several days” mean one or two weeks, or are we talking about more time than that?

    • chiisana@lemmy.chiisana.net · 12 days ago

      I’m afraid I don’t have an answer for that.

      It depends heavily on drive speed and on the number of passes you would need. Each time you copy data into the RAID, the array has to write the data and also compute the parity; then, when you expand the array, it has to be reshaped, which takes yet more time.
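
      As a rough sketch, each expansion pass with mdadm looks something like this (assuming the array is /dev/md0 with ext4 on top, and /dev/sde1 is the new disk's partition; both names are placeholders):

      ```bash
      # Add the new disk and grow the array by one device; mdadm reshapes in place
      sudo mdadm --add /dev/md0 /dev/sde1
      sudo mdadm --grow /dev/md0 --raid-devices=4

      # The reshape runs in the background and can take days on large drives
      cat /proc/mdstat

      # Once the reshape completes, grow the filesystem into the new space
      sudo resize2fs /dev/md0
      ```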

      My only tangentially comparable experience at a similar scale is expanding my RAID 6 array (so two parity drives here, versus one on yours) from 5×8 TB, using 20 of 24 TB, to 8×8 TB. These are shucked white-label WD Red equivalents, i.e. 5400 RPM SATA drives with 256 MB of cache. Since it was a direct expansion, I didn't need multiple passes of shrinking and expanding; but the expansion itself took my server, I think, a couple of days to rebuild.

      Someone else mentioned you could move some data onto the third drive first and start with a larger initial chunk… I think that could help reduce the number of passes you'd need, so it may be worth considering.
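
      If you go that route, one common trick is to create the RAID 5 deliberately degraded, with one slot left “missing”, and add the last old drive once it has been emptied. A minimal sketch, with placeholder device names; note the array has no redundancy until that final drive is added and rebuilt:

      ```bash
      # Build a 3-slot RAID 5 from the new disk plus one freed-up old disk,
      # leaving the third slot 'missing' (the array runs degraded for now)
      sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 missing

      # After copying the data off the remaining old drive, add it as the third
      # member; mdadm rebuilds it and restores full redundancy
      sudo mdadm --add /dev/md0 /dev/sdb1
      ```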