Hi all,
I currently have a Linux install on an old 256GB SATA SSD that I inherited. It was originally used as a swap drive in another person’s RAID server for about 7 years, then it was given to me, and I put my own Linux install on it, which I have been running for about 5 years.
About a year ago, I acquired a new computer that has an NVMe SSD. It originally ran Windows, but I dropped in my SSD with my Linux install, installed GRUB on the NVMe SSD, and booted to the old SSD.
I am mildly concerned that, with this SSD being so old, it could crap out on me eventually. I remember that being a topic of discussion when SSDs first hit the market (i.e. when the one that I am using was made). So I was thinking of wiping the 1TB NVMe SSD that is currently unused in this computer and migrating my install to it. Now, I know I could copy my whole disk with dd, then expand the partition to make use of the space. But I was wondering if I could change the filesystem to something that has snapshots (such as btrfs).
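For reference, the dd route would look something like this. A rough sketch only; the device names are assumptions, so check lsblk before running anything like it:

```bash
# Hypothetical device names: /dev/sda = old SATA SSD, /dev/nvme0n1 = new NVMe.
# Verify with lsblk first; dd will happily overwrite the wrong disk.
dd if=/dev/sda of=/dev/nvme0n1 bs=4M status=progress conv=fsync

# Grow the root partition (partition number 2 is an assumption) into the
# extra space, then grow the filesystem itself. resize2fs is for ext4;
# btrfs would use 'btrfs filesystem resize max /' on a mounted filesystem.
growpart /dev/nvme0n1 2        # from the cloud-guest-utils package on Ubuntu
resize2fs /dev/nvme0n1p2
```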
Is it possible to do this, or, to change filesystems, do I need to create a new Linux install and copy over all the files that I want to keep?
Make the new filesystem, rsync the old SSD to the new one (making sure to use rsync -ax to copy everything properly; also add -H if you use hard links), update the fstab UUID, regenerate the GRUB configuration, and you’re good to go. I have a 10-year-old install that’s survived moving several disks and computers; it works just fine.
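A minimal sketch of that sequence, assuming the new btrfs filesystem goes on /dev/nvme0n1p2 (a hypothetical partition) and gets mounted at /mnt:

```bash
# Create the new filesystem and mount it.
mkfs.btrfs /dev/nvme0n1p2
mount /dev/nvme0n1p2 /mnt

# Copy the running system over. -a preserves permissions/ownership/timestamps,
# -x stays on this filesystem (skips /proc, /sys, /dev and other mounts),
# -H preserves hard links.
rsync -axH / /mnt/

# Find the new filesystem's UUID, then edit /mnt/etc/fstab to use it
# and change the filesystem type to btrfs.
blkid /dev/nvme0n1p2
```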
Don’t forget to change the fstab filesystem type when updating the UUID as well (yes, I’ve made this oops before).
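For example, the root line in /etc/fstab would change roughly like this (UUIDs here are placeholders):

```
# before (old ext4 root)
UUID=1111-old-uuid  /  ext4   errors=remount-ro  0  1
# after (new btrfs root; fsck pass is 0 because btrfs doesn't use boot-time fsck)
UUID=2222-new-uuid  /  btrfs  defaults           0  0
```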
-x (alias --one-file-system) means “don’t cross filesystem boundaries”; is that what you meant? Or did you mean -X | --xattrs?
Edited because I wrote some things before that were incorrect.
Yep, that’s so you don’t end up potentially copying /dev, /sys, /run or any other mounted partitions.
This is likely what I will do now that I have given it some thought. This will bring over all of my installed apt and snap packages, right? And they will both be aware and know how to update from there?
I have the NVMe prepped. It has a fresh Ubuntu install of the same version, but on btrfs. I could probably even snapshot it before I get started to make sure I can roll back and try again if I fuck up. And worst case, I can just reinstall the OS on that partition, as it wouldn’t touch my existing install. It feels pretty safe to try. Worst thing that can go wrong is I waste my time.
Yeah, from the software’s point of view, unless you need some extra rsync flags as some have pointed out, you end up with an identical view of the files on there; they’ll be mounted in exactly the same places and everything. Just a different filesystem and drive behind it. People have been doing that for decades, since before Linux even existed.
As long as all the attributes like user/group/mode and symlinks are preserved, most distros won’t notice a thing with that method. There’s no filesystem-specific special sauce to make it work or hidden flags or anything, even snaps and flatpaks.
This is not like Windows where your options are clone the partition or reinstall. Linux is a lot simpler and only cares that the files are where they should be with the right permissions.
And maybe also -ASX for ACLs, sparse files and xattrs
-X is already included in -a, so no need to specify it explicitly. Doesn’t hurt either. Edit: nope, I was wrong, -X is not included in -a. Sorry!
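Putting those suggestions together, the full copy command would look something like this (the /mnt destination is an assumption for wherever the new filesystem is mounted):

```bash
# -a  archive mode (permissions, ownership, timestamps, symlinks, recursion)
# -x  don't cross filesystem boundaries (skips /dev, /sys, /run, other mounts)
# -H  preserve hard links
# -A  preserve ACLs
# -S  handle sparse files efficiently
# -X  preserve extended attributes (carries file capabilities, e.g. on ping)
rsync -axHASX / /mnt/
```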
Seconded this approach, I’ve got a Gentoo installation that has been going since 2005 across half a dozen different machines.
The number of changes you’d need to make to get Linux to boot on a different filesystem and drive adds up to a lot of work. It would be much faster to install a new copy of Linux to the NVMe drive and copy the files over from the SSD post-install, before decommissioning the old drive.
It’s really not that bad. Unlike Windows, you can pretty much just rsync the data over, update fstab, and it’s good to go.
Thanks for the reply. I’m really dreading migrating files manually, because I use this as my server, so all my stuff would be down for an extended period of time while I migrated. :(
Is this mostly for file serving or apps? If you’re using it as a file server, share the relevant parts of the SSD while you rsync all of it over to help ease downtime.
You can also install the NVMe through a virtual machine and pass /dev/nvme_whatever to the VM. Then rsync everything over using SSH, and then reboot the whole machine using the NVMe drive for the OS (make sure to use UEFI for the VM on KVM).
For apps it’s kinda the same VM deal: leave the SSD up, configure the NVMe install as needed, then copy whatever data you need over before rebooting.
It’s more convoluted to do it that way, but it will reduce downtime.
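A rough sketch of that setup with plain QEMU/KVM. The device name and firmware path are assumptions (they vary by distro), and libvirt/virt-manager works just as well:

```bash
# Boot a VM with the raw NVMe device passed through as its disk.
# /dev/nvme0n1 is hypothetical; double-check with lsblk. Needs root or
# appropriate permissions on the device node.
# OVMF provides the UEFI firmware; the path below is where Ubuntu ships it.
qemu-system-x86_64 \
  -enable-kvm -m 4G -smp 2 \
  -drive file=/dev/nvme0n1,format=raw,if=virtio \
  -bios /usr/share/ovmf/OVMF.fd \
  -nic user,hostfwd=tcp::2222-:22   # then ssh into the VM via localhost:2222
```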
It’s for apps. I have a Lemmy server and then a few discord bots that play music for a music community that my wife is an admin for.
I honestly might just need to schedule downtime on a day that they don’t have an event on. That’s the main thing that I want up all the time.
That is probably the best option, since I don’t think Lemmy has the ability to work as a cluster, unfortunately.
I disagree, you usually just need to get /boot and your EFI things right on the new disk, rsync stuff over, and fix any references to old disks in /etc/fstab and maybe your GRUB config, and you are done. I have done this migration >10 times over the years onto different filesystems, partition layouts and RAID configurations, and it’s never been particularly hard.
What’s the magic that’s needed to make EFI happy?
Most of the time, it’s enough to copy the whole EFI partition to the new machine and update whatever boot entries are in there to point to the right new partitions.
In case of a switch to something like ZFS, it’s a bit more involved: you need to boot a live Linux, chroot into the new “/” with /boot mounted and /dev, /proc, /sys bind-mounted into the chroot.
Then you can run the distro-appropriate command to reinstall/update GRUB into the EFI partition, and it will usually take care of adding the right drivers.
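A condensed sketch of that chroot dance, assuming the new root is /dev/nvme0n1p2 and the EFI partition is /dev/nvme0n1p1 (both hypothetical; Debian/Ubuntu commands shown):

```bash
# From a live USB: mount the new root and its EFI partition
mount /dev/nvme0n1p2 /mnt
mount /dev/nvme0n1p1 /mnt/boot/efi

# Bind-mount the virtual filesystems the bootloader tools need
for d in dev proc sys; do mount --bind /$d /mnt/$d; done

chroot /mnt
grub-install --target=x86_64-efi --efi-directory=/boot/efi
update-grub    # Debian/Ubuntu wrapper for grub-mkconfig -o /boot/grub/grub.cfg
```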
That’s true if everything is supported on the current kernel. I might just be very out of touch/out of date here, but is btrfs built into the kernel? I was thinking he’d need a different kernel or loaded modules on it.
Btrfs has been in the mainline kernel since 2.6.29; that was 14 years ago, my friend 😃
It’s been included in every major distro for a long, long time.
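If in doubt, it’s easy to check on the running kernel (a quick hedged example; output varies by system):

```bash
# Prints a line containing "btrfs" if the driver is built in or already loaded
grep -w btrfs /proc/filesystems

# If it's built as a module and not yet loaded, this loads it
modprobe btrfs
```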
Well dang, it’s been a while since I tried it then! I keep hearing how it’s unstable in comments, so I tend to assume it’s fairly new even when I should know better lol
This is off topic, so I will leave it as a comment below, but last week I had the bright idea to wipe the NVMe SSD to prepare it for this migration. I totally forgot that I had my MBR and GRUB on that SSD. So the next time I rebooted, it wouldn’t boot back up. It took me like two hours to get GRUB back on it and boot it back up.
I would honestly just back up my files and start over with a fresh new install. I have never been successful in copying everything over, especially not if you are planning on changing the file systems. I know you mentioned that you don’t really want to do that, but that’s my 2 cents
If you’re gonna do btrfs snapshots, you may also want to create subvolumes for certain directories to exclude them from the snapshots, similar to https://rootco.de/2018-01-19-opensuse-btrfs-subvolumes/
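A small sketch of what that looks like in practice. The subvolume name follows the common @-prefix convention from the linked article; the device and UUID are placeholders:

```bash
# Mount the top level of the btrfs filesystem (subvolid=5) somewhere temporary
mount -o subvolid=5 /dev/nvme0n1p2 /mnt

# Create a separate subvolume for data you don't want captured in root
# snapshots, e.g. logs; move any existing contents into it afterwards
btrfs subvolume create /mnt/@var-log

# Then mount it over /var/log via fstab (placeholder UUID):
# UUID=2222-new-uuid  /var/log  btrfs  subvol=@var-log  0  0
```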