After using Proxmox for a while, I had nearly run out of storage, so I decided to upgrade my NVMe drive from 256GB to 1TB. However, this task turned out to be not so trivial.
I have cloned drives and expanded partitions into empty space numerous times on Linux before, but there are some specifics of LVM that you need to consider. I spent a lot of time reading through documentation and forums before I finally found this forum post. Many thanks to its author for the solution.
I thought it would be a good idea to document the process in case somebody is struggling with the same issue.
Preparations and cloning the drive
First of all, I connected my 1TB drive to my MiniPC server using a USB enclosure and booted it with a Live USB Linux distro (you can use pretty much any distro of your choice; in my case I used the Ubuntu-based Pop!_OS). After booting, I used dd in the terminal to clone the contents of my NVMe drive to the external USB enclosure:
sudo dd if=/dev/nvme0n1 of=/dev/sdX bs=64K conv=noerror,sync status=progress
Here if= is your source and of= is your destination. Pay close attention: check the source and destination carefully and read the documentation if necessary.
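Before running dd it is worth double-checking which device name corresponds to which drive; the names here are just examples and may differ on your machine:

lsblk -o NAME,SIZE,MODEL,TRAN
# the internal drive should show up as nvme0n1 (TRAN: nvme),
# while the drive in the USB enclosure appears as sdX (TRAN: usb)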
The cloning process took over 2 hours, after which I swapped the 256GB drive for the freshly cloned 1TB one and booted the machine up without any issues.
Resizing partitions
After booting up, open a shell on the Proxmox host, run fdisk /dev/nvme0n1 and check the current partition table (press p):
Device Start End Sectors Size Type
/dev/nvme0n1p1 34 2047 2014 1007K BIOS boot
/dev/nvme0n1p2 2048 1050623 1048576 512M EFI System
/dev/nvme0n1p3 1050624 488397134 487346511 232.4G Linux LVM
Now you need to note the start sector of partition 3, delete it, and recreate it using the same start sector and the last sector of the NVMe as the end sector. Do not remove the LVM2_member signature! Refer to the fdisk documentation if you are not sure how to use it.
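For reference, the interactive fdisk session for this step looks roughly like this (the start sector is taken from the table above; yours may differ):

p        # print the current table and note the start sector of partition 3
d        # delete a partition
3        # partition number 3
n        # create a new partition
3        # partition number 3
1050624  # first sector: the same start sector as before
         # press Enter to accept the default last sector (end of the disk)
N        # answer No when fdisk asks to remove the LVM2_member signature
w        # write the changes and exit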
After that your partition table will look something like this (fdisk recreates the partition with the default Linux filesystem type, but LVM relies on the LVM2_member signature rather than the partition type, so this is fine):
Device Start End Sectors Size Type
/dev/nvme0n1p1 34 2047 2014 1007K BIOS boot
/dev/nvme0n1p2 2048 1050623 1048576 512M EFI System
/dev/nvme0n1p3 1050624 1953525134 1952474511 931G Linux filesystem
Notifying the OS of the partition changes
After expanding the partition using fdisk, you need to notify the OS of the partition changes:
partx -u /dev/nvme0n1p3
Alternatively you can simply reboot the system.
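Before moving on, you can verify that the kernel sees the new partition size:

lsblk /dev/nvme0n1
# nvme0n1p3 should now show the full size of the drive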
Expanding LVM
Now we need to grow the LVM physical volume:
pvresize /dev/nvme0n1p3
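If you want to confirm that the physical volume now spans the whole partition, pvs will show its updated size and free space:

pvs /dev/nvme0n1p3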
Before moving on to allocating the new space, it is worth checking the available space in your volume group. You can do this by running the vgs command, which will show the VG (Volume Group) name, its total size, and the available free space.
vgs
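If you prefer to see just those fields, the output can be narrowed down (on a default Proxmox install the volume group is called pve):

vgs -o vg_name,vg_size,vg_free --units g pve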
I needed some more space for my pve-root logical volume, so I added an extra 50GB to it:
lvextend -L +50G /dev/pve/root
resize2fs /dev/pve/root
Please note that the resize2fs command only works on ext2, ext3, and ext4 filesystems. If you're using a different filesystem, you'll need a different tool to resize it.
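As a side note, lvextend can also grow the filesystem in the same step with the -r (--resizefs) flag, so the two commands above could be combined into one; this is a sketch, not the exact command I ran:

lvextend -r -L +50G /dev/pve/root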
Then I added the remaining free space to my pve-data volume, which is used to store VM and CT virtual disks:
lvextend -l +100%FREE /dev/pve/data
You don't need to resize a filesystem for pve-data because it doesn't have one. pve-data is an LVM thin pool, a special kind of logical volume that other logical volumes (the thin volumes holding your VM and CT disks) are allocated from. Expanding the pve-data logical volume itself is enough; the volumes inside it will be able to use the additional space.
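To confirm that the pool picked up the new space, you can list the logical volumes together with their data and metadata usage:

lvs -a pve
# the data volume should show its new size along with its Data% and Meta% usage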
Word of warning
Don't follow this guide blindly; always check your setup and the documentation. This guide presumes that you have the default partition setup with LVM, as well as the pve-root and pve-data volumes, which is the default option in most cases. You may be running ZFS or have a different setup, so please double-check your settings and proceed with care.