01-24-2020, 01:25 PM
This is a report of the process I went through to resize my LVM root partition after @Neoon alerted me that an extra 25 GB had been allocated on top of my original disk space (i.e., 10 GB). I'll state all the relevant information for this process and the series of commands I executed to claim the newly allocated 25 GB of free space.
I'm pretty sure there are other ways to do this, but I've simply used the routines I'm familiar with, this time on a running, out-of-reach/remote CentOS 8 system. What follows therefore assumes familiarity with the LVM way of managing disk storage (in that sense, this is not a tutorial, just a HowTo).
The Original Situation:
First, a summary of my storage structure:
Code:
[root@vps ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 36G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 10G 0 part
├─cl-root 253:0 0 8.9G 0 lvm /
└─cl-swap 253:1 0 1.1G 0 lvm [SWAP]
sr0 11:0 1 343M 0 rom
By the time I asked for a disk space increase, my LVM-based filesystem looked like this:
Code:
[root@vps ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 395M 0 395M 0% /dev
tmpfs tmpfs 411M 0 411M 0% /dev/shm
tmpfs tmpfs 411M 11M 400M 3% /run
tmpfs tmpfs 411M 0 411M 0% /sys/fs/cgroup
/dev/mapper/cl-root xfs 8.9G 7.0G 2.0G 79% /
/dev/sda1 ext4 976M 236M 674M 26% /boot
tmpfs tmpfs 83M 0 83M 0% /run/user/1000
Note that my root partition is formatted with XFS. Because I still had other stuff to deploy in there, the need for a disk space increase felt urgent. @Neoon generously accepted my request for a 25 GB increase, which showed up in the fdisk listing of the disk's partitions/devices:
Code:
[root@vps ~]# fdisk -l /dev/sda
Disk /dev/sda: 36 GiB, 38654705664 bytes, 75497472 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf1f62747
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 2099199 2097152 1G 83 Linux
/dev/sda2 2099200 23068671 20969472 10G 8e Linux LVM
The output indicates that the disk (/dev/sda) now has 36 GiB in total, while sda1 (1 GiB) and sda2 (10 GiB) account for only 11 GiB of it, which means roughly 25 GB of unallocated space is waiting to be claimed.
Claiming the free space into our LVM device.
The next step is either to add another device (/dev/sda3), which is the safer approach, OR to increase the size of /dev/sda2, which is riskier when done on a running system. Well, I went for the second (of course) and opted for the cfdisk utility to do the job. It went like a charm, and this is the result (a sketch of the safer /dev/sda3 route follows the listing below):
Code:
[root@vps ~]# fdisk -l /dev/sda
Disk /dev/sda: 36 GiB, 38654705664 bytes, 75497472 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf1f62747
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 2099199 2097152 1G 83 Linux
/dev/sda2 2099200 75497471 73398272 35G 8e Linux LVM
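For reference, here is a minimal sketch of the safer route mentioned above: instead of growing /dev/sda2, you create a new /dev/sda3 partition in the free space and add it to the existing cl volume group as a second PV. I did not run these commands, and the /dev/sda3 name assumes the new partition ends up third on the disk:
Code:
# create /dev/sda3 of type "Linux LVM" (8e) in the unallocated space (interactive)
cfdisk /dev/sda
# make the kernel aware of the new partition
partprobe /dev/sda
# initialize it as an LVM physical volume and add it to the cl volume group
pvcreate /dev/sda3
vgextend cl /dev/sda3
After that, the lvextend and xfs_growfs steps described below would proceed unchanged.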
Now my sda2 partition has clearly increased in size, but the LVM layer (and the filesystem on top of it) is still unaware of the change, as you can see below.
Code:
[root@vps ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 395M 0 395M 0% /dev
tmpfs tmpfs 411M 0 411M 0% /dev/shm
tmpfs tmpfs 411M 27M 385M 7% /run
tmpfs tmpfs 411M 0 411M 0% /sys/fs/cgroup
/dev/mapper/cl-root xfs 8.9G 7.0G 1.9G 79% /
/dev/sda1 ext4 976M 236M 674M 26% /boot
tmpfs tmpfs 83M 0 83M 0% /run/user/1000
[root@vps ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 cl lvm2 a-- <10.00g 0
[root@vps ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name cl
PV Size <10.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 2559
Free PE 0
Allocated PE 2559
PV UUID h5d1R4-zak8-p3sG-a63K-TGOz-GG6o-AOvixe
At this stage, I did a cautionary reboot just in case, which also made sure the kernel picked up the new partition table...
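If a reboot is not convenient on a remote box, the kernel can usually be told to re-read the partition table in place. This is a hedged alternative I did not actually use here, and it may still fail (leaving a reboot as the only option) if the tools refuse to touch an in-use disk:
Code:
# ask the kernel to re-read the whole partition table
partprobe /dev/sda
# or update the kernel's record of a single partition's size (util-linux)
partx -u /dev/sda2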
Resizing the LVM partition itself.
After enlarging the partition with cfdisk, it is now time to expand the PV (physical volume) on /dev/sda2:
Code:
[root@vps ~]# pvresize /dev/sda2
Physical volume "/dev/sda2" changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized
Now the PV and the VG have increased in size, as the commands below attest:
Code:
[root@vps ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 cl lvm2 a-- <35.00g 25.00g
[root@vps ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name cl
PV Size <35.00 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 8959
Free PE 6400
Allocated PE 2559
PV UUID h5d1R4-zak8-p3sG-a63K-TGOz-GG6o-AOvixe
[root@vps ~]# vgs
VG #PV #LV #SN Attr VSize VFree
cl 1 2 0 wz--n- <35.00g 25.00g
[root@vps ~]# vgdisplay cl
--- Volume group ---
VG Name cl
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <35.00 GiB
PE Size 4.00 MiB
Total PE 8959
Alloc PE / Size 2559 / <10.00 GiB
Free PE / Size 6400 / 25.00 GiB
VG UUID EyAmMV-FCWC-BUXd-uhjn-v5hs-ebbG-rZraDu
But notice that the filesystem is still unaware of those changes:
Code:
[root@vps ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 395M 0 395M 0% /dev
tmpfs 411M 0 411M 0% /dev/shm
tmpfs 411M 5.6M 406M 2% /run
tmpfs 411M 0 411M 0% /sys/fs/cgroup
/dev/mapper/cl-root 8.9G 7.0G 1.9G 79% /
/dev/sda1 976M 236M 674M 26% /boot
tmpfs 83M 0 83M 0% /run/user/1000
Now we'll focus on the logical volumes. Before any change, we had:
Code:
[root@vps ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root cl -wi-ao---- 8.89g
swap cl -wi-ao---- 1.10g
[root@vps ~]# lvdisplay /dev/cl/root
--- Logical volume ---
LV Path /dev/cl/root
LV Name root
VG Name cl
LV UUID BMdzi7-Wlr3-GnDl-LpW8-paPf-CecQ-xVIpxb
LV Write Access read/write
LV Creation host, time localhost, 2019-12-07 11:36:34 +0100
LV Status available
# open 1
LV Size 8.89 GiB
Current LE 2277
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
The command to extend the root logical volume to the maximum available space is:
Code:
[root@vps ~]# lvextend /dev/mapper/cl-root -l+100%FREE
Size of logical volume cl/root changed from 8.89 GiB (2277 extents) to 33.89 GiB (8677 extents).
Logical volume cl/root successfully resized.
If we rerun the same queries as above, they now account for the size change in the cl/root volume:
Code:
[root@vps ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root cl -wi-ao---- 33.89g
swap cl -wi-ao---- 1.10g
[root@vps ~]# lvdisplay /dev/cl/root
--- Logical volume ---
LV Path /dev/cl/root
LV Name root
VG Name cl
LV UUID BMdzi7-Wlr3-GnDl-LpW8-paPf-CecQ-xVIpxb
LV Write Access read/write
LV Creation host, time localhost, 2019-12-07 11:36:34 +0100
LV Status available
# open 1
LV Size 33.89 GiB
Current LE 8677
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
Now, you should know that even at this point the filesystem size is still the same; the proof:
Code:
[root@vps ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 395M 0 395M 0% /dev
tmpfs 411M 0 411M 0% /dev/shm
tmpfs 411M 5.6M 406M 2% /run
tmpfs 411M 0 411M 0% /sys/fs/cgroup
/dev/mapper/cl-root 8.9G 7.0G 1.9G 79% /
/dev/sda1 976M 236M 674M 26% /boot
tmpfs 83M 0 83M 0% /run/user/1000
That's because, to effectively grow our filesystem, we had to get that free space all the way up to the logical volume hosting our LVM root partition, and then issue the following command, which is specific to the XFS format.
Code:
[root@vps ~]# xfs_growfs /
meta-data=/dev/mapper/cl-root isize=512 agcount=4, agsize=582912 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=2331648, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 2331648 to 8885248
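Two side notes, neither of which I used here: xfs_growfs is specific to XFS, so on an ext4 root the last step would be resize2fs instead, and lvextend can also grow the filesystem for you in the same command via its -r/--resizefs flag:
Code:
# ext4 equivalent of the XFS step above (not applicable here, my root is XFS)
resize2fs /dev/mapper/cl-root
# or do the LV extension and the filesystem growth in one go
lvextend -r -l +100%FREE /dev/mapper/cl-root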
NOW, we FINALLY got that free space we needed:
Code:
[root@vps ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 395M 0 395M 0% /dev
tmpfs tmpfs 411M 0 411M 0% /dev/shm
tmpfs tmpfs 411M 5.6M 406M 2% /run
tmpfs tmpfs 411M 0 411M 0% /sys/fs/cgroup
/dev/mapper/cl-root xfs 34G 7.2G 27G 22% /
/dev/sda1 ext4 976M 236M 674M 26% /boot
tmpfs tmpfs 83M 0 83M 0% /run/user/1000
Mission accomplished.
PS: I should say that there is a tool that abstracts away all of this LVM resizing logic. I've never used it, but I should mention its existence for the newbies who don't like this mess of LVM commands. It's system-storage-manager (install it under that name on RH-based systems), which provides ssm as the binary.
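For the curious, this is roughly what the ssm route might look like; I haven't tried it myself, so treat the exact resize syntax as an assumption to check against the ssm man page:
Code:
# install the tool on a RHEL/CentOS 8 system
dnf install system-storage-manager
# show the current pools/volumes, then grow the root LV and its filesystem
ssm list
ssm resize -s +25G /dev/cl/root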