How to Remotely Increase Your VPS Disk Space
#1
This is a report of the process I went through to resize my LVM root partition after @Neoon alerted me that an extra 25GB had been allocated on top of my original disk space (i.e. 10GB). I'll state all the relevant information for this process and the series of commands I executed to claim the newly allocated 25GB of free space.

I'm pretty sure there are other ways to do this, but I've just used the routines I'm used to, this time on a running, out-of-reach/remote CentOS 8 system. Thus, what follows assumes familiarity with the LVM way of managing disk storage (in this sense, this is not a tutorial, just a HowTo).

The Original Situation:
First a summary of my storage structure:
[root@vps ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0   36G  0 disk
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0   10G  0 part
 ├─cl-root 253:0    0  8.9G  0 lvm  /
 └─cl-swap 253:1    0  1.1G  0 lvm  [SWAP]
sr0          11:0    1  343M  0 rom  

By the time I asked for a disk space increase, my LVM-based filesystem looked like this:
[root@vps ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  395M     0  395M   0% /dev
tmpfs               tmpfs     411M     0  411M   0% /dev/shm
tmpfs               tmpfs     411M   11M  400M   3% /run
tmpfs               tmpfs     411M     0  411M   0% /sys/fs/cgroup
/dev/mapper/cl-root xfs       8.9G  7.0G  2.0G  79% /
/dev/sda1           ext4      976M  236M  674M  26% /boot
tmpfs               tmpfs      83M     0   83M   0% /run/user/1000
Note that my root partition uses the XFS filesystem.

Because I've still got other stuff to deploy in there, the need for more disk space felt urgent. @Neoon generously accepted my request for a 25GB increase, which showed up in the fdisk listing of the disk's partitions:
[root@vps ~]# fdisk -l /dev/sda
Disk /dev/sda: 36 GiB, 38654705664 bytes, 75497472 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf1f62747

Device     Boot   Start      End  Sectors Size Id Type
/dev/sda1  *       2048  2099199  2097152   1G 83 Linux
/dev/sda2       2099200 23068671 20969472  10G 8e Linux LVM

The output indicates that my storage device (/dev/sda) now totals 36 GiB, which means there are roughly 25GB of unallocated space waiting to be claimed.
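
As a side note, parted can also report that unallocated space directly, in case you prefer it over doing the maths from fdisk's output (just an alternative check, not something I needed here):
[root@vps ~]# parted /dev/sda unit GiB print free    # the 'Free Space' line should show the ~25GiB gap after sda2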

Claiming the Free Space into Our LVM Device.
The next step is to either add another partition (/dev/sda3), which is the safer approach, OR increase the size of /dev/sda2, which is riskier when done on a running system. Well, I went for the second (of course) and opted for the cfdisk utility to do the job. It went like a charm, and this is the result (a scripted alternative is sketched right after the output):
[root@vps ~]# fdisk -l /dev/sda
Disk /dev/sda: 36 GiB, 38654705664 bytes, 75497472 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf1f62747

Device     Boot   Start      End  Sectors Size Id Type
/dev/sda1  *       2048  2099199  2097152   1G 83 Linux
/dev/sda2       2099200 75497471 73398272  35G 8e Linux LVM
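
As an aside, the same partition enlargement can be scripted instead of done interactively. I went with cfdisk, so take this only as a sketch of the alternative; it assumes the growpart tool (from the cloud-utils-growpart package on CentOS 8) is available:
[root@vps ~]# dnf install -y cloud-utils-growpart
[root@vps ~]# growpart /dev/sda 2    # grow partition 2 of /dev/sda into the adjacent free space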

Now, my sda2 device has clearly increased in size but the LVM-based filesystem is still unaware of the change, as you can see below.
[root@vps ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  395M     0  395M   0% /dev
tmpfs               tmpfs     411M     0  411M   0% /dev/shm
tmpfs               tmpfs     411M   27M  385M   7% /run
tmpfs               tmpfs     411M     0  411M   0% /sys/fs/cgroup
/dev/mapper/cl-root xfs       8.9G  7.0G  1.9G  79% /
/dev/sda1           ext4      976M  236M  674M  26% /boot
tmpfs               tmpfs      83M     0   83M   0% /run/user/1000

[root@vps ~]# pvs
 PV         VG Fmt  Attr PSize   PFree
 /dev/sda2  cl lvm2 a--  <10.00g    0

[root@vps ~]# pvdisplay
 --- Physical volume ---
 PV Name               /dev/sda2
 VG Name               cl
 PV Size               <10.00 GiB / not usable 3.00 MiB
 Allocatable           yes (but full)
 PE Size               4.00 MiB
 Total PE              2559
 Free PE               0
 Allocated PE          2559
 PV UUID               h5d1R4-zak8-p3sG-a63K-TGOz-GG6o-AOvixe

At this stage, I had to do a cautionary reboot just in case!...
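
(For what it's worth, a full reboot may not be strictly needed: the kernel can usually be told to re-read the resized partition in place. I rebooted anyway, so treat the following as an untested alternative:)
[root@vps ~]# partx -u /dev/sda    # ask the kernel to update its view of sda's resized partitions
[root@vps ~]# lsblk /dev/sda       # sda2 should now report 35G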

Resizing the LVM partition itself.
After enlarging the partition with cfdisk, now is the time to expand the PV (physical volume) on /dev/sda2:
[root@vps ~]# pvresize /dev/sda2
 Physical volume "/dev/sda2" changed
 1 physical volume(s) resized or updated / 0 physical volume(s) not resized

Now the PV and the VG have increased in size, as the commands below attest:
[root@vps ~]# pvs
 PV         VG Fmt  Attr PSize   PFree
 /dev/sda2  cl lvm2 a--  <35.00g 25.00g

[root@vps ~]# pvdisplay
 --- Physical volume ---
 PV Name               /dev/sda2
 VG Name               cl
 PV Size               <35.00 GiB / not usable 2.00 MiB
 Allocatable           yes
 PE Size               4.00 MiB
 Total PE              8959
 Free PE               6400
 Allocated PE          2559
 PV UUID               h5d1R4-zak8-p3sG-a63K-TGOz-GG6o-AOvixe
 
[root@vps ~]# vgs
 VG #PV #LV #SN Attr   VSize   VFree
 cl   1   2   0 wz--n- <35.00g 25.00g

[root@vps ~]# vgdisplay cl
 --- Volume group ---
 VG Name               cl
 System ID            
 Format                lvm2
 Metadata Areas        1
 Metadata Sequence No  4
 VG Access             read/write
 VG Status             resizable
 MAX LV                0
 Cur LV                2
 Open LV               2
 Max PV                0
 Cur PV                1
 Act PV                1
 VG Size               <35.00 GiB
 PE Size               4.00 MiB
 Total PE              8959
 Alloc PE / Size       2559 / <10.00 GiB
 Free  PE / Size       6400 / 25.00 GiB
 VG UUID               EyAmMV-FCWC-BUXd-uhjn-v5hs-ebbG-rZraDu

But notice that the filesystem is still unaware of those changes:
[root@vps ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             395M     0  395M   0% /dev
tmpfs                411M     0  411M   0% /dev/shm
tmpfs                411M  5.6M  406M   2% /run
tmpfs                411M     0  411M   0% /sys/fs/cgroup
/dev/mapper/cl-root  8.9G  7.0G  1.9G  79% /
/dev/sda1            976M  236M  674M  26% /boot
tmpfs                 83M     0   83M   0% /run/user/1000

Now we'll focus on the logical volumes. Before any change we had:
[root@vps ~]# lvs
 LV   VG Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
 root cl -wi-ao---- 8.89g                                                    
 swap cl -wi-ao---- 1.10g                                                    
[root@vps ~]# lvdisplay /dev/cl/root
 --- Logical volume ---
 LV Path                /dev/cl/root
 LV Name                root
 VG Name                cl
 LV UUID                BMdzi7-Wlr3-GnDl-LpW8-paPf-CecQ-xVIpxb
 LV Write Access        read/write
 LV Creation host, time localhost, 2019-12-07 11:36:34 +0100
 LV Status              available
 # open                 1
 LV Size                8.89 GiB
 Current LE             2277
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     8192
 Block device           253:0

The command to extend it to the maximum available space is:
[root@vps ~]# lvextend /dev/mapper/cl-root -l+100%FREE
 Size of logical volume cl/root changed from 8.89 GiB (2277 extents) to 33.89 GiB (8677 extents).
 Logical volume cl/root successfully resized.

If we rerun the same commands as above, we can see the size change in the cl/root volume:
[root@vps ~]# lvs
 LV   VG Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
 root cl -wi-ao---- 33.89g                                                    
 swap cl -wi-ao----  1.10g                                                    
[root@vps ~]# lvdisplay /dev/cl/root
 --- Logical volume ---
 LV Path                /dev/cl/root
 LV Name                root
 VG Name                cl
 LV UUID                BMdzi7-Wlr3-GnDl-LpW8-paPf-CecQ-xVIpxb
 LV Write Access        read/write
 LV Creation host, time localhost, 2019-12-07 11:36:34 +0100
 LV Status              available
 # open                 1
 LV Size                33.89 GiB
 Current LE             8677
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     8192
 Block device           253:0

Now, you should know that even at this point the filesystem is still the same size; the proof:
[root@vps ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             395M     0  395M   0% /dev
tmpfs                411M     0  411M   0% /dev/shm
tmpfs                411M  5.6M  406M   2% /run
tmpfs                411M     0  411M   0% /sys/fs/cgroup
/dev/mapper/cl-root  8.9G  7.0G  1.9G  79% /
/dev/sda1            976M  236M  674M  26% /boot
tmpfs                 83M     0   83M   0% /run/user/1000

That's because, to effectively grow our filesystem, we had to get that free space all the way up to the logical volume hosting our LVM root partition, and then issue the following command, which is specific to the XFS format.
[root@vps ~]# xfs_growfs /
meta-data=/dev/mapper/cl-root    isize=512    agcount=4, agsize=582912 blks
        =                       sectsz=512   attr=2, projid32bit=1
        =                       crc=1        finobt=1, sparse=1, rmapbt=0
        =                       reflink=1
data     =                       bsize=4096   blocks=2331648, imaxpct=25
        =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
        =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2331648 to 8885248

NOW, we FINALLY got that free space we needed:
[root@vps ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  395M     0  395M   0% /dev
tmpfs               tmpfs     411M     0  411M   0% /dev/shm
tmpfs               tmpfs     411M  5.6M  406M   2% /run
tmpfs               tmpfs     411M     0  411M   0% /sys/fs/cgroup
/dev/mapper/cl-root xfs        34G  7.2G   27G  22% /
/dev/sda1           ext4      976M  236M  674M  26% /boot
tmpfs               tmpfs      83M     0   83M   0% /run/user/1000

Mission accomplished.
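
As an aside, the last two steps can be merged: lvextend has a -r (--resizefs) option that grows the filesystem right after extending the logical volume. And had my root filesystem been Ext4 instead of XFS, the growing step would have used resize2fs rather than xfs_growfs. I used neither here; this is just for reference:
[root@vps ~]# lvextend -r -l +100%FREE /dev/mapper/cl-root    # extend the LV and grow its filesystem in one go
[root@vps ~]# resize2fs /dev/mapper/cl-root                   # Ext4 equivalent of xfs_growfs (not applicable to my XFS root)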

PS: I should say that there is a tool that abstracts away all the LVM resizing logic. I've never used it, but I should mention its existence for the newbies who don't like this mess of LVM commands. It's system-storage-manager (install it under that name on RH-based systems), which has ssm as its binary.
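
Never having run it, I can only guess at how the same resize would look with ssm; the following is a sketch based on its documentation (the resize command is supposed to grow the filesystem too):
[root@vps ~]# dnf install -y system-storage-manager
[root@vps ~]# ssm list                          # overview of devices, pools (VGs) and volumes (LVs)
[root@vps ~]# ssm resize -s +25G /dev/cl/root   # extend the root LV by 25G (and, per the docs, its filesystem)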
VirMach's Buffalo_VPS-9 Holder (Dec. 20 - July 21)
microLXC's Container Holder (july 20 - ?)
VirMach's Phoenix_VPS-9 Holder (Apr. 20 - June 20)
NanoKVM's NAT-VPS Holder (jan. 20 - ?)
#2
Thanks for taking the time to share all the steps. I have a question for you: why did you choose xfs for root in this case? What are the benefits you were aiming at, and are you getting them? I mean, I want to know the impact of xfs in your case instead of ext4.

Personally, I never use lvm myself. Almost never is better, I think.
Sincere Thanks to VirMach for my VPS9. Also many thanks to Shadow Hosting and cubedata for the experiences I had with their VPSs.
#3
Excellent tutorial @fChk. I myself use this method of disk space increase whenever I need to. Glad that you posted this tutorial for the benefit of everyone, and I still don't know why I haven't posted a similar one to date.

Well, I know this is quite a time-consuming and complex process for increasing disk space, and I'm curious whether someone might have a better method of doing it; I'd love to know alternate ways of doing this as well.

About the tutorial, I liked the consistency with which you've written it, explaining everywhere what is used and when it is to be used. Moreover, the detailed presentation of outputs and their relevant explanation is quite appealing to me. Overall, you did a great job in writing this tutorial.

Regards,
Sayan Bhattacharyya,

Heartiest thanks to Post4VPS and Virmach for my wonderful VPS 9!
#4
@sohamb03
Thanks for the kind words :-)

(01-24-2020, 04:20 PM)rudra Wrote: Thanks for taking the time to share all the steps.
The pleasure is mine :-)

(01-24-2020, 04:20 PM)rudra Wrote: why did you choose xfs for root in this case? What are the benefits you were aiming at, and are you getting them? I mean, I want to know the impact of xfs in your case instead of ext4.
Actually, I didn't choose the filesystem type[1] and, most probably, neither did @Neoon when setting up the VPS. It's just the default filesystem used by CentOS since v7. In Fedora, the server edition has used XFS by default since v22.

It basically means that Red Hat has opted for XFS in their server editions (RHEL, CentOS and Fedora) since 2015. The why, and the differences between XFS and Ext4, can be found in a lot more detail in the 'How to Choose Your Red Hat Enterprise Linux File System' knowledgebase article. I'm sure it will answer your questions :-)

When I reinstalled my Fedora Server 25 back in 2016, it came as a shock when I realized that the filesystem used was XFS (I did let the Anaconda installer do the layout automatically.) But I have really never regretted it. Prior to that re-installation I had always just upgraded my Fedora, all the way back from Fedora 12, so I had never been aware of the transition to XFS.


(01-24-2020, 04:20 PM)rudra Wrote: Personally, I never use lvm myself. Almost never is better, I think.
Most (if not all) modern distributions use LVM (Logical Volume Management) by default, and Fedora was the first distribution to introduce it. It did take time to get the hang of its logic but, like everything else, we get used to it and come to acknowledge its strength, namely that LVM allows for very flexible disk space management; this thread is a testimony to that.

------------------
[1]- If I had had to select the filesystem type, I would have opted for XFS too.
VirMach's Buffalo_VPS-9 Holder (Dec. 20 - July 21)
microLXC's Container Holder (july 20 - ?)
VirMach's Phoenix_VPS-9 Holder (Apr. 20 - June 20)
NanoKVM's NAT-VPS Holder (jan. 20 - ?)
#5
Great tutorial. Before this tutorial I always used the GParted GUI for doing anything with partitions, never the CLI. I don't use lvm myself.

I have a disk space issue, but it isn't related to lvm and these commands; it is still a great tutorial though.
Terminal
humanpuff69@FPAX:~$ Thanks To Shadow Hosting And Post4VPS for VPS 5
#6
fChk

I skimmed through the links and I think they confirmed what I had remembered: xfs is very good on high-throughput systems with big files, multiple threads for read/write, and high MBps and IOPS. But as we are usually on a VPS with one or two cores and high contention for resources, I expected ext4 to be better or at least equal, i.e. that I was not losing any benefits. I don't have any benchmarks that I can point to or show you, so basically this is totally a guess on my part.

Also, thanks for egging me on to reread about lvm. It seems a pretty impressive list of features, but I think I will refrain from using it on a VPS too, because I like a minimal approach and I don't need to cram any more virtual layers in there. So yes, lvm is pretty great if and when one needs it, but I am not sure all those features matter for a user running an isolated VPS for simple projects... I'm talking about me.

Thanks for the tutorial and links.
Sincere Thanks to VirMach for my VPS9. Also many thanks to Shadow Hosting and cubedata for the experiences I had with their VPSs.
#7
(01-24-2020, 06:27 PM)sohamb03 Wrote: Well, I know this is quite a time-consuming and complex process for increasing disk space, and I'm curious whether someone might have a better method of doing it; I'd love to know alternate ways of doing this as well.

I don't think it's time-consuming at all, but it's definitely error-prone. That's why system-storage-manager (ssm) - mentioned in my OP - was developed as a sugar layer over the native LVM commands.

I didn't use SSM because I've never used it before, so I can't trust it for an online job, but it definitely cuts down the verbosity of LVM commands. Thus, I would recommend you watch this YouTube demo on "How To Manage Linux Storage Using System Storage Manager SSM With LVM".

Then there is GParted, but it needs a GUI. @Neoon's setup allows booting into GParted and using VNC, but I didn't bother with that alternative at all. Nothing matches typing the commands yourself and seeing their immediate result... Yes, cumbersome BUT effective.

(01-25-2020, 04:07 PM)rudra Wrote: I skimmed through the links and I think they confirmed what I had remembered: xfs is very good on high-throughput systems with big files, multiple threads for read/write, and high MBps and IOPS. But as we are usually on a VPS with one or two cores and high contention for resources, I expected ext4 to be better or at least equal, i.e. that I was not losing any benefits. I don't have any benchmarks that I can point to or show you, so basically this is totally a guess on my part.
You don't need any benchmarks to convince me that Ext4 is more suitable for the hardware environment that the VPS is in :-) I totally agree with you on that.

I don't think, in my own situation either, it would make much of a difference whether the filesystem is XFS or Ext4. BUT I would nevertheless lean towards XFS, as it has saved me at least twice since I started using it (2016), from situations where I was starting to think my data was gone!.. It's just this thing that we - Humans - call TRUST :-)
VirMach's Buffalo_VPS-9 Holder (Dec. 20 - July 21)
microLXC's Container Holder (july 20 - ?)
VirMach's Phoenix_VPS-9 Holder (Apr. 20 - June 20)
NanoKVM's NAT-VPS Holder (jan. 20 - ?)
#8
@fChk Actually that was exactly my point. I feel I might've been a little ambiguous in the choice of words, largely because I'd posted the reply just before leaving for school. I meant time-consuming in the sense that the process is prone to errors, and if someone not so experienced tries to use this method and gets errors, he's gonna be having a hard time fixing those.

Also, I myself have never used SSM; the LVM commands serve my purpose. And yeah, it's actually a sugar layer on LVM, making the native commands easier to use.

I was quite interested in knowing if someone has another method of achieving the same thing which might be easier and more beginner-friendly. Let's see if someone comes up with other ways.

Regards,
Sayan Bhattacharyya,

Heartiest thanks to Post4VPS and Virmach for my wonderful VPS 9!
#9
Great tutorial; I am sure it will be of use to many people in the community. Personally, I thought the sponsors themselves allocated the extra storage.
Thanks to ShadowHosting and Post4VPS for my VPS 5!

