It is not a good idea to partition disks with
fdisk; it is better to let LVM manage whole physical disks. If the disks were not wholly owned by LVM, the following procedure would fail and a disk could not be "grown" in place.
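For reference, this is a sketch of how such a whole-disk physical volume is set up in the first place; no partition table is created on the disk, it is handed to LVM as-is (device and volume-group names taken from this article):

```
# pvcreate /dev/sdd
# vgcreate soa_vg /dev/sdd
```

Because LVM owns the raw device, there is no partition table to edit later when the underlying disk grows.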
# pvs
  PV         VG      Fmt  Attr PSize  PFree
  /dev/sda2  vg_sys  lvm2 a--  69.51g 6.91g
  /dev/sdb   vg_u01  lvm2 a--  30.00g 10.00g
  /dev/sdc   vg_sshi lvm2 a--  30.00g 10.00g
  /dev/sdd   soa_vg  lvm2 a--  16.00g 1020.00m
There is about 1 GB left in
soa_vg, but the application owner demands 15 GB more. This request is easy to address as long as the disks are not partitioned with
fdisk, which is the case here. Using the VMware console, we "grow" the appropriate virtual disk by the additional 15 GB. Next, the operating system must be instructed to re-scan its disks, which is achieved with these steps.
# cd /sys/class/scsi_disk
# for i in `ls`; do echo "1" > $i/device/rescan; done
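The one-liner above works, but iterating over `ls` output is fragile. A glob-based equivalent is sketched here against a scratch directory standing in for /sys/class/scsi_disk (the real path requires root; the device names below are made up for illustration):

```shell
# Scratch tree mimicking /sys/class/scsi_disk with two fake SCSI devices.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0:0:0:0/device" "$sysfs/0:0:1:0/device"

# Write "1" to every device's rescan attribute, as the loop in the text does,
# but using a glob instead of parsing `ls` output.
for dev in "$sysfs"/*; do
    echo 1 > "$dev/device/rescan"
done
```

Against the real sysfs tree, writing 1 to each device's rescan attribute makes the kernel re-read that disk's capacity.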
Now, tell LVM to resize the physical volume so it picks up the new capacity.
# pvresize -v /dev/sdd
  DEGRADED MODE. Incomplete RAID LVs will be processed.
    Using physical volume(s) on command line
    Archiving volume group "soa_vg" metadata (seqno 4).
    Resizing volume "/dev/sdd" to 62912512 sectors.
    No change to size of physical volume /dev/sdd.
    Updating physical volume "/dev/sdd"
    Creating volume group backup "/etc/lvm/backup/soa_vg" (seqno 5).
    Physical volume "/dev/sdd" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
Were we successful?
# pvs
  PV         VG      Fmt  Attr PSize  PFree
  /dev/sda2  vg_sys  lvm2 a--  69.51g 6.91g
  /dev/sdb   vg_u01  lvm2 a--  30.00g 10.00g
  /dev/sdc   vg_sshi lvm2 a--  30.00g 10.00g
  /dev/sdd   soa_vg  lvm2 a--  30.00g 15.00g
Now, we can increase the size of the file systems in soa_vg.
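With the extra 15 GB free in the volume group, the remaining steps are to extend the logical volume and then grow the file system on it. A sketch, assuming a hypothetical logical volume lv_soa carrying an ext4 file system (the LV name and file-system type are assumptions, not shown in the output above):

```
# lvextend -L +15G /dev/soa_vg/lv_soa
# resize2fs /dev/soa_vg/lv_soa
```

On recent LVM versions, `lvextend -r` (`--resizefs`) performs both steps in one command.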