I had to make a quick decision about what to do with a disk that was 97% full, and I decided to EXTEND the existing partition. Here's how I did it:
SYSTEM: Ubuntu 8.04 LTS
Default install with LVM.
All this was done as root
1.) Powered down the ZIMBRA virtual machine in my ESXi host and made a FULL BACKUP.
2.) Edited the virtual machine settings and increased the DISK SIZE by 200 GB.
3.) Powered ON the virtual machine.
4.) Identified the device name, which in my case was /dev/sda, and looked up its size:
# fdisk -l
Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0006f39d
Device Boot Start End Blocks Id System
/dev/sda1 * 1 31 248976 83 Linux
/dev/sda2 32 32635 261891630 5 Extended
/dev/sda5 32 32635 261891598+ 8e Linux LVM
5.) Then I created a new primary partition with fdisk /dev/sda:
- pressed p to print the partition table and identify existing partitions (there were sda1, sda2 and sda5)
- pressed n to create a new partition
- pressed p for primary
- pressed 3 for the partition number (remember, sda1 and sda2 were already there, so sda3 is next in my case)
- pressed ENTER twice to accept the suggested Start and End positions
- pressed w to write the changes to the partition table
NOTE: Don't panic when you get "WARNING: Re-reading the partition table failed with error 16: Device or resource busy."
This is normal; the kernel is still using the old partition table, so you only need to reboot the machine.
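For reference, the interactive dialog above can also be scripted by feeding the same keystrokes to fdisk on stdin. This is a minimal sketch shown as a dry run that only stores and prints the keystrokes; to actually apply them (as root, and only after double-checking the device name) you would pipe them into fdisk /dev/sda.

```shell
# The keystroke sequence from step 5, one per line (the two empty
# lines are the two ENTERs accepting the suggested Start and End).
keys='p
n
p
3


w
'
# Dry run: just print the sequence. To apply for real (DANGEROUS, run as
# root on the correct disk only):  printf '%s' "$keys" | fdisk /dev/sda
printf '%s' "$keys"
```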
6.) Restarted the virtual machine
7.) Verified (with fdisk -l) that the changes were written to the partition table and that the new partition is of type 83:
8.) Then I converted the new partition to a physical volume:
# pvcreate /dev/sda3
8.1.) Checked the name of my volume group (remember this name!):
# vgdisplay | grep "Name"
VG Name zimbra
9.) Extended the volume group with the new physical volume ("zimbra" being the name you grepped in the previous step):
# vgextend zimbra /dev/sda3
10.) Verify how many physical extents are available in the Volume Group:
# vgdisplay zimbra | grep "Free"
Free PE / Size 51224 / 200.11 GiB
Ok, now we have a bit more than 200 GB to extend with, but we'll stick to 200 GB to be on the safe side.
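As a quick sanity check, the "Free" size is just the free-extent count multiplied by the extent size, which is 4 MiB on a default LVM setup (confirm yours with vgdisplay zimbra | grep "PE Size"). A sketch of the arithmetic:

```shell
free_pe=51224   # "Free PE" from the vgdisplay output above
pe_mib=4        # default PE size in MiB; verify on your own VG
free_gib=$(( free_pe * pe_mib / 1024 ))
echo "roughly ${free_gib} GiB free"   # -> roughly 200 GiB free
```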
10.1.) Get the name of your Logical Volume:
# lvdisplay | grep "Name"
LV Name /dev/zimbra/root
VG Name zimbra
LV Name /dev/zimbra/swap_1
VG Name zimbra
Here you see two Logical Volumes: root and swap_1. We want to extend the root volume and leave swap_1 unchanged.
11.) Now extend the Logical Volume (200G is the size we determined to be free in step 10):
# lvextend -L+200G /dev/zimbra/root
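If you would rather give root all the remaining free space instead of a fixed 200G, lvextend also accepts a percentage of the free extents. A sketch, printed here rather than executed since it changes live storage; note that -l +100%FREE assumes an LVM2 version with percentage support:

```shell
# Alternative to -L+200G: claim every free extent in the VG for the LV.
alt_cmd='lvextend -l +100%FREE /dev/zimbra/root'
echo "$alt_cmd"   # dry run: print only; run the command as root to apply
```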
12.) Now expand the ext3 filesystem online, to fill the Logical Volume:
# resize2fs /dev/zimbra/root
13.) See the new space available (e.g. with df -h):
14.) Finally, you need to REBOOT once again. But be aware that this might take HOURS to complete!
Upon reboot, fsck will be forced with the message:
/dev/mapper/zimbra-root primary superblock features different from backup, check forced.
This is normal. e2fsck forces a check when it notices the backup superblocks differ from the primary superblock, so that a valid backup is not corrupted before the primary superblock is copied over the backups.
In my case it is still running and looks like it will take 3-4 hours. Hope no "bad" words show up in the report.
**EDIT** fsck finished within 1 hour, reported correcting some minor errors (probably updating superblocks after the resize), forced one more REBOOT, and after that Zimbra started normally: no issues, performance OK. Also the disk space graphs in the Admin console updated accordingly. Kewl.
Seems we're back in business, with a smile on my face.
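For reference, steps 8 through 13 above boil down to a handful of commands. Here is a condensed dry-run sketch in which the run helper only prints each command instead of executing it; remove the echo to run it for real, as root, and only after a full backup. The device, VG and LV names are the ones from this post and will differ on your system.

```shell
#!/bin/sh
# Dry-run recap of steps 8-13: each command is printed, not executed.
run() { echo "# $*"; }

run pvcreate /dev/sda3                     # step 8: new physical volume
run vgextend zimbra /dev/sda3              # step 9: grow the volume group
run vgdisplay zimbra                       # step 10: check free extents
run lvextend -L+200G /dev/zimbra/root      # step 11: grow the logical volume
run resize2fs /dev/zimbra/root             # step 12: grow ext3 online
run df -h                                  # step 13: see the new space
```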