On Fri, 2009-09-04 at 14:26 +0200, Sylvain Beucler wrote:
> Hi,
>
> We're running out of disk space for /var/lib/vservers/, though there
> is free space on the other partition:
>
> /dev/mapper/vg--disk6-vservers
> 136G 134G 2.2G 99% /var/lib/vservers
> /dev/mapper/vg--disk2-bart
> 135G 80G 55G 60% /var/lib/vservers/bart
>
> What do we do?
(Next episode: Petzi has 4 brand new 250G disks instead of the old
150G)
Now we have:
petzi:~# lvs
  LV   VG       Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lisa vg-raid1 -wi-ao 142.00G
  bart vg-raid2 -wi-ao 120.00G
  ns9  vg-raid2 -wi-ao  10.00G
The vg-raid1 VG sits on the first RAID1 pair of disks (md1 = sda+sdb). The vg-raid2 VG sits on the second RAID1 pair (md2 = sdc+sdd):
petzi:~# vgs
  VG       #PV #LV #SN Attr   VSize   VFree
  vg-raid1   1   1   0 wz--n- 230.02G  88.02G
  vg-raid2   1   2   0 wz--n- 230.02G 100.02G
There's room to extend the LVs, but since shrinking them is painful, I suggest we only extend them when strictly needed, and in reasonable steps.
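For reference, growing an LV and its filesystem can be done online; a minimal sketch, assuming 'bart' carries an ext3 filesystem (which may not match our actual setup):

```shell
# Add 10G to the 'bart' LV, taken from vg-raid2's free extents
lvextend -L +10G /dev/vg-raid2/bart

# Grow the filesystem to fill the enlarged LV (ext3 supports online growth)
resize2fs /dev/vg-raid2/bart
```

Shrinking, by contrast, means unmounting, running fsck, shrinking the filesystem first and only then the LV, which is why growing conservatively is the safer habit.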
Right now the I/O separation ('lisa' on the first RAID, 'bart' on the second) is a bit crude, but it's still much better than before. I guess a better setup would be one LV per chroot (subversion, arch, download, etc.), then, with a few statistics, rebalancing two or three of them between vg-raid1 and vg-raid2 for optimal I/O usage. That's why I like to keep some comfortable free room on the VGs.
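A sketch of what that per-chroot layout could look like (the LV names and sizes below are hypothetical, just to illustrate splitting chroots across the two VGs):

```shell
# One LV per chroot, spread across both VGs to balance I/O
lvcreate -L 40G -n subversion vg-raid1
lvcreate -L 20G -n arch       vg-raid1
lvcreate -L 60G -n download   vg-raid2

# Watch per-device utilisation for a while before deciding what to move
iostat -x 5
```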
Oh and cool to see:
petzi:~# free -m
             total       used       free
Mem:          3967       3944         23
-/+ buffers/cache:        617       3350
Swap:         1945          0       1945
_______________________________________________
Project mailing list
[email protected]
https://mail.gna.org/listinfo/project