On Thu, Sep 29, 2005 at 11:54:42AM +0400, Al Nikolov wrote:
> I wonder, how this could happen.. /proc/mdstat consists of:
>
> md1 : active raid5 sdc2[0] sda2[6] sdb2[5] sdg2[4] sdf2[3] sde2[2] sdd2[1]
>       178240640 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
These blocks are 1 KiB, so the md device is 356481280 sectors long.
> raid5 {
> id = "7nexDh-trZu-7Sjy-XGsx-P7hn-6DkM-2WEEoR"
> seqno = 1
> status = ["RESIZEABLE", "READ", "WRITE"]
> system_id = "bilbo1085513760"
> extent_size = 8192 # 4 Megabytes
This is the number of 512-byte sectors per physical extent.
> physical_volumes {
>
> pv0 {
> id = "GbIjZa-z7WJ-ufBX-WR8B-sAq4-JHNU-xh1zee"
> device = "/dev/md1" # Hint only
>
> status = ["ALLOCATABLE"]
> pe_start = 8832
Offset of the first physical extent on the volume, in sectors.
> pe_count = 43563 # 170,168 Gigabytes
> }
> }
The PV metadata claims 43563 * 8192 + 8832 = 356876928 sectors, which is
395648 sectors more than the md device actually provides.
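To make the mismatch concrete, here is the arithmetic from the numbers above, plus the largest pe_count that would actually fit on the md device (a sketch; verify against your own metadata before editing anything):

```python
# All sizes in 512-byte sectors, taken from the mdstat and LVM metadata above.
md_sectors = 178240640 * 2   # mdstat reports 1 KiB blocks
pe_start = 8832              # offset of the first physical extent
extent_size = 8192           # sectors per extent (4 MiB)
pe_count = 43563             # what the metadata currently claims

pv_sectors = pe_count * extent_size + pe_start
print(pv_sectors - md_sectors)   # overshoot in sectors -> 395648

# Largest pe_count that still fits on the md device:
max_pe_count = (md_sectors - pe_start) // extent_size
print(max_pe_count)              # -> 43514
```

So the edited config should carry a pe_count of at most 43514.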
The only way I know to fix this is:
- vgchange -an $vg
- vgexport $vg
- vgcfgbackup $vg
- copy the group config backup and edit it to match the real size.
- vgcfgrestore -f $config $vg
- vgimport $vg
- vgchange -ay $vg
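The "edit it to match the real size" step is the only non-mechanical one. As a sketch, on a stand-in copy of the metadata backup (the path is a placeholder; the real file comes from vgcfgbackup, and the corrected value 43514 = (356481280 - 8832) / 8192 rounded down follows from the sizes above):

```shell
# Demo of editing pe_count in a metadata backup. /tmp/raid5.cfg is a
# placeholder; a real backup has many more fields around this line.
cfg=/tmp/raid5.cfg
printf 'pe_count = 43563\n' > "$cfg"   # stand-in for the oversized value
sed -i 's/pe_count = 43563/pe_count = 43514/' "$cfg"
cat "$cfg"
```

Double-check the arithmetic against your own mdstat and metadata before running vgcfgrestore with the edited file.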
Bastian
--
I'm a soldier, not a diplomat. I can only tell the truth.
-- Kirk, "Errand of Mercy", stardate 3198.9