On Friday June 15, [EMAIL PROTECTED] wrote:
> There appears to be a discrepancy between the true state of affairs on my
> RAID partitions and what df reports;
>
> [root /]# sfdisk -l /dev/hda
>
> Disk /dev/hda: 38792 cylinders, 16 heads, 63 sectors/track
> Units = cylinders of 516096 bytes, blocks of 1024 bytes, counting from 0
>
> Device Boot Start End #cyls #blocks Id System
> /dev/hda1 0+ 1523 1524- 768095+ fd Linux raid autodetect
> /dev/hda2 1524 1845 322 162288 5 Extended
> /dev/hda3 1846 2252 407 205128 fd Linux raid autodetect
> /dev/hda4 2253 38791 36539 18415656 fd Linux raid autodetect
> /dev/hda5 1524+ 1584 61- 30743+ 83 Linux
> /dev/hda6 1585+ 1845 261- 131543+ 82 Linux swap
>
> [root /]# df
> Filesystem     1k-blocks      Used Available Use% Mounted on
> /dev/md1          755920    666748     50772  93% /       WRONG
> /dev/md3          198313     13405    174656   7% /var    WRONG
> /dev/md4        18126088    118288  17087024   1% /home   WRONG
>
> These figures are clearly wrong. Can anyone suggest where I should start
> looking for an explanation?
How can figures be wrong? They are just figures.
What do you think is wrong about them??
Anyway, for a more useful response...
I assume that md[134] are RAID1 arrays, with one mirror on hda.
Let's take md1, which is made in part from hda1.
hda1 has 768095 1K blocks.
md/raid rounds this down to a multiple of 64K (768064), and then removes
the last 64K for the RAID superblock, leaving
768000 1K blocks.
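A quick sketch of that arithmetic, purely for illustration (the 64 below
is 64K expressed in 1K blocks; the variable names are made up here, not
anything from the md code):

    # Size of hda1 as reported by sfdisk, in 1K blocks.
    partition_blocks = 768095
    # md rounds the device size down to a multiple of 64K ...
    rounded = (partition_blocks // 64) * 64    # 768064
    # ... and reserves the last 64K for the RAID superblock.
    md_blocks = rounded - 64                   # 768000
    print(md_blocks)                           # prints 768000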
ext2fs uses some of this for its metadata, and reports the rest as the
total size of the filesystem (the 1k-blocks figure that df shows).
The overhead space comprises the superblocks, the block group
descriptors, the inode bitmaps, the block bitmaps, and the inode
tables.
This seems to add up to 12080K on this filesystem, about 1.6%.
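To see where the 1.6% comes from, here is a rough check against the df
output quoted above (again just illustrative arithmetic, not actual ext2
code):

    # Size of /dev/md1 in 1K blocks (from the calculation above).
    md_blocks = 768000
    # The "1k-blocks" total that df printed for /dev/md1.
    df_total = 755920
    overhead = md_blocks - df_total
    print(overhead)                                  # 12080 1K blocks of ext2 metadata
    print(round(100.0 * overhead / md_blocks, 1))    # about 1.6 (percent)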
NeilBrown