On Thu, Dec 31, 2009 at 06:16:35PM +0200, Denis Doroshenko wrote:

> hi,
> 
> this message may be a little too long; the most intriguing part is the
> difference between the sizes reported by the kernel (in dmesg) and by
> bioctl. any idea why bioctl reports a size 1 TB smaller?
> 
> i've got HP proliant dl140 with "Hewlett-Packard Smart Array" card in it.
> put a couple of 1.5 TB disks into RAID0 for testing.
> i think i saw it report the logical drive as 2.9 TB or something, which
> was expected.
> 
> the kernel (29 dec i386 snapshot) reports:
> 
> ciss0 at pci7 dev 8 function 0 "Hewlett-Packard Smart Array" rev 0x00:
> apic 8 int 16 (irq 7)
> ciss0: 1 LD, HW rev 0, FW 1.66/1.66
> scsibus0 at ciss0: 1 targets
> sd0 at scsibus0 targ 0 lun 0: <HP, LOGICAL VOLUME, 1.66> SCSI3 0/direct fixed
> sd0: 2861534MB, 512 bytes/sec, 5860422960 sec total
> 
> 5'860'422'960 sectors sounds very much like it.
> 
> bioctl says:
> 
> # bioctl ciss0
> Volume  Status               Size Device
> ciss0 0 Online      2199023255040 sd0     RAID0
>       0 Online      1500301910016 0:0.0   noencl <ATA     ST31500341AS    >
>       1 Online      1500301910016 0:1.0   noencl <ATA     ST31500341AS    >
> #
> 
> well, while the sizes of the physical disks are reported correctly, the
> size of sd0 is smaller: about 2 TB.

No idea what's going on here.
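That said, the number itself is suspicious: it is exactly (2^32 - 1) * 512 bytes, which smells like a 32-bit sector count somewhere in the bioctl path (an assumption on my part, not verified against the code):

```shell
# Sanity check: does bioctl's reported size match a 32-bit sector count
# of 512-byte sectors?
echo $(( (4294967296 - 1) * 512 ))   # prints 2199023255040, the exact size bioctl shows
```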

> 
> fdisk says:
> 
> # fdisk sd0
> Disk: sd0       geometry: 718189/255/32 [1565455664 Sectors]
> Offset: 0       Signature: 0xAA55
>             Starting         Ending         LBA Info:
>  #: id      C   H   S -      C   H   S [       start:        size ]
> -------------------------------------------------------------------------------
>  0: 00      0   0   0 -      0   0   0 [           0:           0 ] unused
>  1: 00      0   0   0 -      0   0   0 [           0:           0 ] unused
>  2: 00      0   0   0 -      0   0   0 [           0:           0 ] unused
> *3: A6      0   1  32 - 191844  39  26 [          63:  1565448251 ] OpenBSD
> #
> 
> so it is about 750 GB here. i wouldn't care about fdisk that much:
> LBA48 and possibly other stuff aside, the MBR apparently still uses 32
> bit fields to hold the start sector and the size
> (http://en.wikipedia.org/wiki/Master_boot_record). and the number
> fdisk gives looks very much like "5860422960 - 2^32", so the size
> fdisk reports is probably an overflowed value (clamping it at 2^32-1
> would have been closer to reality).
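The overflow hypothesis checks out numerically (shell arithmetic, assuming a plain wraparound at 2^32):

```shell
# fdisk's total looks like the real sector count wrapped at 2^32
echo $(( 5860422960 - 4294967296 ))   # prints 1565455664, exactly what fdisk shows
```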
> 
> disklabel seems to have the right number; however, for the OpenBSD area
> boundaries it still believes what fdisk says:
> 
> # disklabel sd0
> # /dev/rsd0c:
> type: SCSI
> disk: SCSI disk
> label: LOGICAL VOLUME
> flags:
> bytes/sector: 512
> sectors/track: 255
> tracks/cylinder: 511
> sectors/cylinder: 130305
> cylinders: 44974
> total sectors: 5860422960
> rpm: 3600
> interleave: 1
> boundstart: 63
> boundend: 1565448314
> drivedata: 0
> 
> 16 partitions:
> #                size           offset  fstype [fsize bsize  cpg]
>   a:           651525         12639585  4.2BSD   2048 16384    1 # /
>   b:         12639522               63    swap
>   c:       5860422960                0  unused
>   d:          2215185         13291110  4.2BSD   2048 16384    1 # /usr
>   e:          2215185         15506295  4.2BSD   2048 16384    1 # /var
> #

That is correct: initially, disklabel believes fdisk, which just isn't
capable of handling more than 2^32 sectors.
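Concretely, with 512-byte sectors a 32-bit sector count tops out at 2 TiB, which is why any MBR-partitioned disk caps out around there (a quick check, nothing fdisk-specific assumed):

```shell
# Maximum bytes addressable with a 32-bit sector count and 512-byte sectors
echo $(( 4294967296 * 512 ))   # prints 2199023255552 bytes, i.e. 2 TiB
```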

> 
> luckily enough, it allows us to say we want to use the whole disk ("*"
> for size rocks!), and there i have it:
> 
>   f:       5842701480         17721480  4.2BSD   8192 65536    1
> 
> newfs needs to be instructed to use the Enhanced Fast File System (FFS2),
> otherwise it gives a somewhat funny message:
> 
> # newfs /dev/rsd0f
> newfs: preposterous size 5842701480, max is 2147483647
> #
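For reference, that "max" is 2^31 - 1, i.e. FFS1's size is held in a signed 32-bit sector count; FFS2 has to be requested explicitly via newfs's -O option (the option is described in newfs(8); the device name is the one from this thread, so adjust to taste):

```shell
# FFS1's limit is a signed 32-bit sector count
echo $(( (1 << 31) - 1 ))   # prints 2147483647, the "max" newfs complains about

# Create an FFS2 filesystem explicitly (-O 2 selects FFS2);
# this of course needs the real device to run:
# newfs -O 2 /dev/rsd0f
```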
> 
> it could just say "the size is bigger than 2147483647, switching to
> FFS2" and go on. a little change to the parameters (freed 0.2 TB for
> me) and there it is, shining brightly:

I don't think an automatic switch is a good solution; imo people should
make a conscious decision to use ffs2. See below.

> 
> # df -h /mnt
> Filesystem     Size    Used   Avail Capacity  Mounted on
> /dev/sd0f      2.7T    8.0K    2.7T     0%    /mnt
> #
> 
> this is the first time i've mounted something bigger than 300 GB, so it is
> "wow" for me :-)
> thanks for your time!

There is a big caveat to using filesystems this large, see the faq: 
http://www.openbsd.org/faq/faq14.html#LargeDrive

There's one inaccuracy there: amd64 systems should be able to allocate
up to 8G to a process, but you are entering untested territory here.
It's better to stay on the safe side and not create filesystems that
are too large.

I have some code to estimate the amount of memory needed to run
fsck, which I could use at newfs time to warn against creating
filesystems we know you cannot fsck. But so far I haven't had the
time to verify that the guesses are correct.

        -Otto
