On 05/23/2012 01:04 PM, Alan McKay wrote:
Hey folks,

I have a Sun J4400 SAS1 disk array with 24 x 1T drives in it connected
to a Sunfire x2250 running RHEL5.8

I used 'arcconf' to create one big RAID60 out of all 24 drives (see below).

But when I mount it, it is way too small:
[root@solexa1 StorMan]# df -h /dev/sdb1
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1             186G   60M  176G   1% /mnt/J4400-1


Here is how I created it:

./arcconf create 1 logicaldrive name J4400-1-RAID60 max 60 0 0 0 1 0 2
0 3 0 4 0 5 0 6 0 7 0 8 0 9 0 10 0 11 0 12 0 13 0 14 0 15 0 16 0 17 0
18 0 19 0 20 0 21 0 22 0 23 noprompt
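
(The Channel,Device pairs passed above come from arcconf's physical-device listing, which can be dumped with something like the following; not part of the original post:

./arcconf getconfig 1 pd     # lists the physical drives with their Channel,Device IDs
)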

[root@solexa1 StorMan]# ./arcconf getconfig 1 ld
Controllers found: 1
----------------------------------------------------------------------
Logical device information
----------------------------------------------------------------------
Logical device number 0
    Logical device name                      : J4400-1-RAID60
    RAID level                               : 60 XOR
    Status of logical device                 : Impacted
    Size                                     : 19066880 MB
    Stripe-unit size                         : 256 KB
    Read-cache mode                          : Enabled
    Write-cache mode                         : Enabled (write-back)
    Write-cache setting                      : Enabled (write-back) when protected by battery
    Partitioned                              : Yes
    Protected by Hot-Spare                   : No
    Bootable                                 : Yes
    Failed stripes                           : No
    --------------------------------------------------------
    Logical device segment information
    --------------------------------------------------------
    Group 0, Segment 0                       : Present (0,0) 9QJ3ZAYQ
    Group 0, Segment 1                       : Present (0,1) 9QJ3ZP3Y
    Group 0, Segment 2                       : Present (0,2) 9QJ3X7GR
    Group 0, Segment 3                       : Present (0,3) 9QJ3XJQW
    Group 0, Segment 4                       : Present (0,4) 9QJ3TPK2
    Group 0, Segment 5                       : Present (0,5) 9QJ40PHP
    Group 0, Segment 6                       : Present (0,6) GTE002PBHJEDBE
    Group 0, Segment 7                       : Present (0,7) 9QJ3ZHE0
    Group 0, Segment 8                       : Present (0,8) 9QJ3Z053
    Group 0, Segment 9                       : Present (0,9) 9QJ3ZEX6
    Group 0, Segment 10                      : Present (0,10) 9QJ33XGG
    Group 0, Segment 11                      : Present (0,11) 9QJ3X88X
    Group 1, Segment 0                       : Present (0,12) 9QJ3YLR2
    Group 1, Segment 1                       : Present (0,13) GTE002PBHHNVZE
    Group 1, Segment 2                       : Present (0,14) 9QJ3ZGM2
    Group 1, Segment 3                       : Present (0,15) GTE002PBGP9VZE
    Group 1, Segment 4                       : Present (0,16) 9QJ3ZB4X
    Group 1, Segment 5                       : Present (0,17) 9QJ3ZAE0
    Group 1, Segment 6                       : Present (0,18) 9QJ3Y8C8
    Group 1, Segment 7                       : Present (0,19) GTE002PBH30GKE
    Group 1, Segment 8                       : Present (0,20) GTE002PAKXKDPE
    Group 1, Segment 9                       : Present (0,21) 9QJ3VXEL
    Group 1, Segment 10                      : Present (0,22) 9QJ3W4W6
    Group 1, Segment 11                      : Present (0,23) 9QJ3TPGR



Then I ran

sfdisk /dev/sdb <<EOF
,,L
EOF

to make one big partition on it (',,L' means default start, default size, partition type Linux).

[root@solexa1 StorMan]# sfdisk -l /dev/sdb

Disk /dev/sdb: 2430685 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

    Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdb1          0+  24540-  24541- 197124430   83  Linux
/dev/sdb2          0       -       0          0    0  Empty
/dev/sdb3          0       -       0          0    0  Empty
/dev/sdb4          0       -       0          0    0  Empty
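
A quick way to see the mismatch (not run in the original post) is to compare the kernel's idea of the raw device size with the partition size, e.g. with blockdev:

blockdev --getsize64 /dev/sdb     # raw logical drive; should report roughly 20 TB here
blockdev --getsize64 /dev/sdb1    # the partition sfdisk made; only about 200 GB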

And then I made an ext4 filesystem on that:

[root@solexa1 StorMan]# mke4fs /dev/sdb1
mke4fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
12320768 inodes, 49281107 blocks
2464055 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1504 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune4fs -c or -i to override.
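
For reference, the block count the filesystem ended up with can be read back from the superblock afterwards (a sketch, assuming the dumpe4fs tool that ships alongside mke4fs/tune4fs):

dumpe4fs -h /dev/sdb1 | grep -i 'block count'    # total blocks the filesystem thinks it has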






sfdisk's man page says "sfdisk doesn't understand the GUID Partition Table (GPT) format and it is not designed for large partitions. In these cases use the more advanced GNU parted(8)."

Your logical drive is roughly 19 TB, far past the 2 TiB that a DOS/MBR partition table can address (2^32 512-byte sectors), which is presumably why the partition wrapped around to ~188 GiB. I'd try putting a GPT label on it and then redoing the mkfs:

parted --script /dev/sdb mklabel gpt
parted --script /dev/sdb mkpart ext4 2048s 100%
mke4fs /dev/sdb1
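
To double-check before the mkfs, parted can print the new layout; and mke4fs could optionally be given RAID-aware layout hints (an untested sketch: stride = 256 KiB stripe unit / 4 KiB block = 64 blocks, stripe-width = 64 x 10 data disks per 12-drive RAID6 group = 640, assuming mke4fs accepts the same -E extended options as mke2fs):

parted --script /dev/sdb unit GB print                 # sdb1 should now cover roughly the full ~20000 GB
mke4fs -E stride=64,stripe-width=640 /dev/sdb1         # optional, in place of the plain mke4fs above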

Multiplying the block size by the block count from the mke4fs output (and converting to GiB) matches up with the 186G that df is reporting.
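
Spelled out as a quick shell check:

echo $(( 49281107 * 4096 / 1024 / 1024 / 1024 ))    # -> 187 (GiB), i.e. the partition, not the ~19 TB array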


Hugh

_______________________________________________
rhelv5-list mailing list
rhelv5-list@redhat.com
https://www.redhat.com/mailman/listinfo/rhelv5-list
