2012-05-15 19:17, casper....@oracle.com wrote:
> Your old release of Solaris (nearly three years old) doesn't support
> disks over 2TB, I would think.
>
> (A 3TB is 3E12, the 2TB limit is 2^41 and the difference is around 800GB)
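
The arithmetic in the quote checks out directly (a quick sketch; the nominal 3TB = 3E12 bytes figure is the quote's own assumption):

```shell
# Capacity left over past a 2^41-byte (2TiB) addressing wrap,
# assuming a nominal 3TB = 3e12-byte drive as in the quote above.
echo $(( 3000000000000 - (1 << 41) ))   # -> 800976744448, i.e. ~801GB
```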

While this was borne out by my initial experiments,
it seems that things are even weirder: as I wrote, I did
boot the Thumper into oi_151a3 yesterday, and it saw the
big disk as 2.73TB.

I made a GPT partition spanning the whole disk and booted
back into OpenSolaris SXCE snv_117. As I wrote, it still
sees the disk as being smaller, and it does so in the headers
of the fdisk and format programs. The partition is reported as
"EFI" by snv_117's fdisk, sized at "48725 cylinders of 32130
(512 byte) blocks", which computes to 801553536000 bytes,
roughly the full 3TB capacity modulo the 2^41-byte limit.
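
Multiplying out the cylinder geometry that fdisk reports confirms that figure (a quick check, using only the numbers quoted above):

```shell
# 48725 cylinders x 32130 blocks/cylinder x 512 bytes/block,
# as reported by snv_117's fdisk for the "EFI" partition.
echo $(( 48725 * 32130 * 512 ))   # -> 801553536000 bytes (~746.5GiB)
```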

However, when I drilled down into the partition/slice
table today, format complained a bit but saw the whole
disk. So I laid it out as 2.5TB and 250GB slices and
will give them a go as test pools, to see whether writing
to one corrupts the other.

If this works, I guess I should dd the GPT table over
to the new 3TB drives in the IDEA7 setup...
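
For what it's worth, the classic Solaris idiom for replicating a slice layout between same-sized disks is prtvtoc piped into fmthard, rather than a raw dd of the label sectors (a sketch only; c1t2d0 is the disk from this post, c1t3d0 a hypothetical target):

```shell
# Sketch: copy the slice layout from the labeled 3TB disk to a
# hypothetical same-size new disk c1t3d0. Writing the label through
# the driver avoids missing the backup GPT at the end of the disk,
# which a raw dd of the first sectors would not carry over.
prtvtoc /dev/rdsk/c1t2d0s2 | fmthard -s - /dev/rdsk/c1t3d0s2
```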

Format's complaints:
1) When opening the disk:
Error: can't open disk '/dev/rdsk/c1t2d0p0'.
No Solaris fdisk partition found.

Error: can't open disk '/dev/rdsk/c1t2d0p0'.
No Solaris fdisk partition found.

2) When labeling the disk:

partition> label
Ready to label disk, continue? y

no reserved partition found


Here's my new slice table (indeed no slice 8, unlike the old disks):

partition> p
Current partition table (unnamed):
Total disk sectors available: 5860516750 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector          Size          Last Sector
  0        usr    wm               256        2.50TB           5372126207
  1        usr    wm        5372126415       232.87GB          5860500366
  2 unassigned    wm                 0            0                    0
  3 unassigned    wm                 0            0                    0
  4 unassigned    wm                 0            0                    0
  5 unassigned    wm                 0            0                    0
  6        usr    wm        5860500367         8.00MB           5860516750

This table did get saved, and the test pools were created without a hiccup:

# zpool create test c1t2d0s0
# zpool create test2 c1t2d0s1

# zpool status

  pool: test
 state: ONLINE
 scrub: none requested

        NAME        STATE     READ WRITE CKSUM
        test        ONLINE       0     0     0
          c1t2d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: test2
 state: ONLINE
 scrub: none requested

        NAME        STATE     READ WRITE CKSUM
        test2       ONLINE       0     0     0
          c1t2d0s1  ONLINE       0     0     0

# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pond   10.2T  9.49T   724G    93%  ONLINE  -
rpool   232G   120G   112G    51%  ONLINE  -
test   2.50T  76.5K  2.50T     0%  ONLINE  -
test2   232G  76.5K   232G     0%  ONLINE  -

Now I'm writing data into the new test pools to see whether
any conflicts arise from snv_117's limited support of this disk size.
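
The check itself can be sketched roughly like this (my sketch, not from the thread; the file names and sizes are made up, and digest(1) is the native Solaris checksum tool):

```shell
# Sketch: fill the slice-0 pool, then write to the slice-1 pool and
# verify the first pool's data survived. If the old kernel wraps LBAs
# past 2TiB, writes to one pool would clobber blocks of the other,
# which the scrub's checksum verification (CKSUM column) would expose.
dd if=/dev/urandom of=/test/fill.bin bs=1024k count=4096     # ~4GB into pool "test"
digest -a md5 /test/fill.bin > /tmp/fill.md5.before
dd if=/dev/urandom of=/test2/fill.bin bs=1024k count=1024    # ~1GB into pool "test2"
zpool scrub test && zpool scrub test2                        # let ZFS verify all checksums
digest -a md5 /test/fill.bin > /tmp/fill.md5.after
cmp /tmp/fill.md5.before /tmp/fill.md5.after && echo "no overlap detected"
```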

zfs-discuss mailing list