Hi David,
It's a life-long curse to describe the format utility. Trust me. :-)
I think you want to relabel some disks from an EFI label to an SMI label
for use in your ZFS root pool, and you have overlapping slices on one
disk. I don't think ZFS would let you attach this disk.
To fix the overlapping slice problem and repartition this disk
so that you can use it in the root pool, review the steps here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
See the section "Replacing/Relabeling the Root Pool Disk".
Set the free hog partition to s0 so that all of your usable disk space
goes into s0, and zero out the rest of the slices as described. This
should fix the overlapping slice problem and remove the /usr slice.
Even though your disks are identical, you might compare their entries
in format's AVAILABLE DISK SELECTIONS list to confirm that the disk
manufacturer info is identical. On later Solaris releases (build 117,
I think) you can attach disks of the same general size without needing
the same geometry.
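For example, something like this (the device paths here are just
placeholders; yours will differ):

  # format
  Searching for disks...done

  AVAILABLE DISK SELECTIONS:
         0. c4t0d0 <DEFAULT cyl 19454 alt 2 hd 255 sec 63>
            /pci@0,0/pci1022,7458@2/disk@0,0
         1. c4t1d0 <DEFAULT cyl 19454 alt 2 hd 255 sec 63>
            /pci@0,0/pci1022,7458@2/disk@1,0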
If your disks contain an s0, then run the installgrub command against
s0 as directed. Also make sure that the BIOS setting matches the
primary boot disk.
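For example (substitute your actual boot slice):

  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0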
Thanks,
Cindy
On 02/17/10 21:32, David Dyer-Bennet wrote:
Since this seems to be a ubiquitous problem for people running ZFS, even
though it's really a general Solaris admin issue, I'm guessing the
expertise is actually here, so I'm asking here.
I found lots of online pages telling how to do it.
None of them were correct or complete. I think. I seem to have
accomplished it in a somewhat hackish fashion, possibly not cleanly, and
I'm now trying to really understand this (I've always found SunOS' idea
of overlapping partitions so insanely stupid that it turns my brain off,
and combining that with x86-style real disk partitions and calling them
both the same thing except when we don't has probably induced permanent
brain damage by this point).
First step: invoke format -e and use "label" to write an SMI label to
the disk. This part is sort-of documented, and if you let format
present you with the list of disks and choose one, everything works out.
What about the other syntax, where you specify the "disk" to format on
the command line? What are valid device files? Is it any file that ends
up pointing to some portion of the correct physical disk?
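For the record, what I ran was roughly this (with c4t1d0 standing in
for the actual disk):

  # format -e c4t1d0
  format> label
  [0] SMI Label
  [1] EFI Label
  Specify Label type[1]: 0
  Ready to label disk, continue? y
  format> quit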
But after this, it appears to be necessary to manually set up partitions
(slices). At least, without doing that, I couldn't attach the disk to
my zpool, which was my goal. Am I missing something?
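For reference, the attach I was attempting was along these lines
(device names as in my pool below):

  # zpool attach rpool c4t0d0s0 c9t0d0s0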
And, when manually setting up partitions, I have no idea if what I did
is right. Well, a bit of an idea; I know that installgrub did NOT
overwrite anything that a scrub detected, so that means I left enough
blank space somewhere. Not sure it's the right place, though. Did I
have to do this? Every way I tried to avoid this resulted in failure to
attach, but none of the instructions listed this step.
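(The check, for reference, was simply:

  # zpool scrub rpool
  # zpool status rpool

and waiting for the scrub to report no errors.)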
This is how format prints the partitions I created:
partition> p
Current partition table (original):
Total disk cylinders available: 19454 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       1 - 19453      149.02GB    (19453/0/0) 312512445
  1 unassigned    wm       0               0          (0/0/0)             0
  2     backup    wu       0 - 19453      149.03GB    (19454/0/0) 312528510
  3 unassigned    wm       0               0          (0/0/0)             0
  4 unassigned    wm       0               0          (0/0/0)             0
  5 unassigned    wm       0               0          (0/0/0)             0
  6 unassigned    wm       0               0          (0/0/0)             0
  7 unassigned    wm       0               0          (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 unassigned    wm       0               0          (0/0/0)             0
And here's the one on the other disk -- yikes, it looks like it ended up
with a completely different geometry! (these are two identical drives).
Current partition table (original):
Total disk cylinders available: 152615 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 152614     149.04GB    (152614/0/0) 312553472
  1       swap    wu       0               0           (0/0/0)             0
  2     backup    wu       0 - 152616     149.04GB    (152617/0/0) 312559616
  3 unassigned    wm       0               0           (0/0/0)             0
  4 unassigned    wm       0               0           (0/0/0)             0
  5 unassigned    wm       0               0           (0/0/0)             0
  6        usr    wm       1 - 152614     149.04GB    (152614/0/0) 312553472
  7 unassigned    wm       0               0           (0/0/0)             0
  8       boot    wu       0 -      0       1.00MB    (1/0/0)          2048
  9 alternates    wm       0               0           (0/0/0)             0
How do I fix that? For SCSI disks (these are SATA disks on an SAS
controller; does that count?) it's supposed to figure that out by
itself, I thought? I certainly never entered disk geometry figures.
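If it helps with diagnosis, here's one way to compare what each label
claims (s2 being the whole-disk slice):

  # prtvtoc /dev/rdsk/c4t0d0s2
  # prtvtoc /dev/rdsk/c9t0d0s2

prtvtoc prints the sectors/track, tracks/cylinder, and cylinder counts
recorded in each label.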
The pool is using s0:
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c4t0d0s0  ONLINE       0     0     0
            c4t1d0s0  ONLINE       0     0     0
            c9t0d0s0  ONLINE       0     0     0
            c9t2d0s0  ONLINE       0     0     0

errors: No known data errors
Once I decide that these
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss