Hi John,

The message below is a ZFS message, but it's not enough to figure out
what is going on in an LDOM environment. I don't know of any LDOM
experts who hang out on this list, so you might post this on the
ldoms-discuss list, if only to get some more troubleshooting data.

I think you are saying that the LDOM that is not coming up has a
mirrored root pool on 2 devices...

You might be able to use the following command to see whether the disk
labels are coherent for each of these devices in the broken LDOM's
root pool:

# zdb -l /dev/dsk/xxxxx

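For example, if the two mirror devices show up in the guest as c0d0s0 and c0d1s0
(those names are just placeholders, substitute the actual devices from your
zpool status output), you would check the labels on each side of the mirror:

# zdb -l /dev/dsk/c0d0s0
# zdb -l /dev/dsk/c0d1s0

Each label should report the same pool name, pool GUID, and vdev configuration;
a missing or stale label on one side could explain why the pool can't determine
its configuration.
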
Maybe someone else has some better ideas ...

Cindy

On 01/19/10 07:48, John wrote:
I've got an LDOM that has raw disks exported through redundant service domains 
from a J4400. Actually, I've got seven of these LDOMs. On Friday, we had to 
power the rack down for UPS maintenance. We gracefully shut down all the Solaris 
instances, waited about 15 minutes, then powered down the storage.

All of the LDOMs came up except one. They each have a mirrored rpool on 2 of 
these drives. The one with the problem shows the rpool as a mirror, with both 
devices online, but the pool is unavailable. There's a message at the bottom that 
says:

"Additional devices are known to be part of this pool, though their
exact configuration cannot be determined."

Only two disks were EVER part of this LDOM, and it's an rpool, so no raidz or 
stripe is even possible. Does anyone know how I can get past this?
