[zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
For quite some time I've been bitten by the fact that on my laptop
(currently running self-built snv_147) zpool status rpool and format
disagree about the device name of the root disk:

r...@masaya 14  zpool status rpool
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scan: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c1t0d0s3  ONLINE   0 0 0

errors: No known data errors

r...@masaya 3 # format -e
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c3t0d134583970 drive type unknown
  /p...@0,0/pci8086,2...@1e/pci17aa,2...@0,2/blk...@0
   1. c11t0d0 <ATA-ST9160821AS-C cyl 19454 alt 2 hd 255 sec 63>
  /p...@0,0/pci17aa,2...@1f,2/d...@0,0
Specify disk (enter its number): 

zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
correctly believe it's c11t0d0(s3) instead.

This has the unfortunate consequence that beadm activate newbe fails
in a rather non-obvious way.

Running it under truss, I find that it invokes installgrub, which
fails.  The manual equivalent is

r...@masaya 266 # installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c1t0d0s3
cannot read MBR on /dev/rdsk/c1t0d0p0
open: No such file or directory
r...@masaya 267 # installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c11t0d0s3
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)
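As a side note (a cross-check, not part of the original posting), the absolute sector numbers installgrub reports line up with the geometry format shows for c11t0d0: 255 heads times 63 sectors per track gives 16065 sectors per cylinder, so stage1 at "abs 16065" sits exactly one cylinder into the disk, and stage2 starts 50 sectors after it:

```python
# Sanity check on the installgrub output above, using the geometry
# that format reports for c11t0d0 (hd 255, sec 63).
heads = 255
sectors_per_track = 63
sectors_per_cylinder = heads * sectors_per_track
print(sectors_per_cylinder)       # 16065, stage1 "abs 16065"
print(sectors_per_cylinder + 50)  # 16115, stage2 "starting at 50 (abs 16115)"
```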

For the time being, I'm working around this by replacing installgrub with
a script, but obviously this shouldn't happen, and the problem isn't easy
to find.
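The workaround script itself wasn't posted; a minimal sketch of the idea, assuming the real binary has been moved aside as installgrub.real and hard-coding this machine's c1t0d0 -> c11t0d0 mapping, might look like:

```shell
#!/bin/sh
# Hypothetical wrapper standing in for installgrub (not the script from
# the thread). It rewrites the stale device name that beadm passes,
# then would hand off to the real binary. The name installgrub.real and
# the c1t0d0 -> c11t0d0 mapping are assumptions for illustration.
fix_dev() {
    printf '%s\n' "$1" | sed 's|/c1t0d0|/c11t0d0|'
}
dev=$(fix_dev "/dev/rdsk/c1t0d0s3")
echo "$dev"   # /dev/rdsk/c11t0d0s3
# exec /sbin/installgrub.real "$1" "$2" "$dev"
```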

I thought I'd seen a ZFS CR for this, but cannot find it right now,
especially with the search on bugs.opensolaris.org being only partially
functional.

Any suggestions?

Thanks.
Rainer

-- 
Rainer Orth, Center for Biotechnology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread LaoTsao 老曹


hi
maybe boot a livecd, then export and import the zpool?
regards

On 8/27/2010 8:27 AM, Rainer Orth wrote:

 [...]



Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Mark J Musante

On Fri, 27 Aug 2010, Rainer Orth wrote:

zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
correctly believe it's c11t0d0(s3) instead.

Any suggestions?


Try removing the symlinks or using 'devfsadm -C' as suggested here:

https://defect.opensolaris.org/bz/show_bug.cgi?id=14999




Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
LaoTsao 老曹 laot...@gmail.com writes:

 maybe boot a livecd, then export and import the zpool?

I've already tried all sorts of contortions to regenerate
/etc/path_to_inst, to no avail.  This is simply a case of `should not
happen'.

Rainer



Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
Mark J Musante mark.musa...@oracle.com writes:

 On Fri, 27 Aug 2010, Rainer Orth wrote:
 zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
 correctly believe it's c11t0d0(s3) instead.

 Any suggestions?

 Try removing the symlinks or using 'devfsadm -C' as suggested here:

 https://defect.opensolaris.org/bz/show_bug.cgi?id=14999

devfsadm -C alone didn't make a difference, but clearing out /dev/*dsk
and running devfsadm -Cv did help.
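For context (an illustration, not from the thread): the dangling links that devfsadm -C removes are simply symlinks whose targets no longer exist, which can be demonstrated safely on a temporary directory, reusing this thread's device names:

```shell
# Demo of what a stale /dev/dsk entry looks like: a symlink whose
# target is gone. Done in a temp directory so it is harmless; the
# device names and link targets are illustrative only.
tmp=$(mktemp -d)
ln -s /devices/no-longer-there "$tmp/c1t0d0s3"   # stale link
ln -s /bin/sh "$tmp/c11t0d0s3"                   # stand-in valid link
# List links whose targets do not resolve (what devfsadm -C cleans up):
stale=$(find "$tmp" -type l ! -exec test -e {} \; -print)
echo "$stale"
rm -rf "$tmp"
```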

Thanks a lot.

Rainer



Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Sean Sprague

Rainer,


devfsadm -C alone didn't make a difference, but clearing out /dev/*dsk
and running devfsadm -Cv did help.

I am glad it helped, but removing anything from /dev/*dsk is a kludge
that cannot be accepted/condoned/supported.


Regards... Sean.


Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
Sean,

 I am glad it helped; but removing anything from /dev/*dsk is a kludge that
 cannot be accepted/condoned/supported.

no doubt about this: two parts of the kernel (zfs vs. devfs?) disagreeing
about devices mustn't happen.

Rainer



Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Cindy Swearingen

Hi Rainer,

I'm no device expert but we see this problem when firmware updates or
other device/controller changes change the device ID associated with
the devices in the pool.

In general, ZFS can handle controller/device changes if the driver
generates or fabricates device IDs. You can view device IDs with this
command:

# zdb -l /dev/dsk/cvtxdysz

If you are unsure what impact device changes will have on your pool,
export the pool first. If the device ID changes with the hardware change
(use prtconf -v to view device IDs while the pool is exported), then the
resulting pool behavior is unknown.

Importing the root pool is more complex but would probably prevent
this from happening again.

Thanks,

Cindy


On 08/27/10 08:43, Rainer Orth wrote:

Sean,


I am glad it helped; but removing anything from /dev/*dsk is a kludge that
cannot be accepted/condoned/supported.


no doubt about this: two parts of the kernel (zfs vs. devfs?) disagreeing
about devices mustn't happen.

Rainer




Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
Hi Cindy,

I'll investigate more next week since I'm in a hurry to leave, but one
point now:

 I'm no device expert but we see this problem when firmware updates or
 other device/controller changes change the device ID associated with
 the devices in the pool.

This is the internal disk in a laptop, so no device or controller change
should happen here that would cause a rename from c1t0d0 to c11t0d0.

Rainer
