Re: [zfs-discuss] Booting fails with `Can not read the pool label' error

2010-12-02 Thread Rainer Orth
Hi Cindy,

 I haven't seen this in a while but I wonder if you just need to set the
 bootfs property on your new root pool and/or reapplying the bootblocks.

 Can you import this pool booting from a LiveCD and to review the
 bootfs property value? I would also install the boot blocks on the
 rpool2 disk.

 I would also check the grub entries in /rpool2/boot/grub/menu.lst.

I've now repeated everything with snv_151a and it worked out of the box
on the Sun Fire V880, and (on second try) also on my Blade 1500: it
seems the first time round I had the devalias for the second IDE disk
wrong:

/p...@1e,60/i...@d/d...@0,1 instead of /p...@1e,60/i...@d/d...@1,0

I'm now happily running snv_151a on both machines (and still using
Xsun on the Blade 1500, so it's still usable as a desktop :-)

Rainer

-- 
-
Rainer Orth, Center for Biotechnology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Booting fails with `Can not read the pool label' error

2010-11-15 Thread Rainer Orth
Hi Cindy,

 I haven't seen this in a while but I wonder if you just need to set the
 bootfs property on your new root pool and/or reapplying the bootblocks.

I've created the new BE using beadm create, which did this for me:

$ zpool get bootfs rpool2
NAMEPROPERTY  VALUE   SOURCE
rpool2  bootfsrpool2/ROOT/snv_134-pkg143  local

I'm pretty sure I've applied the bootblock since I've booted off the
second disk.

 Can you import this pool booting from a LiveCD and to review the
 bootfs property value? I would also install the boot blocks on the
 rpool2 disk.

I'll try that.  Will have to burn a LiveCD (probably snv_151a) first.
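
Roughly, what I plan to do from the LiveCD is something like this (the disk
device below is just a placeholder):

# zpool import -f -R /mnt rpool2
# zpool get bootfs rpool2
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t1d0s0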

 I would also check the grub entries in /rpool2/boot/grub/menu.lst.

Looks good to me:

title snv_134-pkg143
bootfs rpool2/ROOT/snv_134-pkg143

Thanks.
Rainer

-- 
-
Rainer Orth, Center for Biotechnology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Booting fails with `Can not read the pool label' error

2010-11-11 Thread Rainer Orth
I'm still trying to find a fix/workaround for the problem described in

Unable to mount root pool dataset 
http://opensolaris.org/jive/thread.jspa?messageID=492460

Since the Blade 1500's rpool is mirrored, I've decided to detach the
second half of the mirror, relabel the disk, create an alternative rpool
(rpool2) there, copy the current BE (snv_134) using beadm create -p
rpool2 snv_134-pkg143, activate the new BE, and boot from the second disk.
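
In commands, roughly (the disk device is a placeholder, and the relabeling
itself was done interactively in format):

# zpool detach rpool c0t1d0s0
# zpool create rpool2 c0t1d0s0
# beadm create -p rpool2 snv_134-pkg143
# beadm activate snv_134-pkg143
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t1d0s0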

Unfortunately, this doesn't work: the kernel panics with

Can not read the pool label from device
spa_import_rootpool: Error 5

I've booted under kmdb, set a breakpoint in zfs`zfs_mountroot, and found
that this happens because

zfs_devid = spa_get_bootprop("diskdevid");

returns NULL.  I have no idea how or why this can happen, and I found no
code in the OpenSolaris sources that sets diskdevid on SPARC, so I must
assume it comes directly from OBP (the only other code is in Grub, which
is of course irrelevant here).
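
For the record, the debugging session was roughly this (from memory; the
devalias is just an example):

ok boot disk1 -kd
[0]> ::bp zfs`zfs_mountroot
[0]> :c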

Any suggestions as to what could cause this, or what I'm doing wrong?

There's a second machine suffering from the same problem (a Sun Fire
V880 serving as fileserver for the GCC-on-Solaris project), so it would
be extremely valuable to get this fixed/worked around.

Thanks.
Rainer

-- 
-
Rainer Orth, Center for Biotechnology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
For quite some time I'm bitten by the fact that on my laptop (currently
running self-built snv_147) zpool status rpool and format disagree about
the device name of the root disk:

r...@masaya 14  zpool status rpool
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scan: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c1t0d0s3  ONLINE   0 0 0

errors: No known data errors

r...@masaya 3 # format -e
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c3t0d134583970 <drive type unknown>
      /p...@0,0/pci8086,2...@1e/pci17aa,2...@0,2/blk...@0
   1. c11t0d0 <ATA-ST9160821AS-C cyl 19454 alt 2 hd 255 sec 63>
      /p...@0,0/pci17aa,2...@1f,2/d...@0,0
Specify disk (enter its number): 

zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
correctly believe it's c11t0d0(s3) instead.

This has the unfortunate consequence that beadm activate newbe fails
in quite a non-obvious way.

Running it under truss, I find that it invokes installgrub, which
fails.  The manual equivalent is

r...@masaya 266 # installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 
/dev/rdsk/c1t0d0s3
cannot read MBR on /dev/rdsk/c1t0d0p0
open: No such file or directory
r...@masaya 267 # installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 
/dev/rdsk/c11t0d0s3
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)

For the time being, I'm working around this by replacing installgrub with
a wrapper script (sketched below), but obviously this shouldn't happen, and
the problem isn't easy to track down.
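
The wrapper is nothing more than this (paths and device names from memory;
the real installgrub was moved aside to installgrub.orig):

#!/bin/sh
# map the stale device name zpool status reports to the real one
# before handing off to the original installgrub
exec /sbin/installgrub.orig "$1" "$2" `echo "$3" | sed 's/c1t0d0/c11t0d0/'`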

I thought I'd seen a zfs CR for this, but cannot find it right now,
especially with search on bugs.opensolaris.org being only partially
functional.

Any suggestions?

Thanks.
Rainer

-- 
-
Rainer Orth, Center for Biotechnology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
LaoTsao 老曹 laot...@gmail.com writes:

 may be boot a livecd then export and import the zpool?

I've already tried all sorts of contortions to regenerate
/etc/path_to_inst to no avail.  This is simply a case of `should not
happen'.

Rainer

-- 
-
Rainer Orth, Center for Biotechnology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
Mark J Musante mark.musa...@oracle.com writes:

 On Fri, 27 Aug 2010, Rainer Orth wrote:
 zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
 correctly believe it's c11t0d0(s3) instead.

 Any suggestions?

 Try removing the symlinks or using 'devfsadm -C' as suggested here:

 https://defect.opensolaris.org/bz/show_bug.cgi?id=14999

devfsadm -C alone didn't make a difference, but clearing out /dev/*dsk
and running devfsadm -Cv did help.
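
I.e., roughly:

# rm /dev/dsk/* /dev/rdsk/*
# devfsadm -Cv
# zpool status rpool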

Thanks a lot.

Rainer

-- 
-
Rainer Orth, Center for Biotechnology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
Sean,

 I am glad it helped; but removing anything from /dev/*dsk is a kludge that
 cannot be accepted/condoned/supported.

no doubt about this: two parts of the kernel (zfs vs. devfs?) disagreeing
about devices mustn't happen.

Rainer

-- 
-
Rainer Orth, Center for Biotechnology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
Hi Cindy,

I'll investigate more next week since I'm in a hurry to leave, but one
point now:

 I'm no device expert but we see this problem when firmware updates or
 other device/controller changes change the device ID associated with
 the devices in the pool.

This is the internal disk in a laptop, so no device or controller change
should have happened here to cause a rename from c1t0d0 to c11t0d0.

Rainer

-- 
-
Rainer Orth, Center for Biotechnology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Unable to mount root pool dataset

2010-07-19 Thread Rainer Orth
I'm currently running a Sun Fire V880 with snv_134, but would like to
upgrade the machine to a self-built snv_144.  Unfortunately, boot
environment creation fails:

# beadm create snv_134-svr4
Unable to create snv_134-svr4.
Mount failed.

In truss output, I find

2514:   mount(rpool, /rpool, MS_OPTIONSTR, zfs, 0x, 0, 
0xFFBFA170, 1024) Err#2 ENOENT

I can reproduce the failure with

# zfs mount rpool
cannot mount 'rpool': No such file or directory

but /rpool exists and is empty.

Non-default properties for the dataset are

# zfs get -s local all rpool
NAME   PROPERTY  VALUE  SOURCE
rpool  mountpoint/rpool local
rpool  compression   on local
rpool  canmount  noauto local

Setting mountpoint and/or canmount back to their defaults didn't make a
difference either (roughly what I tried is sketched below).  This very much
looks like

6741948 ZFS pool root dataset sometimes fails to be mounted after zpool import

which was closed as not reproducible.
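
For reference, what I tried (from memory) was roughly:

# zfs inherit mountpoint rpool
# zfs inherit canmount rpool
# zfs mount rpool
cannot mount 'rpool': No such file or directory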

I've no idea how to avoid/fix this issue.  Suggestions?

Thanks.
Rainer

-- 
-
Rainer Orth, Center for Biotechnology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-19 Thread Rainer Orth
Richard Elling richard.ell...@gmail.com writes:

 George would probably have the latest info, but there were a number of
 things which circled around the notorious Stop looking and start ganging
 bug report,
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6596237

Indeed: we were seriously bitten by this one, which took three Solaris 10
fileservers down for about a week until the problem was diagnosed by Sun
Service and an IDR provided.  Unfortunately, this issue (seriously
fragmented pools, or pools beyond ca. 90% full, cause file servers to grind
to a halt) was only announced/acknowledged publicly after our incident,
although the problem seems to have been reported almost two years ago.
While a fix has been integrated into snv_114, there's still no patch for
S10, only various IDRs.

It's unclear what the state of the related CR 4854312 (need to defragment
storage pool, submitted in 2003!) is.  I suppose this might be dealt with
by the vdev removal code, but overall it's scary that dealing with such
fundamental issues takes so long.

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Rainer Orth
Lori Alt [EMAIL PROTECTED] writes:

 use of swap/dump zvols?  If your existing swap/dump slice
 is contiguous with your root pool, you can grow the root
 pool into that space (using format to merge the slices.
 A reboot or re-import of the pool will cause it to grow into
 the newly-available space).

That had been my plan, and that's how I laid out my slices for zpools and
UFS BEs before ZFS boot came along.  Unfortunately, at least once this
resizing exercise went fatally wrong, it seems, and so far nobody has cared
to comment:

http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/049180.html

And on SPARC, the hopefully safe method from a failsafe environment is
hampered by

http://mail.opensolaris.org/pipermail/install-discuss/2008-July/006754.html

I think at least the second issue needs to be resolved before ZFS root is
appropriate for general use.

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moving ZFS root pool to different system breaks boot

2008-07-23 Thread Rainer Orth
Jürgen Keil [EMAIL PROTECTED] writes:

  Recently, I needed to move the boot disks containing a ZFS root pool in an
  Ultra 1/170E running snv_93 to a different system (same hardware) because
  the original system was broken/unreliable.
  
  To my dismay, unlike with UFS, the new machine wouldn't boot:
  
  WARNING: pool 'root' could not be loaded as it was
  last accessed by another system (host:  hostid:
  0x808f7fd8).  See: http://www.sun.com/msg/ZFS-8000-EY
  
  panic[cpu0]/thread=180e000: BAD TRAP: type=31 rp=180acc0 addr=0 mmu_fsr=0 
  occurred in module unix due to a NULL pointer dereference
 ...
  suffering from the absence of SPARC failsafe archives after liveupgrade
  (recently mentioned on install-discuss), I'd have been completely stuck.
[...]
 I guess that on SPARC you could boot from the installation optical media
 (or from a network server), and zpool import -f the root pool; that should
 put the correct hostid into the root pool's label.

That's what I did with the snv_93 UFS BE I still had around, with the
exception that I used zpool import -f -R /mnt to avoid pathname clashes
between the miniroot and the imported pool.  I think I even exported the
pool afterwards, but I'm no longer certain about this: I seem to remember
problems with exported root pools being no longer bootable.

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-23 Thread Rainer Orth
Richard Elling writes:

  I've found out what the problem was: I didn't specify the -F zfs option to
  installboot, so only half of the ZFS bootblock was written.  This is a
  combination of two documentation bugs and a terrible interface:

 
 Mainly because there is no -F option?

Huh?  From /usr/sbin/installboot:

COUNT=15

while getopts F: a; do
case $a in
F) case $OPTARG in
   ufs) COUNT=15;;
   hsfs) COUNT=15;;
   zfs) COUNT=31;;
   *) away 1 "$OPTARG: Unknown fstype";;
   esac;;

Without -F zfs, only part of the zfs bootblock would be copied.
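
For the archives, the full invocation for a ZFS root on SPARC should thus be
something like this (the disk device is a placeholder):

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t0d0s0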

 I think that it should be very unusual that installboot would be run
 interactively.  That is really no excuse for making it only slightly

Indeed: it should mostly be run behind the scenes, e.g. by Live Upgrade, but
obviously there are scenarios where running it manually is necessary (like
this one).

 smarter than dd, but it might be hard to justify changes unless some
 kind person were to submit a bug with an improved implementation
 (would make a good short project for someone :-)

The problem here might be that an improved implementation would probably
mean an incompatible change (like doing away with the explicit bootblk
argument).

Unfortunately, I've too many other issues on my plate right now to attack
this one.

Rainer

-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-23 Thread Rainer Orth
Cindy,

 Sorry for your trouble.

no problem.

 I'm updating the installboot example in the ZFS Admin Guide with the
 -F zfs syntax now. We'll fix the installboot man page as well.

Great, thanks.

Rainer

-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recovering corrupted root pool

2008-07-22 Thread Rainer Orth
Rainer Orth [EMAIL PROTECTED] writes:

 Yesterday evening, I tried Live Upgrade on a Sun Fire V60x running SX:CE 90
 to SX:CE 93 with ZFS root (mirrored root pool called root).  The LU itself
 ran without problems, but before rebooting the machine, I wanted to add
 some space to the root pool that had previously been in use for an UFS BE.
 
 Both disks (c0t0d0 and c0t1d0) were partitioned as follows:
 
 Part      Tag    Flag     Cylinders         Size            Blocks
   0       root    wm       1 - 18810       25.91GB    (18810/0/0) 54342090
   1 unassigned    wm   18811 - 24618        8.00GB    (5808/0/0)  16779312
   2     backup    wm       0 - 24618       33.91GB    (24619/0/0) 71124291
   3 unassigned    wu       0                0         (0/0/0)            0
   4 unassigned    wu       0                0         (0/0/0)            0
   5 unassigned    wu       0                0         (0/0/0)            0
   6 unassigned    wu       0                0         (0/0/0)            0
   7 unassigned    wu       0                0         (0/0/0)            0
   8       boot    wu       0 -     0        1.41MB    (1/0/0)         2889
   9 unassigned    wu       0                0         (0/0/0)            0
 
 Slice 0 is used by the root pool, slice 1 was used by the UFS BE.  To
 achieve this, I ludeleted the now unused UFS BE and used 
 
 # NOINUSE_CHECK=1 format
 
 to extend slice 0 by the size of slice 1, deleting the latter afterwards.
 I'm pretty sure that I've done this successfully before, even on a live
 system, but this time something went wrong: I remember an FMA message about
 one side of the root pool mirror being broken (something about an
 inconsistent label, unfortunately I didn't write down the exact message).
 Nonetheless, I rebooted the machine after luactivate sol_nv_93 (the new ZFS
 BE), but the machine didn't come up:
 
 SunOS Release 5.11 Version snv_93 32-bit
 Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
 Use is subject to license terms.
 NOTICE:
 spa_import_rootpool: error 22
 
 
 panic[cpu0]/thread=fec1cfe0: cannot mount root path /[EMAIL 
 PROTECTED],0/pci8086,[EMAIL PROTECTED]/pci8086,[EMAIL 
 PROTECTED]/pci8086,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a /[EMAIL 
 PROTECTED],0/pci8086,[EMAIL PROTECTED]/pci8086,[EMAIL 
 PROTECTED]/pci8086,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a
 
 fec351ac genunix:rootconf+10b (c0f040, 1, fec1c750)
 fec351d0 genunix:vfs_mountroot+54 (fe800010, fec30fd8,)
 fec351e4 genunix:main+b4 ()
 
 panic: entering debugger (no dump device, continue to reboot)
 skipping system dump - no dump device configured
 rebooting...
 
 I've managed a failsafe boot (from the same pool), and zpool import reveals
 
   pool: root
 id: 14475053522795106129
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
see: http://www.sun.com/msg/ZFS-8000-EY
 config:
 
 root  UNAVAIL  insufficient replicas
   mirror  UNAVAIL  corrupted data
 c0t1d0s0  ONLINE
 c0t0d0s0  ONLINE
 
 Even restoring slice 1 on both disks to its old size and shrinking slice 0
 accordingly doesn't help.  I'm sure I've done this correctly since I could
 boot from the old sol_nv_b90_ufs BE, which was still on c0t0d0s1.
 
 I didn't have much success to find out what's going on here: I tried to
 remove either of the disks in case both sides of the mirror are
 inconsistent, but to no avail.  I didn't have much luck with zdb either.
 Here's the output of zdb -l /dev/rdsk/c0t0d0s0 and /dev/rdsk/c0t1d0s0:
 
 c0t0d0s0:
 
 
 LABEL 0
 
 version=10
 name='root'
 state=0
 txg=14643945
 pool_guid=14475053522795106129
 hostid=336880771
 hostname='erebus'
 top_guid=17627503873514720747
 guid=6121143629633742955
 vdev_tree
 type='mirror'
 id=0
 guid=17627503873514720747
 whole_disk=0
 metaslab_array=13
 metaslab_shift=28
 ashift=9
 asize=36409180160
 is_log=0
 children[0]
 type='disk'
 id=0
 guid=1526746004928780410
 path='/dev/dsk/c0t1d0s0'
 devid='id1,[EMAIL PROTECTED]/a'
 phys_path='/[EMAIL PROTECTED],0/pci8086,[EMAIL 
 PROTECTED]/pci8086,[EMAIL PROTECTED]/pci8086,[EMAIL PROTECTED],1/[EMAIL 
 PROTECTED],0:a'
 whole_disk=0
 DTL=160
 children[1]
 type='disk'
 id=1
 guid=6121143629633742955
 path='/dev/dsk/c0t0d0s0'
 devid='id1,[EMAIL PROTECTED]/a'
 phys_path='/[EMAIL PROTECTED],0/pci8086,[EMAIL 
 PROTECTED]/pci8086,[EMAIL PROTECTED]/pci8086,[EMAIL PROTECTED],1/[EMAIL 
 PROTECTED],0:a'
 whole_disk=0

[zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-22 Thread Rainer Orth
I just wanted to attach a second mirror to a ZFS root pool on an Ultra
1/170E running snv_93.

I've followed the workarounds for CR 6680633 and 6680633 from the ZFS Admin
Guide, but booting from the newly attached mirror fails like so:

Boot device: disk  File and args: 

Can't mount root
Fast Data Access MMU Miss

while the original side of the mirror works just fine.
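
For reference, what I did was roughly this (device names are placeholders):

# zpool attach root c0t0d0s0 c0t1d0s0
# installboot /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0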

Any advice on what could be wrong here?

Rainer

-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Moving ZFS root pool to different system breaks boot

2008-07-22 Thread Rainer Orth
.  In the absence of this BE, and
suffering from the absence of SPARC failsafe archives after liveupgrade
(recently mentioned on install-discuss), I'd have been completely stuck.

Rainer

-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-22 Thread Rainer Orth
Mark J Musante writes:

 On Tue, 22 Jul 2008, Rainer Orth wrote:
 
  I just wanted to attach a second mirror to a ZFS root pool on an Ultra 
  1/170E running snv_93.
 
  I've followed the workarounds for CR 6680633 and 6680633 from the ZFS 
  Admin Guide, but booting from the newly attached mirror fails like so:
 
 I think you're running into CR 6668666.  I'd try manually running 

oops, cut-and-paste error on my part: 6668666 was one of the two CRs
mentioned in the zfs admin guide which I worked around.

 installboot on the new disk and see if that fixes it.

Unfortunately, it didn't.  Reconsidering now, I see that I ran installboot
against slice 0 (reduced by 1 sector as required by CR 6680633) instead of
slice 2 (whole disk).  Doing so doesn't fix the problem either, though.

Regards.
Rainer

-
Rainer Orth, Faculty of Technology, Bielefeld University

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Recovering corrupted root pool

2008-07-11 Thread Rainer Orth
-txg=14549625
+txg=14643900
 pool_guid=14475053522795106129
 hostid=336880771
-hostname=''
+hostname='erebus'
 top_guid=17627503873514720747
-guid=6121143629633742955
+guid=1526746004928780410
 vdev_tree
 type='mirror'
 id=0
@@ -124,12 +124,12 @@
 version=10
 name='root'
 state=0
-txg=14549625
+txg=14643900
 pool_guid=14475053522795106129
 hostid=336880771
-hostname=''
+hostname='erebus'
 top_guid=17627503873514720747
-guid=6121143629633742955
+guid=1526746004928780410
 vdev_tree
 type='mirror'
 id=0

Other invocations of zdb weren't much more successful, unfortunately:

# zdb -u -e root
zdb: More than one matching pool - specify guid/devid/device path.
# zdb -u -e /dev/rdsk/c0t0d0s0
zdb: can't open /dev/rdsk/c0t0d0s0: No such file or directory
# zdb -u -e 14475053522795106129
zdb: can't open 14475053522795106129: Invalid argument

I have no idea why device path or guid (from zpool import) don't work
here. 

Is there any chance to recover the pool contents (which of course contain
other data besides the O/S installation), or at least to understand why this
resize exercise went so terribly wrong here?

Regards.
Rainer

-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Heads up: 'zpool history' on-disk version change

2007-03-21 Thread Rainer Orth
eric kustarz [EMAIL PROTECTED] writes:

 I just integrated into snv_62:
 6529406 zpool history needs to bump the on-disk version
 
 The original CR for 'zpool history':
 6343741 want to store a command history on disk
 was integrated into snv_51.
 
 Both of these are planned to make s10u4.
 
 But wait, 'zpool history' has existed for several months, aren't we  
 revising, um, history here?
 
 It was originally believed that even though 'zpool history' added  
 additional on-disk information it didn't need to bump the on-disk  
 version.  My testing verified this to be true.  Turns out my testing  
 was incomplete.  There is an edge case where if you have a pool with  
 history recorded then you cannot move that pool to a pre-snv_51  
 machine if that machine is also a different endianness.  If you do,  
 then the load/import of the pool will cause a panic.  Please note:  
 there is no corruption here.  If the machine is the same endianness,  
 then the load/import will work just fine.

Is this the same panic I observed when moving a FireWire disk from a SPARC
system running snv_57 to an x86 laptop with snv_42a?

6533369 panic in dnode_buf_byteswap importing zpool

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS performance using slices vs. entire disk?

2006-08-03 Thread Rainer Orth
Robert Milkowski [EMAIL PROTECTED] writes:

 Additionally keep in mind that outer region of a disk is much faster.
 So if you want to put OS and then designate rest of the disk for
 application then probably putting ZFS on a slice beginning on cyl 0 is
 best in most scenarios.

This has the additional advantage that with the advent of ZFS boot, you can
simply move / to a zfs file system and extend slice 0 to cover the whole
disk.

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 10 ZFS Update

2006-07-31 Thread Rainer Orth
George Wilson [EMAIL PROTECTED] writes:

 We have putback a significant number of fixes and features from 
 OpenSolaris into what will become Solaris 10 11/06. For reference here's 
 the list:

That's great news, thanks.

 Bug Fixes:

I notice this one

6405330 swap on zvol isn't added during boot

is missing.  Any chance to get this in?  Do I need an escalation for that,
or is it waiting to go in with ZFS boot in U4 only?

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ON build on Blade 1500 ATA disk extremely slow

2006-07-25 Thread Rainer Orth
I've recently started doing ON nightly builds on zfs filesystems on the
internal ATA disk of a Blade 1500 running snv_42.  Unfortunately, the
builds are extremely slow compared to building on an external IEEE 1394
disk attached to the same machine:

ATA disk:

 Elapsed build time (DEBUG) 

real 21:40:57.7
user  4:32:15.6
sys   8:22:24.1

IEEE 1394 disk:

 Elapsed build time (DEBUG) 

real  6:14:11.4
user  4:28:54.1
sys 36:04.1

Running kernel profile with lockstat (lockstat -kIW -D 20 sleep 300), I
find in the ATA case:

Profiling interrupt: 29117 events in 300.142 seconds (97 events/sec)

Count indv cuml rcnt     nsec Hottest CPU+PIL        Caller
-------------------------------------------------------------------------------
15082  52%  52% 0.00     1492 cpu[0]                 (usermode)
 9565  33%  85% 0.00      318 cpu[0]                 usec_delay

compared to IEEE 1394:

Profiling interrupt: 29195 events in 300.969 seconds (97 events/sec)

Count indv cuml rcnt     nsec Hottest CPU+PIL        Caller
-------------------------------------------------------------------------------
20042  69%  69% 0.00     2000 cpu[0]                 (usermode)
 5414  19%  87% 0.00      317 cpu[0]                 usec_delay

At other times, the kernel time can be even as high as 80%.  Unfortunately,
I've not been able to investigate how usec_delay is called since there's no
fbt provider for that function (nor for the alternative entry point
drv_usecwait found in uts/sun4[uv]/cpu/common_asm.s), so I'm a bit stuck
how to further investigate this.  I suspect that the dad(7D) driver is the
culprit, but it is only included in the closed tarball.  In the EDU S9
sources, I find that dcd_flush_cache() calls drv_usecwait(100), which
might be the cause of this.
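
One thing I haven't tried yet is the profile provider, which should show the
kernel stacks leading into usec_delay even without fbt; something like this
(untested):

# dtrace -x stackframes=20 -n '
    profile-997 /arg0/ { @[stack()] = count(); }
    tick-60s { trunc(@, 20); printa(@); exit(0); }'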

How should I proceed to investigate this further, and can it be fixed
somehow?  As it stands, the machine is almost unusable as a build machine.

Rainer

-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ON build on Blade 1500 ATA disk extremely slow

2006-07-25 Thread Rainer Orth
Bill,

 In the future, you can try:
 
 # lockstat -s 10 -I sleep 10
 
 which aggregates on the full stack trace, not just the caller, during
 profiling interrupts.  (-s 10 sets the stack depth; tweak up or down to
 taste).

nice.  Perhaps lockstat(1M) should be updated to include something like
this in the EXAMPLES section.

  How should I proceed to further investigate this, and can this be fixed
  somehow?  This way, the machine is almost unusable as a build machine.
 
 you've rediscovered 
 
 6421427 netra x1 slagged by NFS over ZFS leading to long spins in the
 ATA driver code
 
 I've updated the bug to indicate that this wass seen on the Sun Blade
 1500 as well.

Ok, thanks.  One important difference compared to that CR is that in my
case the accesses to the FS were local (no NFS involved), but the stack
traces from lockstat are identical.

Any word when this might be fixed?

Thanks.
Rainer

-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ON build on Blade 1500 ATA disk extremely slow

2006-07-25 Thread Rainer Orth
Bill,

 On Tue, 2006-07-25 at 14:36, Rainer Orth wrote:
   Perhaps lockstat(1M) should be updated to include something like
  this in the EXAMPLES section.
 
 I filed 6452661 with this suggestion.

excellent, thanks.

  Any word when this might be fixed?
 
 I can't comment in terms of time, but the engineer working on it has a
 partially tested fix; he needs to complete testing and integrate the
 fix..  not clear how long this will take. 

No problem: I can use that IEEE 1394 disk for now.  Good to know that this
is being worked on, though.

Rainer

-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] legato support

2006-07-21 Thread Rainer Orth
Anne Wong [EMAIL PROTECTED] writes:

 The EMC/Legato NetWorker (a.k.a. Sun StorEdge EBS) support for ZFS 
 NFSv4/ACLs will be in NetWorker 7.3.2 release currently targeting for 
 September release.

Any word on equivalent support in VERITAS/Symantec NetBackup?

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] legato support

2006-07-21 Thread Rainer Orth
Gregory,

 I've been backing up ZFS with NetBackup 5.1 without issue.   I won't  
 say it does everything, but I am able to backup and restore  
 individual files.

I know: we're actually using 4.5 at the moment ;-)  My question was
specifically about ACL support.  I think the ZFS Admin Guide mentions two
CRs for this, one for Legato and another for NetBackup.

Rainer

-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Rainer Orth
Bill Moore [EMAIL PROTECTED] writes:

 On Sat, Jul 22, 2006 at 12:44:16AM +0800, Darren Reed wrote:
  Bart Smaalders wrote:
  
  I just swap on a zvol w/ my ZFS root machine.
  
  
  I haven't been watching...what's the current status of using
  ZFS for swap/dump?
  
  Is a/the swap solution to use mkswap and then specify that file
  in vfstab?
 
 ZFS currently support swap, but not dump.  For swap, just make a zvol
 and add that to vfstab.

There are two caveats, though: 

* Before SXCR b43, you'll need the fix from CR 6405330 so the zvol is added
  after a reboot.  The fix hasn't been backported to S10 U2 (yet?), so it
  is equally affected.

* A Live Upgrade comments out the zvol entry in /etc/vfstab, so you (sort of)
  lose swap after an upgrade ;-(
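
For completeness, the setup itself is just (pool/zvol name and size are
examples):

# zfs create -V 2g rpool/swapvol
# swap -a /dev/zvol/dsk/rpool/swapvol

plus the corresponding /etc/vfstab entry:

/dev/zvol/dsk/rpool/swapvol  -  -  swap  -  no  -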

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] add_install_client and ZFS and SMF incompatibility

2006-06-23 Thread Rainer Orth
Constantin Gonzalez [EMAIL PROTECTED] writes:

 Is this a known issue?

Yes, I raised this during the ZFS Beta as SDR-0192.  For some reason, I
don't have a CR number for it.

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: Re[4]: [zfs-discuss] question about ZFS performance for webserving/java

2006-06-02 Thread Rainer Orth
Robert Milkowski [EMAIL PROTECTED] writes:

 So it can look like:
[...]
   c0t2d0s1    c0t2d0s1    SVM mirror, SWAP    SWAP/s1 size = sizeof(/ + /var + /opt)

You can avoid this by swapping to a zvol, though at the moment this
requires a fix for CR 6405330.  Unfortunately, since one cannot yet dump to
a zvol, one needs a dedicated dump device in this case ;-(
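
I.e. something like this, with the slice just an example:

# dumpadm -d /dev/dsk/c0t2d0s1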

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RFE filesystem ownership

2006-05-24 Thread Rainer Orth
Mark Shellenbaum [EMAIL PROTECTED] writes:

  Yes we do need something like this.
  
  This is already covered by the following CRs 6280676, 6421209.
 
 These RFE's are currently being investigated.   The basic idea is that 
 an adminstrator will be allowed to grant specific users/groups to 
 perform various zfs adminstrative tasks, such as create, destroy, clone, 
 changing properties and so on.
 
 After the zfs team is in agreement as to what the interfaces should be, 
 I will forward it to zfs-discuss for further feedback.

In addition to this, what I think will become necessary is a way to perform
this sort of end-user zfs administration securely over the network (maybe
with an RPC service secured with RPCSEC_GSS?): I don't want to grant every
single student a login on the fileservers just to administer their zfs
filesystems ;-(
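
Just to illustrate the kind of granularity I'd hope for, something like this
(the syntax is entirely made up):

# zfs allow student1 create,destroy,snapshot,mount tank/home/student1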

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss