Re: [zfs-discuss] NOINUSE_CHECK not working on ZFS

2007-09-17 Thread yu larry liu
It is weird. Did you run the label subcommand after modifying the partition
table? Did you try unsetting NOINUSE_CHECK before running format?
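
A minimal shell sketch of that check (nothing ZFS-specific here; just make sure
the variable is not exported before invoking format):

  # confirm NOINUSE_CHECK is not in the environment, then clear it anyway
  env | grep NOINUSE_CHECK
  unset NOINUSE_CHECK
  format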

Larry

Bill Casale wrote:
 Sun Fire 280R

 Solaris 10 11/06, KU Generic_125100-08

 Created a ZFS pool with disk c5t0d5; format shows c5t0d5 is
 part of a ZFS pool.
 Then ran format - partition - modify and was able to change the
 partition for it. This resulted in a panic and crash when zpool status
 was run. From what I can tell, NOINUSE_CHECK should prevent the
 modification of a partition that's part of a ZFS pool. I verified that
 NOINUSE_CHECK=1 is not set in the environment. Also, this is on a
 non-clustered system.

 Any ideas on why this is happening?

 -- 
 Thanks,
 Bill


 



Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-17 Thread Kent Watsen






  Probably not, my box has 10 drives and two very thirsty FX74 processors
and it draws 450W max.

At 1500W, I'd be more concerned about power bills and cooling than the UPS!
  


Yeah - good point, but I need my TV! - or so I tell my wife so I can
play with all this gear  :-X 

Cheers,
Kent





[zfs-discuss] ZFS or NFS?

2007-09-17 Thread Ian Collins
I have a build 62 system with a zone that NFS mounts a ZFS filesystem.

From the zone, I keep seeing issues with .nfs files remaining in
otherwise empty directories preventing their deletion.  The files appear
to be immediately replaced when they are deleted.

Is this an NFS or a ZFS issue?

Ian



[zfs-discuss] change uid/gid below 100

2007-09-17 Thread Claus Guttesen
Hi.

Only indirectly related to zfs. I need to test diskusage/performance
on zfs shared via nfs. I have installed nevada b64a. Historically
uid/gid for user www has been 16/16 but when I try to add uid/gid www
via smc with the value 16 I'm not allowed to do so.

I'm coming from a FreeBSD background. Here I alter the uid using vipw and
edit /etc/group afterwards.

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare


Re: [zfs-discuss] ZFS or NFS?

2007-09-17 Thread Darren J Moffat
Ian Collins wrote:
 I have a build 62 system with a zone that NFS mounts a ZFS filesystem.
 
From the zone, I keep seeing issues with .nfs files remaining in
 otherwise empty directories preventing their deletion.  The files appear
 to be immediately replaced when they are deleted.
 
 Is this an NFS or a ZFS issue?

It is NFS that is doing that.  It happens when a process on the NFS 
client still has the file open.  fuser(1) is your friend here.
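
For example (a minimal sketch; the mount point and .nfs file name below are
hypothetical):

  # show the PIDs and login names of processes holding the file open
  fuser -u /mnt/data/somedir/.nfs00000001
  # or report every process with files open anywhere on the mount
  fuser -cu /mnt/data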

-- 
Darren J Moffat


[zfs-discuss] question about uberblock blkptr

2007-09-17 Thread [EMAIL PROTECTED]
Hi All,
I have modified mdb so that I can examine data structures on disk using 
::print.
This works fine for disks containing ufs file systems.  It also works 
for zfs file systems, but...
I use the dva block number from the uberblock_t to print what is at the 
block
on disk.  The problem I am having is that I can not figure out what (if 
any) structure to use.
All of the xxx_phys_t types that I try do not look right.  So, the 
question is, just what is
the structure that the uberblock_t dva's refer to on the disk?

Here is an example:

First, I use zdb to get the dva for the rootbp (should match the value 
in the uberblock_t(?)).

# zdb - usbhard | grep -i dva
Dataset mos [META], ID 0, cr_txg 4, 1003K, 167 objects, rootbp [L0 DMU 
objset] 400L/200P DVA[0]=0:111f79000:200 DVA[1]=0:506bde00:200 
DVA[2]=0:36a286e00:200 fletcher4 lzjb LE contiguous birth=621838 
fill=167 cksum=84daa9667:365cb5b02b0:b4e531085e90:197eb9d99a3beb
bp = [L0 DMU objset] 400L/200P DVA[0]=0:111f6ae00:200 
DVA[1]=0:502efe00:200 DVA[2]=0:36a284e00:200 fletcher4 lzjb LE 
contiguous birth=621838 fill=34026 
cksum=cd0d51959:4fef8f217c3:10036508a5cc4:2320f4b2cde529
Dataset usbhard [ZPL], ID 5, cr_txg 4, 15.7G, 34026 objects, rootbp [L0 
DMU objset] 400L/200P DVA[0]=0:111f6ae00:200 DVA[1]=0:502efe00:200 
DVA[2]=0:36a284e00:200 fletcher4 lzjb LE contiguous birth=621838 
fill=34026 cksum=cd0d51959:4fef8f217c3:10036508a5cc4:2320f4b2cde529
first block: [L0 ZIL intent log] 9000L/9000P 
DVA[0]=0:36aef6000:9000 zilog uncompressed LE contiguous birth=263950 
fill=0 cksum=97a624646cebdadb:fd7b50f37b55153b:5:1
^C
#

Then I run my modified mdb on the vdev containing the usbhard pool
# ./mdb /dev/rdsk/c4t0d0s0

I am using the DVA[0] for the META data set above.  Note that I have
tried all of the xxx_phys_t structures
that I can find in zfs source, but none of them look right.  Here is 
example output dumping the data as a objset_phys_t.
(The shift by 9 and adding 40 is from the ZFS on-disk format paper;
I have tried without the addition, without the shift, and
in all combinations, but the output still does not make sense.)

  (111f790009)+40::print zfs`objset_phys_t
{
os_meta_dnode = {
dn_type = 0x4f
dn_indblkshift = 0x75
dn_nlevels = 0x82
dn_nblkptr = 0x25
dn_bonustype = 0x47
dn_checksum = 0x52
dn_compress = 0x1f
dn_flags = 0x82
dn_datablkszsec = 0x5e13
dn_bonuslen = 0x63c1
dn_pad2 = [ 0x2e, 0xb9, 0xaa, 0x22 ]
dn_maxblkid = 0x20a34fa97f3ff2a6
dn_used = 0xac2ea261cef045ff
dn_pad3 = [ 0x9c2b4541ab9f78c0, 0xdb27e70dce903053, 
0x315efac9cb693387, 0x2d56c54db5da75bf ]
dn_blkptr = [
{
blk_dva = [
{
dva_word = [ 0x87c9ed7672454887, 
0x760f569622246efe ]
}
{
dva_word = [ 0xce26ac20a6a5315c, 
0x38802e5d7cce495f ]
}
{
dva_word = [ 0x9241150676798b95, 
0x9c6985f95335742c ]
}
]
None of this looks believable.  So, just what is the rootbp in the 
uberblock_t referring to?

thanks,
max




Re: [zfs-discuss] ZFS or NFS?

2007-09-17 Thread Robert Thurlow
Ian Collins wrote:
 I have a build 62 system with a zone that NFS mounts a ZFS filesystem.
 
From the zone, I keep seeing issues with .nfs files remaining in
 otherwise empty directories preventing their deletion.  The files appear
 to be immediately replaced when they are deleted.
 
 Is this an NFS or a ZFS issue?

This is the NFS client keeping unlinked but open files around.
You need to find out what process has the files open (perhaps
with fuser -c) and persuade them to close the files before
you can unmount gracefully.  You may also use umount -f if
you don't care what happens to the processes.

Rob T


Re: [zfs-discuss] ZFS or NFS?

2007-09-17 Thread Paul Kraus
On 9/17/07, Darren J Moffat [EMAIL PROTECTED] wrote:

 It is NFS that is doing that.  It happens when a process on the NFS
 client still has the file open.  fuser(1) is your friend here.

... and if fuser doesn't tell you what you need to know, you can use
lsof ( http://freshmeat.net/projects/lsof/ I usually just get it
precompiled from http://www.sunfreeware.com/ ). I have found lsof to
be more reliable than fuser in listing what has a file open.
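
A minimal sketch (the mount point and directory are hypothetical):

  # list processes with files open on the mounted filesystem
  lsof /mnt/data
  # or restrict the search to one directory tree
  lsof +D /mnt/data/somedir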

-- 
Paul Kraus


[zfs-discuss] Strange behavior with ZFS and Solaris Cluster

2007-09-17 Thread Ulf Björklund
Hi All,

Two and three-node clusters with SC3.2 and S10u3 (120011-14).
If a node is rebooted when using SCSI3-PGR, the node is not
able to take the zpool via HAStoragePlus due to a reservation conflict.
SCSI2-PGRE is okay.
Using the same SAN LUNs in a metaset (SVM) with HAStoragePlus
works okay with both PGR and PGRE (both SMI and EFI-labeled disks).

If I use scshutdown and restart all nodes, then it works.
Also (interestingly), if I reboot a node and then run update_drv -f ssd,
the node is then able to take SCSI3-PGR zpools.

Is this a storage or a Solaris/Cluster issue?
What are the differences between SVM and ZFS from
the ssd point of view in this case?

/Regards
Ulf
 
 


Re: [zfs-discuss] change uid/gid below 100

2007-09-17 Thread Claus Guttesen
 Only indirectly related to zfs. I need to test diskusage/performance
 on zfs shared via nfs. I have installed nevada b64a. Historically
 uid/gid for user www has been 16/16 but when I try to add uid/gid www
 via smc with the value 16 I'm not allowed to do so.

 I'm coming from a FreeBSD background. Here I alter uid using vipw and
 edit /etc/group afterwards.

vipw was in /usr/ucb. I added the group using groupadd -g 16 www and
useradd -u 16 -g www plus homedir-related information. Works now.
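
Spelled out, roughly (the home directory and shell are illustrative; only the
16/16 ids come from the thread):

  groupadd -g 16 www
  useradd -u 16 -g www -d /export/home/www -m -s /bin/sh www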

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare


[zfs-discuss] ZFS Evil Tuning Guide

2007-09-17 Thread Roch - PAE

In general, tuning should not be done; best practices
should be followed instead.

So get very much acquainted with this first :

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Then if you must, this could soothe or sting : 

http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

So drive carefully.

-r




Re: [zfs-discuss] PLOGI errors

2007-09-17 Thread Tim Cook
What led you to the assumption that it's ONLY those switches?  Just because the
patch is ONLY for those switches doesn't mean that the bug is only for them.
The reason you only see the patch for 3xxx and newer is that the 2xxx was
EOL'd before the patch was released...

FabOS is FabOS; the nature of the issue is not hardware related, it's software
related.  2850 or 3850 makes no difference.
 
 


[zfs-discuss] zpool create -f not applicable to hot spares

2007-09-17 Thread Robert Milkowski
Hello zfs-discuss,

  If you do 'zpool create -f test A B C spare D E' and D or E contains a
  UFS filesystem, then despite -f the zpool command will complain that
  there is a UFS file system on D.

  Workaround: create a test pool with -f on D and E, destroy it, and
  then create the first pool with D and E as hot spares (see the sketch
  below).

  I've tested it on s10u3 + patches - can someone confirm it on the
  latest nv?
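
In command form, the workaround is roughly (same placeholder device names as
in the example above):

  zpool create -f tmp D E     # -f is honored here, clearing the UFS complaint
  zpool destroy tmp
  zpool create -f test A B C spare D E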

-- 
Best regards,
 Robert Milkowski mailto:[EMAIL PROTECTED]
 http://milek.blogspot.com



Re: [zfs-discuss] change uid/gid below 100

2007-09-17 Thread Darren J Moffat
Claus Guttesen wrote:
 Only indirectly related to zfs. I need to test diskusage/performance
 on zfs shared via nfs. I have installed nevada b64a. Historically
 uid/gid for user www has been 16/16 but when I try to add uid/gid www
 via smc with the value 16 I'm not allowed to do so.

 I'm coming from a FreeBSD background. Here I alter uid using vipw and
 edit /etc/group afterwards.
 
 vipw was in /usr/ucb. I added the group using groupadd -g 16 www and
 useradd -u 16 -g www plus homedir-related information. Works now.

Why not use the already assigned webservd/webservd 80/80 uid/gid pair?

Note that ALL uid and gid values below 100 are explicitly reserved for 
use by the operating system itself and should not be used by end admins. 
  This is why smc failed to make the change.
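
A quick way to see the reserved entries that ship with the OS (a small
sketch; getent just reads the configured name services):

  getent passwd webservd
  getent group webservd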

-- 
Darren J Moffat


Re: [zfs-discuss] zpool create -f not applicable to hot spares

2007-09-17 Thread Mark J Musante
On Mon, 17 Sep 2007, Robert Milkowski wrote:

 If you do 'zpool create -f test A B C spare D E' and D or E contains UFS
 filesystem then despite of -f zpool command will complain that there is
 UFS file system on D.

This was fixed recently in build 73.  See CR 6573276.


Regards,
markm


[zfs-discuss] OT zfs system UPS sizing was Re: hardware sizing for a zfs-based system?

2007-09-17 Thread Al Hopper
On Mon, 17 Sep 2007, Kent Watsen wrote:

... snip ...
 (Incidentally, I rarely see these discussions touch upon what sort of UPS is 
 being used. Power fluctuations are a great source of correlated disk 
 failures.)


 Glad you brought that up - I currently have an APC 2200XL
 (http://www.apcc.com/resource/include/techspec_index.cfm?base_sku=SU2200XLNET)
 - it's rated for 1600 watts, but my current case selections are saying
 they have a 1500W 3+1 supply; should I be worried?

Bear in mind that you must not exceed *either* the VA or the Wattage 
ratings.  So, for example, if your UPS is 2200VA/1600W and your 
combined systems consumed 2000VA and 1700W - it's a no-go (exceeds the
wattage rating).  This is usually not an issue with newer power
supplies with power factor correction (PFC).  If the PFC = 1.0 (ideal) 
then VA rating = Wattage (rating).

Recommendation: Measure it with a Seasonic Power Angel (froogle for 
seasonic ssm-1508ra) which works well, or try the kill-a-watt (have 
no experience with it).

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/


Re: [zfs-discuss] change uid/gid below 100

2007-09-17 Thread Claus Guttesen
  Only indirectly related to zfs. I need to test diskusage/performance
  on zfs shared via nfs. I have installed nevada b64a. Historically
  uid/gid for user www has been 16/16 but when I try to add uid/gid www
  via smc with the value 16 I'm not allowed to do so.
 
  I'm coming from a FreeBSD background. Here I alter uid using vipw and
  edit /etc/group afterwards.
 
  vipw was in /usr/ucb. I added the group using groupadd -g 16 www and
  useradd -u 16 -g www plus homedir-related information. Works now.

 Why not use the already assigned webservd/webservd 80/80 uid/gid pair?

As mentioned, there are historical reasons. User- and group-id 16 was
the default in an older release of Red Hat (5.2?), a few years before I
became sysadmin. Now we have some 80 TB of data (images), and changing
uid/gid has to be planned carefully since I probably need to take the
partition off-line before I do a chown -R.
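
One incremental way to do that re-owning, as a sketch (the pool path and the
target user/group are illustrative, and the xargs form is not safe for file
names containing whitespace):

  # re-own only the objects that still carry the old ids
  find /pool/images -user 16 -print | xargs chown webservd
  find /pool/images -group 16 -print | xargs chgrp webservd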

 Note that ALL uid and gid values below 100 are explicitly reserved for
 use by the operating system itself and should not be used by end admins.
   This is why smc failed to make the change.

FreeBSD defaults to 80 for user www as well.

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare


Re: [zfs-discuss] ZFS Evil Tuning Guide

2007-09-17 Thread Pawel Jakub Dawidek
On Mon, Sep 17, 2007 at 03:40:05PM +0200, Roch - PAE wrote:
 
 Tuning should not be done in general and Best practices
 should be followed.
 
 So get very much acquainted with this first :
 
   http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
 
 Then if you must, this could soothe or sting : 
 
   http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
 
 So drive carefully.

If some LUNs exposed to ZFS are not protected by NVRAM, then this
tuning can lead to data loss or application level corruption.  However
the ZFS pool integrity itself is NOT compromised by this tuning.

Are you sure? Once you turn off flushing cache, how can you tell that
your disk didn't reorder writes and uberblock was updated before new
blocks were written? Will ZFS go to the previous blocks when the newest
uberblock points at corrupted data?
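
For reference, the tuning in question is normally applied via /etc/system;
a minimal sketch, assuming the zfs_nocacheflush tunable described in the
Evil Tuning Guide (a reboot is required, and it should only be considered
for NVRAM-protected arrays):

  * /etc/system: stop ZFS from issuing cache-flush requests to devices
  set zfs:zfs_nocacheflush = 1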

-- 
Pawel Jakub Dawidek   http://www.wheel.pl
[EMAIL PROTECTED]   http://www.FreeBSD.org
FreeBSD committer Am I Evil? Yes, I Am!




Re: [zfs-discuss] ZFS Evil Tuning Guide

2007-09-17 Thread Roch - PAE

Pawel Jakub Dawidek writes:
  On Mon, Sep 17, 2007 at 03:40:05PM +0200, Roch - PAE wrote:
   
   Tuning should not be done in general and Best practices
   should be followed.
   
   So get very much acquainted with this first :
   
  http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
   
   Then if you must, this could soothe or sting : 
   
  http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
   
   So drive carefully.
  
  If some LUNs exposed to ZFS are not protected by NVRAM, then this
  tuning can lead to data loss or application level corruption.  However
  the ZFS pool integrity itself is NOT compromised by this tuning.
  
  Are you sure? Once you turn off flushing cache, how can you tell that
  your disk didn't reorder writes and uberblock was updated before new
  blocks were written? Will ZFS go to the previous blocks when the newest
  uberblock points at corrupted data?
  

Good point. I'll fix this. I don't know if we look for an
alternate uberblock, but even if we did, I guess the 'out of
sync' can occur lower down the tree.


-r


  -- 
  Pawel Jakub Dawidek   http://www.wheel.pl
  [EMAIL PROTECTED]   http://www.FreeBSD.org
  FreeBSD committer Am I Evil? Yes, I Am!


Re: [zfs-discuss] ZFS Evil Tuning Guide

2007-09-17 Thread Victor Latushkin
Roch - PAE wrote:
 Pawel Jakub Dawidek writes:
   On Mon, Sep 17, 2007 at 03:40:05PM +0200, Roch - PAE wrote:

Tuning should not be done in general and Best practices
should be followed.

So get very much acquainted with this first :

 http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Then if you must, this could soothe or sting : 

 http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

So drive carefully.
   
   If some LUNs exposed to ZFS are not protected by NVRAM, then this
   tuning can lead to data loss or application level corruption.  However
   the ZFS pool integrity itself is NOT compromised by this tuning.
   
   Are you sure? Once you turn off flushing cache, how can you tell that
   your disk didn't reorder writes and uberblock was updated before new
   blocks were written? Will ZFS go to the previous blocks when the newest
   uberblock points at corrupted data?
   
 
 Good point. I'll fix this. I don't know if we look for
 alternate uberblock but even if we did, I guess the 'out of
 sync' can occur lower down the tree.

I think it would also be nice to add, to the section on limiting ARC
size, a note on how to view the current limit and other ARC statistics with kstat.
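
For example (a sketch; statistic names as exposed by the zfs arcstats kstat):

  # full ARC statistics
  kstat -m zfs -n arcstats
  # just the current size and the target/maximum values, in bytes
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max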

Victor


Re: [zfs-discuss] ZFS Evil Tuning Guide

2007-09-17 Thread Nicolas Williams
On Mon, Sep 17, 2007 at 05:22:04PM +0200, Pawel Jakub Dawidek wrote:
 On Mon, Sep 17, 2007 at 03:40:05PM +0200, Roch - PAE wrote:
  
  Tuning should not be done in general and Best practices
  should be followed.
  
  So get very much acquainted with this first :
  
  http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
  
  Then if you must, this could soothe or sting : 
  
  http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
  
  So drive carefully.
 
 If some LUNs exposed to ZFS are not protected by NVRAM, then this
 tuning can lead to data loss or application level corruption.  However
 the ZFS pool integrity itself is NOT compromised by this tuning.
 
 Are you sure? Once you turn off flushing cache, how can you tell that
 your disk didn't reorder writes and uberblock was updated before new
 blocks were written? Will ZFS go to the previous blocks when the newest
 uberblock points at corrupted data?

I think Roch must have meant that ZFS will detect the data loss and
recover as best it can.  But the loss could be significant enough to
render the filesystem useless to you, so I suppose that the distinction
Roch draws here isn't very helpful.


Re: [zfs-discuss] ZFS or NFS?

2007-09-17 Thread Richard Elling
Ian Collins wrote:
 I have a build 62 system with a zone that NFS mounts a ZFS filesystem.
 
From the zone, I keep seeing issues with .nfs files remaining in
 otherwise empty directories preventing their deletion.  The files appear
 to be immediately replaced when they are deleted.
 
 Is this an NFS or a ZFS issue?

That is how NFS deals with files that are unlinked while open.  In a local
file system, files that are unlinked while open simply are not deleted until
the close.  For remote file systems, like NFS, you have to remove the file
from the namespace without removing the file's content.  The client does
this by creating .nfs files.  A more detailed explanation is at:
http://nfs.sourceforge.net/

  -- richard


Re: [zfs-discuss] change uid/gid below 100

2007-09-17 Thread Paul Kraus
On 9/17/07, Darren J Moffat [EMAIL PROTECTED] wrote:

 Why not use the already assigned webservd/webservd 80/80 uid/gid pair?

 Note that ALL uid and gid values below 100 are explicitly reserved for
 use by the operating system itself and should not be used by end admins.
   This is why smc failed to make the change.

Calling the Sun ONE Web Server (the reservation of UID/GID 80)
part of the operating system is a stretch. Is there a definitive list
of what users and services all of the UID/GIDs below 100 are reserved
for anywhere ?

-- 
Paul Kraus


Re: [zfs-discuss] Would a device list output be a reasonable feature for zpool(1)?

2007-09-17 Thread Ellis, Mike
Yup...

With Leadville/MPXIO targets in the 32-digit range, identifying the new
storage/LUNs is not a trivial operatrion.

 -- MikeE 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Russ
Petruzzelli
Sent: Monday, September 17, 2007 1:51 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Would a device list output be a reasonable
feature for zpool(1)?

Seconded!

MC wrote:
 With the arrival of ZFS, the format command is well on its way to
deprecation station.  But how else do you list the devices that zpool
can create pools out of?

 Would it be reasonable to enhance zpool to list the vdevs that are
available to it?  Perhaps as part of the help output to zpool create?
  
   


Re: [zfs-discuss] zpool create -f not applicable to hot spares

2007-09-17 Thread Robert Milkowski
Hello Mark,

Monday, September 17, 2007, 3:04:03 PM, you wrote:

MJM On Mon, 17 Sep 2007, Robert Milkowski wrote:

 If you do 'zpool create -f test A B C spare D E' and D or E contains UFS
 filesystem then despite of -f zpool command will complain that there is
 UFS file system on D.

MJM This was fixed recently in build 73.  See CR 6573276.


Thanks.

-- 
Best regards,
 Robert Milkowski  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Would a device list output be a reasonable feature for zpool(1)?

2007-09-17 Thread MC
Just to answer one of my questions, df seems to work pretty well.  That said 
I still think the zpool creation tool would do well to list what it can create 
zpools out of.
 
 


Re: [zfs-discuss] Would a device list output be a reasonable feature for zpool(1)?

2007-09-17 Thread Russ Petruzzelli
Seconded!

MC wrote:
 With the arrival of ZFS, the format command is well on its way to 
 deprecation station.  But how else do you list the devices that zpool can 
 create pools out of?

 Would it be reasonable to enhance zpool to list the vdevs that are available 
 to it?  Perhaps as part of the help output to zpool create?
  
   


Re: [zfs-discuss] Mixing SATA and PATA Drives

2007-09-17 Thread Christopher Gibbs
Great, that's the answer I was looking for.

My current emphasis is on storage rather than performance. So I just
wanted to make sure that mixing the two speeds would be just as safe
as using only one kind.

Thanks!

On 9/17/07, Eric Schrock [EMAIL PROTECTED] wrote:
 Yes, the pool would run at the speed of the slowest drive.  There is an
 open RFE to better balance allocations across variable latency toplevel
 vdevs, but within a toplevel vdev there's not much we can do; we need to
 make sure your data is on disk with sufficient replication before
 returning success.

 - Eric

 On Mon, Sep 17, 2007 at 01:22:40PM -0500, Christopher Gibbs wrote:
  Anyone?
 
  On 9/14/07, Christopher Gibbs [EMAIL PROTECTED] wrote:
   I suspect it's probably not a good idea but I was wondering if someone
   could clarify the details.
  
   I have 4 250G SATA(150) disks and 1 250G PATA(133) disk.  Would it
   cause problems if I created a raidz1 pool across all 5 drives?
  
   I know the PATA drive is slower so would it slow the access across the
   whole pool or just when accessing that disk?
  
   Thanks for your input.
  
   - Chris
  
 
 
  --
  Christopher Gibbs
  Email / LDAP Administrator
  Web Integration  Programming
  Abilene Christian University

 --
 Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock



Re: [zfs-discuss] change uid/gid below 100

2007-09-17 Thread Cindy . Swearingen
Paul,

Scroll down a bit in this section to the default passwd/group tables:

http://docs.sun.com/app/docs/doc/819-2379/6n4m1vl99?a=view

Cindy
Paul Kraus wrote:
 On 9/17/07, Darren J Moffat [EMAIL PROTECTED] wrote:
 
 
Why not use the already assigned webservd/webserved 80/80 uid/gid pair ?

Note that ALL uid and gid values below 100 are explicitly reserved for
use by the operating system itself and should not be used by end admins.
  This is why smc failed to make the change.
 
 
 Calling the Sun ONE Web Server (the reservation of UID/GID 80)
 part of the operating system is a stretch. Is there a definitive list
 of what users and services all of the UID/GIDs below 100 are reserved
 for anywhere ?
 


Re: [zfs-discuss] Mixing SATA and PATA Drives

2007-09-17 Thread Tim Spriggs

I'm far from an expert but my understanding is that the zil is spread 
across the whole pool by default so in theory the one drive could slow 
everything down. I don't know what it would mean in this respect to keep 
the PATA drive as a hot spare though.

-Tim

Christopher Gibbs wrote:
 Anyone?

 On 9/14/07, Christopher Gibbs [EMAIL PROTECTED] wrote:
   
 I suspect it's probably not a good idea but I was wondering if someone
 could clarify the details.

 I have 4 250G SATA(150) disks and 1 250G PATA(133) disk.  Would it
 cause problems if I created a raidz1 pool across all 5 drives?

 I know the PATA drive is slower so would it slow the access across the
 whole pool or just when accessing that disk?

 Thanks for your input.

 - Chris

 


   



Re: [zfs-discuss] Mixing SATA and PATA Drives

2007-09-17 Thread Neil Perrin
Yes, performance will suffer, but it's a bit difficult to say by how much.
Both pool transaction group writes and zil writes are spread across 
all devices. It depends on what applications you will run as to how much
use is made of the zil. Maybe you should experiment and see if performance
is good enough.
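
One simple way to watch such an experiment (the pool name is illustrative):

  # per-vdev bandwidth and IOPS, refreshed every 5 seconds
  zpool iostat -v tank 5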

Neil.

Tim Spriggs wrote:
 I'm far from an expert but my understanding is that the zil is spread 
 across the whole pool by default so in theory the one drive could slow 
 everything down. I don't know what it would mean in this respect to keep 
 the PATA drive as a hot spare though.
 
 -Tim
 
 Christopher Gibbs wrote:
 Anyone?

 On 9/14/07, Christopher Gibbs [EMAIL PROTECTED] wrote:
   
 I suspect it's probably not a good idea but I was wondering if someone
 could clarify the details.

 I have 4 250G SATA(150) disks and 1 250G PATA(133) disk.  Would it
 cause problems if I created a raidz1 pool across all 5 drives?

 I know the PATA drive is slower so would it slow the access across the
 whole pool or just when accessing that disk?

 Thanks for your input.

 - Chris

 

   
 


Re: [zfs-discuss] Mixing SATA and PATA Drives

2007-09-17 Thread Eric Schrock
Yes, the pool would run at the speed of the slowest drive.  There is an
open RFE to better balance allocations across variable latency toplevel
vdevs, but within a toplevel vdev there's not much we can do; we need to
make sure your data is on disk with sufficient replication before
returning success.

- Eric

On Mon, Sep 17, 2007 at 01:22:40PM -0500, Christopher Gibbs wrote:
 Anyone?
 
 On 9/14/07, Christopher Gibbs [EMAIL PROTECTED] wrote:
  I suspect it's probably not a good idea but I was wondering if someone
  could clarify the details.
 
  I have 4 250G SATA(150) disks and 1 250G PATA(133) disk.  Would it
  cause problems if I created a raidz1 pool across all 5 drives?
 
  I know the PATA drive is slower so would it slow the access across the
  whole pool or just when accessing that disk?
 
  Thanks for your input.
 
  - Chris
 
 
 
 -- 
 Christopher Gibbs
 Email / LDAP Administrator
 Web Integration  Programming
 Abilene Christian University

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] reccomended disk configuration

2007-09-17 Thread Peter Schuller
 I also wanted to test a recovery of my pool, so I took my two-disk raidz pool
 onto a friend's FreeBSD box.  It seems both systems use zfs version 6, but the
 import failed.  I noticed in the boot logs:

 GEOM: ad6: corrupt or invalid GPT detected.
 GEOM: ad6: GPT rejected -- may not be recoverable.

 Is that a Solaris or FreeBSD problem, do you think?

This has to do with the GPT
(http://en.wikipedia.org/wiki/GUID_Partition_Table) support rather
than ZFS. IIRC the GPTs written by Solaris are valid, just not
recognized properly by FreeBSD (but I am out of date and don't
remember the source of this information).

AFAIK the ZFS pools themselves are fully portable.

-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller [EMAIL PROTECTED]'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org





Re: [zfs-discuss] reccomended disk configuration

2007-09-17 Thread Richard Elling
Mario Goebbels wrote:
 Hi, thanks for the tips.  I'm currently using a 2-disk raidz configuration and
 it seems to work fine, but I'll probably take your advice and use mirrors
 because I'm finding the raidz a bit slow.

 What? How would a two-disk RAID-Z work, anyway? A three-disk RAID-Z
 missing a disk? 50% of the total disk space as parity (which would be a
 crippled mirror)?

It works like a 2-way mirror that cannot be expanded to a 3-way mirror.
I'm not sure I would consider it crippled, but it does obscure the
sys admin's intention.
  -- richard


Re: [zfs-discuss] Would a device list output be a reasonable feature for zpool(1)?

2007-09-17 Thread James C. McPherson
Ellis, Mike wrote:
 With Leadville/MPXIO targets in the 32-digit range, identifying the new
 storage/LUNs is not a trivial operatrion.

Have a look at my devid/guid presentation for some details on
how we use them with ZFS/SVM:

http://www.jmcp.homeunix.com/~jmcp/WhatIsAGuide.pdf


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems


Re: [zfs-discuss] Would a device list output be a reasonable feature for zpool(1)?

2007-09-17 Thread James C. McPherson
A Darren Dunham wrote:
 On Tue, Sep 18, 2007 at 10:11:11AM +1000, James C. McPherson wrote:
 Have a look at my devid/guid presentation for some details on
 how we use them with ZFS/SVM:

 http://www.jmcp.homeunix.com/~jmcp/WhatIsAGuide.pdf
 
 Ah, a very silent 'e'...  :-)
 
 http://www.jmcp.homeunix.com/~jmcp/WhatIsAGuid.pdf

Thanks for picking that up.  The hand is quicker
than the eye, etc etc.


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems


[zfs-discuss] ZFS panic in space_map.c line 125

2007-09-17 Thread Matty
One of our Solaris 10 update 3 servers paniced today with the following error:

Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after
panic: assertion failed: ss != NULL, file:
../../common/fs/zfs/space_map.c, line: 125

The server saved a core file, and the resulting backtrace is listed below:

$ mdb unix.0 vmcore.0
 $c
vpanic()
0xfb9b49f3()
space_map_remove+0x239()
space_map_load+0x17d()
metaslab_activate+0x6f()
metaslab_group_alloc+0x187()
metaslab_alloc_dva+0xab()
metaslab_alloc+0x51()
zio_dva_allocate+0x3f()
zio_next_stage+0x72()
zio_checksum_generate+0x5f()
zio_next_stage+0x72()
zio_write_compress+0x136()
zio_next_stage+0x72()
zio_wait_for_children+0x49()
zio_wait_children_ready+0x15()
zio_next_stage_async+0xae()
zio_wait+0x2d()
arc_write+0xcc()
dmu_objset_sync+0x141()
dsl_dataset_sync+0x23()
dsl_pool_sync+0x7b()
spa_sync+0x116()
txg_sync_thread+0x115()
thread_start+8()

It appears ZFS is still able to read the labels from the drive:

$ zdb -lv  /dev/rdsk/c3t50002AC00039040Bd0p0

LABEL 0

version=3
name='fpool0'
state=0
txg=4
pool_guid=10406529929620343615
top_guid=3365726235666077346
guid=3365726235666077346
vdev_tree
type='disk'
id=0
guid=3365726235666077346
path='/dev/dsk/c3t50002AC00039040Bd0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
metaslab_array=13
metaslab_shift=31
ashift=9
asize=322117566464

LABEL 1

version=3
name='fpool0'
state=0
txg=4
pool_guid=10406529929620343615
top_guid=3365726235666077346
guid=3365726235666077346
vdev_tree
type='disk'
id=0
guid=3365726235666077346
path='/dev/dsk/c3t50002AC00039040Bd0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
metaslab_array=13
metaslab_shift=31
ashift=9
asize=322117566464

LABEL 2

version=3
name='fpool0'
state=0
txg=4
pool_guid=10406529929620343615
top_guid=3365726235666077346
guid=3365726235666077346
vdev_tree
type='disk'
id=0
guid=3365726235666077346
path='/dev/dsk/c3t50002AC00039040Bd0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
metaslab_array=13
metaslab_shift=31
ashift=9
asize=322117566464

LABEL 3

version=3
name='fpool0'
state=0
txg=4
pool_guid=10406529929620343615
top_guid=3365726235666077346
guid=3365726235666077346
vdev_tree
type='disk'
id=0
guid=3365726235666077346
path='/dev/dsk/c3t50002AC00039040Bd0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
metaslab_array=13
metaslab_shift=31
ashift=9
asize=322117566464

But for some reason it is unable to open the pool:

$ zdb -c fpool0
zdb: can't open fpool0: error 2

I saw several bugs related to space_map.c, but the stack traces listed
in the bug reports were different than the one listed above.  Has
anyone seen this bug before? Is there anyway to recover from it?

Thanks for any insight,
- Ryan
-- 
UNIX Administrator
http://prefetch.net


Re: [zfs-discuss] question about uberblock blkptr

2007-09-17 Thread Jim Mauro

Hey Max - Check out the on-disk specification document at
http://opensolaris.org/os/community/zfs/docs/.

The illustration on page 32 shows the rootbp pointing to a dnode_phys_t
object (the first member of an objset_phys_t data structure).

The source code indicates ub_rootbp is a blkptr_t, which contains
a 3-member array of dva_t's called blk_dva (blk_dva[3]).
Each dva_t is a 2-member array of 64-bit unsigned ints (dva_word[2]).

So it looks like each blkptr_t carries 3 128-bit DVA's in blk_dva.

You probably figured all this out already - did you try using
an objset_phys_t to format the data?
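
As a cross-check, zdb can decode the same on-disk structures (a sketch,
assuming the pool is named usbhard as in your example):

  # print the active uberblock(s), including the rootbp and its DVAs
  zdb -uuu usbhard
  # dump objset/dnode detail for the pool's datasets
  zdb -dddd usbhard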

Thanks,
/jim



[EMAIL PROTECTED] wrote:
 Hi All,
 I have modified mdb so that I can examine data structures on disk using 
 ::print.
 This works fine for disks containing ufs file systems.  It also works 
 for zfs file systems, but...
 I use the dva block number from the uberblock_t to print what is at the 
 block
 on disk.  The problem I am having is that I can not figure out what (if 
 any) structure to use.
 All of the xxx_phys_t types that I try do not look right.  So, the 
 question is, just what is
 the structure that the uberblock_t dva's refer to on the disk?

 Here is an example:

 First, I use zdb to get the dva for the rootbp (should match the value 
 in the uberblock_t(?)).

 # zdb - usbhard | grep -i dva
 Dataset mos [META], ID 0, cr_txg 4, 1003K, 167 objects, rootbp [L0 DMU 
 objset] 400L/200P DVA[0]=0:111f79000:200 DVA[1]=0:506bde00:200 
 DVA[2]=0:36a286e00:200 fletcher4 lzjb LE contiguous birth=621838 
 fill=167 cksum=84daa9667:365cb5b02b0:b4e531085e90:197eb9d99a3beb
 bp = [L0 DMU objset] 400L/200P DVA[0]=0:111f6ae00:200 
 DVA[1]=0:502efe00:200 DVA[2]=0:36a284e00:200 fletcher4 lzjb LE 
 contiguous birth=621838 fill=34026 
 cksum=cd0d51959:4fef8f217c3:10036508a5cc4:2320f4b2cde529
 Dataset usbhard [ZPL], ID 5, cr_txg 4, 15.7G, 34026 objects, rootbp [L0 
 DMU objset] 400L/200P DVA[0]=0:111f6ae00:200 DVA[1]=0:502efe00:200 
 DVA[2]=0:36a284e00:200 fletcher4 lzjb LE contiguous birth=621838 
 fill=34026 cksum=cd0d51959:4fef8f217c3:10036508a5cc4:2320f4b2cde529
 first block: [L0 ZIL intent log] 9000L/9000P 
 DVA[0]=0:36aef6000:9000 zilog uncompressed LE contiguous birth=263950 
 fill=0 cksum=97a624646cebdadb:fd7b50f37b55153b:5:1
 ^C
 #

 Then I run my modified mdb on the vdev containing the usbhard pool
 # ./mdb /dev/rdsk/c4t0d0s0

 I am using the DVA[0] for the META data set above.  Note that I have
 tried all of the xxx_phys_t structures
 that I can find in zfs source, but none of them look right.  Here is 
 example output dumping the data as a objset_phys_t.
 (The shift by 9 and adding 40 is from the zfs on-disk format paper, 
 I have tried without the addition, without the shift,
 in all combinations, but the output still does not make sense).

   (111f790009)+40::print zfs`objset_phys_t
 {
 os_meta_dnode = {
 dn_type = 0x4f
 dn_indblkshift = 0x75
 dn_nlevels = 0x82
 dn_nblkptr = 0x25
 dn_bonustype = 0x47
 dn_checksum = 0x52
 dn_compress = 0x1f
 dn_flags = 0x82
 dn_datablkszsec = 0x5e13
 dn_bonuslen = 0x63c1
 dn_pad2 = [ 0x2e, 0xb9, 0xaa, 0x22 ]
 dn_maxblkid = 0x20a34fa97f3ff2a6
 dn_used = 0xac2ea261cef045ff
 dn_pad3 = [ 0x9c2b4541ab9f78c0, 0xdb27e70dce903053, 
 0x315efac9cb693387, 0x2d56c54db5da75bf ]
 dn_blkptr = [
 {
 blk_dva = [
 {
 dva_word = [ 0x87c9ed7672454887, 
 0x760f569622246efe ]
 }
 {
 dva_word = [ 0xce26ac20a6a5315c, 
 0x38802e5d7cce495f ]
 }
 {
 dva_word = [ 0x9241150676798b95, 
 0x9c6985f95335742c ]
 }
 ]
 None of this looks believable.  So, just what is the rootbp in the 
 uberblock_t referring to?

 thanks,
 max




Re: [zfs-discuss] Mixing SATA and PATA Drives

2007-09-17 Thread Daniel Carosone
If your priorities were different, or for others pondering a similar question, 
the PATA disk might be a hotspare.
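
For instance (device names are placeholders): raidz across the four SATA
disks, with the PATA disk attached as a hot spare:

  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 spare c2d0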
 
 


[zfs-discuss] zpool history not found

2007-09-17 Thread sunnie
My system is currently running ZFS version 3,
and I just can't find the zpool history command.
Can anyone help me with the problem?
 
 


Re: [zfs-discuss] zpool history not found

2007-09-17 Thread Robin Guo
Hi, Sunnie,

  'zpool history' was only introduced in ZFS pool version 4.
You could check the update info and pick bits after the corresponding
build (Build 62).

# zpool upgrade -v
This system is currently running ZFS pool version 8.

The following versions are supported:

VER  DESCRIPTION
---  
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
For more information on a particular version, including supported 
releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.
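
Once you are on bits with pool version 4 or later, it is simply (the pool
name is illustrative):

  zpool upgrade tank     # or: zpool upgrade -a   for all pools
  zpool history tank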

sunnie wrote:
 my system is currently running ZFS version 3.
 And I just can't find the zpool history command. 
 can anyone help me with the problem?
  
  
   


-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]



Re: [zfs-discuss] zpool history not found

2007-09-17 Thread Prabahar Jeyaram
'zpool history' came in with ZFS pool version 4; on S10 you should get it
in S10U4.

--
Prabahar.

On Sep 17, 2007, at 8:01 PM, sunnie wrote:

 my system is currently running ZFS version 3.
 And I just can't find the zpool history command.
 can anyone help me with the problem?





Re: [zfs-discuss] ZFS panic in space_map.c line 125

2007-09-17 Thread Robin Guo
Hi Matty,

  From the stack I saw, that is CR 6454482.
But this defect has been marked as 'Not reproducible'. I have no idea
how to recover from it, but it looks like newer updates will not hit
this issue.

Matty wrote:
 One of our Solaris 10 update 3 servers paniced today with the following error:

 Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after
 panic: assertion failed: ss != NULL, file:
 ../../common/fs/zfs/space_map.c, line: 125

 The server saved a core file, and the resulting backtrace is listed below:

 $ mdb unix.0 vmcore.0
   
 $c
 
 vpanic()
 0xfb9b49f3()
 space_map_remove+0x239()
 space_map_load+0x17d()
 metaslab_activate+0x6f()
 metaslab_group_alloc+0x187()
 metaslab_alloc_dva+0xab()
 metaslab_alloc+0x51()
 zio_dva_allocate+0x3f()
 zio_next_stage+0x72()
 zio_checksum_generate+0x5f()
 zio_next_stage+0x72()
 zio_write_compress+0x136()
 zio_next_stage+0x72()
 zio_wait_for_children+0x49()
 zio_wait_children_ready+0x15()
 zio_next_stage_async+0xae()
 zio_wait+0x2d()
 arc_write+0xcc()
 dmu_objset_sync+0x141()
 dsl_dataset_sync+0x23()
 dsl_pool_sync+0x7b()
 spa_sync+0x116()
 txg_sync_thread+0x115()
 thread_start+8()

 It appears ZFS is still able to read the labels from the drive:

 $ zdb -lv  /dev/rdsk/c3t50002AC00039040Bd0p0
 
 LABEL 0
 
 version=3
 name='fpool0'
 state=0
 txg=4
 pool_guid=10406529929620343615
 top_guid=3365726235666077346
 guid=3365726235666077346
 vdev_tree
 type='disk'
 id=0
 guid=3365726235666077346
 path='/dev/dsk/c3t50002AC00039040Bd0p0'
 devid='id1,[EMAIL PROTECTED]/q'
 whole_disk=0
 metaslab_array=13
 metaslab_shift=31
 ashift=9
 asize=322117566464
 
 LABEL 1
 
 version=3
 name='fpool0'
 state=0
 txg=4
 pool_guid=10406529929620343615
 top_guid=3365726235666077346
 guid=3365726235666077346
 vdev_tree
 type='disk'
 id=0
 guid=3365726235666077346
 path='/dev/dsk/c3t50002AC00039040Bd0p0'
 devid='id1,[EMAIL PROTECTED]/q'
 whole_disk=0
 metaslab_array=13
 metaslab_shift=31
 ashift=9
 asize=322117566464
 
 LABEL 2
 
 version=3
 name='fpool0'
 state=0
 txg=4
 pool_guid=10406529929620343615
 top_guid=3365726235666077346
 guid=3365726235666077346
 vdev_tree
 type='disk'
 id=0
 guid=3365726235666077346
 path='/dev/dsk/c3t50002AC00039040Bd0p0'
 devid='id1,[EMAIL PROTECTED]/q'
 whole_disk=0
 metaslab_array=13
 metaslab_shift=31
 ashift=9
 asize=322117566464
 
 LABEL 3
 
 version=3
 name='fpool0'
 state=0
 txg=4
 pool_guid=10406529929620343615
 top_guid=3365726235666077346
 guid=3365726235666077346
 vdev_tree
 type='disk'
 id=0
 guid=3365726235666077346
 path='/dev/dsk/c3t50002AC00039040Bd0p0'
 devid='id1,[EMAIL PROTECTED]/q'
 whole_disk=0
 metaslab_array=13
 metaslab_shift=31
 ashift=9
 asize=322117566464

 But for some reason it is unable to open the pool:

 $ zdb -c fpool0
 zdb: can't open fpool0: error 2

 I saw several bugs related to space_map.c, but the stack traces listed
 in the bug reports were different than the one listed above.  Has
 anyone seen this bug before? Is there anyway to recover from it?

 Thanks for any insight,
 - Ryan
   


-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
