Re: [zfs-discuss] Solaris 10u9 with zpool version 22, but no DEDUP (version 21 reserved)

2010-09-11 Thread Prabahar Jeyaram
 
 What happens if you use zpools created with OpenSolaris, with dedup enabled, on Solaris 10u9?
 

Not supported. You are on your own if you encounter any issues.

--
Prabahar.
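
One way to stay on the safe side is to pin the pool, at creation time, to a version the Solaris 10 system reports as supported, and leave dedup off. A minimal sketch; the pool and disk names here are made up:

  # On the OpenSolaris side, create the pool at a version that
  # Solaris 10u9 reports as supported, and do not enable dedup:
  zpool create -o version=22 tank mirror c1t0d0 c1t1d0

  # Confirm what the pool reports before moving it:
  zpool get version tank

  # On the Solaris 10u9 side, check what the kernel supports:
  zpool upgrade -v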


On Sep 10, 2010, at 10:23 PM, Hans Foertsch wrote:

 bash-3.00# uname -a
 SunOS testxx10 5.10 Generic_142910-17 i86pc i386 i86pc
 
 bash-3.00# zpool upgrade -v
 This system is currently running ZFS pool version 22.
 
 The following versions are supported:
 
 VER  DESCRIPTION
 ---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Reserved
 22  Received properties
 
 For more information on a particular version, including supported releases,
 see the ZFS Administration Guide.
 
 This is an interesting situation...
 
 What happens if you use zpools created with OpenSolaris, with dedup enabled, on Solaris 10u9?
 
 Hans Foertsch


Re: [zfs-discuss] Solaris 10u9 with zpool version 22, but no DEDUP (version 21 reserved)

2010-09-11 Thread Prabahar Jeyaram

On Sep 11, 2010, at 6:04 PM, P-O Yliniemi wrote:

 Will dedup ever be supported on ZFS/Solaris?
 

Yes, in the next major release of Solaris.

--
Prabahar.
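
To the second question quoted below: turning dedup off only affects new writes; blocks that were already written deduplicated keep their DDT references until they are rewritten. A minimal sketch of rewriting a dataset, with made-up names:

  # Stop dedup for anything written from now on:
  zfs set dedup=off tank/data

  # Rewrite the existing data so the old deduped blocks are freed:
  zfs snapshot tank/data@migrate
  zfs send tank/data@migrate | zfs recv tank/data_new

  # Verify the copy, then retire the old dataset at your leisure
  # (zfs rename / zfs destroy).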


 If not, will any possible problems be avoided if I remove (transfer data away 
 from) any filesystems with dedup=on?
 
 /PeO
 
 Prabahar Jeyaram wrote on 2010-09-11 18:39:
 What happens if you use zpools created with OpenSolaris, with dedup enabled, on Solaris 10u9?
 
 Not supported. You are on your own if you encounter any issues.
 
 --
 Prabahar.
 
 


Re: [zfs-discuss] Zfs improvements to compression in Solaris 10?

2009-10-30 Thread Prabahar Jeyaram
On Fri, Oct 30, 2009 at 09:48:39AM +0100, Gaëtan Lehmann wrote:
 
 On Aug 4, 2009, at 20:25, Prabahar Jeyaram wrote:
 
 On Tue, Aug 04, 2009 at 01:01:40PM -0500, Bob Friesenhahn wrote:
 On Tue, 4 Aug 2009, Prabahar Jeyaram wrote:
 
 You seem to be hitting:
 
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6586537
 
 The fix is available in OpenSolaris build 115 and later, but not for
 Solaris 10 yet.
 
 It is interesting that this is a simple thread priority issue. The
 system has a ton of available CPU but the higher priority compression
 thread seems to cause scheduling lockout. The Perfmeter tool shows that
 compression is a very short-term spike in CPU. Of course since Perfmeter
 and other apps stop running, it might be missing some sample data.
 
 I could put the X11 server into the real-time scheduling class but hate
 to think about what would happen as soon as Firefox visits a web
 site. :-)
 
 Compression is only used for the intermittently-used backup pool so it
 would be a shame to reduce overall system performance for the rest of
 the time.
 
 Do you know if this fix is planned to be integrated into a future
 Solaris 10 update?
 
 
 Yes. It is planned for S10U9.
 
 
 In the mean time, is there a patch available for Solaris 10?

No, not yet.

 I can't find it on sunsolve.
 

--
Prabahar.

 Thanks,
 
 Gaëtan
 
 -- 
 Gaëtan Lehmann
 Biologie du Développement et de la Reproduction
 INRA de Jouy-en-Josas (France)
 tel: +33 1 34 65 29 66    fax: 01 34 65 29 09
 http://voxel.jouy.inra.fr  http://www.itk.org
 http://www.mandriva.org  http://www.bepo.fr
 




Re: [zfs-discuss] Zfs improvements to compression in Solaris 10?

2009-08-04 Thread Prabahar Jeyaram
You seem to be hitting:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6586537

The fix is available in OpenSolaris build 115 and later, but not for Solaris 10 yet.

--
Prabahar.
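
Until a Solaris 10 fix lands, the practical workaround is to cut the CPU cost of compression on the affected dataset. A sketch, assuming the backup pool is simply named 'backup':

  # A lighter gzip level costs far less CPU per block than the
  # default gzip-6:
  zfs set compression=gzip-1 backup

  # Or fall back to lzjb, which is cheaper still:
  zfs set compression=lzjb backup

  # Check what is in effect:
  zfs get -r compression backup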

On Tue, Aug 04, 2009 at 10:08:37AM -0500, Bob Friesenhahn wrote:
 Are there any improvements in the Solaris 10 pipeline for how compression 
 is implemented?

 I changed my USB-based backup pool to use gzip compression (with default 
 level 6) rather than the lzjb compression which was used before. When 
 lzjb compression was used, it would cause the X11 session to become jerky 
 and unresponsive while data was copied to the backup pool. With gzip 
 compression, the system just goes away for as long as eight seconds at a 
 time. It goes away for so long that the Perfmeter tool pops up a window 
 saying that it can't contact localhost.

 I have done some simple testing to verify that the issue is not specific 
 to the X11 server since this little test loop shows the (up to) 8 second 
 delays in execution:

 while true
 do
   date
   sleep 1
 done > times.txt

 Does current OpenSolaris do better in this area?

 Bob
 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Zfs improvements to compression in Solaris 10?

2009-08-04 Thread Prabahar Jeyaram
On Tue, Aug 04, 2009 at 01:01:40PM -0500, Bob Friesenhahn wrote:
 On Tue, 4 Aug 2009, Prabahar Jeyaram wrote:

 You seem to be hitting:

 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6586537

 The fix is available in OpenSolaris build 115 and later, but not for
 Solaris 10 yet.

 It is interesting that this is a simple thread priority issue.  The  
 system has a ton of available CPU but the higher priority compression  
 thread seems to cause scheduling lockout.  The Perfmeter tool shows that 
 compression is a very short-term spike in CPU. Of course since Perfmeter 
 and other apps stop running, it might be missing some sample data.

 I could put the X11 server into the real-time scheduling class but hate to 
 think about what would happen as soon as Firefox visits a web site. :-)

 Compression is only used for the intermittently-used backup pool so it  
 would be a shame to reduce overall system performance for the rest of the 
 time.

 Do you know if this fix is planned to be integrated into a future Solaris 
 10 update?


Yes. It is planned for S10U9. 

--
Prabahar.
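
On the scheduling-class idea Bob mentions above, the knob is priocntl(1); a sketch, with a hypothetical PID:

  # Show the scheduling class and parameters of a process:
  priocntl -d -i pid 1234

  # Moving it to the real-time class is possible, but as Bob notes
  # it cuts both ways; a runaway RT process can starve the system:
  # priocntl -s -c RT -i pid 1234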

 Bob
 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] zfs source documentation

2008-12-16 Thread Prabahar Jeyaram
There is a ZFS source tour:

URL: http://www.opensolaris.org/os/community/zfs/source/

--
Prabahar.

On Dec 14, 2008, at 10:25 PM, kavita wrote:

 Is there documentation available for the ZFS source code?


Re: [zfs-discuss] Long delays in txg_wait_open() - Waiting for transaction group to open

2008-11-23 Thread Prabahar Jeyaram
The reason rfs3_{write|create} waits so long in txg_wait_open() is that a
syncing txg is taking a long time to complete.

You may want to trace and track the syncing txg to find the reason for the
delay.

--
Prabahar.
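
A sketch of what that tracing might look like, along the same lines as Amer's txg_wait_open script quoted below (the fbt probe names are assumed to be available on this kernel):

  # Quantize how long each syncing txg (spa_sync) takes:
  dtrace -qn '
  fbt::spa_sync:entry  { self->t = timestamp; }
  fbt::spa_sync:return /self->t/
  {
          @["spa_sync (ns)"] = quantize(timestamp - self->t);
          self->t = 0;
  }
  tick-10sec { printa(@); trunc(@); }'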

On Sun, Nov 23, 2008 at 05:51:44PM -0800, Amer Ather wrote:
 IHAC who is seeing very slow NFS transactions over ZFS. rfs3_write(), 
 rfs3_create() and others are taking on the order of 17-20 seconds to 
 complete. Profiling these transactions shows most of the time is spent 
 in txg_wait_open() - waiting for the transaction group to open.
 
 We tried zfs_nocacheflush, but it didn't help. iostat shows good 
 service times (10ms).
 
 System is running Solaris 10 - 137137-09
 
 0301b616ec00  3000813e168  3017b9e3798   2  59  6002296f966
   PC: cv_wait+0x38   CMD: mv camshot_081118_29.jpg camshot_081118_59.jpg camshot_081118_000129.jp
 stack pointer for thread 301b616ec00: 2a104a9ceb1
 [ 02a104a9ceb1 cv_wait+0x38() ]
   txg_wait_open+0x54()
   zfs_write+0x34c()
   fop_write+0x20()
   write+0x268()
   syscall_trap32+0xcc()
 
 Timing txg_wait_open() shows:
 
 txg_wait_open delay
   DELAY
            value  ------------- Distribution ------------- count
       4294967296 |                                         0
       8589934592 |@                                        1
      17179869184 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  33
      34359738368 |                                         0
 
 txg_wait_open delay
   DELAY
            value  ------------- Distribution ------------- count
        134217728 |                                         0
        268435456 |@@                                       2
        536870912 |                                         0
       1073741824 |                                         0
       2147483648 |@                                        1
       4294967296 |                                         0
       8589934592 |@@@@@@@                                  7
      17179869184 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@            29
      34359738368 |                                         0
 
 DTrace script:
 #!/usr/sbin/dtrace -qs
 fbt::txg_wait_open:entry
 {
   self->t = timestamp;
 }
 fbt::txg_wait_open:return
 /self->t/
 {
   @a["DELAY"] = quantize(timestamp - self->t);
   self->t = 0;
 }
 tick-5sec {
   printf("\ntxg_wait_open delay");
   printa(@a);
   trunc(@a);
 }
 
 
 I have also profiled rfs3_write(), rfs3_create() and others using DTrace, 
 taking the time delta and stack at each frame for all functions called by 
 rfs3_*, and txg_wait_open() is the one taking the majority of the time.
 
 Thanks,
 Amer.


Re: [zfs-discuss] `zfs list` doesn't show my snapshot

2008-11-21 Thread Prabahar Jeyaram
'zfs list' does not list snapshots by default.

You need to use the '-t snapshot' option with 'zfs list' to view them.

--
Prabahar.
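
For example, against the pool shown below:

  # Snapshots only:
  zfs list -t snapshot

  # Or just the ones under a given dataset:
  zfs list -r -t snapshot rpool/ROOT

  # Builds that have the 'listsnapshots' pool property can also show
  # snapshots in plain 'zfs list' output:
  # zpool set listsnapshots=on rpool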

On Sat, Nov 22, 2008 at 12:14:47AM +0100, Pawel Tecza wrote:
 Hello All,
 
 This is my zfs list:
 
 # zfs list
 NAME   USED  AVAIL  REFER  MOUNTPOINT
 rpool 10,5G  3,85G61K  /rpool
 rpool/ROOT9,04G  3,85G18K  legacy
 rpool/ROOT/opensolaris89,7M  3,85G  5,44G  legacy
 rpool/ROOT/opensolaris-1  8,95G  3,85G  5,52G  legacy
 rpool/dump 256M  3,85G   256M  -
 rpool/export   747M  3,85G19K  /export
 rpool/export/home  747M  3,85G   747M  /export/home
 rpool/swap 524M  3,85G   524M  -
 
 Today I've created one snapshot as below:
 
 # zfs snapshot rpool/ROOT/[EMAIL PROTECTED]
 
 Unfortunately I can't see it, because the `zfs list` command doesn't show it:
 
 # zfs list
 NAME   USED  AVAIL  REFER  MOUNTPOINT
 rpool 10,5G  3,85G61K  /rpool
 rpool/ROOT9,04G  3,85G18K  legacy
 rpool/ROOT/opensolaris89,7M  3,85G  5,44G  legacy
 rpool/ROOT/opensolaris-1  8,95G  3,85G  5,52G  legacy
 rpool/dump 256M  3,85G   256M  -
 rpool/export   747M  3,85G19K  /export
 rpool/export/home  747M  3,85G   747M  /export/home
 rpool/swap 524M  3,85G   524M  -
 
 I know the snapshot exists, because I can't create it again:
 
 # zfs snapshot rpool/ROOT/[EMAIL PROTECTED]
 cannot create snapshot 'rpool/ROOT/[EMAIL PROTECTED]': dataset 
 already exists
 
 Isn't that strange? How can you explain it?
 
 I use OpenSolaris 2008.11 snv_101a:
 
 # uname -a
 SunOS oklahoma 5.11 snv_101a i86pc i386 i86pc Solaris
 
 My best regards,
 
 Pawel
 


Re: [zfs-discuss] Q: ZFS Boot Install / LU support ETA?

2008-05-16 Thread Prabahar Jeyaram
The write throttling improvement is in build 87.

--
Prabahar.

Lori Alt wrote:
 Actually, I only meant that zfs boot was integrated
 into build 90.  I don't know about the improved
 write throttling.

 I will check into why there was no mention of this
 on the heads up page.

 Lori

 Andrew Pattison wrote:
   
 Were both of these items (ZFS boot install support and improved write 
 throttling) integrated into build 90? I don't see any mention of this 
 on the Nevada heads-up page.

 Thanks

 Andrew.

 On Fri, May 16, 2008 at 5:21 PM, Lori Alt [EMAIL PROTECTED] wrote:



 It has been integrated into Nevada build 90.

 Lori

 andrew wrote:

 What is the current estimated ETA on the integration of
 install support for ZFS boot/root into Nevada?

 Also, do you have an idea when we can expect the improved ZFS
 write throttling to integrate?

 Thanks

 Andrew.





 -- 
 Andrew Pattison
 andrum04 at gmail dot com 
 



Re: [zfs-discuss] ZFS panics solaris while switching a volume to read-only

2008-05-16 Thread Prabahar Jeyaram
The fix is already in Solaris 10 U6. A patch for S10U5 will only be 
available when S10U6 is released.

--
Prabahar.

Veltror wrote:
 Is there any possibility that PSARC 2007/567 can be made into a patch for 
 Solaris 10 U5? We are planning to dispose of Veritas as quickly as possible, 
 but since all storage on production machines is on EMC Symmetrix with 
 back-end mirroring, this panic is a showstopper for us. Or is it so 
 intertwined that a backport of this PSARC to U5 is out of the question?
 
 Thanks
 
 
 Roman
  
  


Re: [zfs-discuss] ZFS commands sudden slow down, cpu spiked

2008-02-21 Thread Prabahar Jeyaram
Hi Max,

You might be hitting BUG 6513209 (a contributor to the 'zpool import' 
delay). There is going to be an official patch soon; currently it is in 
T-Patch state.

You should be able to get the T-Patch through your support channel.

--
Prabahar.



Max Holm wrote:
 Hi,
 
 I have a 3-node (SunFire V890) VCS cluster running Solaris 10 U4
 with LUNs from some Sun 6130, 6140 and IBM 8100 arrays. It has been
 working well, but one of the nodes started to have trouble
 running ZFS commands this Tue, 2/19. Any ZFS command, e.g. 
 'zpool import', can take hours to complete. Sometimes it takes 4-5 
 minutes; run it again, and it can take 60 minutes. The other 2 
 nodes that share the same set of LUNs are still normal so far - 
 they take some 5-10 seconds or less for the same commands.  
 I haven't noticed any error messages from the arrays or SAN switches,
 and other than the HBAs and switch ports, the nodes are virtually identical.
 (Other commands like cfgadm, format, ... seem normal, so I suspect
 the culprit might be related to ZFS. I opened a case with Sun, but this
 route seems to take forever for this kind of issue and I haven't gotten
 any answer yet.)
 
 The host is not down or crashed. I rebooted it once today; I'm not sure
 the reboot fixed anything ('zpool import' can still take minutes rather
 than seconds to complete). I still need to create some test LUNs and pools
 for more tests. It seems everything is still normal except ZFS.  
 Most ZFS commands also drive CPU load up until they complete, 
 as seen in vmstat, mpstat, or top. This has been causing us trouble, 
 as our home-grown VCS ZFS agent considers the zpool dead 
 after some consecutive failures in probing the pool ('zpool status' 
 takes forever to complete).
 
 Does anyone have the same problem or know what the cause/fix might be?
 Thanks.
 
 Max Holm
  
  


Re: [zfs-discuss] Kernel panic on arc_buf_remove_ref() assertion

2008-02-18 Thread Prabahar Jeyaram
The patches (127728-06 for SPARC, 127729-07 for x86) which contain the fix for 
this panic are in a temporary state and will be released via SunSolve soon.

Please contact your support channel to get these patches.

--
Prabahar.

Stuart Anderson wrote:
 On Mon, Feb 18, 2008 at 06:28:31PM -0800, Stuart Anderson wrote:
 Is this kernel panic a known ZFS bug, or should I open a new ticket?

 Feb 18 17:55:18 thumper1 genunix: [ID 403854 kern.notice] assertion failed: 
 arc_buf_remove_ref(db->db_buf, db) == 0, file: ../../common/fs/zfs/dbuf.c, 
 line: 1692
 
 It looks like this might be bug 6523336,
 http://sunsolve.sun.com/search/document.do?assetkey=1-66-201229-1
 
 Does anyone know when the Binary relief for this and other Sol10 ZFS
 kernel panics will be released as normal kernel patches?
 
 Thanks.
 


Re: [zfs-discuss] Kernel panic on arc_buf_remove_ref() assertion

2008-02-18 Thread Prabahar Jeyaram
An IDR (released immediately) is interim relief, which also contains the 
fix, provided to customers until the official patch (which usually takes 
longer to be released) is available. The patch should be considered the 
permanent solution.

--
Prabahar.

Stuart Anderson wrote:
 Thanks for the information.
 
 How does the temporary patch 127729-07 relate to the IDR127787 (x86) which
 I believe also claims to fix this panic?
 
 Thanks.
 
 
 On Mon, Feb 18, 2008 at 08:32:03PM -0800, Prabahar Jeyaram wrote:
 The patches (127728-06 : sparc, 127729-07 : x86) which has the fix for 
 this panic is in temporary state and will be released via SunSolve soon.

 Please contact your support channel to get these patches.

 --
 Prabahar.

 Stuart Anderson wrote:
 On Mon, Feb 18, 2008 at 06:28:31PM -0800, Stuart Anderson wrote:
 Is this kernel panic a known ZFS bug, or should I open a new ticket?

 Feb 18 17:55:18 thumper1 genunix: [ID 403854 kern.notice] assertion 
 failed: arc_buf_remove_ref(db->db_buf, db) == 0, file: 
 ../../common/fs/zfs/dbuf.c, line: 1692
 It looks like this might be bug 6523336,
 http://sunsolve.sun.com/search/document.do?assetkey=1-66-201229-1

 Does anyone know when the Binary relief for this and other Sol10 ZFS
 kernel panics will be released as normal kernel patches?

 Thanks.

 


Re: [zfs-discuss] Panic on Zpool Import (Urgent)

2008-01-13 Thread Prabahar Jeyaram
Your system seems to have hit BUG 6458218:

http://bugs.opensolaris.org/view_bug.do?bug_id=6458218

It is fixed in snv_60. As far as ZFS is concerned, snv_43 is quite old.

--
Prabahar.

On Jan 12, 2008, at 11:15 PM, Ben Rockwood wrote:

 Today, suddenly, without any apparent reason that I can find, I'm
 getting panics during zpool import. The system paniced earlier today
 and has been suffering since. This is snv_43 on a Thumper. Here's the
 stack:

 panic[cpu0]/thread=99adbac0: assertion failed: ss != NULL, file:
 ../../common/fs/zfs/space_map.c, line: 145

 fe8000a240a0 genunix:assfail+83 ()
 fe8000a24130 zfs:space_map_remove+1d6 ()
 fe8000a24180 zfs:space_map_claim+49 ()
 fe8000a241e0 zfs:metaslab_claim_dva+130 ()
 fe8000a24240 zfs:metaslab_claim+94 ()
 fe8000a24270 zfs:zio_dva_claim+27 ()
 fe8000a24290 zfs:zio_next_stage+6b ()
 fe8000a242b0 zfs:zio_gang_pipeline+33 ()
 fe8000a242d0 zfs:zio_next_stage+6b ()
 fe8000a24320 zfs:zio_wait_for_children+67 ()
 fe8000a24340 zfs:zio_wait_children_ready+22 ()
 fe8000a24360 zfs:zio_next_stage_async+c9 ()
 fe8000a243a0 zfs:zio_wait+33 ()
 fe8000a243f0 zfs:zil_claim_log_block+69 ()
 fe8000a24520 zfs:zil_parse+ec ()
 fe8000a24570 zfs:zil_claim+9a ()
 fe8000a24750 zfs:dmu_objset_find+2cc ()
 fe8000a24930 zfs:dmu_objset_find+fc ()
 fe8000a24b10 zfs:dmu_objset_find+fc ()
 fe8000a24bb0 zfs:spa_load+67b ()
 fe8000a24c20 zfs:spa_import+a0 ()
 fe8000a24c60 zfs:zfs_ioc_pool_import+79 ()
 fe8000a24ce0 zfs:zfsdev_ioctl+135 ()
 fe8000a24d20 genunix:cdev_ioctl+55 ()
 fe8000a24d60 specfs:spec_ioctl+99 ()
 fe8000a24dc0 genunix:fop_ioctl+3b ()
 fe8000a24ec0 genunix:ioctl+180 ()
 fe8000a24f10 unix:sys_syscall32+101 ()

 syncing file systems... done

 This is almost identical to a post to this list over a year ago titled
 "ZFS Panic". There was follow-up on it but the results didn't make it
 back to the list.

 I spent time doing a full sweep for any hardware failures, pulled 2
 drives that I suspected as problematic but weren't flagged as such,  
 etc,
 etc, etc.  Nothing helps.

 Bill suggested a 'zpool import -o ro' on the other post, but thats not
 working either.

 I _can_ use 'zpool import' to see the pool, but I have to force the
 import. A simple 'zpool import' returns output in about a minute.
 'zpool import -f poolname' takes almost exactly 10 minutes every single
 time, like it hits some timeout and then panics.

 I did notice that while the 'zpool import' is running 'iostat' is
 useless, just hangs.  I still want to believe this is some device
 misbehaving but I have no evidence to support that theory.

 Any and all suggestions are greatly appreciated.  I've put around 8
 hours into this so far and I'm getting absolutely nowhere.

 Thanks

 benr.


Re: [zfs-discuss] S10u4 in kernel sharetab

2007-10-24 Thread Prabahar Jeyaram
Nope. It is not there in S10U4.

--
Prabahar.

On Oct 24, 2007, at 9:11 AM, Matthew C Aycock wrote:

 There was a lot of talk about ZFS and NFS shares being a problem  
 when there was a large number of filesystems. There was a fix that  
 in part included an in-kernel sharetab (I think :). Does anyone know  
 if this has made it into S10u4?

 Thanks,

 BlueUmp




Re: [zfs-discuss] ZFS file system is crashing my system

2007-10-09 Thread Prabahar Jeyaram
Your system seems to have hit a variant of BUG:

6458218 - http://bugs.opensolaris.org/view_bug.do?bug_id=6458218

This is fixed in OpenSolaris build 60 or S10U4.

--
Prabahar.


On Oct 8, 2007, at 10:04 PM, dudekula mastan wrote:

 Hi All,

 Has anyone had a chance to look into this issue?

 -Masthan D

 dudekula mastan [EMAIL PROTECTED] wrote:

 Hi All,

 While pumping I/O on a ZFS file system, my system is crashing/ 
 panicking. Please find the crash dump below.

 panic[cpu0]/thread=2a100adfcc0: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125

 02a100adec40 genunix:assfail+74 (7b652448, 7b652458, 7d, 183d800, 11ed400, 0)
 02a100adecf0 zfs:space_map_remove+b8 (3000683e7b8, 2b20, 2, 7b652000, 7b652400, 7b652400)
 02a100adedd0 zfs:space_map_load+218 (3000683e7b8, 30006f5f160, 1000, 3000683e488, 2b00, 1)
 02a100adeea0 zfs:metaslab_activate+3c (3000683e480, 8000, c000, 24a998, 3000683e480, c000)
 02a100adef50 zfs:metaslab_group_alloc+1bc (3fff, 2, 8000, 7e68000, 30006766080, )
 02a100adf030 zfs:metaslab_alloc_dva+114 (0, 7e68000, 30006766080, 2, 30005572540, 1e910)
 02a100adf100 zfs:metaslab_alloc+2c (3000391e940, 2, 30006766080, 1, 1e910, 0)
 02a100adf1b0 zfs:zio_dva_allocate+4c (30005dd8a40, 7b6335a8, 30006766080, 704e2508, 704e2400, 20001)
 02a100adf260 zfs:zio_write_compress+1ec (30005dd8a40, 23e20b, 23e000, ff00ff, 2, 30006766080)
 02a100adf330 zfs:arc_write+e4 (30005dd8a40, 3000391e940, 6, 2, 1, 1e910)
 02a100adf440 zfs:dbuf_sync+6c0 (30006af2570, 30005dd9440, 2b3ca, 2, 6, 1e910)
 02a100adf560 zfs:dnode_sync+35c (0, 0, 30005dd9440, 30005ac8cc0, 2, 2)
 02a100adf620 zfs:dmu_objset_sync_dnodes+6c (30005dd96c0, 30005dd97a0, 30005ac8cc0, 30006ae7750, 30006bd3ca0, 0)
 02a100adf6d0 zfs:dmu_objset_sync+54 (30005dd96c0, 30005ac8cc0, 0, 0, 300060c5318, 1e910)
 02a100adf7e0 zfs:dsl_dataset_sync+c (30006f36780, 30005ac8cc0, 30006f36810, 300040c7db8, 300040c7db8, 30006f36780)
 02a100adf890 zfs:dsl_pool_sync+64 (300040c7d00, 1e910, 30006f36780, 30005ac9640, 30005581a80, 30005581aa8)
 02a100adf940 zfs:spa_sync+1b0 (3000391e940, 1e910, 0, 0, 2a100adfcc4, 1)

 [%l0-7 register window dumps trimmed; the dump is truncated at this point in the archive]

Re: [zfs-discuss] ZFS file system is crashing my system

2007-10-09 Thread Prabahar Jeyaram
Hi Masthan,

There was a race in the block allocation code which allocated a single
disk block to two consumers. The system trips when both consumers try to
free the block.

--
Prabahar.

On Oct 9, 2007, at 4:20 AM, dudekula mastan wrote:

 Hi Jeyaram,

 Thanks for your reply. Can you explain more about this bug?

 Regards
 Masthan D

 Prabahar Jeyaram [EMAIL PROTECTED] wrote:
 Your system seems to have hit a variant of BUG:

 6458218 - http://bugs.opensolaris.org/view_bug.do?bug_id=6458218

 This is fixed in OpenSolaris build 60 or S10U4.

 --
 Prabahar.



Re: [zfs-discuss] ZFS gzip compression

2007-09-29 Thread Prabahar Jeyaram
Nope. This feature hasn't made it into S10U4. We anticipate it will be
available in S10U5.

--
Prabahar.
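
Once on a build that has it, gzip is just another value of the per-dataset compression property. A sketch with made-up dataset names:

  zfs set compression=gzip tank/docs        # default level, gzip-6
  zfs set compression=gzip-9 tank/archive   # levels gzip-1 .. gzip-9
  zfs get -r compression tank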

Scott wrote:
 Did the ZFS gzip compression feature (i.e. zfs set compression=gzip) make 
 it into Solaris 10 U4? I was looking forward to being able to use it in a 
 production Solaris release without having to compile my OpenSolaris build, 
 but it doesn't seem to be there.
  
  


Re: [zfs-discuss] zpool history not found

2007-09-17 Thread Prabahar Jeyaram
'zpool history' arrived with ZFS pool version 4. You should get it  
in S10U4.

--
Prabahar.
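
Once the pool is at version 4 or later it is simply (pool name made up):

  # Bring an older pool up to the running ZFS version (one-way):
  zpool upgrade tank

  # Then the per-pool command log is available:
  zpool history tank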

On Sep 17, 2007, at 8:01 PM, sunnie wrote:

 My system is currently running ZFS version 3,
 and I just can't find the 'zpool history' command.
 Can anyone help me with this problem?




Re: [zfs-discuss] ZFS performance and memory consumption

2007-07-05 Thread Prabahar Jeyaram
This system exhibits the symptoms of:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6495013

Moving to Nevada would certainly help, as it has many more bug fixes and
performance improvements over S10U3.

--
Prabahar.
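
For reference, the space map accounting Łukasz sums below can be pulled from the live kernel in one shot (run as root; same mdb walk he quotes):

  echo '::walk spa | ::walk metaslab | ::print struct metaslab ms_smo.smo_objsize' | mdb -k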

Łukasz wrote:
 Hello,
 I'm investigating a problem with ZFS over NFS. 
 The problems started about 2 weeks ago; most NFS threads are hanging in 
 txg_wait_open().
 The sync thread is consuming one processor all the time.
 The average spa_sync() time from entry to return is 2 minutes.
 I can't use DTrace to examine the problem, because I keep getting:
  dtrace: processing aborted: Abort due to systemic unresponsiveness
 
 Using mdb and examining tx_sync_thread with ::findstack, I keep getting this 
 stack:
 [ fe8002da1410 _resume_from_idle+0xf8() ]
   fe8002da1570 avl_walk+0x39()
   fe8002da15a0 space_map_alloc+0x21()
   fe8002da1620 metaslab_group_alloc+0x1a2()
   fe8002da16b0 metaslab_alloc_dva+0xab()
   fe8002da1700 metaslab_alloc+0x51()
   fe8002da1720 zio_dva_allocate+0x3f()
   fe8002da1730 zio_next_stage+0x72()
   fe8002da1750 zio_checksum_generate+0x5f()
   fe8002da1760 zio_next_stage+0x72()
   fe8002da17b0 zio_write_compress+0x136()
   fe8002da17c0 zio_next_stage+0x72()
   fe8002da17f0 zio_wait_for_children+0x49()
   fe8002da1800 zio_wait_children_ready+0x15()
   fe8002da1810 zio_next_stage_async+0xae()
   fe8002da1820 zio_nowait+9()
   fe8002da18b0 arc_write+0xe7()
   fe8002da19a0 dbuf_sync+0x274()
   fe8002da1a10 dnode_sync+0x2e3()
   fe8002da1a60 dmu_objset_sync_dnodes+0x7b()
   fe8002da1af0 dmu_objset_sync+0x6a()
   fe8002da1b10 dsl_dataset_sync+0x23()
   fe8002da1b60 dsl_pool_sync+0x7b()
   fe8002da1bd0 spa_sync+0x116()
 
 I also managed to sum the metaslab space maps:
   ::walk spa | ::walk metaslab | ::print struct metaslab ms_smo.smo_objsize
 and I got 1GB.
 
 I have a 1.3T pool with 500G of available space. 
 The pool was created about 3 months ago.
 I'm using Solaris 10 U3.
 
 Do you think changing the system to Nevada will help?
 I read that there are some changes that can help:
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6512391
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6532056
  
  


Re: [zfs-discuss] Solaris 10 6/06 now available for download

2006-06-27 Thread Prabahar Jeyaram
Indeed. ZFS is included in Solaris 10 U2.

-- Prabahar.

Shannon Roddy wrote:

 Solaris 10u2 was released today.  You can now download it from here:

 http://www.sun.com/software/solaris/get.jsp
   
 
 Does anyone know if ZFS is included in this release?  One of my local
 Sun reps said it did not make it into the u2 release, though I have
 heard for ages that 6/06 would include it.
 
 Thanks!
 


Re: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-22 Thread Prabahar Jeyaram
Yep. ZFS supports the ioctl (_FIOFFS) which 'lockfs -f' issues.

-- Prabahar.
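
So per-filesystem flushing works on ZFS the same way it does on UFS; for example, with a made-up mount point:

  # Flush just this mounted file system:
  lockfs -f /tank/data

  # Plain sync(1M) flushes every mounted file system instead.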

Darren J Moffat wrote:
 Bill Sommerfeld wrote:
 On Thu, 2006-06-22 at 13:01, Roch wrote:
  Is there a sync command that targets an individual FS?
 
 Yes.  lockfs -f
 
 Does lockfs work with ZFS? The man page appears to indicate it is very 
 UFS-specific.
 