Re: [zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-09 Thread Adam Leventhal
On Thu, May 03, 2007 at 11:43:49AM -0500, [EMAIL PROTECTED] wrote:
 I think this may be a premature leap -- it is still undetermined whether we
 are running up against an as-yet-unknown bug in the kernel implementation
 of gzip used for this compression type. From my understanding the gzip code
 has been reused from an older kernel implementation, and it may be possible
 that this code has some issues with kernel stuttering when used for ZFS
 compression that were not exposed by its original usage.  If it turns out
 that it is just a case of a high-CPU trade-off for buying faster
 compression times, then the talk of a tunable may make sense (if it is even
 possible given the constraints of the gzip code in kernelspace).

The in-kernel version of zlib is the latest version (1.2.3). It's not
surprising that we're spending all of our time in zlib if the machine is
being driven by I/O. There are outstanding problems with compression in
the ZIO pipeline that may contribute to the bursty behavior.

Adam

-- 
Adam Leventhal, Solaris Kernel Development   http://blogs.sun.com/ahl
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-09 Thread Bart Smaalders

Adam Leventhal wrote:

On Wed, May 09, 2007 at 11:52:06AM +0100, Darren J Moffat wrote:

Can you give some more info on what these problems are.


I was thinking of this bug:

  6460622 zio_nowait() doesn't live up to its name

which I was surprised to find was fixed by Eric in build 59.

Adam



It was pointed out by Jürgen Keil that using ZFS compression
submits a lot of prio 60 tasks to the system task queues;
this would clobber interactive performance.

- Bart


--
Bart Smaalders  Solaris Kernel Performance
[EMAIL PROTECTED]   http://blogs.sun.com/barts
---BeginMessage---
 with recent bits ZFS compression is now handled concurrently with  
 many CPUs working on different records.
 So this load will burn more CPUs and achieve its results
 (compression) faster.
 
 So the observed pauses should be consistent with that of a load  
 generating high system time.
 The assumption is that compression now goes faster than when it was
 single threaded.
 
 Is this undesirable ? We might seek a way to slow down compression in  
 order to limit the system load.

According to this dtrace script

#!/usr/sbin/dtrace -s

sdt:genunix::taskq-enqueue
/((taskq_ent_t *)arg1)->tqent_func == (task_func_t *)&`zio_write_compress/
{
        @where[stack()] = count();
}

tick-5s {
        printa(@where);
        trunc(@where);
}




... I see bursts of ~ 1000 zio_write_compress() [gzip] taskq calls
enqueued into the spa_zio_issue taskq by zfs`spa_sync() and
its children:

  0  76337 :tick-5s 
...
  zfs`zio_next_stage+0xa1
  zfs`zio_wait_for_children+0x5d
  zfs`zio_wait_children_ready+0x20
  zfs`zio_next_stage_async+0xbb
  zfs`zio_nowait+0x11
  zfs`dbuf_sync_leaf+0x1b3
  zfs`dbuf_sync_list+0x51
  zfs`dbuf_sync_indirect+0xcd
  zfs`dbuf_sync_list+0x5e
  zfs`dbuf_sync_indirect+0xcd
  zfs`dbuf_sync_list+0x5e
  zfs`dnode_sync+0x214
  zfs`dmu_objset_sync_dnodes+0x55
  zfs`dmu_objset_sync+0x13d
  zfs`dsl_dataset_sync+0x42
  zfs`dsl_pool_sync+0xb5
  zfs`spa_sync+0x1c5
  zfs`txg_sync_thread+0x19a
  unix`thread_start+0x8
 1092

  0  76337 :tick-5s 



It seems that after such a batch of compress requests is
submitted to the spa_zio_issue taskq, the kernel is busy
for several seconds working on these taskq entries.
It seems that this blocks all other taskq activity inside the
kernel...



This dtrace script counts the number of 
zio_write_compress() calls enqueued / execed 
by the kernel per second:

#!/usr/sbin/dtrace -qs

sdt:genunix::taskq-enqueue
/((taskq_ent_t *)arg1)->tqent_func == (task_func_t *)&`zio_write_compress/
{
        this->tqe = (taskq_ent_t *)arg1;
        @enq[this->tqe->tqent_func] = count();
}

sdt:genunix::taskq-exec-end
/((taskq_ent_t *)arg1)->tqent_func == (task_func_t *)&`zio_write_compress/
{
        this->tqe = (taskq_ent_t *)arg1;
        @exec[this->tqe->tqent_func] = count();
}

tick-1s {
        /*
        printf("%Y\n", walltimestamp);
        */
        printf("TS(sec): %u\n", timestamp / 1000000000);
        printa("enqueue %a: %@d\n", @enq);
        printa("exec    %a: %@d\n", @exec);
        trunc(@enq);
        trunc(@exec);
}




I see bursts of zio_write_compress() calls enqueued / execed,
and periods of time where no zio_write_compress() taskq calls
are enqueued or execed.

10#  ~jk/src/dtrace/zpool_gzip7.d 
TS(sec): 7829
TS(sec): 7830
TS(sec): 7831
TS(sec): 7832
TS(sec): 7833
TS(sec): 7834
TS(sec): 7835
enqueue zfs`zio_write_compress: 1330
execzfs`zio_write_compress: 1330
TS(sec): 7836
TS(sec): 7837
TS(sec): 7838
TS(sec): 7839
TS(sec): 7840
TS(sec): 7841
TS(sec): 7842
TS(sec): 7843
TS(sec): 7844
enqueue zfs`zio_write_compress: 1116
execzfs`zio_write_compress: 1116
TS(sec): 7845
TS(sec): 7846
TS(sec): 7847
TS(sec): 7848
TS(sec): 7849
TS(sec): 7850
TS(sec): 7851
TS(sec): 7852
TS(sec): 7853
TS(sec): 7854
TS(sec): 7855
TS(sec): 7856
TS(sec): 7857
enqueue zfs`zio_write_compress: 932
execzfs`zio_write_compress: 932
TS(sec): 7858
TS(sec): 7859
TS(sec): 7860
TS(sec): 7861
TS(sec): 7862
TS(sec): 7863
TS(sec): 7864
TS(sec): 7865
TS(sec): 7866
TS(sec): 7867
enqueue zfs`zio_write_compress: 5
execzfs`zio_write_compress: 5
TS(sec): 7868
enqueue zfs`zio_write_compress: 774
execzfs`zio_write_compress: 774
TS(sec): 7869
TS(sec): 7870
TS(sec): 7871
TS(sec): 7872
TS(sec): 7873
TS(sec): 7874
TS(sec): 7875
TS(sec): 7876
enqueue zfs`zio_write_compress: 653
execzfs`zio_write_compress: 653
TS(sec): 7877
TS(sec): 7878
TS(sec): 7879
TS(sec): 7880
TS(sec): 7881


And a final dtrace script, which monitors scheduler activity while
filling a gzip compressed pool:

#!/usr/sbin/dtrace -qs

sched:::off-cpu,
sched:::on-cpu,
sched:::remain-cpu,
sched:::preempt
{
/*

Re[2]: [zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-08 Thread Robert Milkowski
Hello Ian,

Thursday, May 3, 2007, 10:20:20 PM, you wrote:

IC Roch Bourbonnais wrote:

 with recent bits ZFS compression is now handled concurrently with many
 CPUs working on different records.
 So this load will burn more CPUs and achieve its results
 (compression) faster.

IC Would changing (selecting a smaller) filesystem record size have any effect?

 So the observed pauses should be consistent with that of a load
 generating high system time.
 The assumption is that compression now goes faster than when it was
 single threaded.

 Is this undesirable ? We might seek a way to slow down compression in
 order to limit the system load.

IC I think you should, otherwise we have a performance throttle that scales
IC with the number of cores!

For file servers you actually want all CPUs to be used for
compression, otherwise you get bad performance with compression and
plenty of CPU left doing nothing...

So maybe a special pool/dataset property (compression_parallelism=N)?

-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-04 Thread Roch - PAE

Ian Collins writes:
  Roch Bourbonnais wrote:
  
   with recent bits ZFS compression is now handled concurrently with many
   CPUs working on different records.
   So this load will burn more CPUs and achieve its results
   (compression) faster.
  
  Would changing (selecting a smaller) filesystem record size have any effect?
  

If the problem is that we just have a high kernel load
compressing blocks, then probably not. If anything, small
records might be a tad less efficient (thus needing more CPU).

   So the observed pauses should be consistent with that of a load
   generating high system time.
   The assumption is that compression now goes faster than when it was
   single threaded.
  
   Is this undesirable ? We might seek a way to slow down compression in
   order to limit the system load.
  
  I think you should, otherwise we have a performance throttle that scales
  with the number of cores!
  

Again I wonder to what extent the issue becomes painful due 
to lack of write throttling. Once we have that in, we should 
revisit this. 

-r

  Ian
  



[zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-03 Thread Jürgen Keil
 The reason you are busy computing SHA1 hashes is you are using 
 /dev/urandom.  The implementation of drv/random uses
 SHA1 for mixing, 
 actually strictly speaking it is the swrand provider that does that part.

Ahh, ok.

So, instead of using dd reading from /dev/urandom all the time,
I've now used this quick C program to write one /dev/urandom block
over and over to the gzip compressed zpool:

=
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int
main(int argc, char **argv)
{
	int fd;
	char buf[128*1024];

	fd = open("/dev/urandom", O_RDONLY);
	if (fd < 0) {
		perror("open /dev/urandom");
		exit(1);
	}
	if (read(fd, buf, sizeof(buf)) != sizeof(buf)) {
		perror("fill buf from /dev/urandom");
		exit(1);
	}
	close(fd);
	fd = open(argv[1], O_WRONLY|O_CREAT, 0666);
	if (fd < 0) {
		perror(argv[1]);
		exit(1);
	}
	for (;;) {
		if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
			break;
		}
	}
	close(fd);
	exit(0);
}
=


Avoiding the reads from /dev/urandom makes the effect even
more noticeable, the machine now freezes for 10+ seconds.

CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  00   0 3109  3616  316  1965   17   48   45   2450  85   0  15
  10   0 3127  3797  592  2174   17   63   46   1760  84   0  15
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  00   0 3051  3529  277  2012   14   25   48   2160  83   0  17
  10   0 3065  3739  606  1952   14   37   47   1530  82   0  17
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  00   0 3011  3538  316  2423   26   16   52   2020  81   0  19
  10   0 3019  3698  578  2694   25   23   56   3090  83   0  17

# lockstat -kIW -D 20 sleep 30

Profiling interrupt: 6080 events in 31.341 seconds (194 events/sec)

Count indv cuml rcnt nsec Hottest CPU+PILCaller  
---
 2068  34%  34% 0.00 1767 cpu[0] deflate_slow
 1506  25%  59% 0.00 1721 cpu[1] longest_match   
 1017  17%  76% 0.00 1833 cpu[1] mach_cpu_idle   
  454   7%  83% 0.00 1539 cpu[0] fill_window 
  215   4%  87% 0.00 1788 cpu[1] pqdownheap  
  152   2%  89% 0.00 1691 cpu[0] copy_block  
   89   1%  90% 0.00 1839 cpu[1] z_adler32   
   77   1%  92% 0.0036067 cpu[1] do_splx 
   64   1%  93% 0.00 2090 cpu[0] bzero   
   62   1%  94% 0.00 2082 cpu[0] do_copy_fault_nta   
   48   1%  95% 0.00 1976 cpu[0] bcopy   
   41   1%  95% 0.0062913 cpu[0] mutex_enter 
   27   0%  96% 0.00 1862 cpu[1] build_tree  
   19   0%  96% 0.00 1771 cpu[1] gen_bitlen  
   17   0%  96% 0.00 1744 cpu[0] bi_reverse  
   15   0%  97% 0.00 1783 cpu[0] page_create_va  
   15   0%  97% 0.00 1406 cpu[1] fletcher_2_native   
   14   0%  97% 0.00 1778 cpu[1] gen_codes   
   11   0%  97% 0.00  912 cpu[1]+6   ddi_mem_put8
5   0%  97% 0.00 3854 cpu[1] fsflush_do_pages
---


It seems the same problem can be observed with lzjb compression,
but the pauses with lzjb are much shorter and the kernel consumes
less system cpu time with lzjb (which is expected, I think).
 
 
This message posted from opensolaris.org


Re: [zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-03 Thread Frank Hofmann


I'm not quite sure what this test should show ?

Compressing random data is the perfect way to generate heat.
After all, compression working relies on input entropy being low.
But good random generators are characterized by the opposite - output 
entropy being high.
Even a good compressor, if operated on a good random generator's output, 
will only end up burning cycles, but not reducing the data size.


Hence, is the request here for the compressor module to 'adapt', kind of 
first-pass check the input data whether it's sufficiently low-entropy to 
warrant a compression attempt ?


If not, then what ?

FrankH.

On Thu, 3 May 2007, Jürgen Keil wrote:


 The reason you are busy computing SHA1 hashes is you are using
 /dev/urandom. [...]

 Avoiding the reads from /dev/urandom makes the effect even
 more noticeable, the machine now freezes for 10+ seconds.


Re: [zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-03 Thread Rayson Ho

On 5/3/07, Frank Hofmann [EMAIL PROTECTED] wrote:

I'm not quite sure what this test should show ?


I didn't try the test myself... but I think what it shows is a
possible problem that turning on compression can hang a machine.

Rayson





Compressing random data is the perfect way to generate heat.
After all, compression working relies on input entropy being low.
But good random generators are characterized by the opposite - output
entropy being high.
Even a good compressor, if operated on a good random generator's output,
will only end up burning cycles, but not reducing the data size.

Hence, is the request here for the compressor module to 'adapt', kind of
first-pass check the input data whether it's sufficiently low-entropy to
warrant a compression attempt ?

If not, then what ?

FrankH.




Re: [zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-03 Thread Roch Bourbonnais


with recent bits ZFS compression is now handled concurrently with  
many CPUs working on different records.
So this load will burn more CPUs and achieve its results
(compression) faster.


So the observed pauses should be consistent with that of a load  
generating high system time.
The assumption is that compression now goes faster than when it was
single threaded.


Is this undesirable ? We might seek a way to slow down compression in  
order to limit the system load.


-r

Le 3 mai 07 à 14:21, Rayson Ho a écrit :


On 5/3/07, Frank Hofmann [EMAIL PROTECTED] wrote:

I'm not quite sure what this test should show ?


I didn't try the test myself... but I think what it shows is a
possible problem that turning on compression can hang a machine.

Rayson





Compressing random data is the perfect way to generate heat.
After all, compression working relies on input entropy being low.
But good random generators are characterized by the opposite - output
entropy being high.
Even a good compressor, if operated on a good random generator's output,
will only end up burning cycles, but not reducing the data size.

Hence, is the request here for the compressor module to 'adapt', kind of
first-pass check the input data whether it's sufficiently low-entropy to
warrant a compression attempt ?

If not, then what ?

FrankH.




Re: [zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-03 Thread Wade . Stuart






[EMAIL PROTECTED] wrote on 05/03/2007 11:35:24 AM:


 with recent bits ZFS compression is now handled concurrently with
 many CPUs working on different records.
 So this load will burn more CPUs and achieve its results
 (compression) faster.

 So the observed pauses should be consistent with that of a load
 generating high system time.
 The assumption is that compression now goes faster than when it was
 single threaded.

I think this may be a premature leap -- it is still undetermined whether we
are running up against an as-yet-unknown bug in the kernel implementation
of gzip used for this compression type. From my understanding the gzip code
has been reused from an older kernel implementation, and it may be possible
that this code has some issues with kernel stuttering when used for ZFS
compression that were not exposed by its original usage.  If it turns out
that it is just a case of a high-CPU trade-off for buying faster
compression times, then the talk of a tunable may make sense (if it is even
possible given the constraints of the gzip code in kernelspace).



-Wade



Re: [zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-03 Thread Ian Collins
Roch Bourbonnais wrote:

 with recent bits ZFS compression is now handled concurrently with many
 CPUs working on different records.
 So this load will burn more CPUs and achieve its results
 (compression) faster.

Would changing (selecting a smaller) filesystem record size have any effect?

 So the observed pauses should be consistent with that of a load
 generating high system time.
 The assumption is that compression now goes faster than when it was
 single threaded.

 Is this undesirable ? We might seek a way to slow down compression in
 order to limit the system load.

I think you should, otherwise we have a performance throttle that scales
with the number of cores!

Ian



Re: [zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-03 Thread johansen-osdev
A couple more questions here.

[mpstat]

 CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
   00   0 3109  3616  316  1965   17   48   45   2450  85   0  15
   10   0 3127  3797  592  2174   17   63   46   1760  84   0  15
 CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
   00   0 3051  3529  277  2012   14   25   48   2160  83   0  17
   10   0 3065  3739  606  1952   14   37   47   1530  82   0  17
 CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
   00   0 3011  3538  316  2423   26   16   52   2020  81   0  19
   10   0 3019  3698  578  2694   25   23   56   3090  83   0  17
 
 # lockstat -kIW -D 20 sleep 30
 
 Profiling interrupt: 6080 events in 31.341 seconds (194 events/sec)
 
 Count indv cuml rcnt nsec Hottest CPU+PILCaller  
 ---
  2068  34%  34% 0.00 1767 cpu[0] deflate_slow
  1506  25%  59% 0.00 1721 cpu[1] longest_match   
  1017  17%  76% 0.00 1833 cpu[1] mach_cpu_idle   
   454   7%  83% 0.00 1539 cpu[0] fill_window 
   215   4%  87% 0.00 1788 cpu[1] pqdownheap  
snip

What do you have zfs compression set to?  The gzip level is tunable,
according to zfs set, anyway:

PROPERTY   EDIT  INHERIT   VALUES
compression YES  YES   on | off | lzjb | gzip | gzip-[1-9]

You still have idle time in this lockstat (and mpstat).

What do you get for a lockstat -A -D 20 sleep 30?

Do you see anyone with long lock hold times, long sleeps, or excessive
spinning?

The largest numbers from mpstat are for interrupts and cross calls.
What does intrstat(1M) show?

Have you run dtrace to determine the most frequent cross-callers?

#!/usr/sbin/dtrace -s

sysinfo:::xcalls
{
@a[stack(30)] = count();
}

END
{
trunc(@a, 30);
}

is an easy way to do this.

-j