[zfs-discuss] Migrating a pool

2007-03-27 Thread Constantin Gonzalez
Hi,

soon it'll be time to migrate my patchwork pool onto a real pair of
mirrored (albeit USB-based) external disks.

Today I have about half a dozen filesystems in the old pool plus dozens of
snapshots thanks to Tim Bray's excellent SMF snapshotting service.

What is the most elegant way of migrating all filesystems to the new pool,
including snapshots?

Can I do a master snapshot of the whole pool, including sub-filesystems and
their snapshots, then send/receive them to the new pool?

Or do I have to write a script that will individually snapshot all filesystems
within my old pool, then run a send (-i) orgy?

Best regards,
   Constantin

-- 
Constantin Gonzalez                          Sun Microsystems GmbH, Germany
Platform Technology Group, Global Systems Engineering  http://www.sun.de/
Tel.: +49 89/4 60 08-25 91   http://blogs.sun.com/constantin/

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Marcel Schneider, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating a pool

2007-03-27 Thread Constantin Gonzalez
Hi,

 Today I have about half a dozen filesystems in the old pool plus dozens of
 snapshots thanks to Tim Bray's excellent SMF snapshotting service.

I'm sorry I mixed up Tim's last name. The fine guy who wrote the SMF snapshot
service is Tim Foster. And here's the link:

  http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_8

There doesn't seem to be an easy answer to the original question of how to
migrate a complete pool. Writing a script with a snapshot send/receive
party seems to be the only approach.
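
For the archives, such a script boils down to something like this (a rough
sketch only: the pool names and the snapshot name are made up, and it ignores
properties, existing snapshots and error handling):

  #!/bin/ksh
  # snapshot every filesystem in the old pool and send it to the new pool
  for fs in $(zfs list -H -t filesystem -o name -r oldpool | grep '^oldpool/'); do
          zfs snapshot "$fs@migrate"
          dest="newpool/${fs#oldpool/}"     # keep the hierarchy, swap the pool name
          zfs send "$fs@migrate" | zfs receive "$dest"
  done

The existing snapshots would then still have to be replayed one by one with
incremental (zfs send -i) runs on top of that.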

I wish I could just 'zfs snapshot' the whole pool, then 'zfs send pool | zfs receive dest',
and all blocks would be transferred as they are, including all embedded snapshots.

Is that already an RFE?

Best regards,
   Constantin

-- 
Constantin Gonzalez                          Sun Microsystems GmbH, Germany
Platform Technology Group, Global Systems Engineering  http://www.sun.de/
Tel.: +49 89/4 60 08-25 91   http://blogs.sun.com/constantin/

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Marcel Schneider, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS filesystem online backup question

2007-03-27 Thread Łukasz
I have to back up many filesystems that are constantly changing, on heavily
loaded machines.
The idea is to back up online - this should avoid read I/O from the disks,
since the data should come from the cache.

Right now I'm using a script that takes a snapshot and runs zfs send.
I want to automate this operation and add a new option to zfs send:

 zfs send [-w sec] [-i snapshot] snapshot

for example

zfs send -w 10 pool/[EMAIL PROTECTED]

zfs send would then:
1. create the replicate snapshot if it does not exist
2. send the data
3. wait 10 seconds
4. rename the snapshot to replicate_previous (destroying the previous one if it exists)
5. goto 1.
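
Today's script does essentially the same loop from userland. Roughly, with a
hypothetical filesystem name, a made-up destination pool and no error handling,
it looks like this:

  #!/bin/ksh
  # userland version of the proposed replication loop
  FS=pool/fs
  while true; do
          zfs snapshot $FS@replicate
          # (the very first pass needs a full "zfs send $FS@replicate" instead)
          zfs send -i $FS@replicate_previous $FS@replicate | zfs receive -F backup/fs
          sleep 10
          zfs destroy $FS@replicate_previous
          zfs rename $FS@replicate $FS@replicate_previous
  done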

All snapshot operations are done in the kernel - it works faster that way.
I have implemented this mechanism and it works.

Do you think this change will be integrated into OpenSolaris?
Is there a chance this option will be available in Solaris update 4?

Maybe there is another way to back up a filesystem online?
I tried traversing a changing filesystem, but it does not work.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: asize is 300MB smaller than lsize - why?

2007-03-27 Thread Łukasz
I have other question about replication in this thread:

http://www.opensolaris.org/jive/thread.jspa?threadID=27082&tstart=0
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 6410 expansion shelf

2007-03-27 Thread Rayson Ho

BTW, did anyone try this??

http://blogs.sun.com/ValdisFilks/entry/improving_i_o_throughput_for

Rayson



On 3/27/07, Wee Yeh Tan [EMAIL PROTECTED] wrote:

As promised.  I got my 6140 SATA delivered yesterday and I hooked it
up to a T2000 on S10u3.  The T2000 saw the disks straight away and is
working for the last 1 hour.  I'll be running some benchmarks on it.
 I'll probably have a week with it until our vendor comes around and
steals it from me.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS filesystem online backup question

2007-03-27 Thread Mark J Musante
On Tue, 27 Mar 2007, Łukasz wrote:

 zfs send then would:
 1. create replicate snapshot if it does not exist
 2. send data
 3. wait 10 seconds
 4. rename snapshot to replicate_previous ( destroy previous if exists )
 5. goto 1.

 All snapshot operations are done in kernel - it works faster then. I
 have implemented this mechanism and it works.

Out of curiosity, what is the timing difference between a userland script
and performing the operations in the kernel?

Which of the steps are you attempting to speed up?  What's the bottleneck?


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and Kstats

2007-03-27 Thread Atul Vidwansa

Hi,
  Does ZFS have support for kstats? If I want to extract information
like the number of files committed to disk during an interval, the number of
transactions performed, I/O bandwidth, etc., how can I get that
information?

Regards,
-Atul
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS filesystem online backup question

2007-03-27 Thread Łukasz
Out of curiosity, what is the timing difference between a userland script
and performing the operations in the kernel?

[EMAIL PROTECTED] ~]# time zfs destroy solaris/[EMAIL PROTECTED] ; time zfs 
rename solaris/[EMAIL PROTECTED] solaris/[EMAIL PROTECTED]; time zfs snapshot 
solaris/[EMAIL PROTECTED]

real    0m5.220s
user    0m0.010s
sys     0m0.023s

real    0m5.856s
user    0m0.010s
sys     0m0.023s

real    0m7.620s
user    0m0.009s
sys     0m0.029s
[EMAIL PROTECTED] ~]# time zfs destroy solaris/[EMAIL PROTECTED] ; time zfs 
rename solaris/[EMAIL PROTECTED] solaris/[EMAIL PROTECTED]; time zfs snapshot 
solaris/[EMAIL PROTECTED]

real    0m7.363s
user    0m0.010s
sys     0m0.031s

real    0m5.107s
user    0m0.010s
sys     0m0.022s

real    0m7.888s
user    0m0.009s
sys     0m0.024s

The whole operation takes 15-20 seconds.

In the kernel it takes (times in ms):
  0  42867   dmu_objset_snapshot:return time 2471
  1  42867   dmu_objset_snapshot:return time 10803
  1  42867   dmu_objset_snapshot:return time 7968
  0  42867   dmu_objset_snapshot:return time 14139
  0  42867   dmu_objset_snapshot:return time 14405
  1  42867   dmu_objset_snapshot:return time 8883
  0  42867   dmu_objset_snapshot:return time 4960

For now, the code in the kernel is not optimized:

zfs_unmount_snap(snap_previous, NULL);
dmu_objset_destroy(snap_previous);
zfs_unmount_snap(zc->zc_value, NULL);
dmu_objset_rename(zc->zc_value, snap_previous);
error = dmu_objset_snapshot(zc->zc_name,
    REPLICATE_SNAPSHOT_LATEST, 0);

In the kernel, the operation can be optimized and done in a single dsl_sync_task_do() call.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Kstats

2007-03-27 Thread Roch - PAE

See

Kernel Statistics Library Functions kstat(3KSTAT)

-r

Atul Vidwansa writes:
  Peter,
  How do I get those stats programatically? Any clues?
  Regards,
  _Atul
  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Kstats

2007-03-27 Thread Shawn Walker

On 27/03/07, Atul Vidwansa [EMAIL PROTECTED] wrote:

Peter,
How do I get those stats programatically? Any clues?
Regards,
_Atul


man kstat

http://docs.sun.com/app/docs/doc/816-5172/6mbb7bu50?q=kstats&a=view

--
Less is only more where more is no good. --Frank Lloyd Wright

Shawn Walker, Software and Systems Analyst
[EMAIL PROTECTED] - http://binarycrusader.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Kstats

2007-03-27 Thread Darren J Moffat

Atul Vidwansa wrote:

Peter,
   How do I get those stats programatically? Any clues?


With the kstat(3kstat) API from C or Perl.

--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Kstats

2007-03-27 Thread Sanjeev Bagewadi

Atul,

libkstat(3LIB) is the library.
man -s 3KSTAT kstat should give a good start.
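
From the shell, the kstat(1M) and fsstat(1M) utilities expose the same
counters, for example (illustrative only - the exact statistic names depend
on the release):

  # per-fstype VOP counters for ZFS (the vopstats kstats mentioned below)
  kstat -p unix:0:vopstats_zfs

  # per-fstype activity summary, one line per second
  fsstat zfs 1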

Regards,
Sanjeev.

Atul Vidwansa wrote:

Peter,
   How do I get those stats programatically? Any clues?
Regards,
_Atul

On 3/27/07, Peter Tribble [EMAIL PROTECTED] wrote:

On 3/27/07, Atul Vidwansa [EMAIL PROTECTED] wrote:
 Hi,
Does ZFS has support for kstats? If I want to extract information
 like no of files commited to disk during an interval, no of
 transactions performed, I/O bandwidth etc, how can I get that
 information?

From the command line, look at the fsstat utility.

If you want the raw kstats then you need to look for ones
of the form 'unix:0:vopstats_*' where there are two forms:
with the name of the filesystem type (eg zfs or ufs) on the
end, or the device id of the individual filesystem.

--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Assertion raised during zfs share?, Re: a 30mb ZFS OS install

2007-03-27 Thread MC
 o I've got a modified Solaris miniroot with ZFS
 functionality which 
 takes up about 60 MB  (The compressed image, which
 GRUB uses, is less 
 than 30MB). Solaris boots entirely into RAM.  From
 poweron to full 
 functionality, it takes about 45 seconds to boot on a
 very modest 1GHz 
 Cyrix Mini ITX motherboard.
 
 o As Solaris runs entirely in RAM, there is no
 Solaris footprint on the 
 attached storage. It is entirely dedicated to ZFS.
  With a little 
kludgery, all state can be managed from ZFS in effect
 making Solaris 
 stateless.  There should be no serious ramifications
 to pulling the plug 
 on this device.  In fact that's pretty much how this
 thing is rebooted 
 right now.
 
 o As a potential example, one might consider managing
 this device via a 
 web-based interface, perhaps not all that different
 than the way you 
 might manage say, a Linksys router.
 
 Yeah I know this is silly, but it's fun.  Time to get
 back to my real job
 -- Jim C

A project like this is the opposite of silly!  I'm just wondering how so much
time has passed without it becoming an explicit OpenSolaris project!

A RAM-driven headless ZFS file server to compete with FreeNAS, OpenFiler, 
Windows Storage Server 2003 and Windows Home Server?  Where do we sign up for 
this?!?! :)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Kstats

2007-03-27 Thread Rayson Ho

I like these articles at SDN:

http://developers.sun.com/solaris/articles/kstatc.html
http://developers.sun.com/solaris/articles/kstat_part2.html

Rayson




On 3/27/07, Sanjeev Bagewadi [EMAIL PROTECTED] wrote:

Atul,

libkstat(3LIB) is the library.
man -s 3KSTAT kstat should give a good start.

Regards,
Sanjeev.

Atul Vidwansa wrote:
 Peter,
How do I get those stats programatically? Any clues?
 Regards,
 _Atul

 On 3/27/07, Peter Tribble [EMAIL PROTECTED] wrote:
 On 3/27/07, Atul Vidwansa [EMAIL PROTECTED] wrote:
  Hi,
 Does ZFS has support for kstats? If I want to extract information
  like no of files commited to disk during an interval, no of
  transactions performed, I/O bandwidth etc, how can I get that
  information?

 From the command line, look at the fsstat utility.

 If you want the raw kstats then you need to look for ones
 of the form 'unix:0:vopstats_*' where there are two forms:
 with the name of the filesystem type (eg zfs or ufs) on the
 end, or the device id of the individual filesystem.

 --
 -Peter Tribble
 http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Assertion raised during zfs share?, Re: a 30mb ZFS OS install

2007-03-27 Thread Malachi de Ælfweald

If I had had that a few months ago, I might have designed a completely
different system.

Great job!

Malachi

On 3/27/07, MC [EMAIL PROTECTED] wrote:


 o I've got a modified Solaris miniroot with ZFS
 functionality which
 takes up about 60 MB  (The compressed image, which
 GRUB uses, is less
 than 30MB). Solaris boots entirely into RAM.  From
 poweron to full
 functionality, it takes about 45 seconds to boot on a
 very modest 1GHz
 Cyrix Mini ITX motherboard.

 o As Solaris runs entirely in RAM, there is no
 Solaris footprint on the
 attached storage. It is entirely dedicated to ZFS.
  With a little
 kludgery, all state can be managed from ZFS in effect
 making Solaris
 stateless.  There should be no serious ramifications
 to pulling the plug
 on this device.  In fact that's pretty much how this
 thing is rebooted
 right now.

 o As a potential example, one might consider managing
 this device via a
 web-based interface, perhaps not all that different
 than the way you
 might manage say, a Linksys router.

 Yeah I know this is silly, but it's fun.  Time to get
 back to my real job
 -- Jim C

Silly is the opposite of such a project!  I'm just wondering how so much
time has passed without it becoming an explicit OpenSolaris project!

A RAM-driven headless ZFS file server to compete with FreeNAS, OpenFiler,
Windows Storage Server 2003 and Windows Home Server?  Where do we sign up
for this?!?! :)



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS filesystem online backup question

2007-03-27 Thread Matthew Ahrens

Łukasz wrote:

All snapshot operations are done in kernel - it works faster then.
I have implemented this mechanism and it works.


Cool!


Do you think this change will be integrated to opensolaris ?


It's possible, but I'd prefer to first exhaust all options for improving 
performance of the base operations.



Is there chance this option will be available in Solaris update 4 ?


No, it's too late to integrate features into update 4.

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (1) zfs list memory usage (2) zfs send/recv safety

2007-03-27 Thread Matthew Ahrens

Samuel Hexter wrote:

Hi all,

Two separate questions:

1. We have a pool with 134 filesystems which collectively have about
75000 snapshots. The zfs list command grows to over 650MB resident
before printing its output. This doesn't overly bother me since the
box in question (snv53) has plenty of memory but I thought I'd ask
whether it is a known area for improvement since it does seem a
little excessive.


I'm not aware of that... Since the output is sorted, clearly we must use
O(snapshots+filesystems) memory, but 8k per snapshot seems excessive.
I've filed 6539380 to track this issue.



2. I have a couple of safety-related questions about zfs send/recv.
The first is about the versioning of these streams -- if a version
incompatibility exists, will the recv complain or is the behaviour
undefined?


recv will complain.

 Similarly, if a stream were to be corrupted in transit, is it possible
that the recv could somehow corrupt the destination filesystem/pool?


The stream is checksummed, so we would detect the corruption and abort 
the recv.


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS filesystem online backup question

2007-03-27 Thread Mark J Musante
On Tue, 27 Mar 2007, Łukasz wrote:

 Out of curiosity, what is the timing difference between a userland script
 and performing the operations in the kernel?

 Operation takes 15 - 20 seconds

 In kernel it takes ( time in ms ):

  [between 2.5 and 14.5 seconds]

Very nice improvement.

 In kernel operation can be optimized and done in one dsl_sync_task_do
 call.

Is this where the speed-up is, or is it that libzfs has a lot of overhead
for the three operations (destroy, snapshot, rename)?

Currently, destroy and snapshot have got a -r option for performing
recursive operations on snapshots, and rename is getting one soon.  Will
your changes handle recursive sends too?  Or do they require a separate
zfs send per filesystem?


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Atomic setting of properties?

2007-03-27 Thread Fred Oliver


Has consideration been given to setting multiple properties at once in a 
single zfs set command?


For example, consider attempting to maintain quota == reservation, while 
increasing both. It is impossible to maintain this equality without some 
additional help.


Quota must be increased first (because the reservation can't exceed the
quota); increasing the reservation could then fail (due to insufficient
space); and restoring the quota to its previous value can fail too (due to
file system growth). It would seem convenient if these race conditions
could be handled in the kernel.
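
To make the window concrete (hypothetical dataset and sizes):

  # grow quota and reservation from 10G to 20G while trying to keep them equal
  zfs set quota=20g pool/home          # must come first: reservation can't exceed quota
  zfs set reservation=20g pool/home    # can fail here (insufficient space)...
  zfs set quota=10g pool/home          # ...and rolling back can fail too if the fs grew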



Alternatively, why not allow the reservation to exceed the quota? Some 
space is unusable until the quota is raised, but isn't that acceptable 
and/or desirable?



Fred
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 6410 expansion shelf

2007-03-27 Thread Wee Yeh Tan

Cool blog!  I'll try a run at this on the benchmark.

On 3/27/07, Rayson Ho [EMAIL PROTECTED] wrote:

BTW, did anyone try this??

http://blogs.sun.com/ValdisFilks/entry/improving_i_o_throughput_for

Rayson



On 3/27/07, Wee Yeh Tan [EMAIL PROTECTED] wrote:
 As promised.  I got my 6140 SATA delivered yesterday and I hooked it
 up to a T2000 on S10u3.  The T2000 saw the disks straight away and is
 working for the last 1 hour.  I'll be running some benchmarks on it.
  I'll probably have a week with it until our vendor comes around and
 steals it from me.




--
Just me,
Wire ...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 6410 expansion shelf

2007-03-27 Thread Jonathan Edwards
right on for optimizing throughput on solaris .. a couple of notes  
though (also mentioned in the QFS manuals):


- on x86/x64 you're just going to have an sd.conf so just increase  
the max_xfer_size for all with a line at the bottom like:

sd_max_xfer_size=0x800000;
(note: if you look at the source the ssd driver is built from the sd  
source .. it got collapsed back down to sd in S10 x86)
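
and for the maxphys side of the same tuning, an illustrative 8MB setting would
go into /etc/system (reboot required):

  set maxphys=0x800000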


- ssd_max_throttle or sd_max_throttle is typically a point of  
contention that has had many years of history with storage vendors ..  
this will limit the maximum queue depth across the board for all sd  
or ssd devices (read all disks) .. if you're using the native  
Leadville stack, there is a dynamic throttle that should adjust per  
target, so you really shouldn't have to set this unless you're seeing  
command timeouts either on the port or on the host.  By tuning this  
down you can affect performance on the root drives as well as  
external storage making solaris appear slower than it may or may not be.


- ZFS has a maximum block size of 128KB - so i don't think that
tuning up maxphys and the max transfer sizes to 8MB is going to
make that much difference here .. if you want larger block transfers
(possibly matching to a full stripe width) you'd have to either go  
with QFS or raw - (but note that with larger block transfers you can  
get into higher cache latency response times depending on the storage  
controller .. and that's a whole other discussion)



On Mar 27, 2007, at 08:24, Rayson Ho wrote:


BTW, did anyone try this??

http://blogs.sun.com/ValdisFilks/entry/improving_i_o_throughput_for

Rayson



On 3/27/07, Wee Yeh Tan [EMAIL PROTECTED] wrote:

As promised.  I got my 6140 SATA delivered yesterday and I hooked it
up to a T2000 on S10u3.  The T2000 saw the disks straight away and is
working for the last 1 hour.  I'll be running some benchmarks on  
it.

 I'll probably have a week with it until our vendor comes around and
steals it from me.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 6410 expansion shelf

2007-03-27 Thread Selim Daoud

talking of which,
what's the effort, and what are the consequences, of increasing the max allowed
block size in zfs to higher figures like 1M...

s.

On 3/28/07, Jonathan Edwards [EMAIL PROTECTED] wrote:

right on for optimizing throughput on solaris .. a couple of notes
though (also mentioned in the QFS manuals):

- on x86/x64 you're just going to have an sd.conf so just increase
the max_xfer_size for all with a line at the bottom like:
sd_max_xfer_size=0x800000;
(note: if you look at the source the ssd driver is built from the sd
source .. it got collapsed back down to sd in S10 x86)

- ssd_max_throttle or sd_max_throttle is typically a point of
contention that has had many years of history with storage vendors ..
this will limit the maximum queue depth across the board for all sd
or ssd devices (read all disks) .. if you're using the native
Leadville stack, there is a dynamic throttle that should adjust per
target, so you really shouldn't have to set this unless you're seeing
command timeouts either on the port or on the host.  By tuning this
down you can affect performance on the root drives as well as
external storage making solaris appear slower than it may or may not be.

- ZFS has a maximum block size of 128KB - so i don't think that
tuning up maxphys and the max transfer sizes to 8MB isn't going to
make that much difference here .. if you want larger block transfers
(possibly matching to a full stripe width) you'd have to either go
with QFS or raw - (but note that with larger block transfers you can
get into higher cache latency response times depending on the storage
controller .. and that's a whole other discussion)


On Mar 27, 2007, at 08:24, Rayson Ho wrote:

 BTW, did anyone try this??

 http://blogs.sun.com/ValdisFilks/entry/improving_i_o_throughput_for

 Rayson



 On 3/27/07, Wee Yeh Tan [EMAIL PROTECTED] wrote:
 As promised.  I got my 6140 SATA delivered yesterday and I hooked it
 up to a T2000 on S10u3.  The T2000 saw the disks straight away and is
 working for the last 1 hour.  I'll be running some benchmarks on
 it.
  I'll probably have a week with it until our vendor comes around and
 steals it from me.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss