Re: [zfs-discuss] Apple Time Machine

2006-08-08 Thread Robert Gordon
On Aug 8, 2006, at 12:34 AM, Darren J Moffat wrote: Adam Leventhal wrote: Needless to say, this was a pretty interesting piece of the keynote from a technical point of view that had quite a few of us scratching our heads. After talking to some Apple engineers, it seems like what they're

Re: [zfs-discuss] Apple Time Machine

2006-08-08 Thread Tim Foster
Bryan Cantrill wrote: So in short (and brace yourself, because I know it will be a shock): mentions by executives in keynotes don't always accurately represent a technology. DynFS, anyone? ;) I'm shocked and stunned, and not a little amazed! I'll bet the OpenSolaris PPC guys are thrilled

Re[2]: [zfs-discuss] zil_disable

2006-08-08 Thread Robert Milkowski
Hello Eric, Monday, August 7, 2006, 6:29:45 PM, you wrote: ES Robert - ES This isn't surprising (either the switch or the results). Our long term ES fix for tweaking this knob is: ES 6280630 zil synchronicity ES Which would add 'zfs set sync' as a per-dataset option. A cut from the ES
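[Editor's note: RFE 6280630 mentioned above would expose ZIL behaviour as a per-dataset property instead of the global zil_disable tunable. A minimal sketch of how such a knob might look, using hypothetical syntax for the proposed 'sync' property (not available in the builds discussed here):

    # Hypothetical per-dataset control proposed by 6280630; exact property
    # name and values are assumptions, not shipped functionality.
    zfs set sync=disabled tank/scratch    # don't wait on the intent log for this dataset
    zfs get sync tank/scratch             # verify the setting
]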

Re[2]: [zfs-discuss] zil_disable

2006-08-08 Thread Robert Milkowski
Hello Neil, Monday, August 7, 2006, 6:40:01 PM, you wrote: NP Not quite, zil_disable is inspected on file system mounts. I guess you're right that umount/mount will suffice - I just hadn't had time to check it, and export/import worked. Anyway, is there a way for file systems to make it active without
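[Editor's note: since zil_disable is only inspected at mount time, the usual workflow is to flip it live and then remount. A minimal sketch, assuming the stock zil_disable kernel variable and mdb -kw access; dataset names are placeholders:

    # Set zil_disable at runtime (affects file systems mounted afterwards)
    echo "zil_disable/W0t1" | mdb -kw
    # Remount the dataset so the new value takes effect
    zfs umount tank/fs && zfs mount tank/fs
    # Revert when done
    echo "zil_disable/W0t0" | mdb -kw
]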

Re[2]: [zfs-discuss] 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Robert Milkowski
Hello Richard, Monday, August 7, 2006, 6:54:37 PM, you wrote: RE Hi Robert, thanks for the data. RE Please clarify one thing for me. RE In the case of the HW raid, was there just one LUN? Or was it 12 LUNs? Just one LUN, which was built on the 3510 from 12 LUNs in RAID-1(0). -- Best regards,

Re[2]: [zfs-discuss] ZFS/Thumper experiences

2006-08-08 Thread Robert Milkowski
Hello David, Tuesday, August 8, 2006, 3:39:42 AM, you wrote: DJO Thanks, interesting read. It'll be nice to see the actual DJO results if Sun ever publishes them. You can bet I'll post some results, hopefully soon :) -- Best regards, Robert mailto:[EMAIL PROTECTED]

[zfs-discuss] Querying ZFS version?

2006-08-08 Thread Luke Scharf
Although regular Solaris is good for what I'm doing at work, I prefer apt-get or yum for package management for a desktop. So, I've been playing with Nexenta / GnuSolaris -- which appears to be the open-sourced Solaris kernel and low-level system utilities with Debian package management --

Re: [zfs-discuss] Querying ZFS version?

2006-08-08 Thread Darren Reed
Luke Scharf wrote: Although regular Solaris is good for what I'm doing at work, I prefer apt-get or yum for package management for a desktop. So, I've been playing with Nexenta / GnuSolaris -- which appears to be the open-sourced Solaris kernel and low-level system utilities with Debian

Re: [zfs-discuss] Querying ZFS version?

2006-08-08 Thread George Wilson
Luke, You can run 'zpool upgrade' to see what on-disk version you are capable of running. If you have the latest features then you should be running version 3: hadji-2# zpool upgrade This system is currently running ZFS version 3. Unfortunately this won't tell you if you are running the
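[Editor's note: for reference, the version-checking commands being discussed look roughly like this; output will of course vary by build:

    # zpool upgrade          # reports the on-disk version this system is capable of running
    # zpool upgrade -v       # lists the supported versions and the features each one adds
    # zpool upgrade <pool>   # upgrades an older pool to the current on-disk version
]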

Re: [zfs-discuss] Querying ZFS version?

2006-08-08 Thread Luke Scharf
George Wilson wrote: Luke, You can run 'zpool upgrade' to see what on-disk version you are capable of running. If you have the latest features then you should be running version 3: hadji-2# zpool upgrade This system is currently running ZFS version 3. Unfortunately this

Re: [zfs-discuss] Querying ZFS version?

2006-08-08 Thread Luke Scharf
Darren Reed wrote: On Solaris, pkginfo -l SUNWzfsr would give you a package version for that part of ZFS.. and modinfo | grep zfs will tell you something about the kernel module rev. No such luck. Modinfo doesn't show the ZFS module as loaded; that's probably because I'm not running
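[Editor's note: the checks Darren suggests look roughly like the sketch below; package names and whether the module shows up at all will differ between stock Solaris and distributions such as Nexenta:

    pkginfo -l SUNWzfsr      # userland ZFS package version (SUNWzfsu and SUNWzfskr are related packages)
    modinfo | grep -i zfs    # shows the zfs kernel module revision only if the module has been loaded
]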

[zfs-discuss] DTrace IO provider and oracle

2006-08-08 Thread przemolicc
Hello, Solaris 10 GA + latest recommended patches: while running dtrace: bash-3.00# dtrace -n 'io:::start [EMAIL PROTECTED], args[2]->fi_pathname] = count();}' ... vim /zones/obsdb3/root/opt/sfw/bin/vim 296 tnslsnr
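[Editor's note: the one-liner above was partly eaten by the list's address rewriting. A sketch of what it most likely was, aggregating io:::start events by executable and file path (the exact aggregation key is an assumption, inferred from the two-column output shown):

    dtrace -n 'io:::start { @[execname, args[2]->fi_pathname] = count(); }'
]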

Re: [zfs-discuss] zil_disable

2006-08-08 Thread Neil Perrin
Robert Milkowski wrote: Hello Neil, Monday, August 7, 2006, 6:40:01 PM, you wrote: NP Not quite, zil_disable is inspected on file system mounts. I guess you're right that umount/mount will suffice - I just hadn't had time to check it, and export/import worked. Anyway is there a way for file systems

Re: [zfs-discuss] zil_disable

2006-08-08 Thread Neil Perrin
Robert Milkowski wrote: Hello Eric, Monday, August 7, 2006, 6:29:45 PM, you wrote: ES Robert - ES This isn't surprising (either the switch or the results). Our long term ES fix for tweaking this knob is: ES 6280630 zil synchronicity ES Which would add 'zfs set sync' as a per-dataset

[zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Robert Milkowski
Hi. This time some RAID-5/RAID-Z benchmarks. I connected a 3510 head unit with one link to the same server the 3510 JBODs are connected to (using the second link). snv_44 is used; the server is a v440. I also tried changing the max pending IO requests for the HW RAID-5 LUN and checked with DTrace that
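[Editor's note: on Solaris FC targets the per-LUN queue depth is normally capped by the sd/ssd max_throttle tunables. A hedged example of how such a change is typically made; whether Robert tuned exactly this knob (rather than an array-side setting) is an assumption:

    # /etc/system -- cap outstanding commands per LUN (requires a reboot)
    set ssd:ssd_max_throttle = 32
]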

RE: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Luke Lonergan
Does snv44 have the ZFS fixes to the I/O scheduler, the ARC and the prefetch logic? These are great results for random I/O; I wonder how the sequential I/O looks? Of course you'll not get great results for sequential I/O on the 3510 :-) - Luke Sent from my GoodLink synchronized handheld

[zfs-discuss] Re: ZFS + /var/log + Single-User

2006-08-08 Thread Pierre Klovsjo
Thanks for your answer, Eric! I don't see any problem mounting a filesystem under 'legacy' options as long as I can have the freedom of ZFS features by being able to add/remove/play around with disks, really! I tested 'zfs mount -a' and of course my /var/log/test became visible and
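[Editor's note: for anyone reproducing this, a minimal sketch of the legacy-mount approach being discussed; the pool and dataset names are placeholders:

    zfs set mountpoint=legacy pool/varlog
    # /etc/vfstab entry so the file system is mounted through the normal boot path:
    # pool/varlog  -  /var/log  zfs  -  yes  -
]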

Re: [zfs-discuss] Re: ZFS + /var/log + Single-User

2006-08-08 Thread Robert Milkowski
Hello Pierre, Tuesday, August 8, 2006, 4:51:20 PM, you wrote: PK Thanks for your answer, Eric! PK I don't see any problem mounting a filesystem under 'legacy' PK options as long as I can have the freedom of ZFS features by being PK able to add/remove/play around with disks, really! PK I tested

Re: [zfs-discuss] SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-08 Thread Leon Koll
On 8/8/06, eric kustarz [EMAIL PROTECTED] wrote: Leon Koll wrote: I performed a SPEC SFS97 benchmark on Solaris 10u2/Sparc with 4 64GB LUNs, connected via FC SAN. The filesystems that were created on LUNS: UFS, VxFS, ZFS. Unfortunately the ZFS test couldn't complete because the box was hung

Re: [zfs-discuss] Apple Time Machine

2006-08-08 Thread Frank Cusack
On August 8, 2006 3:04:09 PM +0930 Darren J Moffat [EMAIL PROTECTED] wrote: Adam Leventhal wrote: When a file is modified, the kernel fires off an event which a user-land daemon listens for. Every so often, the user-land daemon does something like a snapshot of the affected portions of the
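[Editor's note: the architecture described (kernel file-change events driving a user-land daemon that snapshots the affected datasets) has no direct Solaris equivalent for the event source, but the snapshot side is easy to approximate. A crude polling stand-in rather than Apple's event-driven design; dataset name and interval are illustrative assumptions:

    #!/bin/sh
    # Periodically snapshot a dataset; a real daemon would trigger on change events instead.
    while true; do
        zfs snapshot tank/home@tm-`date '+%Y-%m-%d-%H%M%S'`
        sleep 300
    done
]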

Re[2]: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Robert Milkowski
Hello Luke, Tuesday, August 8, 2006, 4:48:38 PM, you wrote: LL Does snv44 have the ZFS fixes to the I/O scheduler, the ARC and the prefetch logic? LL These are great results for random I/O; I wonder how the sequential I/O looks? LL Of course you'll not get great results for sequential I/O on

Re: Re[2]: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Luke Lonergan
Robert, On 8/8/06 9:11 AM, Robert Milkowski [EMAIL PROTECTED] wrote: 1. UFS, noatime, HW RAID5 6 disks, S10U2 70MB/s 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1) 87MB/s 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2 130MB/s 4. ZFS, atime=off, SW RAID-Z 6

Re[4]: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Robert Milkowski
Hello Luke, Tuesday, August 8, 2006, 6:18:39 PM, you wrote: LL Robert, LL On 8/8/06 9:11 AM, Robert Milkowski [EMAIL PROTECTED] wrote: 1. UFS, noatime, HW RAID5 6 disks, S10U2 70MB/s 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1) 87MB/s 3. ZFS, atime=off, SW

Re: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Mark Maybee
Luke Lonergan wrote: Robert, On 8/8/06 9:11 AM, Robert Milkowski [EMAIL PROTECTED] wrote: 1. UFS, noatime, HW RAID5 6 disks, S10U2 70MB/s 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1) 87MB/s 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2 130MB/s 4. ZFS,

Re: Re[4]: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Luke Lonergan
Robert, LL Most of my ZFS experiments have been with RAID10, but there were some LL massive improvements to seq I/O with the fixes I mentioned - I'd expect that LL this shows that they aren't in snv44. So where did you get those fixes? From the fine people who implemented them! As Mark

[zfs-discuss] ZFS RAID10

2006-08-08 Thread Robert Milkowski
Hi. snv_44, v440 filebench/varmail results for ZFS RAID10 with 6 disks and 32 disks. What is surprising is that the results for both cases are almost the same! 6 disks: IO Summary: 566997 ops 9373.6 ops/s, (1442/1442 r/w) 45.7mb/s, 299us cpu/op, 5.1ms latency IO Summary:
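[Editor's note: a "ZFS RAID10" layout of the kind benchmarked here is simply a pool of mirrored pairs striped together; a minimal sketch with placeholder device names:

    # 6-disk RAID10 equivalent: three 2-way mirrors striped across the pool
    zpool create tank \
        mirror c2t0d0 c2t1d0 \
        mirror c2t2d0 c2t3d0 \
        mirror c2t4d0 c2t5d0
]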

[zfs-discuss] Re: Re[2]: Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Doug Scott
Robert, On 8/8/06 9:11 AM, Robert Milkowski [EMAIL PROTECTED] wrote: 1. UFS, noatime, HW RAID5 6 disks, S10U2 70MB/s 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1) 87MB/s 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2 130MB/s 4. ZFS, atime=off,

Re: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Matthew Ahrens
On Tue, Aug 08, 2006 at 06:11:09PM +0200, Robert Milkowski wrote: filebench/singlestreamread v440 1. UFS, noatime, HW RAID5 6 disks, S10U2 70MB/s 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1) 87MB/s 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2

Re[2]: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Robert Milkowski
Hello Matthew, Tuesday, August 8, 2006, 7:25:17 PM, you wrote: MA On Tue, Aug 08, 2006 at 06:11:09PM +0200, Robert Milkowski wrote: filebench/singlestreamread v440 1. UFS, noatime, HW RAID5 6 disks, S10U2 70MB/s 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)

Re: [zfs-discuss] ZFS RAID10

2006-08-08 Thread Matthew Ahrens
On Tue, Aug 08, 2006 at 09:54:16AM -0700, Robert Milkowski wrote: Hi. snv_44, v440 filebench/varmail results for ZFS RAID10 with 6 disks and 32 disks. What is surprising is that the results for both cases are almost the same! 6 disks: IO Summary: 566997 ops 9373.6

Re: [zfs-discuss] Re: ZFS RAID10

2006-08-08 Thread Robert Milkowski
Hello Doug, Tuesday, August 8, 2006, 7:28:07 PM, you wrote: DS Looks like somewhere between the CPU and your disks you have a limitation of 9500 ops/sec. DS How did you connect 32 disks to your v440? Some 3510 JBODs connected directly over FC. -- Best regards, Robert

[zfs-discuss] Re: ZFS RAID10

2006-08-08 Thread Robert Milkowski
filebench's varmail workload by default creates 16 threads - I confirmed it with prstat; 16 threads are created and running. bash-3.00# lockstat -kgIW sleep 60|less Profiling interrupt: 23308 events in 60.059 seconds (388 events/sec) Count genr cuml rcnt nsec Hottest CPU+PIL Caller

[zfs-discuss] Re: ZFS/Thumper experiences

2006-08-08 Thread Jochen M. Kaiser
Hello, I really appreciate such information. Could you please give us some additional insight regarding your statement that [you] tried to drive ZFS to its limit, [...] found that the results were less consistent or predictable? Especially when taking a closer look at the upcoming

Re: [zfs-discuss] Re: ZFS RAID10

2006-08-08 Thread Matthew Ahrens
On Tue, Aug 08, 2006 at 10:42:41AM -0700, Robert Milkowski wrote: filebench's varmail workload by default creates 16 threads - I confirmed it with prstat; 16 threads are created and running. Ah, OK. Looking at these results, it doesn't seem to be CPU bound, and the disks are not fully utilized either.

Re: [zfs-discuss] SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-08 Thread eric kustarz
Leon Koll wrote: On 8/8/06, eric kustarz [EMAIL PROTECTED] wrote: Leon Koll wrote: I performed a SPEC SFS97 benchmark on Solaris 10u2/Sparc with 4 64GB LUNs, connected via FC SAN. The filesystems that were created on LUNS: UFS,VxFS,ZFS. Unfortunately the ZFS test couldn't complete

[zfs-discuss] Re: Lots of seeks?

2006-08-08 Thread Anton B. Rang
So while I'm feeling optimistic :-) we really ought to be able to do this in two I/O operations. If we have, say, 500K of data to write (including all of the metadata), we should be able to allocate a contiguous 500K block on disk and write that with a single operation. Then we update the

Re: [zfs-discuss] Re: Lots of seeks?

2006-08-08 Thread Spencer Shepler
On Tue, Anton B. Rang wrote: So while I'm feeling optimistic :-) we really ought to be able to do this in two I/O operations. If we have, say, 500K of data to write (including all of the metadata), we should be able to allocate a contiguous 500K block on disk and write that with a single

Re: [zfs-discuss] Re: ZFS/Thumper experiences

2006-08-08 Thread Luke Lonergan
Jochen, On 8/8/06 10:47 AM, Jochen M. Kaiser [EMAIL PROTECTED] wrote: I really appreciate such information, could you please give us some additional insight regarding your statement, that [you] tried to drive ZFS to its limit, [...] found that the results were less consistent or

Re: [zfs-discuss] Re: Re[2]: Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Luke Lonergan
Doug, On 8/8/06 10:15 AM, Doug Scott [EMAIL PROTECTED] wrote: I don't think there is much chance of achieving anywhere near 350MB/s. That is a hell of a lot of IO/s for 6 disks + RAID(5/Z) + shared fibre. While you can always get very good results from a single disk IO, your percentage gain is