On Aug 8, 2006, at 12:34 AM, Darren J Moffat wrote:
Adam Leventhal wrote:
Needless to say, this was a pretty interesting piece of the keynote from a
technical point of view that had quite a few of us scratching our heads.
After talking to some Apple engineers, it seems like what they're
Bryan Cantrill wrote:
So in short (and brace yourself, because I
know it will be a shock): mentions by executives in keynotes don't always
accurately represent a technology. DynFS, anyone? ;)
I'm shocked and stunned, and not a little amazed!
I'll bet the OpenSolaris PPC guys are thrilled
Hello Eric,
Monday, August 7, 2006, 6:29:45 PM, you wrote:
ES Robert -
ES This isn't surprising (either the switch or the results). Our long term
ES fix for tweaking this knob is:
ES 6280630 zil synchronicity
ES Which would add 'zfs set sync' as a per-dataset option. A cut from the
ES
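For reference, the per-dataset control proposed in 6280630 would presumably be used along these lines; the property name and values are an assumption based on the RFE summary, since the feature had not integrated at this point:

  zfs set sync=disabled tank/scratch   # stop honouring synchronous semantics for this dataset only
  zfs set sync=standard tank/scratch   # back to normal POSIX-synchronous behaviour
  zfs get sync tank/scratch            # check the current setting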
Hello Neil,
Monday, August 7, 2006, 6:40:01 PM, you wrote:
NP Not quite, zil_disable is inspected on file system mounts.
I guess you're right that umount/mount will suffice - I just hadn't had time
to check it, and export/import worked.
Anyway, is there a way for file systems to make it active without
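For anyone following along, the global zil_disable workaround being discussed here is usually toggled like this (a sketch only; it affects every ZFS filesystem, which is exactly what the per-dataset RFE above is meant to address, and per Neil's note it is only read at mount time):

  echo "set zfs:zil_disable = 1" >> /etc/system    # picked up at next boot
  echo "zil_disable/W0t1" | mdb -kw                # or patch the running kernel
  zfs umount tank/fs ; zfs mount tank/fs           # remount so the new value is seen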
Hello Richard,
Monday, August 7, 2006, 6:54:37 PM, you wrote:
RE Hi Robert, thanks for the data.
RE Please clarify one thing for me.
RE In the case of the HW raid, was there just one LUN? Or was it 12 LUNs?
Just one LUN, which was built on the 3510 from 12 disks in RAID-1(0).
--
Best regards,
Hello David,
Tuesday, August 8, 2006, 3:39:42 AM, you wrote:
DJO Thanks, interesting read. It'll be nice to see the actual
DJO results if Sun ever publishes them.
You can bet I'll post some results, hopefully soon :)
--
Best regards,
Robert  mailto:[EMAIL PROTECTED]
Although regular Solaris is good for what I'm doing at work, I prefer
apt-get or yum for package management for a desktop. So, I've been
playing with Nexenta / GnuSolaris -- which appears to be the
open-sourced Solaris kernel and low-level system utilities with Debian
package management --
Luke Scharf wrote:
Although regular Solaris is good for what I'm doing at work, I prefer
apt-get or yum for package management for a desktop. So, I've been
playing with Nexenta / GnuSolaris -- which appears to be the
open-sourced Solaris kernel and low-level system utilities with Debian
Luke,
You can run 'zpool upgrade' to see what on-disk version you are capable
of running. If you have the latest features then you should be running
version 3:
hadji-2# zpool upgrade
This system is currently running ZFS version 3.
Unfortunately this won't tell you if you are running the
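The related invocations, for anyone trying this at home (the pool name is just a placeholder):

  zpool upgrade -v        # list every on-disk version this build knows about
  zpool upgrade mypool    # move an older pool up to the current version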
George Wilson wrote:
Luke,
You can run 'zpool upgrade' to see what on-disk version you are capable
of running. If you have the latest features then you should be running
version 3:
hadji-2# zpool upgrade
This system is currently running ZFS version 3.
Unfortunately this
Darren Reed wrote:
On Solaris,
pkginfo -l SUNWzfsr
would give you a package version for that part of ZFS.
and modinfo | grep zfs will tell you something about the kernel
module rev.
No such luck. Modinfo doesn't show the ZFS module as loaded; that's
probably because I'm not running
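A sketch of the checks Darren suggested; note that modinfo only lists modules that are actually loaded, so it shows nothing for ZFS until the module has been pulled in (running any zpool/zfs command should do that):

  pkginfo -l SUNWzfsr | grep VERSION    # userland package revision
  zpool status                          # touching /dev/zfs loads the kernel module
  modinfo | grep -i zfs                 # kernel module revision, once loaded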
Hello,
Solaris 10 GA + latest recommended patches:
while running dtrace:
bash-3.00# dtrace -n 'io:::start {@[execname, args[2]->fi_pathname] = count();}'
...
vim
/zones/obsdb3/root/opt/sfw/bin/vim 296
tnslsnr
Robert Milkowski wrote:
Hello Neil,
Monday, August 7, 2006, 6:40:01 PM, you wrote:
NP Not quite, zil_disable is inspected on file system mounts.
I guess you're right that umount/mount will suffice - I just hadn't had time
to check it, and export/import worked.
Anyway, is there a way for file systems
Robert Milkowski wrote:
Hello Eric,
Monday, August 7, 2006, 6:29:45 PM, you wrote:
ES Robert -
ES This isn't surprising (either the switch or the results). Our long term
ES fix for tweaking this knob is:
ES 6280630 zil synchronicity
ES Which would add 'zfs set sync' as a per-dataset
Hi.
This time some RAID5/RAID-Z benchmarks.
This time I connected the 3510 head unit with one link to the same server the 3510
JBODs are connected to (using the second link). snv_44 is used; the server is a v440.
I also tried changing the max pending I/O requests for the HW RAID5 LUN and checked with
DTrace that
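The truncated message doesn't say which knob was used for the max pending I/O tweak; one common one for FC-attached LUNs of that era is the ssd throttle, so purely as an illustration (the value is arbitrary):

  echo "set ssd:ssd_max_throttle = 32" >> /etc/system    # cap outstanding commands per LUN, after reboot
  echo "ssd_max_throttle/W0t32" | mdb -kw                # or change it on the running kernel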
Does snv44 have the ZFS fixes to the I/O scheduler, the ARC and the prefetch
logic?
These are great results for random I/O, I wonder how the sequential I/O looks?
Of course you'll not get great results for sequential I/O on the 3510 :-)
- Luke
Sent from my GoodLink synchronized handheld
Thanks for your answer Eric!
I don't see any problem mounting a filesystem under 'legacy' options as long as
I can have the freedom of ZFS features by being able to add/remove/play around
with disks, really!
I tested 'zfs mount -a' and of course my /var/log/test became
visible and
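For completeness, the 'legacy' arrangement being discussed looks roughly like this; dataset and mountpoint names are placeholders:

  zfs set mountpoint=legacy tank/logs       # ZFS stops managing the mount itself
  mount -F zfs tank/logs /var/log/test      # mount by hand...
  # ...or via an /etc/vfstab entry such as:
  # tank/logs  -  /var/log/test  zfs  -  yes  -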
Hello Pierre,
Tuesday, August 8, 2006, 4:51:20 PM, you wrote:
PK Thanks for your answer Eric!
PK I don't see any problem mounting a filesystem under 'legacy'
PK options as long as I can have the freedom of ZFS features by being
PK able to add/remove/play around with disks, really!
PK I tested
On 8/8/06, eric kustarz [EMAIL PROTECTED] wrote:
Leon Koll wrote:
I performed a SPEC SFS97 benchmark on Solaris 10u2/Sparc with 4 64GB
LUNs, connected via FC SAN.
The filesystems that were created on the LUNs: UFS, VxFS, ZFS.
Unfortunately the ZFS test couldn't complete because the box was hung
On August 8, 2006 3:04:09 PM +0930 Darren J Moffat [EMAIL PROTECTED] wrote:
Adam Leventhal wrote:
When a file is modified, the kernel fires off an event which a user-land
daemon listens for. Every so often, the user-land daemon does something
like a snapshot of the affected portions of the
Hello Luke,
Tuesday, August 8, 2006, 4:48:38 PM, you wrote:
LL Does snv44 have the ZFS fixes to the I/O scheduler, the ARC and the
prefetch logic?
LL These are great results for random I/O, I wonder how the sequential I/O
looks?
LL Of course you'll not get great results for sequential I/O on
Robert,
On 8/8/06 9:11 AM, Robert Milkowski [EMAIL PROTECTED] wrote:
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
87MB/s
3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
130MB/s
4. ZFS, atime=off, SW RAID-Z 6
Hello Luke,
Tuesday, August 8, 2006, 6:18:39 PM, you wrote:
LL Robert,
LL On 8/8/06 9:11 AM, Robert Milkowski [EMAIL PROTECTED] wrote:
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
87MB/s
3. ZFS, atime=off, SW
Luke Lonergan wrote:
Robert,
On 8/8/06 9:11 AM, Robert Milkowski [EMAIL PROTECTED] wrote:
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
87MB/s
3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
130MB/s
4. ZFS,
Robert,
LL Most of my ZFS experiments have been with RAID10, but there were some
LL massive improvements to seq I/O with the fixes I mentioned - I'd expect that
LL this shows that they aren't in snv44.
So where did you get those fixes?
From the fine people who implemented them!
As Mark
Hi.
snv_44, v440
filebench/varmail results for ZFS RAID10 with 6 disks and 32 disks.
What is surprising is that the results for both cases are almost the same!
6 disks:
IO Summary: 566997 ops 9373.6 ops/s, (1442/1442 r/w) 45.7mb/s,
299us cpu/op, 5.1ms latency
IO Summary:
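For anyone wanting to reproduce these numbers, a varmail run of that era looked roughly like the following from filebench's interactive prompt (directory and run length are placeholders; the default personality uses 16 threads, which is relevant to the prstat discussion further down):

  filebench> load varmail
  filebench> set $dir=/tank/testfs
  filebench> run 60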
Robert,
On 8/8/06 9:11 AM, Robert Milkowski
[EMAIL PROTECTED] wrote:
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the
same lun as in #1)
87MB/s
3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
130MB/s
4. ZFS, atime=off,
On Tue, Aug 08, 2006 at 06:11:09PM +0200, Robert Milkowski wrote:
filebench/singlestreamread v440
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
87MB/s
3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
Hello Matthew,
Tuesday, August 8, 2006, 7:25:17 PM, you wrote:
MA On Tue, Aug 08, 2006 at 06:11:09PM +0200, Robert Milkowski wrote:
filebench/singlestreamread v440
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
On Tue, Aug 08, 2006 at 09:54:16AM -0700, Robert Milkowski wrote:
Hi.
snv_44, v440
filebench/varmail results for ZFS RAID10 with 6 disks and 32 disks.
What is surprising is that the results for both cases are almost the same!
6 disks:
IO Summary: 566997 ops 9373.6
Hello Doug,
Tuesday, August 8, 2006, 7:28:07 PM, you wrote:
DS Looks like somewhere between the CPU and your disks you have a limitation
of 9500 ops/sec.
DS How did you connect 32 disks to your v440?
Some 3510 JBODs connected directly over FC.
--
Best regards,
Robert
filebench in varmail by default creates 16 threads - I confirmed it with prstat;
16 threads are created and running.
bash-3.00# lockstat -kgIW sleep 60|less
Profiling interrupt: 23308 events in 60.059 seconds (388 events/sec)
Count genr cuml rcnt nsec Hottest CPU+PIL Caller
Hello,
I really appreciate such information; could you please give us some additional
insight regarding your statement that [you] tried to drive ZFS to its limit,
[...]
found that the results were less consistent or predictable?
Especially when taking a closer look at the upcoming
On Tue, Aug 08, 2006 at 10:42:41AM -0700, Robert Milkowski wrote:
filebench in varmail by default creates 16 threads - I confirmed it
with prstat; 16 threads are created and running.
Ah, OK. Looking at these results, it doesn't seem to be CPU bound, and
the disks are not fully utilized either.
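A quick way to sanity-check both observations on a box like this (the interval in seconds is arbitrary):

  iostat -xnz 5    # watch actv and %b per device while the benchmark runs
  prstat -mL 5     # per-thread microstates, to rule out a single CPU-bound thread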
Leon Koll wrote:
On 8/8/06, eric kustarz [EMAIL PROTECTED] wrote:
Leon Koll wrote:
I performed a SPEC SFS97 benchmark on Solaris 10u2/Sparc with 4 64GB
LUNs, connected via FC SAN.
The filesystems that were created on the LUNs: UFS, VxFS, ZFS.
Unfortunately the ZFS test couldn't complete
So while I'm feeling optimistic :-) we really ought to be able to do this in
two I/O operations. If we have, say, 500K of data to write (including all of
the metadata), we should be able to allocate a contiguous 500K block on disk
and write that with a single operation. Then we update the
On Tue, Anton B. Rang wrote:
So while I'm feeling optimistic :-) we really ought to be able to do this in
two I/O operations. If we have, say, 500K of data to write (including all of
the metadata), we should be able to allocate a contiguous 500K block on disk
and write that with a single
Jochen,
On 8/8/06 10:47 AM, Jochen M. Kaiser [EMAIL PROTECTED] wrote:
I really appreciate such information; could you please give us some additional
insight regarding your statement that [you] tried to drive ZFS to its limit,
[...]
found that the results were less consistent or
Doug,
On 8/8/06 10:15 AM, Doug Scott [EMAIL PROTECTED] wrote:
I don't think there is much chance of achieving anywhere near 350MB/s.
That is a hell of a lot of IO/s for 6 disks+raid(5/Z)+shared fibre. While you
can always get very good results from a single disk IO, your percentage
gain is