Re: [zfs-discuss] Fileserver performance tests

2007-10-11 Thread Thomas Liesner
Hi, compression is off. I've checked read/write performance with 20 simultaneous cp processes and with the following:

#!/usr/bin/bash
for ((i=1; i<=20; i++))
do
  cp lala$i lulu$i
done

(lala1-20 are 2 GB files) ...and ended up with 546 MB/s. Not too bad at all.
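A quick way to cross-check a figure like that is to launch the copies in the background, wait for all of them, and divide the total bytes written by the elapsed time. A minimal sketch, assuming the same 20 x 2 GB lala/lulu files as above:

#!/usr/bin/bash
# Start all 20 copies in parallel, then wait for the slowest one.
start=$SECONDS
for ((i=1; i<=20; i++))
do
  cp lala$i lulu$i &
done
wait
elapsed=$((SECONDS - start))
((elapsed == 0)) && elapsed=1   # guard against division by zero
# 20 files x 2048 MB each
echo "aggregate throughput: $((20 * 2048 / elapsed)) MB/s"

Timing the whole batch this way measures genuinely concurrent writers rather than 20 copies run back to back.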

Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread eric kustarz
Since you were already using filebench, you could use the 'singlestreamwrite.f' and 'singlestreamread.f' workloads (with nthreads set to 20, iosize set to 128k) to achieve the same things. With the latest version of filebench, you can then use the '-c' option to compare your results in a
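For what it's worth, driving those workloads from filebench's interactive shell would look roughly like this; the workload names and the nthreads/iosize values come from the mail above, while the $dir path and the exact shell syntax are assumptions based on filebench releases of that era:

filebench> load singlestreamwrite
filebench> set $dir=/tank/test
filebench> set $nthreads=20
filebench> set $iosize=128k
filebench> run 60
filebench> load singlestreamread
filebench> set $dir=/tank/test
filebench> set $nthreads=20
filebench> set $iosize=128k
filebench> run 60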

Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread Thomas Liesner
Hi Eric, Are you talking about the documentation at: http://sourceforge.net/projects/filebench or: http://www.opensolaris.org/os/community/performance/filebench/ and: http://www.solarisinternals.com/wiki/index.php/FileBench ? I was talking about the solarisinternals wiki. I can't find any

Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread Luke Lonergan
Hi Eric, On 10/10/07 12:50 AM, eric kustarz [EMAIL PROTECTED] wrote: Since you were already using filebench, you could use the 'singlestreamwrite.f' and 'singlestreamread.f' workloads (with nthreads set to 20, iosize set to 128k) to achieve the same things. Yes but once again we see the

Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread Spencer Shepler
On Oct 10, 2007, at 8:41 AM, Luke Lonergan wrote: Hi Eric, On 10/10/07 12:50 AM, eric kustarz [EMAIL PROTECTED] wrote: Since you were already using filebench, you could use the 'singlestreamwrite.f' and 'singlestreamread.f' workloads (with nthreads set to 20, iosize set to 128k) to

Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread Spencer Shepler
On Oct 10, 2007, at 2:56 AM, Thomas Liesner wrote: Hi Eric, Are you talking about the documentation at: http://sourceforge.net/projects/filebench or: http://www.opensolaris.org/os/community/performance/filebench/ and: http://www.solarisinternals.com/wiki/index.php/FileBench ? i was

Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread eric kustarz
That all said - we don't have a simple dd benchmark for random seeking. Feel free to try out randomread.f and randomwrite.f - or combine them into your own new workload to create a random read and write workload. eric
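As a hypothetical starting point (not from the thread), the stock random workloads can be loaded and parameterised the same way as the sequential ones; the variable names are assumptions based on the standard randomread.f and randomwrite.f files:

filebench> load randomread
filebench> set $dir=/tank/test
filebench> set $nthreads=20
filebench> run 60

Combining the two would mean copying one of the .f files and adding the other operation to it, for example a random write flowop added to a copy of randomread.f.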

Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Thomas Liesner
Hi again, I did not want to compare the filebench test with the single mkfile command. Still, I was hoping to see similar numbers in the filebench stats. Any hints on what I could do to further improve the performance? Would a raid1 over two stripes be faster? TIA, Tom
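For reference, ZFS expresses the RAID 1+0 idea as a stripe across mirror vdevs rather than a mirror over two stripes; a hypothetical layout with made-up device names would look like this (an illustration only, not the poster's actual pool):

# stripe across four two-way mirrors (device names are placeholders)
zpool create tank \
  mirror c2t0d0 c3t0d0 \
  mirror c2t1d0 c3t1d0 \
  mirror c2t2d0 c3t2d0 \
  mirror c2t3d0 c3t3d0

ZFS stripes across all top-level vdevs automatically, so adding further mirror pairs widens the stripe.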

Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Dick Davies
Hi Thomas, the point I was making was that you'll see low performance figures with 100 concurrent threads. If you set nthreads to something closer to your expected load, you'll get a more accurate figure. Also, there's a new filebench out now, see

Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Thomas Liesner
Hi, I checked with $nthreads=20, which will roughly represent the expected load, and these are the results:

IO Summary: 7989 ops 7914.2 ops/s, (996/979 r/w) 142.7mb/s, 255us cpu/op, 0.2ms latency

BTW, smpatch is still running and further tests will get done when the system is rebooted. The

Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread eric kustarz
On Oct 9, 2007, at 4:25 AM, Thomas Liesner wrote: Hi, I checked with $nthreads=20 which will roughly represent the expected load and these are the results: Note, here is the description of the 'fileserver.f' workload:

define process name=filereader,instances=1 { thread

Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Thomas Liesner
I wanted to test some simultaneous sequential writes and wrote this little snippet:

#!/bin/bash
for ((i=1; i<=20; i++))
do
  dd if=/dev/zero of=lala$i bs=128k count=32768
done

While the script was running I watched zpool iostat and measured the time between starting and stopping of the writes
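Note that, as written, the loop runs the dd commands one after another. To make the writes actually simultaneous, each dd can be sent to the background and then waited on; a minimal sketch using the same file names and sizes:

#!/bin/bash
# Launch all 20 writers at once, then wait for the slowest one to finish.
for ((i=1; i<=20; i++))
do
  dd if=/dev/zero of=lala$i bs=128k count=32768 &
done
wait

Watching zpool iostat in a second terminal while this runs then shows the combined write bandwidth of all 20 streams.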

Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Anton B. Rang
Do you have compression turned on? If so, dd'ing from /dev/zero isn't very useful as a benchmark. (I don't recall if all-zero blocks are always detected if checksumming is turned on, but I seem to recall that they are, even if compression is off.)
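One way to keep the dd-based test but avoid feeding it all-zero data is to generate a pseudo-random source file once and copy from that instead; a rough sketch, with a made-up source path and the same lala target names as in the earlier script:

#!/bin/bash
# Create a 2 GB file of pseudo-random data once (reading /dev/urandom is
# slow, but this only has to be done a single time).
dd if=/dev/urandom of=/var/tmp/random.src bs=128k count=16384
# Use it as the source for 20 parallel sequential writes.
for ((i=1; i<=20; i++))
do
  dd if=/var/tmp/random.src of=lala$i bs=128k &
done
wait

Because the data is incompressible, neither compression nor any all-zero-block detection can shortcut the writes.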

[zfs-discuss] Fileserver performance tests

2007-10-08 Thread Thomas Liesner
Hi all, I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) by Sun x4200s with SAS JBODs and ZFS. The application will be the Helios UB+ fileserver suite. I installed the latest Solaris 10 on an x4200 with 8 GB of RAM and two Sun SAS controllers, attached two SAS JBODs with 8

Re: [zfs-discuss] Fileserver performance tests

2007-10-08 Thread johansen
statfile1      988ops/s   0.0mb/s   0.0ms/op   22us/op-cpu
deletefile1    991ops/s   0.0mb/s   0.0ms/op   48us/op-cpu
closefile2     997ops/s   0.0mb/s   0.0ms/op    4us/op-cpu
readfile1      997ops/s 139.8mb/s   0.2ms/op