Pool is 6x striped Stec ZEUSRam as ZIL, 6x OCZ Talos C 230GB drives as L2ARC,
and 24x 15k SAS drives striped (no parity, no mirroring) - I know that's
terrible for reliability, but I just want to see what kind of IO I can hit.
Checksum is ON - can't recall what default is right now.
Compression is off
Dedupe is off
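
To double-check those property settings (and the layout) from the shell -
"tank" below is just a placeholder pool name, not the real one:

```shell
# Show the checksum algorithm actually in use, plus compression/dedup state
# ("tank" is a placeholder pool name - substitute yours)
zfs get checksum,compression,dedup tank

# Confirm the vdev layout: the 24-disk stripe, log (ZIL) and cache (L2ARC) devices
zpool status tank
```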

Trying to figure out vdbench right now, but apparently that's beyond my
abilities at 8:30PM :(
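
For what it's worth, a minimal vdbench parameter file for a 4k random-read run
might look something like this (the device path and thread count are
placeholders, not from my actual setup):

```
sd=sd1,lun=/dev/rdsk/c0t0d0,threads=32
wd=wd1,sd=sd1,xfersize=4k,rdpct=100,seekpct=100
rd=run1,wd=wd1,iorate=max,elapsed=60,interval=1
```

Then run it with something like `vdbench -f parmfile`.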

-----Original Message-----
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] 
Sent: Tuesday, July 24, 2012 8:13 PM
To: matth...@flash.shanje.com
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] IO load questions

On Tue, 24 Jul 2012, matth...@flash.shanje.com wrote:
> ~50,000 IOPS 4k random read.  200MB/sec, 30% CPU utilization on 
> Nexenta, ~90% utilization on guest OS.  I'm guessing guest OS is 
> bottlenecking.  Going to try physical hardware next week
> ~25,000 IOPS 4k random write.  100MB/sec, ~70% CPU utilization on 
> Nexenta, ~45% CPU utilization on guest OS.  Feels like Nexenta CPU is 
> bottleneck. Load average of 2.5
> A quick test with 128k recordsizes and 128k IO looked to be 400MB/sec 
> performance, can't remember CPU utilization on either side. Will 
> retest and report those numbers.
> It feels like something is adding more overhead here than I would 
> expect on the 4k recordsizes/IO workloads.  Any thoughts where I should 
> start on this?
> I'd really like to see closer to 10Gbit performance here, but it seems 
> like the hardware isn't able to cope with it?

All systems have a bottleneck.  You are highly unlikely to get close to
10Gbit performance with 4k random synchronous write.  25K IOPS seems pretty
good to me.
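
As a back-of-envelope check: 25K IOPS at 4 KiB per operation works out to
about 100 MB/s, roughly a twelfth of what a 10 Gbit link can carry:

```shell
# 25,000 IOPS * 4096 bytes = bytes/sec actually achieved
echo $((25000 * 4096))                  # 102400000, i.e. ~100 MB/s

# A 10 Gbit/s link moves 1,250,000,000 bytes/sec - about 12x that rate
echo $((1250000000 / (25000 * 4096)))   # 12
```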

The 2.4GHz clock rate of the 4-core Xeon CPU you are using is not terribly
high.  Performance would likely be better with a higher-clocked, more modern
design with more cores.

Verify that the zfs checksum algorithm you are using is a low-cost one and
that you have not enabled compression or deduplication.

You did not tell us how your zfs pool is organized so it is impossible to
comment more.

Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
