Thank you for those numbers.

I should have mentioned that I am mostly interested in single-disk or
small-array performance, since dd cannot meaningfully access a multi-disk
configuration without going through the file system. I find it curious that
there is such a large slowdown when going through the file system (with a
single-drive configuration), especially compared to UFS or ext3.

I simply have a small SOHO server and I am trying to evaluate which OS to
use to keep a redundant disk array. With unreliable consumer-level hardware,
ZFS and the checksum feature are very interesting and the primary selling
point compared to a Linux setup, for as long as ZFS can generate enough
bandwidth from the drive array to saturate single gigabit ethernet.
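
In case it helps anyone evaluating the same thing, the checksum protection
can be exercised on demand with a scrub; a minimal sketch, where "tank" is
only a placeholder pool name:

# zpool scrub tank          (walks every allocated block and verifies its checksum)
# zpool status -v tank      (CKSUM columns and the scrub summary report anything found or repaired)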

My hardware at the moment is the "wrong" choice for Solaris/ZFS - a PCI
Silicon Image 3114 SATA controller on a 32-bit Athlon XP, according to many
posts I found. However, since dd over the raw disk can pull 75+ MB/s from
this setup, I keep feeling that surely I should be able to get at least that
much from reading a pair of striped or mirrored ZFS drives. But I can't -
whether a single drive, a 2-drive stripe, or a mirror, I only get around
34 MB/s going through ZFS. (I made sure the mirror had been rebuilt and
resilvering had completed.) Everything is a stock Nevada b63 installation,
so I haven't messed it up with misguided tuning attempts. I don't know if it
matters, but the test file was originally created from /dev/random.
Compression is off and everything is at the defaults. CPU utilization stays
low throughout (I haven't seen it go over 25%).
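
For completeness, a run of this kind should be reproducible with something
like the following sketch (the device names, pool name, and file size are
placeholders, not my exact ones):

# zpool create tank mirror c1d0 c1d1       (or "zpool create tank c1d0 c1d1" for the stripe case)
# dd if=/dev/random of=/tank/testfile bs=128k count=20000    (~2.5 GB test file)
# zpool export tank && zpool import tank   (so the read below isn't served from the ARC)
# /usr/bin/time dd if=/tank/testfile of=/dev/null bs=128k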

On 5/14/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

This certainly isn't the case on my machine.

$ /usr/bin/time dd if=/test/filebench/largefile2 of=/dev/null bs=128k
count=10000
10000+0 records in
10000+0 records out

real        1.3
user        0.0
sys         1.2

# /usr/bin/time dd if=/dev/dsk/c0t0d0 of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out

real       22.3
user        0.0
sys         2.2

This looks like 56 MB/s on the /dev/dsk and 961 MB/s on the pool.
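
(The arithmetic, for anyone checking: 10000 records x 128 KB = 1250 MB
transferred, so 1250 MB / 1.3 s is roughly 961 MB/s through the pool and
1250 MB / 22.3 s is roughly 56 MB/s from the raw device.)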

My pool is configured into a 46 disk RAID-0 stripe.  I'm going to omit
the zpool status output for the sake of brevity.

> What I am seeing is that ZFS performance for sequential access is
> about 45% of raw disk access, while UFS (as well as ext3 on Linux) is
> around 70%. For workload consisting mostly of reading large files
> sequentially, it would seem then that ZFS is the wrong tool
> performance-wise. But, it could be just my setup, so I would
> appreciate more data points.

This isn't what we've observed in much of our performance testing.
It may be a problem with your config, although I'm not an expert on
storage configurations.  Would you mind providing more details about
your controller, disks, and machine setup?
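
Output from something like the following (all stock Solaris tools) would be
a useful starting point:

# iostat -En          (per-disk vendor/model and error counters)
# cfgadm -al          (controller and attachment-point view of the disks)
# prtdiag -v          (general machine configuration)
# zpool status        (pool layout)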

-j


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
