Paul:

I'm interested in getting a box like this.  I hope this problem has a
simple fix.

> When I create a 8x mirrored vdev pool (1T samsung enterise drives) and
> do a simple dd test it maxes out at 100M/s, where I'd normally expect
> 500M/s+ at least.

Could you include the output of the following commands?

$ zpool status
$ zpool list
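
It would also help to see the dataset properties, since recordsize and
compression both matter for a /dev/zero test (substitute your actual
pool/dataset name):

$ zfs get recordsize,compression <pool>/<dataset>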

> lo...@dev-storage-01:/storage# dd if=/dev/zero of=test.zeros bs=1M
> count=64000
> 64000+0 records in
> 64000+0 records out
> 67108864000 bytes (67 GB) copied, 630.583 s, 106 MB/s

ZFS's default recordsize is 128k.  Does setting bs=128k improve
performance at all?
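
For example, this writes the same 64 GB to the same target as your
earlier run, just with a smaller block size:

$ dd if=/dev/zero of=/storage/test.zeros bs=128k count=512000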

> lo...@dev-storage-01:~$ mpstat 15
> ...

mpstat(1) showed a lot of system time.  You can run the following DTrace
one-liner as root to get more detail about what the OS is doing:

# dtrace -n 'profile-1997hz {@a[stack(50)] = count();} END {trunc(@a, 20);}'

This will show you the 20 most frequent kernel stacks, sampled at 1997 Hz.
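Let it run while the dd is going and then hit Ctrl-C; the aggregation is
printed when dtrace exits.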

I agree with Richard's advice about resolving your FMA problems.
Whatever is going on, it's certainly not helping your performance.
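
If you haven't already, something like the following (run as root)
should show any outstanding faults and the recent error telemetry:

# fmadm faulty
# fmdump -e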

I'd also be curious to see iostat output for the disks while the dd is
running.  I typically use the following:

$ iostat -xn 1
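
A per-vdev view can also be useful here; this breaks the numbers down by
mirror pair:

$ zpool iostat -v 1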

Disk drivers can often have an adverse effect on performance.  I've seen
some poorly supported SATA cards/drivers cause ZFS performance to look
really bad, when in fact the problem was in the I/O hardware/software.

It might also be useful to know what kind of I/O hardware you're using
in this system.
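
Assuming this is an OpenSolaris-based box, these should list the
controllers and the drivers bound to them:

$ prtconf -D
$ cfgadm -al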

Thanks,

-j