On Thu, Apr 17, 2008 at 11:47 AM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:

> On Thu, 17 Apr 2008, Pascal Vandeputte wrote:
> >
> > At the moment I'm seeing read speeds of 200MB/s on a ZFS raidz
> > filesystem consisting of c1t0d0s3, c1t1d0 and c1t2d0 (I'm booting
> > from a small 700MB slice on the first sata drive; c1t0d0s3 is about
> > 690 "real" gigabytes large and ZFS just uses the same amount of
> > sectors on the other disks and leaves the rest untouched). As a
> > single drive should top out at about 104MB/s for sequential access
> > in the outer tracks, I'm very pleased with that.
> >
> > But the write speeds I'm getting are still far below my
> > expectations: about 20MB/s (versus 14MB/s in Windows 2003 with Intel
> > RAID driver). I was hoping for at least 100MB/s, maybe even more.
>
> I don't know what you should be expecting.  20MB/s seems pretty poor
> but 100MB/s seems like a stretch with only three drives.
>
> > I'm a Solaris newbie (but with the intention of learning a whole
> > lot), so I may have overlooked something. I also don't really know
> > where to start looking for bottlenecks.
>
> There are a couple of things which come to mind.
>
>  * Since you are using a slice on the boot drive, ZFS does not enable
> that disk's write cache, because it cannot know what the filesystem on
> the other slice expects.  As a result, writes to that disk will have
> more latency, and since you are using raidz
> (which needs to write to all the drives) the extra latency will impact
> overall write performance.  If one of the drives has slower write
> performance than the others, then the whole raidz will suffer.  See
> "Storage Pools" in
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide.
>
>  * Maybe this ICH9R interface has some sort of bottleneck in its
> design or there is a driver performance problem.  If the ICH9R is
> sharing resources rather than dedicating a channel for each drive,
> then raidz's increased write load may be overwhelming it.
>
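
On the write cache point: if giving ZFS the whole boot disk isn't an option,
you can at least check (and, at your own risk, flip) the write cache on that
drive from format's expert mode.  Roughly, assuming the sd driver exposes the
cache controls for these SATA disks (menu names from memory, so treat this as
a sketch):

   # format -e
     (select c1t0d0)
   format> cache
   cache> write_cache
   write_cache> display
   write_cache> enable

Just remember that whatever filesystem lives on the boot slice of that same
disk won't know the cache is on.
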
> If you are looking for really good scalable write performance, perhaps
> you should be using mirrors instead.
>
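
For comparison, a mirrored pool built from the two non-boot drives would be
something like this (the pool name is just an example):

   # zpool create tank mirror c1t1d0 c1t2d0

You give up capacity versus raidz, but mirrors generally give better and more
predictable write performance.
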
> In order to see if you have a slow drive, run 'iostat -x' while
> writing data.  If the svc_t field is much higher for one drive than
> the others, then that drive is likely slow.
>
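
That can be as simple as running this in one window (the test file path is
just an example):

   # iostat -x 5

while something like this runs in another:

   # dd if=/dev/zero of=/tank/testfile bs=128k count=40000

and watching whether one device's svc_t sits well above the rest.
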
> Bob
> ======================================
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>


Along those lines, I'd *strongly* suggest running Jeff's script to pin down
whether one drive is the culprit:

#!/bin/ksh
#
# Read-speed check: does a raw 64MB sequential read from each disk,
# three times, and reports the median throughput in MB/sec.

# All disks that format(1M) can see.
disks=`format </dev/null | grep c.t.d | nawk '{print $2}'`

getspeed1()
{
       # 1024 x 64k = 67.108864 MB; divide by elapsed time for MB/sec.
       ptime dd if=/dev/rdsk/${1}s0 of=/dev/null bs=64k count=1024 2>&1 |
           nawk '$1 == "real" { printf("%.0f\n", 67.108864 / $2) }'
}

getspeed()
{
       # Three runs, keep the median to smooth out one-off hiccups.
       for iter in 1 2 3
       do
               getspeed1 $1
       done | sort -n | tail -2 | head -1
}

for disk in $disks
do
       echo $disk `getspeed $disk` MB/sec
done
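
(Run it as root; it only does raw reads, but a drive that is noticeably
slower than the others will stand out immediately.)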

----------------------
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
