On Fri, Jun 18, 2010 at 05:15:44AM -0400, Thomas Burgess wrote:
>    On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen <[1]pa...@iki.fi> wrote:
> 
>      On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
>      > Well, I've searched my brains out and I can't seem to find a reason
>      > for this.
>      >
>      > I'm getting bad to medium performance with my new test storage device.
>      > I've got 24 1.5T disks with 2 SSDs configured as a ZIL log device. I'm
>      > using the Areca RAID controller, the driver being arcmsr. Quad core AMD
>      > with 16 GB of RAM, OpenSolaris upgraded to snv_134.
>      >
>      > The zpool has 2 11-disk raidz2's and I'm getting anywhere between
>      > 1 MB/sec and 40 MB/sec with zpool iostat. On average, though, it's more
>      > like 5 MB/sec if I watch while I'm actively doing some r/w. I know that
>      > I should be getting better performance.
>      >
> 
>      How are you measuring the performance?
>      Do you understand that raidz2 with that many disks in it will give
>      you really poor random write performance?
>      -- Pasi
> 
>    I have a media server myself with 2 raidz2 vdevs 10 drives wide, without a
>    ZIL (but with a 64 GB L2ARC).
>    I can write to it at about 400 MB/s over the network, and scrubs show 600
>    MB/s, but it really depends on the type of I/O you have....random I/O
>    across 2 vdevs will be REALLY slow (basically as slow as the slowest 2
>    drives in your pool)
>    40 MB/s might be right if it's random....though I'd still expect to see
>    more.
> 

A 7200 RPM SATA disk can do around 120 IOPS max (7200/60 = 120), so if you're
doing 4 kB random IO you end up getting at most 4 kB * 120 = 480 kB/sec of
throughput from a single disk (in the worst case).

40 MB/sec of random IO throughput with 4 kB IOs would be around 10240 IOPS...
you'd need about 85x SATA 7200 RPM disks in RAID-0 (striping) for that :)
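
If you want to redo the arithmetic yourself, here's a minimal back-of-the-envelope
sketch in Python. It just restates the assumptions above (7200 RPM, one IO per
revolution in the worst case, 4 kB IOs, a 40 MB/sec target); the numbers are
illustrative assumptions, not measurements:

# Back-of-the-envelope random-IO math (assumptions, not measurements)
rpm = 7200
iops_per_disk = rpm / 60              # ~120 IOPS per 7200 RPM disk, worst case
io_size_kb = 4                        # 4 kB random IOs

per_disk_kb_s = iops_per_disk * io_size_kb        # ~480 kB/sec per disk
print(f"worst-case per-disk throughput: {per_disk_kb_s:.0f} kB/sec")

target_mb_s = 40
target_iops = target_mb_s * 1024 / io_size_kb     # ~10240 IOPS for 40 MB/sec
disks_needed = target_iops / iops_per_disk        # ~85 disks striped (raid-0)
print(f"IOPS needed for {target_mb_s} MB/sec of 4 kB random IO: {target_iops:.0f}")
print(f"7200 RPM disks needed in raid-0: {disks_needed:.0f}")

Running it gives ~480 kB/sec per disk, ~10240 IOPS and ~85 disks, which is where
the figures above come from.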

-- Pasi

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
