Is this typical for these devices? And can I extrapolate the spec's
numbers linearly? For example, can I expect ~21 4 MB sequential writes
per second?
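
For reference, here is the arithmetic behind that number. It assumes the
row Andrey cites is a sequential-write bandwidth figure of roughly
85 MB/s; I'm inferring that value from the division below, so the PDF
itself is authoritative:

    85 MB/s / 4 MB per write ~= 21 writes/s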

--- Todd

Andrey Kuzmin wrote:
> http://mtron.net/Upload_Data/Spec/ASiC/MOBI/SATA/MSD-SATA3535_rev0.3.pdf,
> Sect. 3.3.1, Table 4, last row.
>
> Regards,
> Andrey
>
>
>
> On Wed, Oct 15, 2008 at 9:10 PM, Eric Sproul <[EMAIL PROTECTED]> wrote:
>   
>> Eric Sproul wrote:
>>     
>>> zpool create data raidz c0t4d0 c0t5d0 c0t6d0 c0t7d0 log mirror c2d1 c3d1
>>>       
>> This box has 8 cores (2x quad-core Xeon) and 32GB RAM.
>>
>> We've been running Postgres 8.3 stress tests (pgbench), but we see some odd
>> results.  Looking at 'iostat -xn' we see what looks like a bottleneck on the 
>> ZIL
>> devices:
>>
>>                    extended device statistics
>>    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>>    0.0  102.0    0.0 6527.7  0.0  1.0    0.0    9.4   0  96 c2d1
>>    0.0  101.0    0.0 6399.7  0.0  1.0    0.0    9.5   0  95 c3d1
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t1d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t2d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t3d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t4d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t6d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t7d0
>>                    extended device statistics
>>    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>>    0.0  102.0    0.0 5072.0  0.0  1.0    0.0    9.4   0  96 c2d1
>>    0.0  103.0    0.0 5200.0  0.0  0.9    0.0    9.0   0  92 c3d1
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t1d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t2d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t3d0
>>    0.0   55.0    0.0 1458.5  0.0  0.9    0.0   17.1   0  24 c0t4d0
>>    0.0   53.0    0.0 1458.5  0.0  0.8    0.0   15.8   0  24 c0t5d0
>>    0.0   57.0    0.0 1456.5  0.0  1.5    0.0   25.7   0  29 c0t6d0
>>    0.0   63.0    0.0 1438.0  0.0  1.1    0.0   17.3   0  22 c0t7d0
>>                    extended device statistics
>>    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>>    0.0  101.0    0.0 5608.3  0.0  1.0    0.0    9.4   0  95 c2d1
>>    0.0  101.0    0.0 5608.3  0.0  1.0    0.0    9.4   0  95 c3d1
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t1d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t2d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t3d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t4d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t6d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t7d0
>>                    extended device statistics
>>    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>>    0.0  102.0    0.0 6528.0  0.0  1.0    0.0    9.4   0  95 c2d1
>>    0.0  102.0    0.0 6528.0  0.0  0.9    0.0    9.0   0  92 c3d1
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t1d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t2d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t3d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t4d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t6d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t7d0
>>                    extended device statistics
>>    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>>    0.0  100.0    0.0 6399.8  0.0  1.0    0.0    9.6   0  96 c2d1
>>    0.0  100.0    0.0 6399.8  0.0  1.0    0.0    9.6   0  96 c3d1
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t1d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t2d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t3d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t4d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t6d0
>>    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t7d0
>>
>> We never see more than about 100 write ops/second on the SSDs, which seems
>> really low to me, and it also seems odd that they are 95-96% utilized at that
>> rate.  The pgbench test is primarily doing updates, with some percentage of
>> inserts thrown in, and the entire database fits into RAM.  That is why we
>> see very little activity on the raidz disks but lots on the ZIL: Postgres
>> issues a ton of fsync() calls.
>>
>> I was expecting much higher IOPS on the SSDs.  Can anyone help me understand
>> what's going on here?
>>
>> Thanks,
>> Eric

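P.S. As a sanity check against Eric's iostat samples above: kw/s divided
by w/s gives the average write size hitting the log devices. For c2d1 in
the first interval, e.g. with a throwaway awk one-liner:

    $ echo "102.0 6527.7" | awk '{printf "%.1f KB/write, %.2f MB/s\n", $2/$1, $2/1024}'
    64.0 KB/write, 6.37 MB/s

So the SSDs are absorbing ~100 writes/s of ~64 KB each, about 6.4 MB/s,
which is a very different workload from large sequential writes.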