On Mon, Dec 9, 2013 at 12:30 PM, Jim Klimov <[email protected]> wrote:

> On 2013-12-09 19:57, Saso Kiselkov wrote:
>
>>> Right, but how could this result in >2x the performance?  As indicated
>>> by your diagram, you are doing at most 2 reads at once (or, you are
>>> getting at most 1 read "for free" while the CPU is busy processing the
>>> last block).  You claimed a 10-20x speedup (I am assuming that "several"
>>> means 3).
>>>
>>
>> As I said, I'm gonna have to recheck; it's possible I'm remembering
>> this incorrectly. However, my options for performance testing are
>> currently somewhat limited. I'll get back to you as soon as I have
>> more info.
>>
>
> Speaking from the theoretical peanut gallery, modern SSDs owe their
> speed to the number of NAND chips they contain, each doing some
> 20MB/s or so (figures non-authoritative). So if multiple queued
> tasks land on different chips - and there are dozens of them now -
> it may indeed be much faster than issuing single requests to a
> single chip. The device can issue operations to the chips in
> parallel and return answers as they come into its buffers. Maybe so...
>
>
Yes, if we issue N concurrent i/os, it could be up to N times faster than
if we issue one i/o.  In this case, N=2.
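
As a minimal userland sketch of the N=2 case (illustrative only; the
file path, offsets, and 128K block size are arbitrary), POSIX AIO can
keep both reads in flight at once instead of issuing them back to back:

/*
 * Sketch: issue two reads concurrently with POSIX AIO rather than
 * two back-to-back pread() calls.  The path "/tank/testfile" and the
 * 128K block size are made up.  Build: cc aio2.c (librt may be
 * needed for POSIX AIO on some systems).
 */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define	BLKSZ	(128 * 1024)

int
main(void)
{
	static char buf0[BLKSZ], buf1[BLKSZ];
	struct aiocb cb0, cb1;
	const struct aiocb *wait0[1] = { &cb0 };
	const struct aiocb *wait1[1] = { &cb1 };
	int fd;

	if ((fd = open("/tank/testfile", O_RDONLY)) == -1) {
		perror("open");
		return (1);
	}

	memset(&cb0, 0, sizeof (cb0));
	cb0.aio_fildes = fd;
	cb0.aio_buf = buf0;
	cb0.aio_nbytes = BLKSZ;
	cb0.aio_offset = 0;

	memset(&cb1, 0, sizeof (cb1));
	cb1.aio_fildes = fd;
	cb1.aio_buf = buf1;
	cb1.aio_nbytes = BLKSZ;
	cb1.aio_offset = BLKSZ;

	/* Both reads are queued before either completes (N = 2). */
	if (aio_read(&cb0) == -1 || aio_read(&cb1) == -1) {
		perror("aio_read");
		return (1);
	}

	/* Wait for each; the device is free to service them in parallel. */
	while (aio_error(&cb0) == EINPROGRESS)
		(void) aio_suspend(wait0, 1, NULL);
	while (aio_error(&cb1) == EINPROGRESS)
		(void) aio_suspend(wait1, 1, NULL);

	(void) printf("read %zd and %zd bytes\n",
	    aio_return(&cb0), aio_return(&cb1));

	(void) close(fd);
	return (0);
}

With two synchronous pread() calls the second read cannot even be
issued until the first returns, so the best case for the concurrent
version is the device servicing both requests at once, i.e. up to 2x.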

--matt
