On Tue, 15 Apr 2008, Mark Maybee wrote:
> going to take 12sec to get this data onto the disk.  This "impedance
> mis-match" is going to manifest as pauses:  the application fills
> the pipe, then waits for the pipe to empty, then starts writing again.
> Note that this won't be smooth, since we need to complete an entire
> sync phase before allowing things to progress.  So you can end up
> with IO gaps.  This is probably what the original submitter is

Yes.  With an application which also needs to make the best use of 
available CPU, these I/O "gaps" cut into available CPU time (by 
blocking the process) unless the application uses multithreading and 
an intermediate write queue (more memory) to separate the CPU-centric 
parts from the I/O-centric parts.  While a single-threaded 
application is waiting for data to be written, it cannot read and 
process more data.  And since reads take time to complete, being 
blocked on a write also keeps new reads from being started early 
enough for their data to be ready when they are needed.
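
Here is a minimal sketch of that multithreaded write-queue approach 
using POSIX threads.  (Illustrative only, not from the original 
thread; the queue depth, buffer size, and all names are invented.)  
The reader/compute thread blocks only when the queue is full, so it 
can keep working right through a sync-phase pause as long as queue 
slots remain:

/*
 * Bounded queue between the CPU-centric producer and an I/O-centric
 * writer thread.  QDEPTH trades memory for tolerance of I/O gaps.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define QDEPTH  8               /* buffers in flight (more memory) */
#define BUFSZ   (1 << 20)       /* 1 MiB per buffer */

static struct {
    char           *slot[QDEPTH];
    size_t          len[QDEPTH];
    int             head, tail, count, done;
    pthread_mutex_t lock;
    pthread_cond_t  notfull, notempty;
} q = {
    .lock     = PTHREAD_MUTEX_INITIALIZER,
    .notfull  = PTHREAD_COND_INITIALIZER,
    .notempty = PTHREAD_COND_INITIALIZER,
};

static void enqueue(char *buf, size_t len)
{
    pthread_mutex_lock(&q.lock);
    while (q.count == QDEPTH)           /* producer blocks only here */
        pthread_cond_wait(&q.notfull, &q.lock);
    q.slot[q.tail] = buf;
    q.len[q.tail]  = len;
    q.tail = (q.tail + 1) % QDEPTH;
    q.count++;
    pthread_cond_signal(&q.notempty);
    pthread_mutex_unlock(&q.lock);
}

static void *writer(void *arg)
{
    FILE *out = arg;

    for (;;) {
        char   *buf;
        size_t  len;

        pthread_mutex_lock(&q.lock);
        while (q.count == 0 && !q.done)
            pthread_cond_wait(&q.notempty, &q.lock);
        if (q.count == 0) {             /* done flag set, queue drained */
            pthread_mutex_unlock(&q.lock);
            return NULL;
        }
        buf = q.slot[q.head];
        len = q.len[q.head];
        q.head = (q.head + 1) % QDEPTH;
        q.count--;
        pthread_cond_signal(&q.notfull);
        pthread_mutex_unlock(&q.lock);

        fwrite(buf, 1, len, out);       /* slow I/O happens off-thread */
        free(buf);
    }
}

int main(void)
{
    pthread_t tid;
    int       i;

    pthread_create(&tid, NULL, writer, stdout);

    for (i = 0; i < 4; i++) {           /* stand-in for read-and-process */
        char *buf = malloc(BUFSZ);
        memset(buf, 'x', BUFSZ);
        enqueue(buf, BUFSZ);
    }

    pthread_mutex_lock(&q.lock);        /* tell the writer we are done */
    q.done = 1;
    pthread_cond_signal(&q.notempty);
    pthread_mutex_unlock(&q.lock);
    pthread_join(tid, NULL);
    return 0;
}

The queue depth is the knob: a deeper queue rides out a longer sync 
phase before the producer finally stalls, at the cost of more memory.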

> There is one "down side" to this new model: if a write load is very
> "bursty", e.g., a large 5GB write followed by 30secs of idle, the
> new code may be less efficient than the old.  In the old code, all

This is also a common scenario. :-)

Presumably the special "slow I/O" code would not kick in unless the 
burst was large enough to fill quite a bit of the ARC.

Real-time throttling is quite a challenge to do in software.
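
To illustrate, here is a hypothetical token-bucket rate limiter in C 
(this is not how ZFS throttles writes; it is just the textbook 
technique, with invented names and numbers).  Even this toy shows 
where the trouble lies: timer granularity, burst capping, and rate 
selection dominate the observed behavior.

/*
 * Token-bucket throttle: credit accrues at `rate' bytes/sec and is
 * spent by writes; the caller sleeps while the bucket is empty.
 * Compile with -lrt on older systems for clock_gettime().
 */
#include <stddef.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

static void throttle(size_t nbytes, uint64_t rate)
{
    static double   tokens;             /* bytes of accumulated credit */
    static uint64_t last;

    uint64_t t = now_ns();
    if (last == 0)
        last = t;
    tokens += (t - last) * (double)rate / 1e9;
    if (tokens > (double)rate)          /* cap bursts at 1 second's worth */
        tokens = (double)rate;
    last = t;

    while (tokens < (double)nbytes) {   /* sleep until enough credit */
        struct timespec nap = { 0, 1000000 };   /* 1 ms */
        nanosleep(&nap, NULL);
        t = now_ns();
        tokens += (t - last) * (double)rate / 1e9;
        last = t;
    }
    tokens -= (double)nbytes;
}

int main(void)
{
    int i;

    for (i = 0; i < 20; i++) {
        throttle(1 << 20, 10 << 20);    /* 1 MiB chunks, 10 MiB/s cap */
        /* the real write(2) of the chunk would go here */
    }
    return 0;
}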

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

