On Thu, May 29, 2014 at 11:05:42AM +0200, Paolo Valente wrote:
> This patch boosts the throughput on NCQ-capable flash-based devices,
> while still preserving latency guarantees for interactive and soft
> real-time applications. The throughput is boosted by just not idling
> the device when the in-service queue remains empty, even if the queue
> is sync and has a non-null idle window. This helps to keep the drive's
> internal queue full, which is necessary to achieve maximum
> performance. This throughput-boosting solution is a port of the CFQ
> commits a68bbdd and f7d7b7a.
> 
> As already highlighted in patch 10, allowing the device to prefetch
> and internally reorder requests trivially causes loss of control over
> the request service order, and hence over service guarantees.
> Fortunately, as discussed in detail in the comments to the function
> bfq_bfqq_must_not_expire(), if every process has to receive the same
> fraction of the throughput, then the service order enforced by the
> internal scheduler of a flash-based device is relatively close to that
> enforced by BFQ. In particular, it is close enough to let service
> guarantees be substantially preserved.
> 
> Things change in an asymmetric scenario, i.e., if not every process
> has to receive the same fraction of the throughput. In this case, to
> guarantee the desired throughput distribution, the device must be
> prevented from prefetching requests. This is exactly what this patch
> does in asymmetric scenarios.
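To make the policy concrete, here is a minimal standalone sketch of the
decision described above. All struct and function names below are
hypothetical illustrations, not the actual BFQ identifiers; the real
logic lives in the comments to bfq_bfqq_must_not_expire().

    #include <stdbool.h>
    #include <stdio.h>

    struct queue_state {
            bool sync;        /* queue issues synchronous requests */
            bool idle_window; /* queue has earned a non-null idle window */
    };

    struct device_state {
            bool ncq_flash;   /* NCQ-capable flash-based device */
            bool symmetric;   /* every process gets the same share */
    };

    /* Should the scheduler idle the device when this queue runs dry? */
    static bool should_idle(const struct device_state *dev,
                            const struct queue_state *q)
    {
            /* Classic rule: idle for sync queues with an idle window. */
            bool classic = q->sync && q->idle_window;

            if (!dev->ncq_flash)
                    return classic;

            /*
             * NCQ flash, symmetric scenario: skip idling so the drive's
             * internal queue stays full; the drive's internal service
             * order is close enough to BFQ's that guarantees are
             * substantially preserved.
             */
            if (dev->symmetric)
                    return false;

            /*
             * Asymmetric scenario: fall back to the classic rule, which
             * prevents the device from prefetching other processes'
             * requests and preserves the desired throughput
             * distribution.
             */
            return classic;
    }

    int main(void)
    {
            struct device_state dev = { .ncq_flash = true,
                                        .symmetric = true };
            struct queue_state q = { .sync = true, .idle_window = true };

            printf("symmetric NCQ flash: idle=%d\n",
                   should_idle(&dev, &q));
            dev.symmetric = false;
            printf("asymmetric NCQ flash: idle=%d\n",
                   should_idle(&dev, &q));
            return 0;
    }

With a symmetric workload the sketch never idles on an empty queue
(idle=0), keeping the drive's internal queue full; as soon as the
scenario turns asymmetric it reverts to idling (idle=1) to keep the
service order under the scheduler's control.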

Does it even make sense to use this type of heavy iosched on SSDs?
It's highly likely that SSDs will soon be served through blk-mq,
bypassing all of this.  I don't feel too enthused about adding code to
ioscheds just to support SSDs.  A much better approach would be to
just default to deadline for them anyway.

Thanks.

-- 
tejun