On 10/23/2015 02:21 AM, Howard Chu wrote:
>> Normally, best practice is to use batching to avoid paying worst case
>> latency when you do a synchronous IO. Write a batch of files or appends
>> without fsync, then go back and fsync and you will pay that latency once
>> (not per file/op).
>
> If filesystems would support ordered writes you wouldn't need to fsync at
> all. Just spit out a stream of writes and declare that batch N must be
> written before batch N+1. (Note that this is not identical to "write
> barriers", which imposed the same latencies as fsync by blocking all I/Os
> at a barrier boundary. Ordered writes may be freely interleaved with
> un-ordered writes, so normal I/O traffic can proceed unhindered. Their
> ordering is only enforced wrt other ordered writes.)
>
> A bit of a shame that Linux's SCSI drivers support Ordering attributes but
> nothing above that layer makes use of it.
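
As a concrete illustration of the batching pattern quoted above, the
userspace shape is roughly the following sketch (append_batch, the path,
the record and the count are made up for the example, and error handling
is minimal). The point is only that the expensive flush is paid once per
batch rather than once per append:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Append a batch of records, paying the fsync latency once for the
 * whole batch instead of once per record. */
int append_batch(const char *path, const char *rec, int nrecs)
{
	int i, fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);

	if (fd < 0)
		return -1;
	for (i = 0; i < nrecs; i++) {
		if (write(fd, rec, strlen(rec)) < 0) {	/* no per-write sync */
			close(fd);
			return -1;
		}
	}
	if (fsync(fd) < 0) {	/* one durability point for the batch */
		close(fd);
		return -1;
	}
	return close(fd);
}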

I think that if the streams on either side of the barrier are large enough, using ordered tags (in SCSI speak) should give the same performance as doing stream1, fsync(), stream2.
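
For reference, that stream1, fsync(), stream2 shape is roughly the
following (a sketch only; write_two_streams, fd and the buffers are
placeholders). The fsync is doing two jobs here: it keeps stream2 behind
stream1 and it forces the volatile write cache to be flushed, which is
the part in question below:

#include <unistd.h>

/* Nothing from stream2 is issued until stream1 has been synced. */
int write_two_streams(int fd, const void *s1, size_t n1,
		      const void *s2, size_t n2)
{
	if (write(fd, s1, n1) != (ssize_t)n1)
		return -1;
	if (fsync(fd) < 0)	/* barrier + cache flush */
		return -1;
	if (write(fd, s2, n2) != (ssize_t)n2)
		return -1;
	return 0;
}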

It's not clear to me whether we could do away with an fsync to trigger a cache flush here either: do SCSI ordered tags require that the writes be acknowledged only once they are durable, or can the device ack them as soon as the target has them (including in a volatile write cache)?

Ric
