I never played much with rados bench, but it doesn't seem to have settings 
for synchronous/asynchronous workloads, for example. So unless you let it 
run for a long time, it probably just benchmarks OSD throughput and the 
ability to write to the journal (in write mode).
So when you stop rados bench the OSDs are actually still flushing the data, 
exactly as you wrote.
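If you want to confirm that, one rough way (the device name below is just a placeholder, not something specific to your setup) is to watch the journal partition's write counter on an OSD node until it stops moving:

```shell
# Print total sectors written to the journal partition (assumed here to
# be sda1 -- substitute your actual journal device). Run it a few
# seconds apart; when the number stops growing, flushing is done.
awk '$3 == "sda1" { print $10 }' /proc/diskstats
```

iostat from sysstat gives you the same information continuously, if you have it installed.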

There are filestore_min_sync_interval and filestore_max_sync_interval 
parameters on the OSDs; the cluster should be idle once 
filestore_max_sync_interval has elapsed (plus a few seconds to actually 
write out the dirty data, and possibly a few more for the filesystem to 
flush).
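For reference, those knobs live in the [osd] section of ceph.conf; the values below are only illustrative (they happen to match the usual defaults), not a recommendation:

```ini
[osd]
# Lower/upper bound (seconds) on how often the filestore syncs the
# journal out to the data partition. Values shown are examples only.
filestore min sync interval = 0.01
filestore max sync interval = 5
```

They can also be changed on a running cluster with ceph tell osd.* injectargs, if you want to experiment without restarting the OSDs.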

How did you drop caches on the OSD nodes?
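(For anyone else reading along, the usual sequence on a node is something like the following; the sync first matters, because dropping caches only discards clean pages.)

```shell
# Write dirty pages back to disk first, then drop the page cache,
# dentries and inodes on the OSD node (needs root).
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
```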

I apologize in advance if I'm wrong :-)

Jan

> On 08 Sep 2015, at 20:38, Deneau, Tom <[email protected]> wrote:
> 
> When measuring read bandwidth using rados bench, I've been doing the
> following:
>   * write some objects using rados bench write --no-cleanup
>   * drop caches on the osd nodes
>   * use rados bench seq to read.
> 
> I've noticed that on the first rados bench seq immediately following the
> rados bench write, there is often activity on the journal partitions,
> which must be carry-over from the rados bench write.
> 
> What is the preferred way to ensure that all write activity is finished 
> before starting
> to use rados bench seq?
> 
> -- Tom Deneau
> 
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
