Re: Ignore O_SYNC for rbd cache

2012-10-12 Thread Tommi Virtanen
On Wed, Oct 10, 2012 at 9:23 AM, Sage Weil s...@inktank.com wrote: I certainly wouldn't recommend it, but there are probably use cases where it makes sense (i.e., the data isn't as important as the performance). This would make a lot of sense for, e.g., service orchestration-style setups where
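A way to get this trade-off today, without patching librbd to ignore O_SYNC, is QEMU's cache=unsafe mode, which enables writeback caching and drops guest flush requests entirely. A minimal sketch (the pool/image name and cache size are hypothetical, not taken from the thread):

```
# ceph.conf on the client: enable the librbd writeback cache
[client]
    rbd cache = true
    rbd cache size = 33554432    # 32 MB of dirty data allowed

# QEMU drive line: cache=unsafe uses writeback caching AND ignores
# flushes (O_SYNC / fsync) from the guest, trading safety for speed.
# Only sensible for disposable VMs, as discussed above.
#   -drive file=rbd:rbd/myimage,format=raw,cache=unsafe
```

This keeps the "data isn't as important as the performance" decision at the hypervisor layer, per-VM, rather than globally in the rbd cache.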

Ignore O_SYNC for rbd cache

2012-10-10 Thread Andrey Korolyov
Hi, Recent tests on my test rack with a 20G IB (IPoIB, 64K MTU, default CUBIC, CFQ, LSI SAS 2108 w/ WB cache) interconnect show quite fantastic performance - on both reads and writes Ceph completely utilizes all disk bandwidth, as high as 0.9 of the theoretical limit of the sum of all bandwidths bearing
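As a rough illustration of what "0.9 of the theoretical limit of the sum of all bandwidths" means (the disk count and per-disk throughput below are hypothetical, not figures from the thread):

```python
# Hypothetical rack: 8 OSD disks, each sustaining ~150 MB/s sequential.
disks = 8
per_disk_mb_s = 150.0

# Theoretical aggregate is simply the sum of per-disk bandwidths.
theoretical_mb_s = disks * per_disk_mb_s      # 1200.0 MB/s

# Observed throughput at 0.9 of that limit.
observed_mb_s = 0.9 * theoretical_mb_s        # 1080.0 MB/s
print(observed_mb_s)
```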

Re: Ignore O_SYNC for rbd cache

2012-10-10 Thread Sage Weil
On Wed, 10 Oct 2012, Andrey Korolyov wrote: Hi, Recent tests on my test rack with a 20G IB (IPoIB, 64K MTU, default CUBIC, CFQ, LSI SAS 2108 w/ WB cache) interconnect show quite fantastic performance - on both reads and writes Ceph completely utilizes all disk bandwidth as high as 0.9 of

Re: Ignore O_SYNC for rbd cache

2012-10-10 Thread Josh Durgin
On 10/10/2012 09:23 AM, Sage Weil wrote: On Wed, 10 Oct 2012, Andrey Korolyov wrote: Hi, Recent tests on my test rack with a 20G IB (IPoIB, 64K MTU, default CUBIC, CFQ, LSI SAS 2108 w/ WB cache) interconnect show quite fantastic performance - on both reads and writes Ceph completely utilizing