On Mon, Oct 1, 2012 at 9:47 AM, Tommi Virtanen <[email protected]> wrote:
> On Thu, Sep 27, 2012 at 11:04 AM, Gregory Farnum <[email protected]> wrote:
>> However, my suspicion is that you're limited by metadata throughput
>> here. How large are your files? There might be some MDS or client
>> tunables we can adjust, but rsync's workload is a known weak spot for
>> CephFS.
>
> I feel like people are missing this part of Greg's message. Everyone
> is so busy benchmarking RADOS small I/O, but what if it's currently
> bottlenecked by all the file-level access operations that interact
> with the MDS? Rsync causes a ton of those.

Yes. Bryan, you mentioned that you didn't see a lot of resource usage
-- was CPU usage perhaps flatlined at (100 / num_cpus) percent? The
MDS is multi-threaded in theory, but in practice it has the equivalent
of a Big Kernel Lock, so it's not going to get much past one CPU core
of time...
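As a rough sketch of that check (not from the thread; the process name and the use of pidstat from the sysstat package are my assumptions), you can compare total MDS CPU against the one-core ceiling and look for a single pegged thread:

```shell
# Hedged sketch: is ceph-mds saturating a single core? A single-threaded
# bottleneck shows total CPU flatlined near 100/num_cpus percent.
ncpu=$(nproc)
ceiling=$((100 / ncpu))
echo "one-core ceiling: ${ceiling}% of total CPU"

# Per-thread view of the MDS (assumes Linux + sysstat's pidstat); one
# thread pegged near 100% of a core suggests the big-lock bottleneck.
pid=$(pidof ceph-mds || true)
if [ -n "$pid" ] && command -v pidstat >/dev/null 2>&1; then
    pidstat -t -p "$pid" 1 5
fi
```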
The rados bench results do indicate some pretty bad small-write
performance as well, though, so I guess it's possible your testing is
running long enough that the page cache isn't absorbing that hit. Did
performance start out higher, or has it been flat the whole time?

> If you want to benchmark just the small I/O, comparing rsync runs
> won't isolate it -- rsync's metadata traffic gets mixed in.
>
> If you want to benchmark just the metadata part, rsync with 0-size
> files might actually be an interesting workload.
