Thanks, that’s it exactly.
But I think that’s really too much work for now, which is why I would like 
to see a quick win by using the local RBD cache for the time being - that would 
suffice for most workloads (not many people run big databases on Ceph at the 
moment, and those who do must be aware of this).

The issue is - and I have not yet seen an answer to this - would it be safe as 
it is now if flushes were ignored (the "rbd cache = unsafe" behaviour), or 
would it completely corrupt the filesystem when it is not flushed properly?
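For reference, the client-side cache being discussed is controlled by librbd options in ceph.conf. The snippet below is a minimal illustrative sketch: the option names match the librbd writeback cache settings of that era, but the sizes are made-up example values, and note that "unsafe" is not itself a valid value for `rbd cache` - ignoring flushes comes from the hypervisor side (e.g. a QEMU drive configured with cache=unsafe), not from this file.

```ini
[client]
# Client-side RBD writeback cache (sizes are illustrative, not recommendations)
rbd cache = true
rbd cache size = 33554432                   ; 32 MiB of cache per image
rbd cache max dirty = 25165824              ; dirty bytes allowed before writeback
rbd cache target dirty = 16777216           ; start flushing above this watermark
; Stay in writethrough mode until the guest issues its first flush,
; so a guest that never flushes is not silently exposed to data loss:
rbd cache writethrough until flush = true
```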

Jan

> On 01 Jun 2015, at 12:37, Nick Fisk <[email protected]> wrote:
> 
> Hi Mark, I think the real problem is that even when Ceph is tuned to the max, 
> it is still potentially 100x slower than a hardware RAID card for these very 
> important sync writes, especially for databases that are designed to rely on 
> submitting long chains of very small IOs. Without some sort of cache sitting 
> at the front of the whole Ceph infrastructure (journals and cache tiering are 
> too far back), Ceph just doesn't provide the required latency. I know it 
> would be quite a large piece of work, but implementing some sort of 
> distributed cache with very low overhead, plumbed directly into librbd, 
> would dramatically improve performance, especially for a lot of enterprise 
> workloads.
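The sync-write pattern described above - a chain of small writes where each one must reach stable storage before the next - can be simulated locally with a short sketch. The helper name here is mine, not from any Ceph tooling; run it against a file on an RBD-backed filesystem versus local disk to see the latency gap being discussed.

```python
import os
import tempfile
import time


def measure_sync_write_latency(path, writes=100, block=b"\0" * 4096):
    """Time a chain of small synchronous writes, as a database journal would
    issue them: each 4 KiB write is followed by an fsync so it must hit
    stable storage before the next write is submitted."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(writes):
            os.write(fd, block)
            os.fsync(fd)  # force this write to stable storage before continuing
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return elapsed / writes  # mean latency per synchronous write, in seconds


if __name__ == "__main__":
    with tempfile.NamedTemporaryFile() as f:
        lat = measure_sync_write_latency(f.name)
        print(f"mean sync write latency: {lat * 1e6:.1f} us")
```

On a networked block device every fsync pays at least one round trip to the cluster, which is why a long chain of these dominates total time even when throughput looks fine.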
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

