Re: [ceph-users] Persistent Write Back Cache

2015-03-04 Thread Sage Weil
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Christian Balzer Sent: 04 March 2015 08:40 To: ceph-users@lists.ceph.com Cc: Nick Fisk Subject: Re: [ceph-users] Persistent Write Back Cache Hello, If I understand you correctly, you're talking about the rbd cache on the client side. So assume …

Re: [ceph-users] Persistent Write Back Cache

2015-03-04 Thread Nick Fisk
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of John Spray Sent: 04 March 2015 11:34 To: Nick Fisk; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Persistent Write Back Cache On 04/03/2015 08:26, Nick Fisk wrote: To illustrate the difference …

Re: [ceph-users] Persistent Write Back Cache

2015-03-04 Thread Mark Nelson
On 03/04/2015 05:34 AM, John Spray wrote: On 04/03/2015 08:26, Nick Fisk wrote: To illustrate the difference a proper write back cache can make, I put a 1 GB (512 MB dirty threshold) flashcache in front of my RBD and tweaked the flush parameters to flush dirty blocks at a large queue depth. The same fio test (128k iodepth=1) now runs …
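A setup along the lines Nick describes could look like the sketch below. This is a hedged reconstruction, not his actual configuration: the device paths and the 50% dirty-threshold sysctl value are assumptions, and the cache device would first be created with something like `flashcache_create -p back -s 1g cachedev /dev/<ssd-partition> /dev/rbd0`. The fio job matches the test parameters quoted in the thread (128k block size, iodepth=1):

```ini
; fio job approximating the test described above.
; The filename (the flashcache-fronted RBD device) is an assumption.
[global]
ioengine=libaio
direct=1
bs=128k
iodepth=1
rw=write
runtime=60
time_based

[rbd-writeback-test]
filename=/dev/mapper/cachedev
```

The "flush parameters" Nick mentions would be flashcache's sysctl knobs (e.g. `dev.flashcache.<cachename>.dirty_thresh_pct` and the `max_clean_ios_*` settings), which control when and how aggressively dirty blocks are written back to the RBD.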

Re: [ceph-users] Persistent Write Back Cache

2015-03-04 Thread Christian Balzer
Sent: 04 March 2015 08:40 To: ceph-users@lists.ceph.com Cc: Nick Fisk Subject: Re: [ceph-users] Persistent Write Back Cache Hello, If I understand you correctly, you're talking about the rbd cache on the client side. So assume that host or the cache SSD in it fails terminally. The client …

Re: [ceph-users] Persistent Write Back Cache

2015-03-04 Thread Nick Fisk
Subject: Re: [ceph-users] Persistent Write Back Cache Hello, If I understand you correctly, you're talking about the rbd cache on the client side. So assume that host or the cache SSD in it fails terminally. The client thinks its sync'ed writes are on the permanent storage (the actual Ceph storage cluster …
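The client-side rbd cache under discussion is configured in ceph.conf and lives in the client's RAM, which is exactly why the failure scenario above loses acknowledged writes. A minimal sketch of the relevant options as they existed around this time (the sizes are illustrative, not taken from the thread):

```ini
[client]
# Client-side RBD write-back cache; contents are in RAM and are
# lost if the client host dies before dirty data is flushed.
rbd cache = true
rbd cache writethrough until flush = true  # stay write-through until the guest issues a flush
rbd cache size = 67108864                  # 64 MB total cache (illustrative)
rbd cache max dirty = 50331648             # writes block once this much is dirty
rbd cache target dirty = 33554432          # background flushing begins at this point
```

Setting `rbd cache max dirty = 0` would make the cache write-through, trading the performance benefit for safety against exactly this failure mode.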

Re: [ceph-users] Persistent Write Back Cache

2015-03-04 Thread John Spray
On 04/03/2015 08:26, Nick Fisk wrote: To illustrate the difference a proper write back cache can make, I put a 1 GB (512 MB dirty threshold) flashcache in front of my RBD and tweaked the flush parameters to flush dirty blocks at a large queue depth. The same fio test (128k iodepth=1) now runs …