This would be in addition to having the journal on the SSD.  The journal 
doesn't help at all with small random reads, and it has a fairly limited 
ability to coalesce writes.

In my case, the SSDs we are using for journals should have plenty of 
bandwidth/IOPS/space to spare, so I want to see if I can get a little more 
out of them.
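
For anyone unfamiliar with the layout: with ceph-disk, each journal is just 
a GPT partition on the shared SSD, and the OSD's data dir references it by 
partition uuid so the link survives device reordering across reboots. 
Roughly like this (device name, size, and uuid below are made up):

    # carve one ~10G journal partition per OSD on the SSD (/dev/sdY here)
    $ sgdisk --new=1:0:+10G --change-name=1:'ceph journal' /dev/sdY
    # the filestore data dir then points at it via a stable symlink
    $ ls -l /var/lib/ceph/osd/ceph-0/journal
    journal -> /dev/disk/by-partuuid/9f3c82ab-...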

-Brendan

________________________________________
From: Noah Mehl [noahm...@combinedpublic.com]
Sent: Monday, March 23, 2015 1:45 PM
To: Brendan Moloney
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

We deployed with the journals directly on an SSD; why would that not work 
for you?  Just wondering, really :)

Thanks!

~Noah

> On Mar 23, 2015, at 4:36 PM, Brendan Moloney <molo...@ohsu.edu> wrote:
>
> I have been looking at the options for SSD caching for a bit now. Here is my 
> take on the current options:
>
> 1) bcache - Seems to have lots of reliability issues mentioned on the 
> mailing list, with little sign of improvement.
>
> 2) flashcache - Seems to be no longer (or only minimally?) developed and 
> maintained; instead, folks are working on the fork, enhanceio.
>
> 3) enhanceio - A fork of flashcache.  It dropped the ability to skip caching 
> of sequential writes, which many folks have claimed is important for Ceph 
> OSD caching performance. (see: https://github.com/stec-inc/EnhanceIO/issues/32)
>
> 4) LVM cache (dm-cache) - There is now a user-friendly way to use dm-cache, 
> through LVM.  It allows sequential writes to be skipped, but you need a 
> pretty recent kernel.
>
> I am going to be trying out LVM cache on my own cluster in the next few 
> weeks and will share my results here on the mailing list.  If anyone else 
> has tried it out, I would love to hear about it.
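>
> In case it is useful, the basic lvmcache recipe looks roughly like the 
> following -- just a sketch, untested on my end, with made-up device names 
> (/dev/sdX = spinner, /dev/sdY1 = SSD partition) and sizes:
>
>     $ pvcreate /dev/sdX /dev/sdY1
>     $ vgcreate osd-vg /dev/sdX /dev/sdY1
>     # origin LV on the spinner, cache data + metadata LVs on the SSD
>     $ lvcreate -l 100%PVS -n osd-data osd-vg /dev/sdX
>     $ lvcreate -L 100G -n osd-cache osd-vg /dev/sdY1
>     $ lvcreate -L 1G -n osd-cache-meta osd-vg /dev/sdY1
>     # combine data + metadata into a cache pool, then attach to the origin
>     $ lvconvert --type cache-pool --poolmetadata osd-vg/osd-cache-meta osd-vg/osd-cache
>     $ lvconvert --type cache --cachepool osd-vg/osd-cache osd-vg/osd-data
>
> The default cache mode is writethrough; writeback can be requested with 
> --cachemode when creating the cache pool.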
>
> -Brendan
>
>> In long-term use I also had some issues with flashcache and enhanceio. 
>> I've noticed frequent slow requests.
>>
>> Andrei
