Re: [ceph-users] SSS Caching

2016-10-26 Thread Christian Balzer

Hello,

On Wed, 26 Oct 2016 15:40:00 + Ashley Merrick wrote:

> Hello All,
> 
> I am currently running a Ceph cluster connected to KVM via KRBD, used only 
> for this purpose.
> 
> It is working perfectly fine; however, I would like to look at improving 
> random write performance and latency, especially with multiple VMs hitting 
> the spinning disks at the same time.
> 
Is it more a question of contention (HDDs being busy) or latency (lots of
small write I/Os)?

> I currently have journals on SSDs, which helps with very short bursts; 
> however, I am looking into putting some proper SSD "cache" in front.
> 
You will want to read some of the recent and current "cache tier" threads
here, especially the "cache tiering deprecated in RHCS 2.0" one, to which
interestingly there hasn't been a single comment by RH or the devs,
official or otherwise.

> I have read that in the past cache tiering hasn't been great; however, it 
> has improved somewhat in recent releases and, if set up correctly, works 
> well once tuned.
> 

Tuning the various (and often undocumented, like "readforward") cache
options is one thing; whether your working set of hot objects actually fits
into the cache after all that tuning is the bigger question/issue. 
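For concreteness, the knobs involved look roughly like this (pool names
"rbd" and "rbd-cache" are placeholders and all values are illustrative, not
recommendations; check the cache tiering docs for your release):

```shell
# Attach an SSD pool as a writeback cache tier in front of the base pool
ceph osd tier add rbd rbd-cache
ceph osd tier cache-mode rbd-cache writeback
ceph osd tier set-overlay rbd rbd-cache

# Hit-set tracking, used to decide which objects count as "hot"
ceph osd pool set rbd-cache hit_set_type bloom
ceph osd pool set rbd-cache hit_set_count 4
ceph osd pool set rbd-cache hit_set_period 1200

# Size the cache and control flushing/eviction
ceph osd pool set rbd-cache target_max_bytes 500000000000   # ~500 GB
ceph osd pool set rbd-cache cache_target_dirty_ratio 0.4
ceph osd pool set rbd-cache cache_target_full_ratio 0.8

# Only promote objects seen in recent hit sets (reduces promotion churn)
ceph osd pool set rbd-cache min_read_recency_for_promote 2
ceph osd pool set rbd-cache min_write_recency_for_promote 2
```

The recency settings in particular make the difference between a cache that
holds your hot set and one that thrashes on every sequential scan.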

So if you have VMs that do operations on the same DB files over and over
again you're more likely to see success than if your VMs are writing
hundreds of GB of fresh data per day.

> However, as I want to make sure the choices and hardware I put in place 
> will last for a while / across future releases: does SSD cache work, or 
> will it work, with the new BlueStore?
> 
See the above thread for the "future" of cache-tiering. 

BlueStore may or may not be fast enough to make cache-tiering unnecessary
for your situation.
I see no reason why cache-tiering wouldn't work with it; it's just pools,
after all, nothing to do with the storage layer.

> Or am I better off creating an SSD pool, placing the OS disk on it, and 
> using the standard pool for anything non-OS-related, such as /home 
> partitions etc.? (Bigger overhead on the configuration side per VM.)
> 
Usually the scenario here would be to have "fast" (SSD-pool-backed) images
for users with special needs (e.g. DBs).

An SSD pool approach has the advantage that you can be VERY specific,
instead of dealing with ALL transactions and data as the cache tier would
need to.
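A minimal sketch of that setup, assuming you already have a CRUSH ruleset
(here ruleset 1) that maps only to SSD-backed OSDs; pool names, image names,
and PG counts are placeholders:

```shell
# Create a dedicated SSD-backed pool and point it at the SSD ruleset
ceph osd pool create ssd-pool 128 128
ceph osd pool set ssd-pool crush_ruleset 1

# A "fast" image for a DB-heavy VM lives on the SSD pool...
rbd create ssd-pool/vm42-db --size 102400      # 100 GB

# ...while the bulk image stays on the regular HDD-backed pool
rbd create rbd/vm42-home --size 1048576        # 1 TB
```

Each VM then gets two disks attached, which is exactly the per-VM
configuration overhead you mention.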


> Basically my questions are:
> 
> 1/ Will the cache tier continue to be supported in versions to come and 
> with the new backend format?
>
Nobody knows at this time, or more precisely is willing to speak up.
 
> 2/ I currently run at 3x replication; is it safe to run replication 2 for 
> the SSD cache in writeback mode when using DC-grade SSDs?
> 
I'm doing it, for both performance and cost reasons, but you'll sleep
better with 3x replication, especially if the individual SSDs are large
and/or your network is slow (time to recovery).
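Per pool, that choice is just the size/min_size settings (pool names are
placeholders; note that size 2 with min_size 1 trades safety for
availability during recovery):

```shell
# 2x replication on the SSD cache pool, 3x on the HDD base pool
ceph osd pool set rbd-cache size 2
ceph osd pool set rbd-cache min_size 1
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
```

With size 2 and min_size 2 instead, writes block whenever one replica is
down, which is why min_size 1 is the common (if riskier) pairing.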

Christian

> 3/ Anything I should be aware of when looking into caching?
> 
> Thanks for your time!
> Ashley


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] SSS Caching

2016-10-26 Thread Ashley Merrick
Hello All,

I am currently running a Ceph cluster connected to KVM via KRBD, used only 
for this purpose.

It is working perfectly fine; however, I would like to look at improving 
random write performance and latency, especially with multiple VMs hitting 
the spinning disks at the same time.

I currently have journals on SSDs, which helps with very short bursts; 
however, I am looking into putting some proper SSD "cache" in front.

I have read that in the past cache tiering hasn't been great; however, it 
has improved somewhat in recent releases and, if set up correctly, works 
well once tuned.

However, as I want to make sure the choices and hardware I put in place 
will last for a while / across future releases: does SSD cache work, or 
will it work, with the new BlueStore?

Or am I better off creating an SSD pool, placing the OS disk on it, and 
using the standard pool for anything non-OS-related, such as /home 
partitions etc.? (Bigger overhead on the configuration side per VM.)

Basically my questions are:

1/ Will the cache tier continue to be supported in versions to come and 
with the new backend format?

2/ I currently run at 3x replication; is it safe to run replication 2 for 
the SSD cache in writeback mode when using DC-grade SSDs?

3/ Anything I should be aware of when looking into caching?

Thanks for your time!
Ashley