Re: [ceph-users] OSD caching on EC-pools (heavy cross OSD communication on cached reads)

2019-06-10 Thread Janne Johansson
On Sun, 9 June 2019 at 18:29, wrote:
> Makes sense - makes the case for EC pools smaller, though.
>
> On Sunday, 9 June 2019, 17.48 +0200, paul.emmer...@croit.io wrote:
>> Caching is handled in BlueStore itself; erasure coding happens on a higher layer.

In your

Re: [ceph-users] OSD caching on EC-pools (heavy cross OSD communication on cached reads)

2019-06-09 Thread jesper
Makes sense - makes the case for EC pools smaller, though.

Jesper

Sent from myMail for iOS

On Sunday, 9 June 2019, 17.48 +0200, paul.emmer...@croit.io wrote:
> Caching is handled in BlueStore itself; erasure coding happens on a higher layer.
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help

Re: [ceph-users] OSD caching on EC-pools (heavy cross OSD communication on cached reads)

2019-06-09 Thread Paul Emmerich
Caching is handled in BlueStore itself; erasure coding happens on a higher layer.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Sun, Jun 9, 2019 at 8:43 AM
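
(A minimal sketch of the layering described above, in Python pseudocode; FakeOSD, ec_read, and the dict-backed "disk" are invented names for illustration, not Ceph APIs. The point: BlueStore's cache sits below the EC layer inside each OSD, so even a fully cached read of an EC object still fans out to k OSDs.)

from dataclasses import dataclass, field

@dataclass
class FakeOSD:
    osd_id: int
    disk: dict                                  # (object, shard_no) -> bytes; stands in for the store
    cache: dict = field(default_factory=dict)   # stands in for the BlueStore cache

    def read_shard(self, obj: str, shard_no: int) -> bytes:
        key = (obj, shard_no)
        if key not in self.cache:               # cache miss: read from "disk"
            self.cache[key] = self.disk[key]
        return self.cache[key]                  # always a shard, never the whole object

def ec_read(obj: str, k: int, osds: list) -> bytes:
    # Even when every shard is cache-hot, the primary still needs k
    # cross-OSD requests to assemble the object (decode step elided).
    return b"".join(osds[i].read_shard(obj, i) for i in range(k))

disk = {("obj1", i): bytes([65 + i]) * 4 for i in range(4)}
osds = [FakeOSD(i, disk) for i in range(4)]
ec_read("obj1", 4, osds)   # cold: 4 disk reads
ec_read("obj1", 4, osds)   # hot: 4 cache hits, but still 4 OSD round trips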

[ceph-users] OSD caching on EC-pools (heavy cross OSD communication on cached reads)

2019-06-09 Thread jesper
Hi. I just changed some of my data on CephFS to go to the EC pool instead of the 3x replicated pool. The data is "write rare / read heavy" data being served to an HPC cluster. To my surprise it looks like the OSD memory caching is done at the "split object level", not at the "assembled object level"
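
(For context, a minimal sketch of the split the poster refers to, in Python pseudocode; k=4 and m=2 are assumed example values, and the XOR parity is a self-contained stand-in for Ceph's real jerasure/ISA-L coding:)

from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def ec_split(data: bytes, k: int = 4, m: int = 2) -> list:
    """Split one object into k data shards plus m coding shards."""
    shard_len = -(-len(data) // k)              # ceiling division
    data_shards = [data[i * shard_len:(i + 1) * shard_len].ljust(shard_len, b"\0")
                   for i in range(k)]
    parity = reduce(xor_bytes, data_shards)     # placeholder parity
    return data_shards + [parity] * m           # real m > 1 coding differs

# A 4 MiB object becomes six 1 MiB shards on six different OSDs; each
# OSD's cache can therefore only ever hold its own 1 MiB piece.
shards = ec_split(b"x" * 4 * 1024 * 1024)
assert len(shards) == 6 and len(shards[0]) == 1024 * 1024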