[ceph-users] Giant to Jewel poor read performance with Rados bench

2016-08-06 Thread David
Hi All, I've just installed Jewel 10.2.2 on hardware that had previously been running Giant. Rados bench with the default rand and seq tests is giving me approximately 40% of the throughput I used to achieve. On Giant I would get ~1000MB/s (so probably limited by the 10GbE interface); now I'm getting 300
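
A minimal sketch of the kind of rados bench run being described, assuming a throwaway pool named "bench" (the pool name, duration, and thread count are placeholders, not from the original post):

    # Write phase first; --no-cleanup keeps the objects so the read tests have data to read
    rados bench -p bench 60 write --no-cleanup -t 16

    # Sequential and random read tests (the default seq and rand tests the poster refers to)
    rados bench -p bench 60 seq -t 16
    rados bench -p bench 60 rand -t 16

    # Remove the benchmark objects afterwards
    rados -p bench cleanup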

[ceph-users] OSDs going down when we bring down some OSD nodes or cut off the cluster network link between OSD nodes

2016-08-06 Thread Venkata Manojawa Paritala
Hi, We have configured a single Ceph cluster in a lab with the specification below. 1. Divided the cluster into 3 logical sites (SiteA, SiteB & SiteC). This simulates nodes being part of different data centers, with network connectivity between them for DR. 2. Each site operates in a d
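
For context, a multi-site CRUSH layout of the kind being simulated usually looks something like the sketch below; the site names come from the post, but the commands and the placeholder hostname are assumptions, not the poster's actual configuration:

    # Create one CRUSH bucket per logical site (datacenter is a standard CRUSH bucket type)
    ceph osd crush add-bucket siteA datacenter
    ceph osd crush add-bucket siteB datacenter
    ceph osd crush add-bucket siteC datacenter

    # Attach the sites under the default root, then move OSD hosts into them
    ceph osd crush move siteA root=default
    ceph osd crush move siteB root=default
    ceph osd crush move siteC root=default
    ceph osd crush move node1 datacenter=siteA   # node1 is a placeholder hostname

    # Verify the resulting hierarchy
    ceph osd tree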

Re: [ceph-users] rbd-mirror questions

2016-08-06 Thread Shain Miley
Thank you both for the detailed answers... this gives me a starting point to work from! Shain. Sent from my iPhone. > On Aug 5, 2016, at 8:25 AM, Jason Dillaman wrote: >> On Fri, Aug 5, 2016 at 3:42 AM, Wido den Hollander wrote: >>> On 4 August 2016 at 18:17, Shain Miley wrote:

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-06 Thread Ilya Dryomov
On Sat, Aug 6, 2016 at 1:10 AM, Alex Gorbachev wrote: > Is there a way to perhaps increase the discard granularity? The way I see it based on the discussion so far, here is why discard/unmap is failing to work with VMware: - RBD provides space in 4MB blocks, which must be discarded entire
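
For reference, the discard granularity under discussion can be inspected on a mapped krbd device, and the 4MB unit comes from the image's object size; the device name and image parameters below are illustrative assumptions, not values from the thread:

    # Discard granularity exposed by the kernel block layer for a mapped RBD device
    cat /sys/block/rbd0/queue/discard_granularity   # typically 4194304 for the default 4MB object size

    # Creating an image with smaller objects (e.g. 1MB, --order 20) lowers that granularity,
    # at the cost of more objects per image; shown only to illustrate the trade-off
    rbd create --size 102400 --order 20 rbd/esxi-lun1
    rbd map rbd/esxi-lun1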