Re: [ceph-users] question about feature set mismatch

2014-06-21 Thread Ilya Dryomov
On Fri, Jun 20, 2014 at 2:02 AM, Erik Logtenberg e...@logtenberg.eu wrote:
> Hi Ilya,
> Do you happen to know when this fix will be released? Is upgrading to a
> newer kernel (client side) still a solution/workaround too? If yes, which
> kernel version is required?

This fix is purely server-side,
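[For context: "feature set mismatch" typically appears when an older kernel client connects to a cluster whose CRUSH tunables require features the kernel lacks. A minimal sketch of the usual checks and the server-side workaround, assuming the mismatch is tunables-related (the exact feature bits depend on the cluster):]

    # Check which kernel the client is running
    $ uname -r

    # Inspect the CRUSH tunables the cluster currently requires
    $ ceph osd crush show-tunables

    # Server-side workaround: fall back to tunables old kernels understand
    # (may trigger data movement; "legacy" is the most conservative profile)
    $ ceph osd crush tunables legacy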

Re: [ceph-users] erasure coding parameter's choice and performance

2014-06-21 Thread Loic Dachary
Hi,

erasure-code: implement alignment on chunk sizes
https://github.com/ceph/ceph/pull/1890
should resolve the unnecessary overhead for Cauchy and will hopefully be merged soon.

Cheers

On 21/06/2014 14:57, David Z wrote:
> Hi Loic,
> Thanks for your reply. I actually used the tool you
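[For reference, a sketch of how a Cauchy technique is selected when defining an erasure-code profile; the k/m values below are placeholders, not the parameters David used:]

    # Define a profile using the jerasure plugin with a Cauchy technique
    $ ceph osd erasure-code-profile set myprofile \
          plugin=jerasure technique=cauchy_good k=4 m=2

    # Create an erasure-coded pool that uses the profile
    $ ceph osd pool create ecpool 128 128 erasure myprofile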

Re: [ceph-users] Upgrading Ceph 0.72 to 0.79 on Ubuntu 12.04

2014-06-21 Thread Uwe Grohnwaldt
Hi,

The best way to upgrade is to use the official Ceph repository; it has Firefly (0.80.1) for precise (http://ceph.com/docs/master/install/get-packages/). You should also install the Trusty kernel (linux-generic-lts-trusty).

Mit freundlichen Grüßen / Best Regards,
--
Consultant Dipl.-Inf. Uwe
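[A minimal sketch of what that looks like on precise, following the get-packages docs of that era; the key and repository URLs are the documented ones for that period and may have moved since:]

    # Add the Ceph release key and the firefly repository for precise
    $ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
    $ echo deb http://ceph.com/debian-firefly/ precise main | sudo tee /etc/apt/sources.list.d/ceph.list

    # Upgrade ceph and pull in the Trusty HWE kernel
    $ sudo apt-get update && sudo apt-get install ceph linux-generic-lts-trusty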

Re: [ceph-users] Upgrading Ceph 0.72 to 0.79 on Ubuntu 12.04

2014-06-21 Thread Shesha Sreenivasamurthy
Well, as mentioned, I do not want to upgrade the operating system. Can we not run Ceph 0.79 on Ubuntu 12.04?

On Jun 21, 2014 1:54 PM, Uwe Grohnwaldt u...@grohnwaldt.eu wrote:
> Hi, the best way to upgrade is to use the official Ceph repository. It has
> Firefly (0.80.1) for precise.

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-21 Thread Mark Kirkwood
I can reproduce this with ceph version 0.81-423-g1fb4574 on Ubuntu 14.04. I have a two-OSD cluster with data on two SATA spinners (WD Blacks) and journals on two SSDs (Crucial m4s). I'm getting about 3.5 MB/s (kernel and librbd) using your dd command with direct on. Leaving off direct I'm
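[The exact dd command is quoted earlier in the thread and not reproduced here; a typical direct-I/O invocation of that kind would look something like the following, with the mount path, bs, and count as placeholder values:]

    # Sequential write with the page cache bypassed (O_DIRECT)
    $ dd if=/dev/zero of=/mnt/rbd/testfile bs=4M count=256 oflag=direct

    # Same test without direct I/O, for comparison
    $ dd if=/dev/zero of=/mnt/rbd/testfile bs=4M count=256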

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-21 Thread Mark Kirkwood
On 22/06/14 14:09, Mark Kirkwood wrote:
Upgrading the VM to 14.04 and retesting the case *without* direct I get:
- 164 MB/s (librbd)
- 115 MB/s (kernel 3.13)
So I'm managing to get almost native performance out of the librbd case. I tweaked both filestore max and min sync intervals (100 and 10
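[For reference, those two settings as they would appear in ceph.conf with the values mentioned; placing them under [osd] is the usual convention, a sketch rather than the poster's exact config:]

    [osd]
    # Flush the journal to the filestore at most every 100 s,
    # and at least every 10 s
    filestore max sync interval = 100
    filestore min sync interval = 10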