Re: [ceph-users] v0.90 released

2014-12-23 Thread René Gallati
Hello, so I upgraded my cluster from 0.89 to 0.90 and now I get: ~# ceph health HEALTH_WARN too many PGs per OSD (864 > max 300) That is a new one. I had too few, but never too many. Is this a problem that needs attention, or can it be ignored? Or is there now a command to shrink the number of PGs? The message
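The threshold behind this warning is the monitor option mon_pg_warn_max_per_osd, which defaulted to 300 in this release. A minimal sketch of raising it (assuming that option name for v0.90; this only silences the check and does not change any placement groups):

~# ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 1000'

or, to persist the setting, in ceph.conf under the [mon] section:

[mon]
mon pg warn max per osd = 1000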

Re: [ceph-users] v0.90 released

2014-12-23 Thread René Gallati
Hello, On 23.12.2014 12:14, Henrik Korkuc wrote: On 12/23/14 12:57, René Gallati wrote: Hello, so I upgraded my cluster from 0.89 to 0.90 and now I get: ~# ceph health HEALTH_WARN too many PGs per OSD (864 > max 300) That is a new one. I had too few, but never too many. Is this a problem
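For context, the figure the monitor checks is the total number of PG instances (primaries plus replicas) mapped to each OSD. As a hypothetical worked example, with numbers chosen only to reproduce the 864 above and not taken from this cluster:

PGs per OSD = sum over pools of (pg_num x replica size) / number of OSDs
e.g. 27 pools x 256 PGs x 3 replicas / 24 OSDs = 20736 / 24 = 864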

Re: [ceph-users] Ceph performance - 10 times slower

2014-11-20 Thread René Gallati
Hello Mark, sorry for barging in here, but are you sure this is correct? In my tests the -b parameter in rados bench does exactly one thing: it uses the value in its output to calculate IO bandwidth, taking the OPS value and multiplying it by the -b value for display. However it
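For reference, a typical invocation of the benchmark under discussion (the pool name, object size, and thread count here are assumptions, not values from the thread):

~# rados -p rbd bench 60 write -b 4096 -t 16

Here -b sets the object size in bytes and -t the number of concurrent operations; the Bandwidth (MB/sec) column in the output is then derived from the operations completed per second multiplied by the -b object size, which is the calculation described above.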