Re: [ceph-users] documentation: osd crash tunables optimal and "some data movement"

2016-12-08 Thread David Welch

I've seen this before and would recommend upgrading from Hammer.


On 12/08/2016 04:26 PM, Peter Gervai wrote:

"Hello List,

This could be turned into a bug report if anyone feels like it; I mostly
just want to share my harsh experience from today.

We had a few OSDs getting full while others were below 40%; even though
they were weighted properly (the full 800 GB ones at 0.800 and the
fairly empty 2.7 TB ones at 2.700), the data distribution did not work
out well.

So I ran
   ceph osd reweight-by-utilization
which resulted in some stuck unclean PGs (on top of ~10% of the PGs migrating).
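
For context, the per-OSD fill level, weight and reweight can be checked
with ceph osd df (available since Hammer, as far as I know); something
along these lines, where the threshold argument of reweight-by-utilization
is optional and, if I remember correctly, defaults to 120 (percent of the
average utilization):

   # show utilization, crush weight and reweight per OSD, laid out as the CRUSH tree
   ceph osd df tree

   # only touch OSDs that are more than 20% above the average utilization
   ceph osd reweight-by-utilization 120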

Net wisdom said that the CRUSH map and its probabilities were the cause
(in some not really defined way, with mentions of probabilities
rejecting OSDs, which without context were pretty hard to interpret,
but I accepted "CRUSH problem, don't ask more"), and some people
mentioned that the CRUSH tunables should be set to optimal. I tried to
see what 'optimal' would actually change, but that's not trivial: there
seems to be no documentation on how to _see_ the current values
[update: I figured out that exporting the crushmap and decompiling it
with crushtool lists the current tunables at the top] or on which
preset contains which values.
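
For the record, this is roughly what the export/decompile round trip
looks like; ceph osd crush show-tunables should print the same values
directly, without touching the crushmap:

   # export the binary crushmap, decompile it and look at the tunables at the top
   ceph osd getcrushmap -o crushmap.bin
   crushtool -d crushmap.bin -o crushmap.txt
   head -n 15 crushmap.txt

   # or ask the cluster directly
   ceph osd crush show-tunables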

This is ceph version 0.94.9
(fe6d859066244b97b24f09d46552afc2071e6f90), aka hammer. It was
installed as hammer as far as I remember.

Now, the documentation says that if I set the tunables to optimal, quote:
"this will result in some data movement (possibly as much as 10%)."
(Side note: ceph wasn't complaining about the tunables.)

So, that's okay:
   ceph osd crush tunables optimal

Setting it resulted in the not-quite-funny amount of 65% misplaced
objects, which didn't make me happy, nor the cluster users, due to the
extreme IO load. Fortunately, setting it back to "legacy" made the
whole shitstorm stop. (I will start it again soon, in the late evening,
when it won't cause too much harm.)
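
For completeness, these are roughly the commands involved; the
misplaced/degraded percentage can be watched with ceph -s (or
continuously with ceph -w) while the change takes effect:

   # switch the CRUSH tunables profile (this is what caused the ~65% misplaced objects here)
   ceph osd crush tunables optimal

   # watch recovery and the misplaced percentage
   ceph -s

   # revert to the old behaviour to stop the mass migration
   ceph osd crush tunables legacy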

So, it's not always "some", and not always "possibly as much as 10%".
Reading through the various tunable profiles, it seems some of the
changes cause heavy data migration, so I don't quite see why "some data
movement" is phrased that way: 10% is possible, but by no means guaranteed.
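
As an afterthought, it should be possible to estimate the movement
offline before committing, by mapping the PGs against the current and a
modified crushmap with osdmaptool and diffing the results. A rough
sketch, assuming the --test-map-pgs-dump and --import-crush options are
available in your release (check the osdmaptool man page):

   # grab the current osdmap and crushmap from the cluster
   ceph osd getmap -o osdmap.bin
   ceph osd getcrushmap -o crushmap.bin

   # PG -> OSD mappings with the current tunables
   osdmaptool osdmap.bin --test-map-pgs-dump > before.txt

   # decompile, edit the tunable lines at the top, recompile
   crushtool -d crushmap.bin -o crushmap.txt
   crushtool -c crushmap.txt -o crushmap.new

   # inject the modified crushmap into a copy of the osdmap and map again
   cp osdmap.bin osdmap.new
   osdmaptool osdmap.new --import-crush crushmap.new
   osdmaptool osdmap.new --test-map-pgs-dump > after.txt

   # the fraction of PG lines that differ approximates the data that would move
   diff before.txt after.txt | grep -c '^>'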

Peter


--
~~
David Welch
DevOps
ARS
http://thinkars.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

