Re: [ceph-users] Ceph osd crush weight to utilization incorrect on one node

2018-05-11 Thread Pardhiv Karri
Hi David, Here is the output of ceph df. We have a lot of space in our ceph cluster. We had 2 OSDs (266, 500) go down earlier due to a hardware issue and never got a chance to fix them.

GLOBAL:
    SIZE     AVAIL    RAW USED    %RAW USED
    1101T    701T     400T        36.37
POOLS:
    NAME
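
Not from the thread, just a rough sketch of how the two down OSDs and any near-full OSDs could be checked before rebalancing; the OSD ids 266 and 500 are taken from the message above, everything else is an assumption:

    # show the down OSDs and where they sit in the CRUSH tree
    ceph osd tree | grep -E 'osd\.(266|500) '
    # list any OSDs that have crossed the nearfull/full thresholds
    ceph health detail | grep -i full
    # per-OSD utilization and variance from the cluster average
    ceph osd df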

Re: [ceph-users] Ceph osd crush weight to utilization incorrect on one node

2018-05-11 Thread David Turner
What's your `ceph osd tree`, `ceph df`, and `ceph osd df` output? It sounds like you just have a fairly full cluster whose crush weights you haven't balanced. On Fri, May 11, 2018, 10:06 PM Pardhiv Karri wrote: > Hi David, > > Thanks for the reply. Yeah we are seeing that
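
Not from the thread, just a minimal sketch of how the three outputs David asks for might be captured and skimmed for imbalance; the file names are arbitrary:

    # save the cluster layout, pool usage, and per-OSD usage for comparison
    ceph osd tree > osd_tree.txt
    ceph df       > df.txt
    ceph osd df   > osd_df.txt
    # the WEIGHT and %USE columns in osd_df.txt show which OSDs are over- or under-weighted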

Re: [ceph-users] Ceph osd crush weight to utilization incorrect on one node

2018-05-11 Thread Pardhiv Karri
Hi David, Thanks for the reply. Yeah, we are seeing that 0.0001 usage on pretty much all OSDs. But this node is different: whether I give OSD 611 its full weight or just 0.2, OSD 611's utilization starts increasing. --Pardhiv K On Fri, May 11, 2018 at 10:50 AM, David Turner wrote: >

Re: [ceph-users] Ceph osd crush weight to utilization incorrect on one node

2018-05-11 Thread Pardhiv Karri
Hi Bryan, Thank you for the reply. We are on Hammer, ceph version 0.94.9 (fe6d859066244b97b24f09d46552afc2071e6f90). We tried the full weight on all OSDs on that node, and OSDs like 611 went above 90%, so we downsized and tested with only 0.2. Our PGs are at 119 for all 12 pools in the
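
Not from the thread, just a minimal sketch of how the 0.2 test described above might be run and watched; the OSD id 611 comes from the message, everything else is an assumption:

    # drop osd.611 to a small CRUSH weight and let the cluster rebalance
    ceph osd crush reweight osd.611 0.2
    # follow recovery/backfill progress (Ctrl-C when it settles)
    ceph -w
    # then check utilization of osd.611 (header line plus its row)
    ceph osd df | awk 'NR==1 || $1==611'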

Re: [ceph-users] Ceph osd crush weight to utilization incorrect on one node

2018-05-11 Thread David Turner
There was a time in the history of Ceph when a crush weight of 0.0 did not always behave the way you would expect. People had better experiences with a tiny non-zero weight, something like 0.0001. This is just a memory tickling in the back of my mind from things I've read on the ML years back. On Fri, May 11,
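
A minimal sketch of the tiny-non-zero-weight workaround David is recalling, assuming the goal is to effectively drain an OSD without setting its CRUSH weight to exactly 0; osd.611 is reused from the thread only as an example:

    # near-zero weight: the OSD keeps almost no data but stays in the CRUSH map
    ceph osd crush reweight osd.611 0.0001
    # compare with an outright zero weight, which is what reportedly behaved oddly
    ceph osd crush reweight osd.611 0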

Re: [ceph-users] Ceph osd crush weight to utilization incorrect on one node

2018-05-11 Thread Bryan Stillwell
> We have a large 1PB Ceph cluster. We recently added 6 nodes with 16 2TB disks each to the cluster. Five of the nodes rebalanced well without any issues, but the sixth/last node's OSDs started acting weird: as I increase the weight of one OSD its utilization doesn't change, but a different OSD on the

[ceph-users] Ceph osd crush weight to utilization incorrect on one node

2018-05-10 Thread Pardhiv Karri
Hi, We have a large 1PB Ceph cluster. We recently added 6 nodes with 16 2TB disks each to the cluster. Five of the nodes rebalanced well without any issues, but the sixth/last node's OSDs started acting weird: as I increase the weight of one OSD its utilization doesn't change, but a different OSD on the
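
Not part of the original message, just a rough sketch of how the sixth node's placement in the CRUSH hierarchy could be double-checked when reweighting one of its OSDs moves data onto a different OSD; the output file names are arbitrary:

    # confirm each new host bucket sits where you expect and carries the expected total weight
    ceph osd tree
    # dump and decompile the CRUSH map to inspect the buckets and rules directly
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt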