FWIW, you could also put this into your ceph.conf to explicitly place an OSD
into the correct chassis at start, for the case where you have other OSDs for
which you still want crush_update_on_start left at true:

[osd.34]
       osd crush location = "chassis=ceph-osd3-internal"
[osd.35]
       osd crush location = "chassis=ceph-osd3-internal"

etc. (likewise for osd.36/37, and for osd.21-31 with chassis=ceph-osd3-shelf1).
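
Note that this setting is only read when a daemon starts, so if osd.34-37 are
currently sitting under the host bucket you still need to restart them once so
they pick up the configured location. A minimal sketch, assuming a systemd
deployment with the standard ceph-osd@ units:

       # restart the relocated OSDs so they apply the configured crush location
       systemctl restart ceph-osd@34 ceph-osd@35 ceph-osd@36 ceph-osd@37

       # verify they end up (and stay) under chassis ceph-osd3-internal
       ceph osd tree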

Kind regards,
Caspar

2018-05-22 3:03 GMT+02:00 David Turner <[email protected]>:

> Your problem sounds like osd_crush_update_on_start.  While it is set to the
> default of true, when an OSD starts it tells the mons which server it is on
> and the mons update the crush map to reflect that.  Your OSDs are running
> on the host but are placed under a custom location in the crush map, so
> when they start they update the map to show the host they are actually
> running on.  You probably want to disable that option in the config so that
> your custom crush placement is not altered.
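>
> A minimal ceph.conf sketch of that (put it in the [osd] section to cover
> every OSD on the host, or under individual [osd.N] sections):
>
>        [osd]
>            osd crush update on start = false
>
> The option is only consulted when a daemon starts, so it has to be in place
> before the OSDs are next restarted.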
>
>
> On Mon, May 21, 2018, 6:29 PM Martin, Jeremy <[email protected]> wrote:
>
>> Hello,
>>
>> I have had a ceph cluster up and running for a few months now and all has
>> been well, except for today.  I updated two OSD nodes and those are still
>> fine; the two nodes are designated within a rack, the rack is the failure
>> domain, and they are essentially mirrors of each other.  The issue came
>> when I updated and rebooted the third node, which has internal disks and
>> external disks in a shelf; there the failure domain is at the individual
>> OSD level, as these are ordinary off-the-shelf disks for low-priority
>> storage that is not mission critical.  Before the reboot the crush map
>> looked and behaved correctly, but after the reboot the crush map had
>> changed and had to be rebuilt to get the storage back online.  All was
>> well after the reassignment, but I need to track down why it lost its
>> configuration.  The main difference is that the first four disks
>> (osd.34-37) are supposed to be assigned to the chassis ceph-osd3-internal
>> (as in the "before" output) and osd.21-31 to the chassis ceph-osd3-shelf1
>> (again as in the "before" output).  After the reboot everything (34-37 and
>> 21-31) was reassigned to the host ceph-osd3.  The update was from 12.2.4
>> to 12.2.5.  Any thoughts?
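>>
>> For reference, putting the OSDs back into the custom chassis comes down to
>> commands roughly like these (weights taken from the "before" output):
>>
>>        ceph osd crush set osd.34 0.42899 chassis=ceph-osd3-internal
>>        ceph osd crush set osd.21 1.81898 chassis=ceph-osd3-shelf1
>>        # ...and likewise for osd.35-37 and osd.22-31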
>>
>> Jeremy
>>
>> Before
>>
>> ID  CLASS WEIGHT   TYPE NAME                      STATUS REWEIGHT PRI-AFF
>> -58              0 root osd3-internal
>> -54              0     chassis ceph-osd3-internal
>>  34   hdd  0.42899         osd.34                     up  1.00000 1.00000
>>  35   hdd  0.42899         osd.35                     up  1.00000 1.00000
>>  36   hdd  0.42899         osd.36                     up  1.00000 1.00000
>>  37   hdd  0.42899         osd.37                     up  1.00000 1.00000
>> -50              0 root osd3-shelf1
>> -56              0     chassis ceph-osd3-shelf1
>>  21   hdd  1.81898         osd.21                     up  1.00000 1.00000
>>  22   hdd  1.81898         osd.22                     up  1.00000 1.00000
>>  23   hdd  1.81898         osd.23                     up  1.00000 1.00000
>>  24   hdd  1.81898         osd.24                     up  1.00000 1.00000
>>  25   hdd  1.81898         osd.25                     up  1.00000 1.00000
>>  26   hdd  1.81898         osd.26                     up  1.00000 1.00000
>>  27   hdd  1.81898         osd.27                     up  1.00000 1.00000
>>  28   hdd  1.81898         osd.28                     up  1.00000 1.00000
>>  29   hdd  1.81898         osd.29                     up  1.00000 1.00000
>>  30   hdd  1.81898         osd.30                     up  1.00000 1.00000
>>  31   hdd  1.81898         osd.31                     up  1.00000 1.00000
>>  -7              0 host ceph-osd3
>>  -1       47.21199 root default
>> -40       23.59000     rack mainehall
>>  -3       23.59000         host ceph-osd1
>>   0   hdd  1.81898             osd.0                  up  1.00000 1.00000
>>       Additional OSDs left off for brevity
>> -42       23.62199     rack rangleyhall
>>  -5       23.62199         host ceph-osd2
>>  11   hdd  1.81898             osd.11                 up  1.00000 1.00000
>>       Additional OSDs left off for brevity
>>
>> After
>>
>> ID  CLASS WEIGHT   TYPE NAME                      STATUS REWEIGHT PRI-AFF
>> -58              0 root osd3-internal
>> -54              0     chassis ceph-osd3-internal
>> -50              0 root osd3-shelf1
>> -56              0     chassis ceph-osd3-shelf1
>> -7               0 host ceph-osd3
>>  21   hdd  1.81898         osd.21                     up  1.00000 1.00000
>>  22   hdd  1.81898         osd.22                     up  1.00000 1.00000
>>  23   hdd  1.81898         osd.23                     up  1.00000 1.00000
>>  24   hdd  1.81898         osd.24                     up  1.00000 1.00000
>>  25   hdd  1.81898         osd.25                     up  1.00000 1.00000
>>  26   hdd  1.81898         osd.26                     up  1.00000 1.00000
>>  27   hdd  1.81898         osd.27                     up  1.00000 1.00000
>>  28   hdd  1.81898         osd.28                     up  1.00000 1.00000
>>  29   hdd  1.81898         osd.29                     up  1.00000 1.00000
>>  30   hdd  1.81898         osd.30                     up  1.00000 1.00000
>>  31   hdd  1.81898         osd.31                     up  1.00000 1.00000
>>  34   hdd  0.42899         osd.34                     up  1.00000 1.00000
>>  35   hdd  0.42899         osd.35                     up  1.00000 1.00000
>>  36   hdd  0.42899         osd.36                     up  1.00000 1.00000
>>  37   hdd  0.42899         osd.37                     up  1.00000 1.00000
>>  -1       47.21199 root default
>> -40       23.59000     rack mainehall
>>  -3       23.59000         host ceph-osd1
>>   0   hdd  1.81898             osd.0                  up  1.00000 1.00000
>>       Additional OSDs left off for brevity
>> -42       23.62199     rack rangleyhall
>>  -5       23.62199         host ceph-osd2
>>  11   hdd  1.81898             osd.11                 up  1.00000 1.00000
>>       Additional OSDs left off for brevity