Re: [ceph-users] after reboot node appear outside the root root tree

2017-09-13 Thread Maxime Guyot
Hi,

This is a common problem when using a custom CRUSH map: the default behavior
is to update the OSD's location in the CRUSH map when it starts. Did you
keep the defaults there?

If that is the problem, you can either:
1) Disable the update on start option: "osd crush update on start = false"
(see
http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-location)
2) Customize the script that defines the location of OSDs with "crush location
hook = /path/to/customized-ceph-crush-location" (see
https://github.com/ceph/ceph/blob/master/src/ceph-crush-location.in); a rough
sketch of both options follows below.
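
To make that concrete, a rough sketch of both options (the hook path and the
exact location string are illustrative, not taken from this thread; note that
the location keys must be valid type names from your CRUSH map, e.g. "node"
here instead of the default "host"):

# Option 1: ceph.conf on the OSD nodes (restart the OSDs afterwards)
[osd]
osd crush update on start = false

# Option 2: ceph.conf pointing the OSDs at a custom location hook
[osd]
crush location hook = /usr/local/bin/customized-ceph-crush-location

# /usr/local/bin/customized-ceph-crush-location -- run once per OSD, it should
# print the desired CRUSH location on stdout:
#!/bin/sh
echo "root=root rack=rack2 node=$(hostname -s)"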

Cheers,
Maxime

On Wed, 13 Sep 2017 at 18:35 German Anders  wrote:

> *# ceph health detail*
> HEALTH_OK
>
> *# ceph osd stat*
> 48 osds: 48 up, 48 in
>
> *# ceph pg stat*
> 3200 pgs: 3200 active+clean; 5336 MB data, 79455 MB used, 53572 GB / 53650
> GB avail
>
>
> *German*
>
> 2017-09-13 13:24 GMT-03:00 dE :
>
>> On 09/13/2017 09:08 PM, German Anders wrote:
>>
>> Hi cephers,
>>
>> I'm having an issue with a newly created cluster, 12.2.0
>> (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc). Basically, when I
>> reboot one of the nodes, it comes back outside of the root bucket in the
>> tree:
>>
>> root@cpm01:~# ceph osd tree
>> ID  CLASS WEIGHT TYPE NAME          STATUS REWEIGHT PRI-AFF
>> -15        12.0  *root default*
>> * 36  nvme  1.0      osd.36             up      1.0     1.0*
>> * 37  nvme  1.0      osd.37             up      1.0     1.0*
>> * 38  nvme  1.0      osd.38             up      1.0     1.0*
>> * 39  nvme  1.0      osd.39             up      1.0     1.0*
>> * 40  nvme  1.0      osd.40             up      1.0     1.0*
>> * 41  nvme  1.0      osd.41             up      1.0     1.0*
>> * 42  nvme  1.0      osd.42             up      1.0     1.0*
>> * 43  nvme  1.0      osd.43             up      1.0     1.0*
>> * 44  nvme  1.0      osd.44             up      1.0     1.0*
>> * 45  nvme  1.0      osd.45             up      1.0     1.0*
>> * 46  nvme  1.0      osd.46             up      1.0     1.0*
>> * 47  nvme  1.0      osd.47             up      1.0     1.0*
>>  -7        36.0  *root root*
>>  -5        24.0      rack rack1
>>  -1        12.0          node cpn01
>>   0         1.0              osd.0          up      1.0     1.0
>>   1         1.0              osd.1          up      1.0     1.0
>>   2         1.0              osd.2          up      1.0     1.0
>>   3         1.0              osd.3          up      1.0     1.0
>>   4         1.0              osd.4          up      1.0     1.0
>>   5         1.0              osd.5          up      1.0     1.0
>>   6         1.0              osd.6          up      1.0     1.0
>>   7         1.0              osd.7          up      1.0     1.0
>>   8         1.0              osd.8          up      1.0     1.0
>>   9         1.0              osd.9          up      1.0     1.0
>>  10         1.0              osd.10         up      1.0     1.0
>>  11         1.0              osd.11         up      1.0     1.0
>>  -3        12.0          node cpn03
>>  24         1.0              osd.24         up      1.0     1.0
>>  25         1.0              osd.25         up      1.0     1.0
>>  26         1.0              osd.26         up      1.0     1.0
>>  27         1.0              osd.27         up      1.0     1.0
>>  28         1.0              osd.28         up      1.0     1.0
>>  29         1.0              osd.29         up      1.0     1.0
>>  30         1.0              osd.30         up      1.0     1.0
>>  31         1.0              osd.31         up      1.0     1.0
>>  32         1.0              osd.32         up      1.0     1.0
>>  33         1.0              osd.33         up      1.0     1.0
>>  34         1.0              osd.34         up      1.0     1.0
>>  35         1.0              osd.35         up      1.0     1.0
>>  -6        12.0      rack rack2
>>  -2        12.0          node cpn02
>>  12         1.0              osd.12         up      1.0     1.0
>>  13         1.0              osd.13         up      1.0     1.0
>>  14         1.0              osd.14         up      1.0     1.0
>>  15         1.0              osd.15         up      1.0     1.0
>>  16         1.0              osd.16         up      1.0     1.0
>>  17         1.0              osd.17         up      1.0     1.0
>>  18         1.0              osd.18         up      1.0     1.0
>>  19         1.0              osd.19         up      1.0     1.0
>>  20         1.0              osd.20         up      1.0     1.0
>>  21         1.0              osd.21         up      1.0     1.0
>>  22         1.0              osd.22         up      1.0     1.0
>>  23         1.0              osd.23         up      1.0     1.0
>> * -4          0           node cpn04*
>>
>> Any ideas why this happens, and how can I fix it? It is supposed to be
>> inside rack2.
>>
>> Thanks in advance,
>>
>> Best,
>>
>> *German*
>>
>>

Re: [ceph-users] after reboot node appear outside the root root tree

2017-09-13 Thread Luis Periquito
What's your "osd crush update on start" option?

further information can be found
http://docs.ceph.com/docs/master/rados/operations/crush-map/
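
(For reference, one way to check what a running OSD currently has for that
option is the admin socket on an OSD node; osd.0 below is just an example id.)

ceph daemon osd.0 config get osd_crush_update_on_start
# or grep the full runtime config:
ceph daemon osd.0 config show | grep crush_update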

On Wed, Sep 13, 2017 at 4:38 PM, German Anders  wrote:
> Hi cephers,
>
> I'm having an issue with a newly created cluster, 12.2.0
> (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc). Basically, when I
> reboot one of the nodes, it comes back outside of the root bucket in the
> tree:
>
> root@cpm01:~# ceph osd tree
> ID  CLASS WEIGHT TYPE NAME          STATUS REWEIGHT PRI-AFF
> -15        12.0  root default
>  36  nvme  1.0      osd.36             up      1.0     1.0
>  37  nvme  1.0      osd.37             up      1.0     1.0
>  38  nvme  1.0      osd.38             up      1.0     1.0
>  39  nvme  1.0      osd.39             up      1.0     1.0
>  40  nvme  1.0      osd.40             up      1.0     1.0
>  41  nvme  1.0      osd.41             up      1.0     1.0
>  42  nvme  1.0      osd.42             up      1.0     1.0
>  43  nvme  1.0      osd.43             up      1.0     1.0
>  44  nvme  1.0      osd.44             up      1.0     1.0
>  45  nvme  1.0      osd.45             up      1.0     1.0
>  46  nvme  1.0      osd.46             up      1.0     1.0
>  47  nvme  1.0      osd.47             up      1.0     1.0
>  -7        36.0  root root
>  -5        24.0      rack rack1
>  -1        12.0          node cpn01
>   0         1.0              osd.0          up      1.0     1.0
>   1         1.0              osd.1          up      1.0     1.0
>   2         1.0              osd.2          up      1.0     1.0
>   3         1.0              osd.3          up      1.0     1.0
>   4         1.0              osd.4          up      1.0     1.0
>   5         1.0              osd.5          up      1.0     1.0
>   6         1.0              osd.6          up      1.0     1.0
>   7         1.0              osd.7          up      1.0     1.0
>   8         1.0              osd.8          up      1.0     1.0
>   9         1.0              osd.9          up      1.0     1.0
>  10         1.0              osd.10         up      1.0     1.0
>  11         1.0              osd.11         up      1.0     1.0
>  -3        12.0          node cpn03
>  24         1.0              osd.24         up      1.0     1.0
>  25         1.0              osd.25         up      1.0     1.0
>  26         1.0              osd.26         up      1.0     1.0
>  27         1.0              osd.27         up      1.0     1.0
>  28         1.0              osd.28         up      1.0     1.0
>  29         1.0              osd.29         up      1.0     1.0
>  30         1.0              osd.30         up      1.0     1.0
>  31         1.0              osd.31         up      1.0     1.0
>  32         1.0              osd.32         up      1.0     1.0
>  33         1.0              osd.33         up      1.0     1.0
>  34         1.0              osd.34         up      1.0     1.0
>  35         1.0              osd.35         up      1.0     1.0
>  -6        12.0      rack rack2
>  -2        12.0          node cpn02
>  12         1.0              osd.12         up      1.0     1.0
>  13         1.0              osd.13         up      1.0     1.0
>  14         1.0              osd.14         up      1.0     1.0
>  15         1.0              osd.15         up      1.0     1.0
>  16         1.0              osd.16         up      1.0     1.0
>  17         1.0              osd.17         up      1.0     1.0
>  18         1.0              osd.18         up      1.0     1.0
>  19         1.0              osd.19         up      1.0     1.0
>  20         1.0              osd.20         up      1.0     1.0
>  21         1.0              osd.21         up      1.0     1.0
>  22         1.0              osd.22         up      1.0     1.0
>  23         1.0              osd.23         up      1.0     1.0
>  -4          0           node cpn04
>
> Any ideas why this happens, and how can I fix it? It is supposed to be inside
> rack2.
>
> Thanks in advance,
>
> Best,
>
> German
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] after reboot node appear outside the root root tree

2017-09-13 Thread German Anders
Thanks a lot Maxime. I set osd_crush_update_on_start = false in
ceph.conf, pushed it to all the nodes, and then created a map file:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14
device 15 osd.15
device 16 osd.16
device 17 osd.17
device 18 osd.18
device 19 osd.19
device 20 osd.20
device 21 osd.21
device 22 osd.22
device 23 osd.23
device 24 osd.24
device 25 osd.25
device 26 osd.26
device 27 osd.27
device 28 osd.28
device 29 osd.29
device 30 osd.30
device 31 osd.31
device 32 osd.32
device 33 osd.33
device 34 osd.34
device 35 osd.35
device 36 osd.36
device 37 osd.37
device 38 osd.38
device 39 osd.39
device 40 osd.40
device 41 osd.41
device 42 osd.42
device 43 osd.43
device 44 osd.44
device 45 osd.45
device 46 osd.46
device 47 osd.47

# types
type 0 osd
type 1 node
type 2 rack
type 3 root

# buckets
node cpn01 {
id -1 # do not change unnecessarily
# weight 12.000
alg straw
hash 0 # rjenkins1
item osd.0 weight 1.000
item osd.1 weight 1.000
item osd.2 weight 1.000
item osd.3 weight 1.000
item osd.4 weight 1.000
item osd.5 weight 1.000
item osd.6 weight 1.000
item osd.7 weight 1.000
item osd.8 weight 1.000
item osd.9 weight 1.000
item osd.10 weight 1.000
item osd.11 weight 1.000
}
node cpn02 {
id -2 # do not change unnecessarily
# weight 12.000
alg straw
hash 0 # rjenkins1
item osd.12 weight 1.000
item osd.13 weight 1.000
item osd.14 weight 1.000
item osd.15 weight 1.000
item osd.16 weight 1.000
item osd.17 weight 1.000
item osd.18 weight 1.000
item osd.19 weight 1.000
item osd.20 weight 1.000
item osd.21 weight 1.000
item osd.22 weight 1.000
item osd.23 weight 1.000
}
node cpn03 {
id -3 # do not change unnecessarily
# weight 12.000
alg straw
hash 0 # rjenkins1
item osd.24 weight 1.000
item osd.25 weight 1.000
item osd.26 weight 1.000
item osd.27 weight 1.000
item osd.28 weight 1.000
item osd.29 weight 1.000
item osd.30 weight 1.000
item osd.31 weight 1.000
item osd.32 weight 1.000
item osd.33 weight 1.000
item osd.34 weight 1.000
item osd.35 weight 1.000
}
node cpn04 {
id -4 # do not change unnecessarily
# weight 12.000
alg straw
hash 0 # rjenkins1
item osd.36 weight 1.000
item osd.37 weight 1.000
item osd.38 weight 1.000
item osd.39 weight 1.000
item osd.40 weight 1.000
item osd.41 weight 1.000
item osd.42 weight 1.000
item osd.43 weight 1.000
item osd.44 weight 1.000
item osd.45 weight 1.000
item osd.46 weight 1.000
item osd.47 weight 1.000
}
rack rack1 {
id -5 # do not change unnecessarily
# weight 24.000
alg straw
hash 0 # rjenkins1
item cpn01 weight 12.000
item cpn03 weight 12.000
}
rack rack2 {
id -6 # do not change unnecessarily
# weight 24.000
alg straw
hash 0 # rjenkins1
item cpn02 weight 12.000
item cpn04 weight 12.000
}
root root {
id -7 # do not change unnecessarily
# weight 48.000
alg straw
hash 0 # rjenkins1
item rack1 weight 24.000
item rack2 weight 24.000
}

# rules
rule replicated_rule {
id 0
type replicated
min_size 1
max_size 10
step take root
step chooseleaf firstn 0 type node
step emit
}

# end crush map
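
(An optional sanity check before injecting a hand-written map: crushtool's
test mode can confirm that replicated_rule really places replicas in distinct
"node" buckets. The output file name, replica count and input range below are
arbitrary, just a sketch.)

crushtool -c map.txt -o crushmap.test
crushtool -i crushmap.test --test --rule 0 --num-rep 3 --show-statistics
crushtool -i crushmap.test --test --rule 0 --num-rep 3 --show-mappings --min-x 0 --max-x 9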

and finally issued:
# *crushtool -c map.txt -o crushmap*
# *ceph osd setcrushmap -i crushmap*

Since it's a new cluster, there is no problem with rebalancing.
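
(For anyone repeating this on a cluster that already holds data, a sketch of
the obvious precautions; the file names are arbitrary.)

ceph osd getcrushmap -o crushmap.backup   # before setcrushmap, for rollback
ceph osd tree                             # after setcrushmap, to confirm the layout
# rollback if something looks wrong:
# ceph osd setcrushmap -i crushmap.backup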


Best,

*German*

2017-09-13 13:46 GMT-03:00 Maxime Guyot :

> Hi,
>
> This is a common problem when using a custom CRUSH map: the default behavior
> is to update the OSD's location in the CRUSH map when it starts. Did you
> keep the defaults there?
>
> If that is the problem, you can either:
> 1) Disable the update on start option: "osd crush update on start = false"
> (see
> http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-location)
> 2) Customize the script defining the location of OSDs with "crush location
> hook = /path/to/customized-ceph-crush-location" (see
> https://github.com/ceph/ceph/blob/master/src/ceph-crush-location.in).
>
> Cheers,
> Maxime
>
> On Wed, 13 Sep 2017 at 18:35 German Anders  wrote:
>
>> *# ceph health detail*
>> HEALTH_OK
>>
>> *# ceph osd stat*
>> 48 osds: 48 up, 48 in
>>
>> *# ceph pg stat*
>> 3200 pgs: 3200 active+clean; 5336 MB data, 79455 MB used, 53572 GB /
>> 53650 GB avail
>>
>>
>> *German*
>>
>> 2017-09-13 13:24 GMT-03:00 dE :
>>
>>> On 09/13/2017 09:08 PM, German Anders wrote:
>>>
>>> Hi cephers,
>>>
>>> I'm having an issue with a newly created cluster, 12.2.0
>>> (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc). Basically, when I
>>> reboot one of the nodes, it comes back outside of the root bucket in the
>>> tree:
>>>
>>> root@cpm01:~# ceph 

Re: [ceph-users] after reboot node appear outside the root root tree

2017-09-13 Thread German Anders
*# ceph health detail*
HEALTH_OK

*# ceph osd stat*
48 osds: 48 up, 48 in

*# ceph pg stat*
3200 pgs: 3200 active+clean; 5336 MB data, 79455 MB used, 53572 GB / 53650
GB avail


*German*

2017-09-13 13:24 GMT-03:00 dE :

> On 09/13/2017 09:08 PM, German Anders wrote:
>
> Hi cephers,
>
> I'm having an issue with a newly created cluster, 12.2.0
> (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc). Basically, when I
> reboot one of the nodes, it comes back outside of the root bucket in the
> tree:
>
> root@cpm01:~# ceph osd tree
> ID  CLASS WEIGHT TYPE NAME          STATUS REWEIGHT PRI-AFF
> -15        12.0  *root default*
> * 36  nvme  1.0      osd.36             up      1.0     1.0*
> * 37  nvme  1.0      osd.37             up      1.0     1.0*
> * 38  nvme  1.0      osd.38             up      1.0     1.0*
> * 39  nvme  1.0      osd.39             up      1.0     1.0*
> * 40  nvme  1.0      osd.40             up      1.0     1.0*
> * 41  nvme  1.0      osd.41             up      1.0     1.0*
> * 42  nvme  1.0      osd.42             up      1.0     1.0*
> * 43  nvme  1.0      osd.43             up      1.0     1.0*
> * 44  nvme  1.0      osd.44             up      1.0     1.0*
> * 45  nvme  1.0      osd.45             up      1.0     1.0*
> * 46  nvme  1.0      osd.46             up      1.0     1.0*
> * 47  nvme  1.0      osd.47             up      1.0     1.0*
>  -7        36.0  *root root*
>  -5        24.0      rack rack1
>  -1        12.0          node cpn01
>   0         1.0              osd.0          up      1.0     1.0
>   1         1.0              osd.1          up      1.0     1.0
>   2         1.0              osd.2          up      1.0     1.0
>   3         1.0              osd.3          up      1.0     1.0
>   4         1.0              osd.4          up      1.0     1.0
>   5         1.0              osd.5          up      1.0     1.0
>   6         1.0              osd.6          up      1.0     1.0
>   7         1.0              osd.7          up      1.0     1.0
>   8         1.0              osd.8          up      1.0     1.0
>   9         1.0              osd.9          up      1.0     1.0
>  10         1.0              osd.10         up      1.0     1.0
>  11         1.0              osd.11         up      1.0     1.0
>  -3        12.0          node cpn03
>  24         1.0              osd.24         up      1.0     1.0
>  25         1.0              osd.25         up      1.0     1.0
>  26         1.0              osd.26         up      1.0     1.0
>  27         1.0              osd.27         up      1.0     1.0
>  28         1.0              osd.28         up      1.0     1.0
>  29         1.0              osd.29         up      1.0     1.0
>  30         1.0              osd.30         up      1.0     1.0
>  31         1.0              osd.31         up      1.0     1.0
>  32         1.0              osd.32         up      1.0     1.0
>  33         1.0              osd.33         up      1.0     1.0
>  34         1.0              osd.34         up      1.0     1.0
>  35         1.0              osd.35         up      1.0     1.0
>  -6        12.0      rack rack2
>  -2        12.0          node cpn02
>  12         1.0              osd.12         up      1.0     1.0
>  13         1.0              osd.13         up      1.0     1.0
>  14         1.0              osd.14         up      1.0     1.0
>  15         1.0              osd.15         up      1.0     1.0
>  16         1.0              osd.16         up      1.0     1.0
>  17         1.0              osd.17         up      1.0     1.0
>  18         1.0              osd.18         up      1.0     1.0
>  19         1.0              osd.19         up      1.0     1.0
>  20         1.0              osd.20         up      1.0     1.0
>  21         1.0              osd.21         up      1.0     1.0
>  22         1.0              osd.22         up      1.0     1.0
>  23         1.0              osd.23         up      1.0     1.0
> * -4          0           node cpn04*
>
> Any ideas why this happens, and how can I fix it? It is supposed to be
> inside rack2.
>
> Thanks in advance,
>
> Best,
>
> *German*
>
>
> ___
> ceph-users mailing list
> ceph-us...@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> Can we see the output of ceph health detail? Maybe they're in the
> process of recovering.
>
> Also post the output of ceph osd stat so we can see which OSDs are up/in,
> etc., and ceph pg stat to see the status of the various PGs (a pointer to
> the recovery process).
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] after reboot node appear outside the root root tree

2017-09-13 Thread dE

On 09/13/2017 09:08 PM, German Anders wrote:

Hi cephers,

I'm having an issue with a newly created cluster, 12.2.0
(32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc). Basically,
when I reboot one of the nodes, it comes back outside of the root
bucket in the tree:


root@cpm01:~# ceph osd tree
ID  CLASS WEIGHT TYPE NAME          STATUS REWEIGHT PRI-AFF
-15        12.0  *root default*
* 36  nvme  1.0      osd.36             up      1.0     1.0*
* 37  nvme  1.0      osd.37             up      1.0     1.0*
* 38  nvme  1.0      osd.38             up      1.0     1.0*
* 39  nvme  1.0      osd.39             up      1.0     1.0*
* 40  nvme  1.0      osd.40             up      1.0     1.0*
* 41  nvme  1.0      osd.41             up      1.0     1.0*
* 42  nvme  1.0      osd.42             up      1.0     1.0*
* 43  nvme  1.0      osd.43             up      1.0     1.0*
* 44  nvme  1.0      osd.44             up      1.0     1.0*
* 45  nvme  1.0      osd.45             up      1.0     1.0*
* 46  nvme  1.0      osd.46             up      1.0     1.0*
* 47  nvme  1.0      osd.47             up      1.0     1.0*
 -7        36.0  *root root*
 -5        24.0      rack rack1
 -1        12.0          node cpn01
  0         1.0              osd.0          up      1.0     1.0
  1         1.0              osd.1          up      1.0     1.0
  2         1.0              osd.2          up      1.0     1.0
  3         1.0              osd.3          up      1.0     1.0
  4         1.0              osd.4          up      1.0     1.0
  5         1.0              osd.5          up      1.0     1.0
  6         1.0              osd.6          up      1.0     1.0
  7         1.0              osd.7          up      1.0     1.0
  8         1.0              osd.8          up      1.0     1.0
  9         1.0              osd.9          up      1.0     1.0
 10         1.0              osd.10         up      1.0     1.0
 11         1.0              osd.11         up      1.0     1.0
 -3        12.0          node cpn03
 24         1.0              osd.24         up      1.0     1.0
 25         1.0              osd.25         up      1.0     1.0
 26         1.0              osd.26         up      1.0     1.0
 27         1.0              osd.27         up      1.0     1.0
 28         1.0              osd.28         up      1.0     1.0
 29         1.0              osd.29         up      1.0     1.0
 30         1.0              osd.30         up      1.0     1.0
 31         1.0              osd.31         up      1.0     1.0
 32         1.0              osd.32         up      1.0     1.0
 33         1.0              osd.33         up      1.0     1.0
 34         1.0              osd.34         up      1.0     1.0
 35         1.0              osd.35         up      1.0     1.0
 -6        12.0      rack rack2
 -2        12.0          node cpn02
 12         1.0              osd.12         up      1.0     1.0
 13         1.0              osd.13         up      1.0     1.0
 14         1.0              osd.14         up      1.0     1.0
 15         1.0              osd.15         up      1.0     1.0
 16         1.0              osd.16         up      1.0     1.0
 17         1.0              osd.17         up      1.0     1.0
 18         1.0              osd.18         up      1.0     1.0
 19         1.0              osd.19         up      1.0     1.0
 20         1.0              osd.20         up      1.0     1.0
 21         1.0              osd.21         up      1.0     1.0
 22         1.0              osd.22         up      1.0     1.0
 23         1.0              osd.23         up      1.0     1.0
* -4          0           node cpn04*

Any ideas why this happens, and how can I fix it? It is supposed to be
inside rack2.


Thanks in advance,

Best,


*German*


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Can we see the output of ceph health detail? Maybe they're in the
process of recovering.

Also post the output of ceph osd stat so we can see which OSDs are up/in,
etc., and ceph pg stat to see the status of the various PGs (a pointer to
the recovery process).
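
(Pulling those checks together in one place; all standard ceph CLI commands:)

ceph health detail
ceph osd stat
ceph pg stat
ceph osd tree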


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] after reboot node appear outside the root root tree

2017-09-13 Thread German Anders
Hi cephers,

I'm having an issue with a newly created cluster, 12.2.0
(32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc). Basically, when I
reboot one of the nodes, it comes back outside of the root bucket in the
tree:

root@cpm01:~# ceph osd tree
ID  CLASS WEIGHT TYPE NAME          STATUS REWEIGHT PRI-AFF
-15        12.0  *root default*
* 36  nvme  1.0      osd.36             up      1.0     1.0*
* 37  nvme  1.0      osd.37             up      1.0     1.0*
* 38  nvme  1.0      osd.38             up      1.0     1.0*
* 39  nvme  1.0      osd.39             up      1.0     1.0*
* 40  nvme  1.0      osd.40             up      1.0     1.0*
* 41  nvme  1.0      osd.41             up      1.0     1.0*
* 42  nvme  1.0      osd.42             up      1.0     1.0*
* 43  nvme  1.0      osd.43             up      1.0     1.0*
* 44  nvme  1.0      osd.44             up      1.0     1.0*
* 45  nvme  1.0      osd.45             up      1.0     1.0*
* 46  nvme  1.0      osd.46             up      1.0     1.0*
* 47  nvme  1.0      osd.47             up      1.0     1.0*
 -7        36.0  *root root*
 -5        24.0      rack rack1
 -1        12.0          node cpn01
  0         1.0              osd.0          up      1.0     1.0
  1         1.0              osd.1          up      1.0     1.0
  2         1.0              osd.2          up      1.0     1.0
  3         1.0              osd.3          up      1.0     1.0
  4         1.0              osd.4          up      1.0     1.0
  5         1.0              osd.5          up      1.0     1.0
  6         1.0              osd.6          up      1.0     1.0
  7         1.0              osd.7          up      1.0     1.0
  8         1.0              osd.8          up      1.0     1.0
  9         1.0              osd.9          up      1.0     1.0
 10         1.0              osd.10         up      1.0     1.0
 11         1.0              osd.11         up      1.0     1.0
 -3        12.0          node cpn03
 24         1.0              osd.24         up      1.0     1.0
 25         1.0              osd.25         up      1.0     1.0
 26         1.0              osd.26         up      1.0     1.0
 27         1.0              osd.27         up      1.0     1.0
 28         1.0              osd.28         up      1.0     1.0
 29         1.0              osd.29         up      1.0     1.0
 30         1.0              osd.30         up      1.0     1.0
 31         1.0              osd.31         up      1.0     1.0
 32         1.0              osd.32         up      1.0     1.0
 33         1.0              osd.33         up      1.0     1.0
 34         1.0              osd.34         up      1.0     1.0
 35         1.0              osd.35         up      1.0     1.0
 -6        12.0      rack rack2
 -2        12.0          node cpn02
 12         1.0              osd.12         up      1.0     1.0
 13         1.0              osd.13         up      1.0     1.0
 14         1.0              osd.14         up      1.0     1.0
 15         1.0              osd.15         up      1.0     1.0
 16         1.0              osd.16         up      1.0     1.0
 17         1.0              osd.17         up      1.0     1.0
 18         1.0              osd.18         up      1.0     1.0
 19         1.0              osd.19         up      1.0     1.0
 20         1.0              osd.20         up      1.0     1.0
 21         1.0              osd.21         up      1.0     1.0
 22         1.0              osd.22         up      1.0     1.0
 23         1.0              osd.23         up      1.0     1.0
* -4          0           node cpn04*

Any ideas why this happens, and how can I fix it? It is supposed to be
inside rack2.

Thanks in advance,

Best,

*German*
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com