Here it is:

root@ceph-01:/etc/netplan# ceph osd dump |grep -i osd
require_osd_release squid
max_osd 3
osd.0 up   in  weight 1 up_from 188 up_thru 188 down_at 187 last_clean_interval 
[29,178) [v2:192.168.1.214:6800/864636279,v1:192.168.1.214:6801/864636279] 
[v2:192.168.2.215:6802/864636279,v1:192.168.2.215:6803/864636279] exists,up 
413eaf8b-6ee3-4296-9c2c-ae45f291e2df
osd.1 up   in  weight 1 up_from 181 up_thru 188 down_at 180 last_clean_interval 
[26,178) [v2:192.168.1.212:6800/811374774,v1:192.168.1.212:6801/811374774] 
[v2:192.168.2.213:6802/811374774,v1:192.168.2.213:6803/811374774] exists,up 
3abeb495-fe1f-4d4a-8263-b920f552681e
osd.2 up   in  weight 1 up_from 183 up_thru 188 down_at 182 last_clean_interval 
[50,178) [v2:192.168.1.210:6800/3123642749,v1:192.168.1.210:6801/3123642749] 
[v2:192.168.2.211:6802/3123642749,v1:192.168.2.211:6803/3123642749] exists,up 
38b7fb7b-e4d8-456d-87b6-2afb662422d0

It shows both networks, but what is the assurance that OSD-to-OSD traffic is 
actually using the cluster network? It should show something definitive without 
having to resort to tcpdump etc.
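One way to spot-check this without packet captures is to take the machine-readable dump and verify that each OSD's back (cluster) addresses fall inside the configured cluster_network (192.168.2.0/24 here). Below is a minimal sketch; it assumes the JSON field names `osds`, `cluster_addrs`, `addrvec`, and `addr` as emitted by `ceph osd dump --format json` — check against your release's actual output:

```python
import ipaddress
import json

# cluster_network from 'ceph config dump' in this thread
CLUSTER_NET = ipaddress.ip_network("192.168.2.0/24")

def check_cluster_addrs(osd_dump_json: str) -> dict:
    """Return {osd_id: bool}: True if every advertised back (cluster)
    address of the OSD lies inside CLUSTER_NET."""
    dump = json.loads(osd_dump_json)
    result = {}
    for osd in dump.get("osds", []):
        addrvec = osd.get("cluster_addrs", {}).get("addrvec", [])
        # each entry looks like {"type": "v2", "addr": "192.168.2.215:6802", ...}
        ips = [ipaddress.ip_address(a["addr"].rsplit(":", 1)[0]) for a in addrvec]
        result[osd["osd"]] = bool(ips) and all(ip in CLUSTER_NET for ip in ips)
    return result
```

Note that this only confirms the OSDs *advertise* their back addresses on the cluster subnet; `ceph osd metadata <id>` (look at the `back_addr` field) and the established connections shown by `ss -tnp` on an OSD host would confirm the actual traffic paths.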



Regards
Dev

> On Jun 22, 2025, at 9:22 AM, Eugen Block <ebl...@nde.ag> wrote:
> 
> The command 'ceph osd find <ID>' is not the right one to query an OSD for the 
> cluster network, it just shows the public address of an OSD (like a client 
> would need to). Just use 'ceph osd dump' and look at the OSD output.
> 
> 
> Zitat von Devender Singh <deven...@netskrt.io>:
> 
>> Hello
>> 
>> I checked all my clusters, and everywhere the OSDs are not using the cluster 
>> network. Here is another example from my lab, where I have three hosts in 
>> vlan1 and vlan2 running on one Proxmox server, and the same thing happens…
>> No MTU changes; these are at the default of 1500.
>> 
>> I don’t understand what I am missing.
>> 
>> root@ceph-01:~# ceph config dump |grep -i network
>> global    advanced  cluster_network    192.168.2.0/24    *
>> mon       advanced  public_network     192.168.1.0/24    *
>> 
>> root@ceph-01:~# ceph osd find 1
>> {
>>    "osd": 1,
>>    "addrs": {
>>        "addrvec": [
>>            {
>>                "type": "v2",
>>                "addr": "192.168.1.212:6800",
>>                "nonce": 811374774
>>            },
>>            {
>>                "type": "v1",
>>                "addr": "192.168.1.212:6801",
>>                "nonce": 811374774
>>            }
>>        ]
>>    },
>>    "osd_fsid": "3abeb495-fe1f-4d4a-8263-b920f552681e",
>>    "host": "ceph-02.tinihub.com",
>>    "crush_location": {
>>        "host": "ceph-02",
>>        "root": "default"
>>    }
>> }
>> 
>> 
>> public_network                         192.168.1.0/24
>> 
>> root@ceph-01:/etc/netplan# telnet 192.168.1.212 6800
>> Trying 192.168.1.212...
>> Connected to 192.168.1.212.
>> Escape character is '^]'.
>> ceph v2
>> ^]
>> telnet> quit
>> Connection closed.
>> 
>> root@ceph-01:/etc/netplan# telnet 192.168.1.212 6801
>> Trying 192.168.1.212...
>> Connected to 192.168.1.212.
>> Escape character is '^]'.
>> ceph v027\0b^]
>> telnet> quit
>> Connection closed.
>> 
>> 
>> cluster_network                        192.168.2.0/24
>> 
>> root@ceph-01:/etc/netplan# telnet 192.168.2.211 6800
>> Trying 192.168.2.211...
>> Connected to 192.168.2.211.
>> Escape character is '^]'.
>> ceph v2
>> ^]
>> telnet> quit
>> Connection closed.
>> root@ceph-01:/etc/netplan# telnet 192.168.2.211 6801
>> Trying 192.168.2.211...
>> Connected to 192.168.2.211.
>> Escape character is '^]'.
>> ceph v027}/^]
>> telnet> quit
>> Connection closed.
>> 
>> 
>> 
>> root@ceph-01:/etc/netplan# ls
>> 00-installer-config.yaml  01-installer-config.yaml
>> root@ceph-01:/etc/netplan# cat 00-installer-config.yaml
>> # This file is generated from information provided by the datasource.  Changes
>> # to it will not persist across an instance reboot.  To disable cloud-init's
>> # network configuration capabilities, write a file
>> # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
>> # network: {config: disabled}
>> network:
>>    ethernets:
>>        ens18:
>>            dhcp4: false
>>            addresses:
>>              - 192.168.1.210/24
>>            routes:
>>              - to: default
>>                via: 192.168.1.254
>>            nameservers:
>>              addresses:
>>                - 192.168.1.201
>>    version: 2
>> root@ceph-01:/etc/netplan# cat 01-installer-config.yaml
>> # This file is generated from information provided by the datasource.  Changes
>> # to it will not persist across an instance reboot.  To disable cloud-init's
>> # network configuration capabilities, write a file
>> # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
>> # network: {config: disabled}
>> network:
>>    ethernets:
>>        ens19:
>>            dhcp4: false
>>            addresses:
>>              - 192.168.2.211/24
>>            nameservers:
>>              addresses:
>>                - 192.168.1.201
>>    version: 2
>> 
>> 
>> 
>> Regards
>> Dev
>> 
>>> On Jun 22, 2025, at 1:50 AM, Michel Jouvin <michel.jou...@ijclab.in2p3.fr> 
>>> wrote:
>>> 
>>> d of ok before the upgrade, for me there is no reason to reformat OSDs or 
>>> change anything to the cluster c
>> 
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
> 
> 

