Hello,
I want to bind an OSD to a virtual network interface (let's call it
eth0:0). The setup is as follows: there are two network interfaces:
eth0 --> 10.0.0.100
eth0:0 --> 10.0.0.200
"/etc/hostname" contains "testnode" and in "/etc/hosts" I have:
10.0.0.100 testnode
10.0.0.200 testnode0
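For reference, the alias interface was brought up roughly like this (a
sketch; the /24 netmask is my assumption, adjust to your network):

```shell
# Add the second address as an alias label on eth0.
# NOTE: the /24 prefix is assumed; it is not stated in the setup above.
ip addr add 10.0.0.200/24 dev eth0 label eth0:0
ip link set eth0 up

# Verify both addresses are present on the interface.
ip addr show dev eth0
```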
Both the monitor and the OSD have the following section in /etc/ceph/ceph.conf:
[osd.0]
host = testnode0
cluster addr = 10.0.0.200
public addr = 10.0.0.200
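To confirm the section parses as intended, the values can be read back
with ceph-conf (a sketch; assumes the default config path):

```shell
# Read the addresses back from the config file for osd.0.
# Both lookups should print 10.0.0.200 if the section is picked up.
ceph-conf -c /etc/ceph/ceph.conf --name osd.0 --lookup 'public addr'
ceph-conf -c /etc/ceph/ceph.conf --name osd.0 --lookup 'cluster addr'
```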
Then, I manually initialize the OSD (osd.0 in this example), and after
executing:
$ ceph osd crush add-bucket testnode0 host
$ ceph osd crush move testnode0 root=default
$ ceph osd crush add osd.0 1.0 host=testnode0
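At this point the placement can be checked before starting the daemon
(a sketch; requires a reachable monitor):

```shell
# Show the CRUSH hierarchy; osd.0 should appear under host testnode0.
ceph osd tree
```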
I can see the following crush map:
# buckets
host testnode0 {
	id -2		# do not change unnecessarily
	# weight 1.000
	alg straw
	hash 0	# rjenkins1
	item osd.0 weight 1.000
}
root default {
	id -1		# do not change unnecessarily
	# weight 1.000
	alg straw
	hash 0	# rjenkins1
	item testnode0 weight 1.000
}
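(For completeness, this is roughly how such a decompiled map is
obtained; the /tmp paths are just examples:)

```shell
# Fetch the current compiled crushmap from the cluster,
# then decompile it into the text form shown above.
ceph osd getcrushmap -o /tmp/crushmap
crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
cat /tmp/crushmap.txt
```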
However, after starting the OSD (start ceph-osd id=0), the OSD is
moved back under the node's hostname, changing the crush map to:
# buckets
host testnode0 {
	id -2		# do not change unnecessarily
	# weight 0.000
	alg straw
	hash 0	# rjenkins1
}
host testnode {
	id -3		# do not change unnecessarily
	# weight 1.000
	alg straw
	hash 0	# rjenkins1
	item osd.0 weight 1.000
}
root default {
	id -1		# do not change unnecessarily
	# weight 1.000
	alg straw
	hash 0	# rjenkins1
	item testnode0 weight 0.000
	item testnode weight 1.000
}
Is there any way to avoid this and keep the new OSD under the host
bucket "testnode0"?
Thanks,
--
Lluís Pàmies i Juárez
http://lluis.pamies.cat
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com