I have an OpenNebula HA setup which requires a floating IP to function. For
testing purposes, I also have my Ceph cluster co-located on the OpenNebula
cluster.
Although I have configured my mon daemons to use the actual IPs, on
installation one of the mon daemons takes its IP from outside the
configuration (the floating IP). While that mon starts and forms a quorum,
the corresponding OSD refuses to start.
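
In case it helps anyone debugging something similar, the mismatch can be
seen from the cluster itself. The commands below are only a sketch (osd.0
is a placeholder id; run the second one on the node hosting that OSD):

    # addresses currently recorded in the monitor map
    ceph mon dump

    # addresses a running OSD believes it should use
    ceph daemon osd.0 config show | grep -E '"public_addr"|"cluster_addr"'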

I have managed to resolve the issue for the time being by changing the
mon's address from the floating IP to the physical IP, as mentioned in the
manual. The OSD now starts. I only hope it's a permanent fix.
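
For reference, the pinning in ceph.conf looks roughly like the snippet
below. Treat it as a sketch only: the mon section name (mon.node1) and the
192.x.x.0/24 network are placeholders matching the addresses mentioned above.

    [global]
    # network the Ceph daemons should use for their public addresses
    public network = 192.x.x.0/24

    [mon.node1]
    # pin the monitor on the HA leader node to its physical address
    public addr = 192.x.x.245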

Regards,
Rahul S


On 27 June 2018 at 12:19, Paul Emmerich <paul.emmer...@croit.io> wrote:

> Mons have exactly one fixed IP address. A mon cannot use a floating
> IP, otherwise it couldn't find its peers.
> Also, the concept of a floating IP makes no sense for mons - you simply
> give your clients a list of mon IPs to connect to.
>
>
> Paul
>
> 2018-06-26 10:17 GMT+02:00 Rahul S <saple.rahul.eightyth...@gmail.com>:
>
>> Hi! In my organisation we are using OpenNebula as our cloud platform.
>> Currently we are testing the High Availability (HA) feature with a Ceph
>> cluster as our storage backend. In our test setup we have 3 systems with
>> front-end HA already successfully set up and configured with a floating IP
>> between them. We have our Ceph cluster (3 OSDs and 3 mons) on these same 3
>> machines. However, when we deploy the Ceph cluster, we get a successful
>> quorum but with the following issues on the OpenNebula 'LEADER' node:
>>
>>     1) The mon daemon starts successfully, but binds to the floating IP
>> rather than the actual IP.
>>
>>     2) The OSD daemon, on the other hand, goes down after a while with the
>> error
>>     log_channel(cluster) log [ERR] : map e29 had wrong cluster addr
>> (192.x.x.20:6801/10821 != my 192.x.x.245:6801/10821)
>>     192.x.x.20 being the floating IP
>>     192.x.x.245 being the actual IP
>>
>> Apart from that, we get a HEALTH_WARN status when running ceph -s,
>> with many PGs in a degraded, unclean, undersized state.
>>
>> Also, in case it matters, we have our OSDs on a separate partition rather
>> than on a whole disk.
>>
>> We only need to get the cluster into a healthy state in our minimalistic
>> setup. Any ideas on how to get past this?
>>
>> Thanks and Regards,
>> Rahul S
>>
>>
>
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>