From time to time an old monitor can't connect to new monitors, for example 
when upgrading from 0.66 to 0.67:
http://ceph.com/docs/master/release-notes/#upgrading-from-v0-66

        • There is monitor internal protocol change, which means that v0.67 
ceph-mon daemons cannot talk to v0.66 or older daemons. We recommend upgrading 
all monitors at once (or in relatively quick succession) to minimize the 
possibility of downtime.

I had a test cluster on version 0.66 and the monitor upgrade failed.

In that situation, upgrading one by one didn't help me once I upgraded the 
middle monitor instance.
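Given that release note, what I plan to try next is upgrading all monitors in 
quick succession rather than pausing between them. A dry-run sketch that only 
prints the per-host steps (the hostnames, package commands, and service 
commands are placeholders for my setup, not exact Ceph commands):

```shell
# Hypothetical dry run: print the upgrade steps for each monitor host.
# Hostnames (mon-a, mon-b, mon-c) and the package/service commands are
# assumptions -- substitute whatever your distribution actually uses.
upgrade_mons() {
    for mon in mon-a mon-b mon-c; do
        # Upgrade the ceph package, then restart the monitor daemon,
        # moving to the next host immediately to keep the window short.
        echo "ssh $mon 'apt-get install -y ceph && service ceph restart mon'"
    done
}
upgrade_mons
```

Running all three back to back keeps the mixed-version window as short as 
possible.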

> On 04/28/2014 05:59 PM, Timofey Koolin wrote:
>> Is there a setting to change the behavior so that a read error is returned 
>> instead of the read blocking?
>> 
>> I think that is more reasonable behavior because it is similar to a bad 
>> block on an HDD: it simply can't be read.
>> 
>> Or maybe a timeout of a few seconds, after which a read error is returned 
>> for that block and any other missing blocks in the same image/PG.
>> 
> 
> Yes, you could do so in theory, but I wouldn't do that. Usually the VM itself 
> will start throwing errors when the drive doesn't respond.
> 
>> 
>> Or is there any method to safely upgrade the cluster without downtime?
>> 
> 
> Yes, you can upgrade a Ceph cluster without downtime. Just do a rolling 
> upgrade.
> 
>> Now, if I upgrade the monitors and the upgrade fails on the second (of 
>> three) monitors, the cluster will go down, because it will have:
>> 1 new monitor
>> 1 down monitor
>> 1 old monitor
> 
> You expect the upgrade to fail? Simply upgrade the monitors one by one within 
> ~30 minutes and you should be fine.
> 
> Afterwards you do the same for all the OSDs. Restart the daemons one by one 
> and without downtime you can upgrade the cluster.
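A similar dry-run sketch for that OSD step, one daemon at a time with a health 
check between restarts (the OSD ids and command syntax here are illustrative 
assumptions, not exact Ceph CLI invocations):

```shell
# Hypothetical dry run: print a restart command for each OSD in turn,
# with a reminder to wait for the cluster to recover before the next.
# OSD ids and the command text are placeholders.
restart_osds() {
    for id in 0 1 2; do
        echo "restart ceph-osd id=$id"
        echo "ceph health   # wait for HEALTH_OK before continuing"
    done
}
restart_osds
```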
> 
>> 
>> The old and new monitors won't have quorum.
>> 
>> Same for 5 monitors:
>> 2 new monitors
>> 1 down monitor
>> 2 old monitors.
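To spell out the arithmetic behind my worry: a monitor quorum needs a strict 
majority, and with the v0.66/v0.67 protocol break the new and old monitors 
cannot join the same quorum, so only the larger same-version camp counts. A 
small sketch of the two scenarios above:

```shell
# A quorum needs a strict majority: n/2 + 1 monitors (integer division).
quorum_needed() { echo $(( $1 / 2 + 1 )); }

# 3 monitors, split 1 new / 1 down / 1 old: the largest same-version camp
# is 1, but quorum needs 2 -> cluster down.
echo "3 mons: need $(quorum_needed 3), largest camp 1 -> no quorum"

# 5 monitors, split 2 new / 1 down / 2 old: the largest camp is 2,
# but quorum needs 3 -> cluster down.
echo "5 mons: need $(quorum_needed 5), largest camp 2 -> no quorum"
```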
>> 
>>> On 04/28/2014 02:35 PM, Timofey Koolin wrote:
>>>> What will happen if RBD loses all copies of a data block and I then read 
>>>> that block?
>>>> 
>>> 
>>> The read to the object will block until a replica comes online to serve it.
>>> 
>>> Remember this with Ceph: "Consistency goes over availability"
>>> 
>>>> Context:
>>>> I want to use RBD as the main storage with replication factor 1, with 
>>>> DRBD on the client side replicating to non-RBD storage.
>>>> 
>>>> For example:
>>>> Computer1:
>>>> 1. Connect an RBD image as /dev/rbd15
>>>> 2. Use that RBD device as the disk for DRBD
>>>> 
>>>> Computer2:
>>>> Use a local HDD for the DRBD replica.
>>>> 
>>>> 
>>>> I want protection against breakage of the Ceph system (for example 
>>>> during a Ceph upgrade), plus long-distance replication.
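For concreteness, a DRBD resource along these lines would pair the mapped RBD 
device on one host with a local disk on the other. The hostnames, IP 
addresses, and device paths below are made-up examples, and the exact options 
depend on the DRBD version:

```
resource r0 {
    protocol A;              # asynchronous, tolerant of long-distance latency
    device    /dev/drbd0;
    meta-disk internal;
    on computer1 {
        disk    /dev/rbd15;  # the mapped RBD image
        address 10.0.0.1:7789;
    }
    on computer2 {
        disk    /dev/sdb1;   # local HDD
        address 10.0.0.2:7789;
    }
}
```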
>>> 
>>> Ceph wants to be consistent at all times. So copying over long distances 
>>> with high latency will be very slow.
>>> 
>>>> 
>>>> 
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> [email protected]
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>> 
>>> 
>>> 
>>> --
>>> Wido den Hollander
>>> 42on B.V.
>>> Ceph trainer and consultant
>>> 
>>> Phone: +31 (0)20 700 9902
>>> Skype: contact42on
>> 
>> 
