On 5/4/2015 11:50 AM, Angus Salkeld wrote:
> On Mon, May 4, 2015 at 6:33 PM, Jastrzebski, Michal
> <[email protected] <mailto:[email protected]>> wrote:
> 
>     On 5/4/2015 8:21 AM, Angus Salkeld wrote:
>      > On Thu, Apr 30, 2015 at 9:25 PM, Jastrzebski, Michal
>      > <[email protected]
>     <mailto:[email protected]>
>     <mailto:[email protected]
>     <mailto:[email protected]>>> wrote:
>      >
>      >     Hello,
>      >
>      >     After discussions, we've spotted a possible gap in versioned
>      >     objects: backporting of too-new versions in RPC.
>      >     Nova handles that via the conductor, but not every service has
>      >     something like that. I want to propose another approach:
>      >
>      >     1. Milestone pinning - we need a single reference for the
>      >     versions of the various objects - for example, heat at version
>      >     15.1 will mean stack at version 1.1 and resource at version 1.5.
>      >     2. Compatibility mode - this will add a flag to the service,
>      >     --compatibility=15.1, meaning that every outgoing RPC
>      >     communication will be backported, before sending, to the object
>      >     versions bound to that milestone.
>      >
>      >     With these two things landed, we'll achieve a rolling upgrade
>      >     like this:
>      >     1. We have N nodes at version V
>      >     2. We take down 1 node and upgrade its code to version V+1
>      >     3. Run the code at version V+1 with --compatibility=V
>      >     4. Repeat 2 and 3 until every node has version V+1
>      >     5. Restart each service without the compatibility flag
>      >
>      >     This approach has one big disadvantage - two restarts are
>      >     required - but it should solve the problem of backporting
>      >     too-new versions.
>      >     Any ideas? Alternatives?
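To make the pinning and compatibility-mode idea above concrete, here is a minimal sketch. All names (MILESTONES, NEW_FIELDS, send_rpc, the toy backport rule) are illustrative assumptions, not actual Heat or oslo.versionedobjects code:

```python
# Milestone pinning sketch (illustrative, not real Heat code).
# MILESTONES maps a release milestone to the object versions it shipped with.
MILESTONES = {
    "15.1": {"Stack": "1.1", "Resource": "1.5"},
    "15.2": {"Stack": "1.2", "Resource": "1.6"},
}

class VersionedObject:
    """Toy stand-in for an oslo.versionedobjects-style object."""
    VERSION = "1.0"
    NEW_FIELDS = {}  # version -> fields introduced at that version

    def __init__(self, data):
        self.data = dict(data)

    def obj_to_primitive(self, target_version=None):
        # Backport: drop any field introduced after target_version.
        version = target_version or self.VERSION
        primitive = dict(self.data)
        for ver, fields in self.NEW_FIELDS.items():
            if ver > version:
                for field in fields:
                    primitive.pop(field, None)
        return {"version": version, "data": primitive}

class Stack(VersionedObject):
    VERSION = "1.2"
    NEW_FIELDS = {"1.2": ["tags"]}  # "tags" was added in 1.2

def send_rpc(obj, compatibility=None):
    """Backport outgoing objects when --compatibility=<milestone> is set."""
    target = None
    if compatibility:
        target = MILESTONES[compatibility][type(obj).__name__]
    return obj.obj_to_primitive(target_version=target)

stack = Stack({"name": "s1", "tags": ["a"]})
send_rpc(stack)                        # full 1.2 payload, tags included
send_rpc(stack, compatibility="15.1")  # pinned to 1.1, tags dropped
```

The point of the sketch is that one flag value resolves every object's target version through the single milestone table, so operators never pin individual object versions by hand.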
>      >
>      >
>      > AFAIK if nova gets a message that is too new, it just forwards it on
>      > (and a newer server will handle it).
>      >
>      > With that this *should* work, shouldn't it?
>      > 1. rolling upgrade of heat-engine
> 
>     That will be the hard part. When we have only one engine of a given
>     version, we lose HA. Also, since we never know where a given task
>     lands, we might end up with one task bouncing from old version to
>     old version, making the call take indefinitely long. Of course, each
>     upgraded engine lessens the chance of that happening, but I think we
>     should aim for the lowest possible downtime. That being said, it
>     might be a good idea to solve this problem not-too-cleanly, but
>     quickly.
> 
> 
> I don't think losing HA for the time it takes some heat-engines to stop,
> install the new software, and restart is a big deal (IMHO).
> 
> -Angus

We will also lose the guarantee that an RPC call completes in any given 
time. It can bounce from incompatible node to incompatible node until no 
incompatible nodes remain. Especially when there are no other tasks on the 
queue, a service that returns the call to the queue and then takes a call 
right afterwards has a good chance of picking up that same one, and we'd 
get a loop there.
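A toy model of that bouncing behaviour (the `deliver` function and the node lists are purely illustrative assumptions, not Heat code):

```python
import random

def deliver(message_version, node_versions, rng, max_hops=1000):
    """Requeue a too-new message until a node that understands it picks
    it up. Returns the number of hops taken; with mostly-old nodes the
    hop count is unbounded in the worst case."""
    hops = 0
    while hops < max_hops:
        hops += 1
        # A random node takes the message off the queue.
        node = rng.choice(node_versions)
        if node >= message_version:
            return hops  # a new-enough node finally handled it
        # Too old: the node puts the message back on the queue.
    return hops  # gave up: the message bounced the whole time

rng = random.Random(42)
# Early in a rolling upgrade: 9 old nodes, 1 upgraded node.
hops = deliver(2, [1] * 9 + [2], rng)
```

With 9 old nodes out of 10, the expected hop count is around 10, but nothing bounds the tail; that is the "indefinitely long call" concern above.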

> 
> 
>     > 2. db sync
>     > 3. rolling upgrade of heat-api
>     >
>     > -Angus
>     >
>     >
>     >     Regards,
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
