On 26 January 2017 at 13:50, Sylvain Bauza <sba...@redhat.com> wrote:
> Le 26/01/2017 05:42, Matt Riedemann a écrit :
>> This is my public hand off to Sylvain for the work done tonight.
>>
>
> Thanks Matt for your help yesterday, it was awesome to have you pitch
> in even though you're personally away.
>
>
>> Starting with the multinode grenade failure in the nova patch to
>> integrate placement with the filter scheduler:
>>
>> https://review.openstack.org/#/c/417961/
>>
>> The test_schedule_to_all_nodes tempest test was failing in there because
>> that test explicitly forces hosts using AZs to build two instances.
>> Because we didn't have nova.conf on the Newton subnode in the multinode
>> grenade job configured to talk to placement, there was no resource
>> provider for that Newton subnode when we started running smoke tests
>> after the upgrade to Ocata, so that test failed since the request to the
>> subnode had a NoValidHost (because no resource provider was checking in
>> from the Newton node).
>>
>
> That's where I think the current implementation is weird: if you force
> the scheduler to return a destination (without even calling the
> filters), only verifying that the corresponding service is up, then
> why do we need to get the full list of computes before that?
>
> On the placement side, if you just *force* the scheduler to return a
> destination, then why should we verify that the resources are happy?
> FWIW, we now have completely different semantics that replace the
> "force_hosts" thing that I hate: it's called
> RequestSpec.requested_destination and it actually runs the filters,
> just restricted to that one destination. No straight bypass of the
> filters like force_hosts does.

That's just a symptom though, as I understand it?

The real problem seems to be that placement isn't configured on the old
node. Which, by accident, is what most deployers are likely to hit if
they didn't set up placement when upgrading last cycle.
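
(As an aside, for anyone skimming, here is my reading of the two
semantics Sylvain contrasts above. This is an illustrative sketch only,
not nova's actual scheduler code, and the helper/attribute names are
simplified for the example:)

    # Illustrative sketch only -- simplified names, not nova's real code.
    def select_candidates(request_spec, all_hosts, filters):
        if request_spec.force_hosts:
            # force_hosts semantics: bypass the filters completely and
            # just hand back the named host(s).
            return [h for h in all_hosts
                    if h.hostname in request_spec.force_hosts]

        if request_spec.requested_destination:
            # requested_destination semantics: restrict the candidate
            # list to the requested host, but still run every filter
            # against it.
            candidates = [h for h in all_hosts
                          if h.hostname ==
                          request_spec.requested_destination.host]
        else:
            candidates = all_hosts

        return [h for h in candidates
                if all(f.host_passes(h, request_spec) for f in filters)]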

>> Grenade is not topology aware so it doesn't know anything about the
>> subnode. When the subnode is stacked, it does so via a post-stack hook
>> script that devstack-gate writes into the grenade run, so after stacking
>> the primary Newton node, it then uses Ansible to ssh into the subnode
>> and stack Newton there too:
>>
>> https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L629
>>
>>
>> logs.openstack.org/61/417961/26/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/15545e4/logs/grenade.sh.txt.gz#_2017-01-26_00_26_59_296
>>
>>
>> And placement was optional in Newton so, you know, problems.
>>
>
> That's where I think we have another problem, which is bigger than the
> corner case you mentioned above: when upgrading from Newton to Ocata,
> we said that all Newton computes have to be upgraded to the latest
> point release. Great. But we forgot to identify that it would also
> require *modifying* their nova.conf so they would be able to call the
> placement API.
>
> That looks to me like more than just a rolling upgrade mechanism. In
> theory, a rolling upgrade process accepts that N-1 versioned computes
> can talk to N versioned other services. That shouldn't require a
> configuration change (beyond the upgrade_levels flag) on the computes
> to achieve that, right?
>
> http://docs.openstack.org/developer/nova/upgrade.html

We normally say the config that worked last cycle should be fine.

We probably should have said placement was required last cycle, then
this wouldn't have been an issue.
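
(For concreteness, the nova.conf change needed on a Newton compute is
roughly a [placement] section like the one below. The endpoint, region
and credentials are placeholders, and the exact option names are from
memory, so the install guide for the release in question is the real
reference:)

    [placement]
    # Illustrative values only -- endpoint, region, project and
    # credentials below are placeholders for the example.
    os_region_name = RegionOne
    auth_type = password
    auth_url = http://controller:35357/v3
    project_name = service
    project_domain_name = Default
    username = placement
    user_domain_name = Default
    password = PLACEMENT_SERVICE_PASSWORD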

>> Some options came to mind:
>>
>> 1. Change the test to not be a smoke test which would exclude it from
>> running during grenade. QA would barf on this.
>>
>> 2. Hack some kind of pre-upgrade callback from d-g into grenade just for
>> configuring placement on the compute subnode. This would probably
>> require adding a script to devstack just so d-g has something to call so
>> we could keep branch logic out of d-g, like what we did for the
>> discover_hosts stuff for cells v2. This is more complicated than what I
>> wanted to deal with tonight with limited time on my hands.
>>
>> 3. Change the nova filter scheduler patch to fall back to getting all
>> compute nodes if there are no resource providers. We've already talked about
>> this a few times already in other threads and I consider it a safety net
>> we'd like to avoid if all else fails. If we did this, we could
>> potentially restrict it to just the forced-host case...
>>
>> 4. Setup the Newton subnode in the grenade run to configure placement,
>> which I think we can do from d-g using the features yaml file. That's
>> what I opted to go with and the patch is here:
>>
>> https://review.openstack.org/#/c/425524/
>>
>> I've made the nova patch dependent on that *and* the other grenade patch
>> to install and configure placement on the primary node when upgrading
>> from Newton to Ocata.
>>
>> --
>>
>> That's where we're at right now. If #4 fails, I think we are stuck with
>> adding a workaround for #3 into Ocata and then remove that in Pike when
>> we know/expect computes to be running placement (they would be in our
>> grenade runs from ocata->pike at least).

Option 5:
Make placement on by default for Newton in devstack, so it's present
across both sides of the upgrade?

That seems to model what we are telling our users to do.
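
(Something along the lines of the below in the subnode's local.conf,
assuming the placement service names devstack grew around Newton --
treat the exact service names here as an assumption on my part:)

    [[local|localrc]]
    # Assumed devstack service names for the placement API and the
    # compute-side placement client configuration.
    enable_service placement-api placement-client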

> Given the above two problems that I stated, I think I'm now in favor
> of a #3 approach that would do the following:
>
>  - modify the scheduler so that it's acceptable for placement to
> return nothing if you force hosts
>
>  - modify the scheduler so that in the event of an empty list returned
> by the placement API, it falls back to getting the list of all computes
>
>
> That still leaves the problem where not all computes are upgraded to
> Ocata but only some are: in that case, we would return only a subset
> of what's in the cloud, which is terribly suboptimal.
>
> Thoughts? Another option could be to verify the compute service
> versions to know the state of the cloud, but we turned down that
> option previously.
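
(If I'm reading #3 right, the shape is roughly the following --
illustrative pseudo-Python only, with made-up helper names, not a
patch:)

    # Illustrative only: the shape of the #3 fallback, made-up names.
    def get_hosts_for_request(context, request_spec, host_manager,
                              placement):
        if request_spec.force_hosts:
            # Don't even ask placement when the caller forces a host.
            return host_manager.get_all_host_states(context)

        providers = placement.get_resource_providers(request_spec)
        if not providers:
            # Nothing reported into placement (e.g. Newton computes
            # never configured for it): fall back to every compute.
            return host_manager.get_all_host_states(context)

        return host_manager.get_host_states_for_providers(context,
                                                          providers)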

I personally like the idea of only starting to use placement once all
the computes are at the latest version, and keeping the old system when
it's a mixed environment. We can do this using the usual service
version infrastructure.

That's assuming new compute nodes would fail in some obvious way if
placement isn't configured.

This has the added advantage that we don't break our upgrade rules
around needing to tweak the old configuration before upgrading. It's
more complexity in the system that we would ideally avoid, but at least
it only has to stay in there for one cycle.
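
(Roughly what I have in mind, sketched with the real
Service.get_minimum_version() helper but an otherwise made-up version
number and made-up glue around it:)

    from nova import objects

    # Hypothetical value: the compute service version at which we trust
    # that every nova-compute is reporting into placement.
    PLACEMENT_AWARE_SERVICE_VERSION = 16

    def should_use_placement(context):
        # The usual service-version check: returns the oldest
        # nova-compute service version in the deployment.
        minimum = objects.Service.get_minimum_version(context,
                                                      'nova-compute')
        return minimum >= PLACEMENT_AWARE_SERVICE_VERSION

    # The scheduler would then only take the placement path when
    # should_use_placement() is True, and keep the old "all compute
    # nodes" path while the environment is still mixed Newton/Ocata.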

Thanks,
John
