Johnston, Nate wrote:
I was wondering if you could provide a little more background on the QoS item, 
"-- remove RPC upgrade tech debt that we left in L (that should open path for 
new QoS rules that are currently blocked by it);". Is this related to the issue you 
point out in the commit message for the https://review.openstack.org/#/c/211520 
change? Does this block work on implementing QoS DSCP in Mitaka, and if so, are 
there bugs we can pitch in on in order to get past it?


OK, that's something we need to implement before pushing new rules. Basically, RPC callbacks push versioned neutron objects over the wire, QoSPolicy to be exact. The QoSPolicy version depends on the rule types it contains, so if we add a new type of rule, we need to bump that version on the server side while still keeping the old agents in the field working until you upgrade them.

I have several different strategies in mind [1], but I need to put together a document so we can discuss them, decide on the best one, and put it in place.

[1] https://github.com/openstack/neutron/blob/master/doc/source/devref/rpc_callbacks.rst (read around "Considering rolling upgrades")
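
To make the versioning concern concrete, here is a minimal, purely illustrative sketch in the spirit of oslo.versionedobjects; the field names and the DSCP rule type are assumptions for the example, not the actual neutron code:

    from oslo_utils import versionutils
    from oslo_versionedobjects import base as obj_base
    from oslo_versionedobjects import fields as obj_fields


    @obj_base.VersionedObjectRegistry.register
    class QosPolicyExample(obj_base.VersionedObject):
        # Illustrative version history:
        #   1.0 - initial version, bandwidth limit rules only
        #   1.1 - added 'dscp_rules' for a hypothetical new rule type
        VERSION = '1.1'

        fields = {
            'id': obj_fields.UUIDField(),
            'name': obj_fields.StringField(),
            'bandwidth_limit_rules': obj_fields.ListOfStringsField(default=[]),
            'dscp_rules': obj_fields.ListOfStringsField(default=[]),
        }

        def obj_make_compatible(self, primitive, target_version):
            # If an older agent only understands 1.0, drop the field it does
            # not know about instead of breaking the push notification.
            super(QosPolicyExample, self).obj_make_compatible(
                primitive, target_version)
            target = versionutils.convert_version_to_tuple(target_version)
            if target < (1, 1):
                primitive.pop('dscp_rules', None)

The open question the devref discusses is essentially who picks the target_version to downgrade to, and how the server learns which versions the agents still in the field can consume.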


Thanks,

—N.

On Oct 1, 2015, at 10:02 AM, Ihar Hrachyshka <ihrac...@redhat.com> wrote:

On 01 Oct 2015, at 15:45, Ihar Hrachyshka <ihrac...@redhat.com> wrote:

Hi all,

I talked recently with several contributors about what each of us plans for the 
next cycle, and found it quite useful to share thoughts with others, because 
you get immediate yay/nay feedback, maybe find companions for your next 
adventures, and whatnot. So I’ve decided to ask everyone what you see the team, 
and you personally, doing in the next cycle, for fun or profit.

That’s like a PTL nomination letter, but open to everyone! :) No commitments, 
no deadlines, just list the random ideas you have in mind or in your todo lists, 
and we’ll all appreciate the huge pile of awesomeness no one will ever have 
time to implement even if it’s scheduled for the Xixao release.

To start the fun, I will share my silly ideas in the next email.
Here is my silly list of stuff to do.

- start adopting NeutronDbObject for core resources (ports, networks) [so far 
it’s used in QoS only]; a rough sketch of what that could look like is below;
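
For flavour, a bare-bones, purely illustrative sketch of a core-resource object; the field set and registration details are assumptions for the example, not a proposal:

    from oslo_versionedobjects import base as obj_base
    from oslo_versionedobjects import fields as obj_fields

    from neutron.db import models_v2
    from neutron.objects import base


    @obj_base.VersionedObjectRegistry.register
    class PortExample(base.NeutronDbObject):
        VERSION = '1.0'

        # Bind the object to the existing SQLAlchemy model so the common DB
        # plumbing in the NeutronDbObject base class can be reused.
        db_model = models_v2.Port

        fields = {
            'id': obj_fields.UUIDField(),
            'network_id': obj_fields.UUIDField(),
            'mac_address': obj_fields.StringField(),
            'admin_state_up': obj_fields.BooleanField(),
        }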

- introduce a so-called ‘core resource extender manager’ that would be able to 
replace the ml2 extension mechanism and become a plugin-agnostic way of extending 
core resources from additional plugins (think of port security or qos being available 
for ml2 only - that sucks!); a purely hypothetical sketch of the shape of it is below;
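
To make the idea a bit more tangible, here is a purely hypothetical sketch; none of these names exist in neutron, this is just the shape of the idea:

    import collections


    class CoreResourceExtenderManager(object):
        """Hypothetical plugin-agnostic registry of core resource extenders."""

        def __init__(self):
            self._extenders = collections.defaultdict(list)

        def register(self, resource, extender):
            # Any service plugin (qos, port security, ...) could register an
            # extender for 'port' or 'network', regardless of the core plugin.
            self._extenders[resource].append(extender)

        def extend_dict(self, resource, result, db_obj):
            # Called by the core plugin when building API responses, instead
            # of the ml2-only extension driver hooks.
            for extender in self._extenders[resource]:
                extender.extend_dict(result, db_obj)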

- more changes with less infra tinkering! neutron devs should not need to go to 
infra projects so often to make an impact;
-- grow our neat little devstack plugin (so far used for qos and sr-iov only) to 
absorb the huge pile of bash code that is currently stored in devstack and is 
proudly called neutron-legacy now; and make the latter obsolete and eventually 
removed from devstack;
-- make tempest jobs use a gate hook as we already do for api jobs;

- qos:
-- once we have the gate hook triggered, finally introduce qos into tempest runs to 
allow the first qos scenario tests to merge;
-- remove the RPC upgrade tech debt that we left in L (that should open the path for 
new QoS rules that are currently blocked by it);
-- look into races in the rpc.callbacks notification pattern (Kevin mentioned he 
had ideas in mind around that); the consumer side of that pattern is sketched below;
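
For context, the consumer side of that notification pattern looks roughly like this (module paths as in the Liberty-era layout; exact names and the callback body are illustrative):

    from neutron.api.rpc.callbacks.consumer import registry
    from neutron.api.rpc.callbacks import resources


    def handle_qos_policy(policy, event_type):
        # Invoked on the agent when the server pushes a created/updated/
        # deleted QosPolicy; the races mentioned above are about ordering
        # and duplication of these notifications.
        pass


    registry.subscribe(handle_qos_policy, resources.QOS_POLICY)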

- oslo:
-- kill the incubator: we have a single module consumed from there (cache); 
Mitaka is the time for the witch to die in pain;
-- adopt oslo.reports: that is something I failed to do in Liberty, which gives me 
a great chance to do it in Mitaka; basically, allow neutron services to dump 
‘useful info’ when SIGUSR2 is sent; hopefully that will make debugging a 
bit easier (the wiring is sketched below);
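
The wiring for that is small; a minimal sketch, assuming the standard oslo.reports guru meditation report entry point (the neutron.version import is how other projects typically pass their version info, used here as an assumption):

    from oslo_reports import guru_meditation_report as gmr

    from neutron import version


    def setup_report_handler():
        # Registers a signal handler so that sending the report signal
        # (SIGUSR2 in recent oslo.reports) to the service dumps threads,
        # config and other 'useful info' to the error log.
        gmr.TextGuruMeditation.setup_autorun(version.version_info)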

- upgrades:
-- we should bring back the partial job for neutron; it’s not OK that our upgrade 
strategy works by pure luck;
-- overall, I feel we need to provide more details about how upgrades 
are expected to work in OpenStack (the order of service upgrades; constraints; 
managing RPC versions and deprecations; etc.). A devref is probably a good 
start. I talked to some nova folks involved in upgrades there, and we may join 
armies on that, since the general upgrade strategy should be similar throughout 
the meta-project.

- stable:
-- with a stadium of the size we have, it becomes a burden for 
neutron-stable-maint to track backports for all projects; we should think of 
opening the doors to more per-subproject stable cores for those subprojects that 
seem sane in terms of development practices and stable awareness; that way 
we free up neutron-stable-maint folks for stuff with greater impact (aka stuff 
they actually know).

And what are you folks thinking of?

Ihar