[openstack-dev] [nova] Nova Midcycle Summary (i.e. mid mitaka progress report)

2016-02-02 Thread John Garbutt
Hi,

For all the details see this etherpad:
https://etherpad.openstack.org/p/mitaka-nova-midcycle

Here I am attempting a brief summary, picking out some highlights.
Feel free to reply and add your own details / corrections.

**Process**

Non-priority FFE deadline is this Friday (5th Feb).
Now open for Newton specs.
Please move any proposed Mitaka specs to Newton.

**Priorities**

Cells v2:
It is moving forward; see alaski's great summary:
http://lists.openstack.org/pipermail/openstack-dev/2016-January/084545.html
The Mitaka aim is around the new create-instance flow.
This will make cell0 and the API database required.
We need to define the list of instance info that is "valid" in the API
before the instance has been built.

v2.1 API:
API docs updates are moving forward, as is the removal of project-ids and
the related work to support the scheduler. Discussed policy discovery for
Newton, in relation to the live-resize blueprint; alaski to follow up
with the keystone folks.

Live-Migrate:
Lots of code to review (see the usual etherpad for priority order). Some
details around storage pools still need agreeing, but the general approach
seems to have reached consensus. CI is making good progress, as it is
finding bugs. Folks signed up for manual testing.
Spoke about the need to look into the token expiry fix discussed at the summit.

Scheduler:
Discussed jay's blueprints. For Mitaka we agreed to focus on:
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/resource-classes.html,
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/resource-providers.html,
and possibly https://review.openstack.org/253187. The latter is likely
to require a new Scheduler API endpoint within Nova.
Overall there seemed to be general agreement on the approach jaypipes
proposed, and happiness that it is now almost all written down in spec
changes.
Discussed the new scheduler plan in relation to IP scheduling for
neutron's routed networks with armax and carl_baldwin. Made a lot of
progress towards better understanding each other's requirements
(https://review.openstack.org/#/c/263898/).

priv-sep:
We must have priv-sep in os-brick for Mitaka to avoid more upgrade problems.
We will go back and do a better job after we fix the burning upgrade issue.

os-vif:
Work continues.
Decided it doesn't have to wait for priv-sep.
Agreed the base os-vif lib will include ovs, ovs hybrid, and linux-bridge.

**Testing**

* Got folks to help get a bleeding-edge libvirt test working
* Agreement on the need to improve ironic driver testing
* Agreed on the intent to move forward with Feature Classification
* Reminder about the new CI-related review guideline

**Cross Project**

Neutron:
We had armax and carl_baldwin in the room.
Discussed routed networks and the above scheduler impacts.
Spoke about API changes so we have less downtime during live-migrate
when using DVR (or similar tech).
The "get me a network" Neutron API needs to be idempotent (see the toy
sketch below). Still need help with the patch on the Nova side; jaypipes
to find someone. Agreed on the overall direction.
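
For illustration only, idempotency here just means that a retried call
returns the existing allocation instead of creating a second network. A toy
Python sketch of that behaviour (not the actual Neutron API):

# Toy illustration of an idempotent allocation call: retries are harmless
# because an existing allocation for the instance is returned as-is.

_allocations = {}  # stand-in for server-side state, keyed by instance UUID


def get_me_a_network(instance_uuid):
    if instance_uuid not in _allocations:
        _allocations[instance_uuid] = "net-for-%s" % instance_uuid
    return _allocations[instance_uuid]


# Calling twice (e.g. Nova retrying after a timeout) yields the same network.
assert get_me_a_network("inst-1") == get_me_a_network("inst-1")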

Cinder:
Joined Cinder meetup via hangout.
Got a heads up around the issues they are having with nested quotas.
A patch broke backwards compatibility with older Cinders, so the
patch has been reverted.
Spoke about priv-sep and os-brick, and agreed on the above plan for the
brute-force conversion.
Agreed multi-attach should wait until Newton. We have merged the DB
fixes that we need to avoid data corruption. Spoke about using service
version reporting to stop the API allowing multi-attach until the
upgrade has completed (sketched below). To make remove_export not race,
spoke about the need for every volume attachment having its own separate
host attachment, rather than trying to share connections. There are still
questions around upgrade.
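
As a rough illustration of that service-version gating idea (all the names
and version numbers below are made up, not Nova's actual API):

# Illustrative sketch only: refuse multi-attach requests until every
# nova-compute service reports a new-enough version, i.e. the whole
# deployment has finished upgrading.

MULTIATTACH_MIN_VERSION = 7  # hypothetical version that adds multi-attach

# Stand-in for the per-service versions Nova tracks in its database.
SERVICE_VERSIONS = {"compute1": 7, "compute2": 6}


class MultiAttachNotSupported(Exception):
    pass


def multiattach_allowed():
    # Only safe once the *minimum* reported version is new enough,
    # i.e. every compute in the deployment has been upgraded.
    return min(SERVICE_VERSIONS.values()) >= MULTIATTACH_MIN_VERSION


def attach_volume(volume_id, multiattach=False):
    if multiattach and not multiattach_allowed():
        raise MultiAttachNotSupported(
            "multi-attach is blocked until all computes are upgraded")
    print("attaching %s (multiattach=%s)" % (volume_id, multiattach))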

**Other**

Spoke about the need for policy discovery via the API, before we add
something like the live-resize blueprint.

Spoke about the architectural aim to not have computes communicate
with each other, and instead have the conductor send messages between
computes. This was in relation to tdurakov's proposal to refactor the
live-migrate workflow.
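
A minimal sketch of that conductor-in-the-middle pattern (the class and
method names here are invented for illustration, not Nova's real RPC
interfaces):

# Illustrative only: the conductor calls each compute in turn and passes
# data between them, so source and destination never talk directly.

class ComputeRPC(object):
    """Stand-in for an RPC client talking to one compute host."""

    def __init__(self, host):
        self.host = host

    def check_can_live_migrate_destination(self, instance):
        print("%s: destination pre-checks for %s" % (self.host, instance))
        return {"block_migration": False}

    def live_migrate(self, instance, dest_host, migrate_data):
        print("%s: migrating %s to %s using %s"
              % (self.host, instance, dest_host, migrate_data))


def conductor_live_migrate(source, dest, instance):
    # The conductor, not the source compute, asks the destination for its
    # pre-check data and then hands that data to the source.
    migrate_data = dest.check_can_live_migrate_destination(instance)
    source.live_migrate(instance, dest.host, migrate_data)


conductor_live_migrate(ComputeRPC("compute1"), ComputeRPC("compute2"),
                       "instance-0001")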

**Thank You**

Many thanks to Paul Murray and others at HP for hosting us during our
time in Bristol, UK.

Also many thanks to all who made the long trip to Bristol to help
discuss all these upcoming efforts, and start to build consensus
ahead of the Newton design summit in Austin.

Thanks for reading,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova Midcycle Summary (i.e. mid mitaka progress report)

2016-02-02 Thread Carl Baldwin
On Tue, Feb 2, 2016 at 4:07 AM, John Garbutt wrote:
> Scheduler:
> Discussed jay's blueprints. For mitaka we agreed to focus on:
> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/resource-classes.html,
> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/resource-providers.html,
> and possibly https://review.openstack.org/253187. The latter is likely
> to require a new Scheduler API endpoint within Nova.
> Overall there seemed to be general agreement on the approach jaypipes
> proposed, and happiness that it is now almost all written down in spec
> changes.
> Discussed the new scheduler plan in relation to IP scheduling for
> neutron's routed networks with armax and carl_baldwin. Made a lot of
> progress towards better understanding each other's requirements
> (https://review.openstack.org/#/c/263898/).

This was a highlight for me and made the trip well worth it.  There
was a lot of great discussion that I think will be the start of a
great collaboration.  Nova were great hosts and very gracious.

> Neutron:
> We had armax and carl_baldwin in the room.

It was a pleasure to be in attendance.  Thank you.

Carl


Re: [openstack-dev] [nova] Nova Midcycle Summary (i.e. mid mitaka progress report)

2016-02-02 Thread Matt Riedemann



On 2/2/2016 4:07 AM, John Garbutt wrote:

> Spoke about the need for policy discovery via the API, before we add
> something like the live-resize blueprint.


While the policy discovery discussion was mostly prompted by the live-resize
spec, I think it also applies to multi-attach, since that's backend-specific
and operators are likely to disable the API if they aren't supporting it,
e.g. Rackspace, since the xenapi driver doesn't implement multi-attach.




Re: [openstack-dev] [nova] Nova Midcycle Summary (i.e. mid mitaka progress report)

2016-02-02 Thread Balázs Gibizer
> From: John Garbutt [mailto:j...@johngarbutt.com]
> Sent: February 02, 2016 12:08
> 
> **Other**
> 
> Spoke about the need for policy discovery via the API, before we add
> something like the live-resize blueprint.
> 
> Spoke about the architectural aim to not have computes communicate
> with each other, and instead have the conductor send messages between
> computes. This was in relation to tdurakov's proposal to refactor the
> live-migrate workflow.

We spoke about versioned notifications as well.
As the versioned-notification infrastructure [1] has landed, we agreed that
from now on we will only allow new notifications with versioned payloads in
Nova, and we will continue the work to transform the existing notifications
to the new format in Newton. All the details are on the notification
etherpad [2]; a rough sketch of the payload idea follows below.

Cheers,
Gibi

[1] https://review.openstack.org/#/q/topic:bp/versioned-notification-api 
[2] https://etherpad.openstack.org/p/nova-versioned-notifications 
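
Roughly, the idea is that every notification payload declares its own
version and documented fields, rather than being a free-form dict. A
made-up illustrative sketch (not the actual Nova payload classes):

# Made-up sketch of a versioned payload; field and key names are invented.

class InstanceActionPayload(object):
    VERSION = '1.0'  # bumped whenever fields are added or changed

    def __init__(self, uuid, state):
        self.uuid = uuid
        self.state = state

    def to_primitive(self):
        # Consumers can dispatch on the name/version pair and rely on the
        # documented fields for that version.
        return {
            'payload.name': type(self).__name__,
            'payload.version': self.VERSION,
            'payload.data': {'uuid': self.uuid, 'state': self.state},
        }


print(InstanceActionPayload('fake-uuid', 'active').to_primitive())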
