Re: [openstack-dev] [nova] shall we do a spec review day next tuesday oct 23?

2018-10-16 Thread Balázs Gibizer


On Mon, Oct 15, 2018 at 7:07 PM, melanie witt  
wrote:
> Hey all,
> 
> Milestone s-1 is coming up next week on Thursday Oct 25 [1] and I was 
> thinking it would be a good idea to have a spec review day next week 
> on Tuesday Oct 23 to spend some focus on spec reviews together.
> 
> Spec freeze is s-2 Jan 10, so the review day isn't related to any 
> deadlines, but would just be a way to organize and make sure we have 
> initial review on the specs that have been proposed so far.
> 
> How does Tuesday Oct 23 work for everyone? Let me know if another day 
> works better.

22nd and 23rd are public holidays in Hungary so I will try to do some 
review this Friday as a compromise.

Cheers,
gibi

> 
> So far, efried and mriedem are on board when I asked in the 
> #openstack-nova channel. I'm sending this mail to gather more 
> responses asynchronously.
> 
> Cheers,
> -melanie
> 
> [1] https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations

2018-10-10 Thread Balázs Gibizer


On Wed, Oct 10, 2018 at 2:46 PM, Jay Pipes  wrote:
> On 10/10/2018 06:32 AM, Balázs Gibizer wrote:
>> Hi,
>> 
>> Thanks for all the feedback. I feel the following consensus is 
>> forming:
>> 
>> 1) remove the force flag in a new microversion. I've proposed a spec
>> about that API change [1]
> 
> +1
> 
>> 2) in the old microversions change the blind allocation copy to 
>> gather every resource from nested source RPs too and try to allocate 
>> that from the destination root RP. In nested allocation cases putting 
>> this allocation to placement will fail and nova will fail the 
>> migration / evacuation. However it will succeed if the server does 
>> not need a nested allocation on either the source or the destination 
>> host (a.k.a. the legacy case), or if the server has a nested 
>> allocation on the source host but does not need one on the 
>> destination host (for example the dest host does not have a nested 
>> RP tree yet).
> 
> I disagree on this. I'd rather just do a simple check for >1 provider 
> in the allocations on the source and if True, fail hard.
> 
> The reverse (going from a non-nested source to a nested destination) 
> will hard fail anyway on the destination because the POST 
> /allocations won't work due to capacity exceeded (or failure to have 
> any inventory at all for certain resource classes on the 
> destination's root compute node).

If we hard fail on >1 provider in an allocation on the source then we 
lose the (not really common) case when the source allocation is nested 
but the destination node does not have a nested RP tree yet and so it 
could support the summarized allocation on its root RP.
But sure, simply failing would be a simpler solution.
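
A minimal sketch of the check you propose, assuming the allocation is a 
dict keyed by resource provider UUID (illustrative names, not actual 
nova code):

    def fail_fast_on_nested(source_allocations):
        # source_allocations: {rp_uuid: {"resources": {rc: amount}}}
        # More than one provider in the source allocation implies a
        # nested (or sharing) allocation, so hard fail the forced move.
        if len(source_allocations) > 1:
            raise ValueError("forced move with nested source "
                             "allocation is not supported")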

gibi

> 
> -jay
> 
>> I will start implementing #2) as part of the
>> use-nested-allocation-candidate bp soon and will continue with #1)
>> later in the cycle.
>> 
>> Nothing is set in stone yet so feedback is still very appreciated.
>> 
>> Cheers,
>> gibi
>> 
>> [1] https://review.openstack.org/#/c/609330/
>> 
>> On Tue, Oct 9, 2018 at 11:40 AM, Balázs Gibizer
>>  wrote:
>>> Hi,
>>> 
>>> Setup
>>> -
>>> 
>>> nested allocation: an allocation that contains resources from one or
>>> more nested RPs. (if you have a better term for this then please
>>> suggest).
>>> 
>>> If an instance has a nested allocation it means that the compute it
>>> allocates from has a nested RP tree. BUT if a compute has a nested
>>> RP tree it does not automatically mean that the instance, allocating
>>> from that compute, has a nested allocation (e.g. bandwidth inventory
>>> will be on nested RPs but not every instance will require
>>> bandwidth)
>>> 
>>> Afaiu, as soon as we have NUMA modelling in place the most trivial
>>> servers will have nested allocations as CPU and MEMORY inventory
>>> will be moved to the nested NUMA RPs. But NUMA is still in the
>>> future.
>>> 
>>> Sidenote: there is an edge case reported by bauzas when an instance
>>> allocates _only_ from nested RPs. This was discussed last Friday
>>> and it resulted in a new patch[0] but I would like to keep that
>>> discussion separate from this if possible.
>>> 
>>> Sidenote: the current problem is somewhat related not just to nested
>>> RPs but to sharing RPs as well. However I'm not aiming to implement
>>> sharing support in Nova right now so I also try to keep the sharing
>>> discussion separate if possible.
>>> 
>>> There was already some discussion on Monday's scheduler meeting
>>> but I could not attend.
>>> http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20
>>> 
>>> 
>>> The meat
>>> 
>>> 
>>> Both live-migrate[1] and evacuate[2] have an optional force flag on
>>> the nova REST API. The documentation says: "Force  by
>>> not
>>> verifying the provided destination host by the scheduler."
>>> 
>>> Nova implements this statement by not calling the scheduler if
>>> force=True BUT still tries to manage allocations in placement.
>>> 
>>> To have allocation on the destination host Nova blindly copies the
>>> instance allocation from the source host to the destination host
>>> during these operations. Nova can do that as 1) the whole allocation
>>> is against a

Re: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations

2018-10-10 Thread Balázs Gibizer
Hi,

Thanks for all the feedback. I feel the following consensus is forming:

1) remove the force flag in a new microversion. I've proposed a spec 
about that API change [1]

2) in the old microversions change the blind allocation copy to gather 
every resource from nested source RPs too and try to allocate that 
from the destination root RP. In nested allocation cases putting this 
allocation to placement will fail and nova will fail the migration / 
evacuation. However it will succeed if the server does not need a 
nested allocation on either the source or the destination host (a.k.a. 
the legacy case), or if the server has a nested allocation on the 
source host but does not need one on the destination host (for 
example the dest host does not have a nested RP tree yet).
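
A minimal sketch of this enhanced copy, assuming the allocation is a 
dict keyed by resource provider UUID (illustrative names, not actual 
nova code):

    def summarize_to_root(source_allocations, dest_root_rp_uuid):
        # Sum every resource class across all source providers (root
        # and nested) and allocate the total from the destination root
        # RP. Placement will reject the resulting claim if the
        # destination root lacks the inventory or the capacity, which
        # is where nova fails the migration / evacuation.
        summed = {}
        for alloc in source_allocations.values():
            for rc, amount in alloc["resources"].items():
                summed[rc] = summed.get(rc, 0) + amount
        return {dest_root_rp_uuid: {"resources": summed}}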

I will start implementing #2) as part of the 
use-nested-allocation-candidate bp soon and will continue with #1) 
later in the cycle.

Nothing is set in stone yet so feedback is still very appreciated.

Cheers,
gibi

[1] https://review.openstack.org/#/c/609330/

On Tue, Oct 9, 2018 at 11:40 AM, Balázs Gibizer 
 wrote:
> Hi,
> 
> Setup
> -
> 
> nested allocation: an allocation that contains resources from one or 
> more nested RPs. (if you have a better term for this then please 
> suggest).
> 
> If an instance has a nested allocation it means that the compute it 
> allocates from has a nested RP tree. BUT if a compute has a nested 
> RP tree it does not automatically mean that the instance, allocating 
> from that compute, has a nested allocation (e.g. bandwidth inventory 
> will be on nested RPs but not every instance will require bandwidth)
> 
> Afaiu, as soon as we have NUMA modelling in place the most trivial 
> servers will have nested allocations as CPU and MEMORY inventory 
> will be moved to the nested NUMA RPs. But NUMA is still in the future.
> 
> Sidenote: there is an edge case reported by bauzas when an instance 
> allocates _only_ from nested RPs. This was discussed last Friday 
> and it resulted in a new patch[0] but I would like to keep that 
> discussion separate from this if possible.
> 
> Sidenote: the current problem is somewhat related not just to nested 
> RPs but to sharing RPs as well. However I'm not aiming to implement 
> sharing support in Nova right now so I also try to keep the sharing 
> discussion separate if possible.
> 
> There was already some discussion on Monday's scheduler meeting 
> but I could not attend.
> http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20
> 
> 
> The meat
> 
> 
> Both live-migrate[1] and evacuate[2] have an optional force flag on 
> the nova REST API. The documentation says: "Force  by not 
> verifying the provided destination host by the scheduler."
> 
> Nova implements this statement by not calling the scheduler if 
> force=True BUT still tries to manage allocations in placement.
> 
> To have allocation on the destination host Nova blindly copies the 
> instance allocation from the source host to the destination host 
> during these operations. Nova can do that as 1) the whole allocation 
> is against a single RP (the compute RP) and 2) Nova knows both the 
> source compute RP and the destination compute RP.
> 
> However as soon as we bring nested allocations into the picture that 
> blind copy will not be feasible. Possible cases:
> 0) The instance has a non-nested allocation on the source and would 
> need a non-nested allocation on the destination. This works with the 
> blind copy today.
> 1) The instance has a nested allocation on the source and would need 
> a nested allocation on the destination as well.
> 2) The instance has a non-nested allocation on the source and would 
> need a nested allocation on the destination.
> 3) The instance has a nested allocation on the source and would need 
> a non-nested allocation on the destination.
> 
> Nova cannot generate nested allocations easily without reimplementing 
> some of the placement allocation candidate (a_c) code. However I 
> don't like the idea of duplicating some of the a_c code in Nova.
> 
> Nova cannot detect what kind of allocation (nested or non-nested) an 
> instance would need on the destination without calling placement a_c. 
> So knowing when to call placement is a chicken and egg problem.
> 
> Possible solutions:
> A) fail fast
> 
> 0) Nova can detect that the source allocation is non-nested and try 
> the blind copy and it will succeed.
> 1) Nova can detect that the source allocation is nested and fail the 
> operation.
> 2) Nova only sees a non-nested source allocation. Even if the dest RP 
> tree is nested it does not mean that the allocation will be nested. 
> We ca

Re: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations

2018-10-10 Thread Balázs Gibizer


On Tue, Oct 9, 2018 at 11:01 PM, Eric Fried  wrote:
> 
> 
> On 10/09/2018 02:20 PM, Jay Pipes wrote:
>>  On 10/09/2018 11:04 AM, Balázs Gibizer wrote:
>>>  If you do the force flag removal in a new microversion that also 
>>> means
>>>  (at least to me) that you should not change the behavior of the 
>>> force
>>>  flag in the old microversions.
>> 
>>  Agreed.
>> 
>>  Keep the old, buggy and unsafe behaviour for the old microversion 
>> and in
>>  a new microversion remove the --force flag entirely and always call 
>> GET
>>  /a_c, followed by a claim_resources() on the destination host.
>> 
>>  For the old microversion behaviour, continue to do the "blind copy" 
>> of
>>  allocations from the source compute node provider to the destination
>>  compute node provider.
> 
> TBC, for nested/sharing source, we should consolidate all the 
> resources
> into a single allocation against the destination's root provider?

Yes, we need to do that so that we do not miss resources allocated 
from a child RP on the source host and then succeed with an incomplete 
allocation on the destination host.

> 
>>  That "blind copy" will still fail if there isn't
>>  capacity for the new allocations on the destination host anyway, 
>> because
>>  the blind copy is just issuing a POST /allocations, and that code 
>> path
>>  still checks capacity on the target resource providers.
> 
> What happens when the migration fails, either because of that POST
> /allocations, or afterwards? Do we still have the old allocation 
> around
> to restore? Cause we can't re-figure it from the now-monolithic
> destination allocation.

For live-migrate we have the source allocation held by the 
migration_uuid so we can simply move that back to the instance_uuid 
when the allocation fails on the destination host.

For evacuate the source host allocation is also held by the 
instance_uuid (no migration_uuid is used) but it is not a real problem 
here as nova failed to change that allocation, so the original source 
allocation is intact in placement.
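
A rough sketch of that rollback for the live-migrate case. This is 
illustrative only, not the actual conductor code: it assumes a 
requests-like placement client helper and the >= 1.12 PUT 
/allocations/{consumer_uuid} body format:

    def rollback_source_allocation(client, migration_uuid,
                                   instance_uuid, project_id, user_id):
        # The source-host allocation is held by the migration consumer.
        source = client.get(
            '/allocations/%s' % migration_uuid).json()['allocations']
        # Give it back to the instance consumer...
        client.put('/allocations/%s' % instance_uuid,
                   json={'allocations': source,
                         'project_id': project_id,
                         'user_id': user_id})
        # ...and drop the migration consumer's allocation.
        client.delete('/allocations/%s' % migration_uuid)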

Cheers,
gibi

> 
>>  There isn't a
>>  code path in the placement API that allows a provider's inventory
>>  capacity to be exceeded by new allocations.
>> 
>>  Best,
>>  -jay
>> 
>>  
>> __
>>  OpenStack Development Mailing List (not for usage questions)
>>  Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations

2018-10-09 Thread Balázs Gibizer


On Tue, Oct 9, 2018 at 5:32 PM, Sylvain Bauza  
wrote:
> 
> 
> Le mar. 9 oct. 2018 à 17:09, Balázs Gibizer 
>  a écrit :
>> 
>> 
>> On Tue, Oct 9, 2018 at 4:56 PM, Sylvain Bauza 
>> 
>> wrote:
>> >
>> >
>> > Le mar. 9 oct. 2018 à 16:39, Eric Fried  a
>> > écrit :
>> >> IIUC, the primary thing the force flag was intended to do - allow 
>> an
>> >> instance to land on the requested destination even if that means
>> >> oversubscription of the host's resources - doesn't happen anymore
>> >> since
>> >> we started making the destination claim in placement.
>> >>
>> >> IOW, since pike, you don't actually see a difference in behavior 
>> by
>> >> using the force flag or not. (If you do, it's more likely a bug 
>> than
>> >> what you were expecting.)
>> >>
>> >> So there's no reason to keep it around. We can remove it in a new
>> >> microversion (or not); but even in the current microversion we 
>> need
>> >> not
>> >> continue making convoluted attempts to observe it.
>> >>
>> >> What that means is that we should simplify everything down to 
>> ignore
>> >> the
>> >> force flag and always call GET /a_c. Problem solved - for nested
>> >> and/or
>> >> sharing, NUMA or not, root resources or no, on the source and/or
>> >> destination.
>> >>
>> >
>> >
>> > While I tend to agree with Eric here (and I commented on the review
>> > accordingly by saying we should signal the new behaviour by a
>> > microversion), I still think we need to properly advertise this,
>> > adding openstack-operators@ accordingly.
>> 
>> Question for you as well: if we remove (or change) the force flag in 
>> a
>> new microversion then how should the old microversions behave when
>> nested allocations would be required?
>> 
> 
> In that case (ie. old microversions with either "force=None and 
> target" or 'force=True'), we should IMHO not allocate any migration.
> Thoughts ?

Do you mean implementing option #D) on the old microversions?

Cheers,
gibi


> 
>> Cheers,
>> gibi
>> 
>> > Disclaimer : since we have gaps on OSC, the current OSC behaviour
>> > when you "openstack server live-migrate " is to *force* the
>> > destination by not calling the scheduler. Yeah, it sucks.
>> >
>> > Operators, what are the exact cases (for those running clouds newer
>> > than Mitaka, ie. Newton and above) when you make use of the --force
>> > option for live migration with a microversion newer or equal 2.29 ?
>> > In general, even in the case of an emergency, you still want to 
>> make
>> > sure you don't throw your compute under the bus by massively
>> > migrating instances that would create an undetected snowball effect
>> > by having this compute refuse new instances. Or are you disabling
>> > the target compute service first and throwing your pet instances up
>> > there?
>> >
>> > -Sylvain
>> >
>> >
>> >
>> >> -efried
>> >>
>> >> On 10/09/2018 04:40 AM, Balázs Gibizer wrote:
>> >> > Hi,
>> >> >
>> >> > Setup
>> >> > -
>> >> >
>> >> > nested allocation: an allocation that contains resources from 
>> >> > one or more nested RPs. (if you have a better term for this 
>> >> > then please suggest).
>> >> >
>> >> > If an instance has a nested allocation it means that the compute 
>> >> > it allocates from has a nested RP tree. BUT if a compute has a 
>> >> > nested RP tree it does not automatically mean that the instance, 
>> >> > allocating from that compute, has a nested allocation (e.g. 
>> >> > bandwidth inventory will be on nested RPs but not every instance 
>> >> > will require bandwidth)
>> >> >
>> >> > Afaiu, as soon as we have NUMA modelling in place the most 
>> >> > trivial servers will have nested allocations as CPU and MEMORY 
>> >> > inventory will be moved to the nested NUMA RPs. But NUMA is 
>> >> > still in the future.
>> >> >
>> >> > Sidenote: there is an edge case reported by bauzas when an 
>> >> > instance

Re: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations

2018-10-09 Thread Balázs Gibizer


On Tue, Oct 9, 2018 at 4:56 PM, Sylvain Bauza  
wrote:
> 
> 
> Le mar. 9 oct. 2018 à 16:39, Eric Fried  a 
> écrit :
>> IIUC, the primary thing the force flag was intended to do - allow an
>> instance to land on the requested destination even if that means
>> oversubscription of the host's resources - doesn't happen anymore 
>> since
>> we started making the destination claim in placement.
>> 
>> IOW, since pike, you don't actually see a difference in behavior by
>> using the force flag or not. (If you do, it's more likely a bug than
>> what you were expecting.)
>> 
>> So there's no reason to keep it around. We can remove it in a new
>> microversion (or not); but even in the current microversion we need 
>> not
>> continue making convoluted attempts to observe it.
>> 
>> What that means is that we should simplify everything down to ignore 
>> the
>> force flag and always call GET /a_c. Problem solved - for nested 
>> and/or
>> sharing, NUMA or not, root resources or no, on the source and/or
>> destination.
>> 
> 
> 
> While I tend to agree with Eric here (and I commented on the review 
> accordingly by saying we should signal the new behaviour by a 
> microversion), I still think we need to properly advertise this, 
> adding openstack-operators@ accordingly.

Question for you as well: if we remove (or change) the force flag in a 
new microversion then how should the old microversions behave when 
nested allocations would be required?

Cheers,
gibi

> Disclaimer : since we have gaps on OSC, the current OSC behaviour 
> when you "openstack server live-migrate " is to *force* the 
> destination by not calling the scheduler. Yeah, it sucks.
> 
> Operators, what are the exact cases (for those running clouds newer 
> than Mitaka, ie. Newton and above) when you make use of the --force 
> option for live migration with a microversion newer or equal 2.29 ?
> In general, even in the case of an emergency, you still want to make 
> sure you don't throw your compute under the bus by massively 
> migrating instances that would create an undetected snowball effect 
> by having this compute refuse new instances. Or are you disabling 
> the target compute service first and throwing your pet instances up 
> there?
> 
> -Sylvain
> 
> 
> 
>> -efried
>> 
>> On 10/09/2018 04:40 AM, Balázs Gibizer wrote:
>> > Hi,
>> >
>> > Setup
>> > -
>> >
>> > nested allocation: an allocation that contains resources from one 
>> > or more nested RPs. (if you have a better term for this then 
>> > please suggest).
>> >
>> > If an instance has a nested allocation it means that the compute 
>> > it allocates from has a nested RP tree. BUT if a compute has a 
>> > nested RP tree it does not automatically mean that the instance, 
>> > allocating from that compute, has a nested allocation (e.g. 
>> > bandwidth inventory will be on nested RPs but not every instance 
>> > will require bandwidth)
>> >
>> > Afaiu, as soon as we have NUMA modelling in place the most trivial 
>> > servers will have nested allocations as CPU and MEMORY inventory 
>> > will be moved to the nested NUMA RPs. But NUMA is still in the 
>> > future.
>> >
>> > Sidenote: there is an edge case reported by bauzas when an instance 
>> > allocates _only_ from nested RPs. This was discussed last Friday 
>> > and it resulted in a new patch[0] but I would like to keep that 
>> > discussion separate from this if possible.
>> >
>> > Sidenote: the current problem is somewhat related not just to 
>> > nested RPs but to sharing RPs as well. However I'm not aiming to 
>> > implement sharing support in Nova right now so I also try to keep 
>> > the sharing discussion separate if possible.
>> >
>> > There was already some discussion on Monday's scheduler meeting 
>> > but I could not attend.
>> > http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20
>> >
>> >
>> > The meat
>> > 
>> >
>> > Both live-migrate[1] and evacuate[2] have an optional force flag on 
>> > the nova REST API. The documentation says: "Force  by not
>> > verifying the provided destination host by the scheduler."
>> >
>> > Nova implements this statement by not calling the scheduler if
>> > force=True BUT still tries to manage alloca

Re: [openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations

2018-10-09 Thread Balázs Gibizer


On Tue, Oct 9, 2018 at 4:39 PM, Eric Fried  wrote:
> IIUC, the primary thing the force flag was intended to do - allow an
> instance to land on the requested destination even if that means
> oversubscription of the host's resources - doesn't happen anymore 
> since
> we started making the destination claim in placement.

Can we still do that simply by not creating an allocation in placement 
during the move? (see option #D))

> 
> IOW, since pike, you don't actually see a difference in behavior by
> using the force flag or not. (If you do, it's more likely a bug than
> what you were expecting.)

There is still a difference between force=True and force=False today. 
When you say force=False nova calls placement a_c and placement tries 
to satisfy the requested resources, required traits, and aggregate 
membership. When you say force=True nova conductor takes the resource 
allocation from the source host and copies that blindly to the 
destination but does not check any traits or aggregate membership. So 
force=True still ignores a lot of rules and safeties.
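
For illustration, the force=False path boils down to an allocation 
candidates query roughly like the one below. This is a hedged sketch: 
the resource, trait and aggregate values are made up, placement_client 
is a hypothetical REST helper, and the required= and member_of= 
parameters assume a placement microversion that already supports them:

    # What the scheduler exercises when force=False; force=True skips
    # this query (and all the checks it implies) entirely.
    resp = placement_client.get(
        '/allocation_candidates'
        '?resources=VCPU:2,MEMORY_MB:4096'
        '&required=HW_CPU_X86_AVX2'
        '&member_of=%s' % agg_uuid)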

> 
> So there's no reason to keep it around. We can remove it in a new
> microversion (or not); but even in the current microversion we need 
> not
> continue making convoluted attempts to observe it.

If we remove it in a new microversion (option #C)) then we still need 
to define how to behave in the old microversions when a nested 
allocation would be needed. I don't fully get what you mean by 'not 
continue making convoluted attempts to observe it.'

> 
> What that means is that we should simplify everything down to ignore 
> the
> force flag and always call GET /a_c. Problem solved - for nested 
> and/or
> sharing, NUMA or not, root resources or no, on the source and/or
> destination.

If you do the force flag removal in a new microversion that also means 
(at least to me) that you should not change the behavior of the force 
flag in the old microversions.

Cheers,
gibi

> 
> -efried
> 
> On 10/09/2018 04:40 AM, Balázs Gibizer wrote:
>>  Hi,
>> 
>>  Setup
>>  -
>> 
>>  nested allocation: an allocation that contains resources from one or
>>  more nested RPs. (if you have a better term for this then please 
>>  suggest).
>> 
>>  If an instance has a nested allocation it means that the compute it
>>  allocates from has a nested RP tree. BUT if a compute has a nested 
>>  RP tree it does not automatically mean that the instance, allocating 
>>  from that compute, has a nested allocation (e.g. bandwidth inventory 
>>  will be on nested RPs but not every instance will require bandwidth)
>> 
>>  Afaiu, as soon as we have NUMA modelling in place the most trivial
>>  servers will have nested allocations as CPU and MEMORY inventory 
>>  will be moved to the nested NUMA RPs. But NUMA is still in the future.
>> 
>>  Sidenote: there is an edge case reported by bauzas when an instance
>>  allocates _only_ from nested RPs. This was discussed last Friday 
>>  and it resulted in a new patch[0] but I would like to keep that 
>>  discussion separate from this if possible.
>> 
>>  Sidenote: the current problem is somewhat related not just to nested 
>>  RPs but to sharing RPs as well. However I'm not aiming to implement 
>>  sharing support in Nova right now so I also try to keep the sharing 
>>  discussion separate if possible.
>> 
>>  There was already some discussion on Monday's scheduler meeting 
>>  but I could not attend.
>>  http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20
>> 
>> 
>>  The meat
>>  
>> 
>>  Both live-migrate[1] and evacuate[2] have an optional force flag on 
>>  the nova REST API. The documentation says: "Force  by not
>>  verifying the provided destination host by the scheduler."
>> 
>>  Nova implements this statement by not calling the scheduler if
>>  force=True BUT still tries to manage allocations in placement.
>> 
>>  To have allocation on the destination host Nova blindly copies the
>>  instance allocation from the source host to the destination host 
>>  during these operations. Nova can do that as 1) the whole allocation is
>>  against a single RP (the compute RP) and 2) Nova knows both the 
>>  source compute RP and the destination compute RP.
>> 
>>  However as soon as we bring nested allocations into the picture that
>>  blind copy will not be feasible. Possible cases:
>>  0) The instance has a non-nested allocation on the source and would 
>>  need

[openstack-dev] [nova] Supporting force live-migrate and force evacuate with nested allocations

2018-10-09 Thread Balázs Gibizer
Hi,

Setup
-

nested allocation: an allocation that contains resources from one or 
more nested RPs. (if you have a better term for this then please 
suggest).

If an instance has a nested allocation it means that the compute it 
allocates from has a nested RP tree. BUT if a compute has a nested RP 
tree it does not automatically mean that the instance, allocating from 
that compute, has a nested allocation (e.g. bandwidth inventory will be 
on nested RPs but not every instance will require bandwidth)

Afaiu, as soon as we have NUMA modelling in place the most trivial 
servers will have nested allocations as CPU and MEMORY inventory will 
be moved to the nested NUMA RPs. But NUMA is still in the future.

Sidenote: there is an edge case reported by bauzas when an instance 
allocates _only_ from nested RPs. This was discussed last Friday and 
it resulted in a new patch[0] but I would like to keep that discussion 
separate from this if possible.

Sidenote: the current problem is somewhat related not just to nested 
RPs but to sharing RPs as well. However I'm not aiming to implement 
sharing support in Nova right now so I also try to keep the sharing 
discussion separate if possible.

There was already some discussion on Monday's scheduler meeting but 
I could not attend.
http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-08-14.00.log.html#l-20


The meat


Both live-migrate[1] and evacuate[2] have an optional force flag on the 
nova REST API. The documentation says: "Force  by not 
verifying the provided destination host by the scheduler."

Nova implements this statement by not calling the scheduler if 
force=True BUT still tries to manage allocations in placement.

To have an allocation on the destination host Nova blindly copies the 
instance allocation from the source host to the destination host during 
these operations. Nova can do that as 1) the whole allocation is 
against a single RP (the compute RP) and 2) Nova knows both the source 
compute RP and the destination compute RP.

However as soon as we bring nested allocations into the picture that 
blind copy will not be feasible. Possible cases:
0) The instance has a non-nested allocation on the source and would 
need a non-nested allocation on the destination. This works with the 
blind copy today.
1) The instance has a nested allocation on the source and would need a 
nested allocation on the destination as well.
2) The instance has a non-nested allocation on the source and would 
need a nested allocation on the destination.
3) The instance has a nested allocation on the source and would need a 
non-nested allocation on the destination.

Nova cannot generate nested allocations easily without reimplementing 
some of the placement allocation candidate (a_c) code. However I don't 
like the idea of duplicating some of the a_c code in Nova.

Nova cannot detect what kind of allocation (nested or non-nested) an 
instance would need on the destination without calling placement a_c. 
So knowing when to call placement is a chicken and egg problem.

Possible solutions:
A) fail fast

0) Nova can detect that the source allocation is non-nested and try 
the blind copy and it will succeed.
1) Nova can detect that the source allocation is nested and fail the 
operation.
2) Nova only sees a non-nested source allocation. Even if the dest RP 
tree is nested it does not mean that the allocation will be nested. We 
cannot fail fast. Nova can try the blind copy and allocate every 
resource from the root RP of the destination. If the instance requires 
a nested allocation instead, the claim will fail in placement. So nova 
can fail the operation a bit later than in 1).
3) Nova can detect that the source allocation is nested and fail the 
operation. However, an enhanced blind copy that tries to allocate 
everything from the root RP on the destination would have worked.

B) Guess when to ignore the force flag and call the scheduler
-
0) keep the blind copy as it works
1) Nova detects that the source allocation is nested. It ignores the 
force flag and calls the scheduler that will call placement a_c. The 
move operation can succeed.
2) Nova only sees a non-nested source allocation so it will fall back 
to the blind copy and fails at the claim on the destination.
3) Nova detects that the source allocation is nested. It ignores the 
force flag and calls the scheduler that will call placement a_c. The 
move operation can succeed.

This solution would be against the API doc that states nova does not 
call the scheduler if the operation is forced. However in case of 
forced live-migration Nova already verifies the target host from a 
couple of perspectives in [3].
This solution is already proposed for live-migrate in [4] and for 
evacuate in [5] so the complexity of the solution can be seen in the 
reviews.

C) Remove the force flag from the API in a new microversion

[openstack-dev] [nova] Cancelling the notification subteam weekly meeting indefinitely

2018-10-02 Thread Balázs Gibizer
Hi,

Due to the low amount of ongoing work in the area there is little 
interest in keeping this meeting going, so I'm cancelling it 
indefinitely[1].

Of course I'm still interested in helping any notification related 
work in the future and you can reach me in #openstack-nova as usual.

cheers,
gibi


[1]https://review.openstack.org/#/c/607314/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full?

2018-10-01 Thread Balázs Gibizer



On Sat, Sep 29, 2018 at 10:35 PM, Matt Riedemann  
wrote:
> Nova, cinder and tempest run the nova-multiattach job in their check 
> and gate queues. The job was added in Queens and was a specific job 
> because we had to change the ubuntu cloud archive we used in Queens 
> to get multiattach working. Since Rocky, devstack defaults to a 
> version of the UCA that works for multiattach, so there isn't really 
> anything preventing us from running the tempest multiattach tests in 
> the integrated gate. The job tries to be as minimal as possible by 
> only running tempest.api.compute.* tests, but it still means spinning 
> up a new node and devstack for testing.
> 
> Given the state of the gate recently, I'm thinking it would be good 
> if we dropped the nova-multiattach job in Stein and just enabled the 
> multiattach tests in one of the other integrated gate jobs.


+1

> I initially was just going to enable it in the nova-next job, but we 
> don't run that on cinder or tempest changes. I'm not sure if 
> tempest-full is a good place for this though since that job already 
> runs a lot of tests and has been timing out a lot lately [1][2].
> 
> The tempest-slow job is another option, but cinder doesn't currently 
> run that job (it probably should since it runs volume-related tests, 
> including the only tempest tests that use encrypted volumes).


If the multiattach tests qualify as slow tests then I'm in favor of 
adding them to tempest-slow and not lengthening tempest-full 
further.
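
(For reference, a hedged sketch of the tempest.conf toggle these jobs 
would flip; the section and option name are my assumption of the 
relevant tempest setting, normally set via the job's devstack 
configuration:)

    [compute-feature-enabled]
    # Assumed option gating the multiattach tests in tempest.
    volume_multiattach = True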


gibi



> Are there other ideas/options for enabling multiattach in another job 
> that nova/cinder/tempest already use so we can drop the now mostly 
> redundant nova-multiattach job?
> 
> [1] http://status.openstack.org/elastic-recheck/#1686542
> [2] http://status.openstack.org/elastic-recheck/#1783405
> 
> --
> 
> Thanks,
> 
> Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Balázs Gibizer



On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried  wrote:

> It's time somebody said this.
> 
> Every time we turn a corner or look under a rug, we find another use
> case for provider traits in placement. But every time we have to have
> the argument about whether that use case satisfies the original
> "intended purpose" of traits.
> 
> That's the only reason I've ever been able to glean: that it (whatever
> "it" is) wasn't what the architects had in mind when they came up with
> the idea of traits. We're not even talking about anything that would
> require changes to the placement API. Just, "Oh, that's not a
> *capability* - shut it down."
> 
> Bubble wrap was originally intended as a textured wallpaper and a
> greenhouse insulator. Can we accept the fact that traits have (many,
> many) uses beyond marking capabilities, and quit with the arbitrary
> restrictions?


How far are we willing to go? Is an arbitrary (key: value) pair 
encoded in a trait name like key_`str(value)` (e.g. 
CURRENT_TEMPERATURE: 85 encoded as CUSTOM_TEMPERATURE_85) something we 
would be OK to see in placement?
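
In other words (a purely hypothetical sketch of the pattern I'm asking 
about, not anything proposed):

    def encode_trait(key, value):
        # Smuggle a key/value pair into a trait name; placement itself
        # would only ever see an opaque CUSTOM_* string.
        return 'CUSTOM_%s_%s' % (key.upper(), str(value).upper())

    encode_trait('temperature', 85)  # -> 'CUSTOM_TEMPERATURE_85'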


Cheers,
gibi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][python3][nova] starting zuul migration for nova repos

2018-09-24 Thread Balázs Gibizer


On Mon, Sep 24, 2018 at 11:10 AM, Andreas Jaeger  wrote:
> On 11/09/2018 19.13, Stephen Finucane wrote:
>> On Mon, 2018-09-10 at 13:48 -0600, Doug Hellmann wrote:
>>> Melanie gave me the go-ahead to propose the patches, so here's the 
>>> list
>>> of patches for the zuul migration, doc job update, and python 3.6 
>>> unit
>>> tests for the nova repositories.
>> 
>> I've reviewed/+2d all of these on master and think Sylvain will be
>> following up with the +Ws. I need someone else to handle the
>> 'stable/XXX' patches though.
>> 
>> Here's a query for anyone that wants to jump in here.
>> 
>> https://review.openstack.org/#/q/topic:python3-first+status:open+(openstack/nova+OR+project:openstack/nova-specs+OR+openstack/os-traits+OR+openstack/os-vif+OR+openstack/osc-placement+OR+openstack/python-novaclient)
> 
> Most of these are merged - with the exception of stable changes and 
> changes to osc-placement. Any nova stable reviewers around to finish 
> this, please?

I've +A-d the osc-placement patches but I cannot do the same for the 
stable patches.
Cheers,
gibi

> 
> Thanks,
> Andreas
> 
> 
>> 
>> Stephen
>> 
>> PS: Thanks, Andreas, for the follow-up cleanup patches. Much
>> appreciated :)
>> 
>>> +--++---+
 Subject  | Repo
| Branch|
>>> 
>>> +--++---+
 remove job settings for nova repositories| 
 openstack-infra/project-config | master|
 import zuul job settings from project-config | openstack/nova  
| master|
 switch documentation job to new PTI  | openstack/nova  
| master|
 add python 3.6 unit test job | openstack/nova  
| master|
 import zuul job settings from project-config | openstack/nova  
| stable/ocata  |
 import zuul job settings from project-config | openstack/nova  
| stable/pike   |
 import zuul job settings from project-config | openstack/nova  
| stable/queens |
 import zuul job settings from project-config | openstack/nova  
| stable/rocky  |
 import zuul job settings from project-config | 
 openstack/nova-specs   | master|
 import zuul job settings from project-config | openstack/os-traits 
| master|
 switch documentation job to new PTI  | openstack/os-traits 
| master|
 add python 3.6 unit test job | openstack/os-traits 
| master|
 import zuul job settings from project-config | openstack/os-traits 
| stable/pike   |
 import zuul job settings from project-config | openstack/os-traits 
| stable/queens |
 import zuul job settings from project-config | openstack/os-traits 
| stable/rocky  |
 import zuul job settings from project-config | openstack/os-vif
| master|
 switch documentation job to new PTI  | openstack/os-vif
| master|
 add python 3.6 unit test job | openstack/os-vif
| master|
 import zuul job settings from project-config | openstack/os-vif
| stable/ocata  |
 import zuul job settings from project-config | openstack/os-vif
| stable/pike   |
 import zuul job settings from project-config | openstack/os-vif
| stable/queens |
 import zuul job settings from project-config | openstack/os-vif
| stable/rocky  |
 import zuul job settings from project-config | 
 openstack/osc-placement| master|
 switch documentation job to new PTI  | 
 openstack/osc-placement| master|
 add python 3.6 unit test job | 
 openstack/osc-placement| master|
 import zuul job settings from project-config | 
 openstack/osc-placement| stable/queens |
 import zuul job settings from project-config | 
 openstack/osc-placement| stable/rocky  |
 import zuul job settings from project-config | 
 openstack/python-novaclient| master|
 switch documentation job to new PTI  | 
 openstack/python-novaclient| master|
 add python 3.6 unit test job | 
 openstack/python-novaclient| master|
 add lib-forward-testing-python3 test job | 
 openstack/python-novaclient| master|
 import zuul job settings from project-config | 
 openstack/python-novaclient| stable/ocata  |
 import zuul job settings from project-config 

Re: [openstack-dev] Nominating Tetsuro Nakamura for placement-core

2018-09-20 Thread Balázs Gibizer



On Wed, Sep 19, 2018 at 5:25 PM, Chris Dent  
wrote:



> I'd like to nominate Tetsuro Nakamura for membership in the
> placement-core team. Throughout placement's development Tetsuro has
> provided quality reviews; done the hard work of creating rigorous
> functional tests, making them fail, and fixing them; and implemented
> some of the complex functionality required at the persistence layer.
> He's aware of and respects the overarching goals of placement and has
> demonstrated pragmatism when balancing those goals against the
> requirements of nova, blazar and other projects.
> 
> Please follow up with a +1/-1 to express your preference. No need to
> be an existing placement core, everyone with an interest is welcome.


I'm soft +1 on Tetsuro. I'm +1 as the code and reviews I have read from 
Tetsuro look solid to me. I'm only soft +1 as I did not interact with 
Tetsuro enough to express a hard opinion.


Cheers,
gibi



> Thanks.
> 
> --
> Chris Dent   ٩◔̯◔۶   https://anticdent.org/
> freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

2018-09-17 Thread Balázs Gibizer


Hi,

Reworked and rebased the series based on this thread. The series starts 
here https://review.openstack.org/#/c/591597


Cheers,
gibi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari]Possibledeprecation of Nova's legacy notification interface

2018-09-17 Thread Balázs Gibizer

Hi,

At the Stein PTG the nova team agreed to deprecate the legacy, 
unversioned notification interface of nova. We also agreed that we will 
not try to remove the legacy notification sending from the code any 
time soon. So this deprecation means the following:
* by default nova will only emit versioned notifications, but the 
unversioned notifications can still be turned on in the configuration 
(see the sketch below).
* nova will not maintain the legacy notification code path further, so 
it can break.
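
A hedged sketch of the nova.conf toggle involved; the section and 
option name are my reading of the current nova configuration 
reference, so double-check before relying on it:

    [notifications]
    # "versioned" emits only the new interface, "unversioned" only the
    # legacy one, "both" emits both.
    notification_format = both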


I pushed the deprecation patch [2] but it will only be merged after the 
remaining versioned notification transformation patches [3] are merged.


Cheers,
gibi

[2] https://review.openstack.org/#/c/603079
[3] 
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-stein


On Tue, Aug 28, 2018 at 4:31 PM, Balázs Gibizer 
 wrote:
Thanks for all the responses. I collected them on the nova ptg 
discussion etherpad [1] (L186 at the moment). The nova team will talk 
about deprecation of the legacy interface on Friday at the PTG. If 
you want to participate in the discussion but you are not planning to 
sit in the nova room the whole day then let me know and I will try to 
ping you over IRC when we are about to start the item.


Cheers,
gibi

[1] https://etherpad.openstack.org/p/nova-ptg-stein

On Thu, Aug 9, 2018 at 11:41 AM, Balázs Gibizer 
 wrote:

Dear Nova notification consumers!


The Nova team made progress with the new versioned notification 
interface [1] and it has almost reached feature parity [2] with the 
legacy, unversioned one. So the Nova team will discuss the deprecation 
of the legacy interface at the upcoming PTG. There is a list of 
projects (we know of) consuming the legacy interface and we would 
like to know if any of these projects plan to switch over to the 
new interface in the foreseeable future so we can make a well 
informed decision about the deprecation.



* Searchlight [3] - it is in maintenance mode so I guess the answer 
is no

* Designate [4]
* Telemetry [5]
* Mistral [6]
* Blazar [7]
* Watcher [8] - it seems Watcher uses both legacy and versioned nova 
notifications
* Masakari - I'm not sure whether Masakari depends on nova 
notifications or not


Cheers,
gibi

[1] 
https://docs.openstack.org/nova/latest/reference/notifications.html

[2] http://burndown.peermore.com/nova-notification/

[3] 
https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py
[4] 
https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py
[5] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2
[6] 
https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2
[7] 
https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst
[8] 
https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] Small bandwidth demo on the PTG

2018-09-12 Thread Balázs Gibizer

Hi,

It seems that the Nova room (Ballroom A) does not have any projection 
possibilities. On the other hand the Neutron room (Vail Meeting Room, 
Atrium Level) does have a projector. So I suggest moving the demo to 
the Neutron room.


Cheers,
gibi

On Fri, Aug 31, 2018 at 2:29 AM, Balázs Gibizer 
 wrote:



On Thu, Aug 30, 2018 at 8:13 PM, melanie witt  
wrote:

On Thu, 30 Aug 2018 12:43:06 -0500, Miguel Lavalle wrote:

Gibi, Bence,

In fact, I added the demo explicitly to the Neutron PTG agenda from 
1:30 to 2, to give it visibility


I'm interested in seeing the demo too. Will the demo be shown at the 
Neutron room or the Nova room? Historically, lunch has ended at 
1:30, so this will be during the same time as the Neutron/Nova 
cross project time. Should we just co-locate together for the demo 
and the session? I expect anyone watching the demo will want to 
participate in the Neutron/Nova session as well. Either room is 
fine by me.




I assume that the nova - neutron cross project session will be in the 
nova room, so I propose to have the demo there as well to avoid 
unnecessarily moving people around. For us it is totally OK to start 
the demo at 1:30.


Cheers,
gibi



-melanie

On Thu, Aug 30, 2018 at 3:55 AM, Balázs Gibizer 
<mailto:balazs.gibi...@ericsson.com>> wrote:


Hi,

Based on the Nova PTG planning etherpad [1] there is a need to 
talk

about the current state of the bandwidth work [2][3]. Bence
(rubasov) has already planned to show a small demo to Neutron 
folks
about the current state of the implementation. So Bence and I 
are
wondering about bringing that demo close to the nova - neutron 
cross

project session. That session is currently planned to happen
Thursday after lunch. So we are thinking about showing the demo 
right

before that session starts. It would start 30 minutes before the
nova - neutron cross project session.

Are Nova folks also interested in seeing such a demo?

If you are interested in seeing the demo please drop us a line 
or

ping us in IRC so we know whom we should wait for.

Cheers,
gibi

[1] https://etherpad.openstack.org/p/nova-ptg-stein
<https://etherpad.openstack.org/p/nova-ptg-stein>
[2]

https://specs.openstack.org/openstack/neutron-specs/specs/rocky/minimum-bandwidth-allocation-placement-api.html

<https://specs.openstack.org/openstack/neutron-specs/specs/rocky/minimum-bandwidth-allocation-placement-api.html>

[3]

https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/bandwidth-resource-provider.html

<https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/bandwidth-resource-provider.html>




__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

<http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] better name for placement

2018-09-04 Thread Balázs Gibizer



On Tue, Sep 4, 2018 at 7:01 PM, Jay Pipes  wrote:

On 09/04/2018 12:59 PM, Balázs Gibizer wrote:

On Tue, Sep 4, 2018 at 6:25 PM, Jay Pipes  wrote:

On 09/04/2018 12:17 PM, Doug Hellmann wrote:

Excerpts from Jay Pipes's message of 2018-09-04 12:08:41 -0400:

On 09/04/2018 11:44 AM, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2018-09-04 15:32:12 +0100:

On Tue, 4 Sep 2018, Jay Pipes wrote:

Is there a reason we couldn't have openstack-placement be the 
package name?


I would hope we'd be able to do that, and probably should do 
that.

'openstack-placement' seems a fine pypi package name for a thing
from which you do 'import placement' to do some openstack stuff,
yeah?


That's still a pretty generic name for the top-level import, but 
I think
the only real risk is that the placement service couldn't be 
installed
at the same time as another package owned by someone else that 
used that

top-level name. I'm not sure how much of a risk that really is.


You mean if there was another Python package that used the 
package name

"placement"?

The alternative would be to make the top-level package something 
like

os_placement instead?


Either one works for me. Though I'm pretty sure that it isn't 
necessary. The reason it isn't necessary is because the stuff in 
the top-level placement package isn't meant to be imported by 
anything at all. It's the placement server code.


What about placement direct and the effort to allow cinder to import 
placement instead of running it as a separate service?


I don't know what placement direct is. Placement wasn't designed to 
be imported as a module. It was designed to be a (micro-)service with 
a REST API for interfacing.


In Vancouver we talked about allowing cinder to import placement as a 
library. See https://etherpad.openstack.org/p/YVR-cinder-placement L47


Cheers,
gibi



Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] better name for placement

2018-09-04 Thread Balázs Gibizer



On Tue, Sep 4, 2018 at 6:25 PM, Jay Pipes  wrote:

On 09/04/2018 12:17 PM, Doug Hellmann wrote:

Excerpts from Jay Pipes's message of 2018-09-04 12:08:41 -0400:

On 09/04/2018 11:44 AM, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2018-09-04 15:32:12 +0100:

On Tue, 4 Sep 2018, Jay Pipes wrote:

Is there a reason we couldn't have openstack-placement be the 
package name?


I would hope we'd be able to do that, and probably should do that.
'openstack-placement' seems a fine pypi package name for a thing
from which you do 'import placement' to do some openstack stuff,
yeah?


That's still a pretty generic name for the top-level import, but I 
think
the only real risk is that the placement service couldn't be 
installed
at the same time as another package owned by someone else that 
used that

top-level name. I'm not sure how much of a risk that really is.


You mean if there was another Python package that used the package 
name

"placement"?

The alternative would be to make the top-level package something 
like

os_placement instead?


Either one works for me. Though I'm pretty sure that it isn't 
necessary. The reason it isn't necessary is because the stuff in the 
top-level placement package isn't meant to be imported by anything at 
all. It's the placement server code.


What about placement direct and the effort to allow cinder to import 
placement instead of running it as a separate service?


Cheers,
gibi



Nothing is going to be adding openstack-placement into its 
requirements.txt file or doing:


 from placement import blah

If some part of the server repo is meant to be imported into some 
other system, say nova, then it will be pulled into a separate lib, 
ala ironiclib or neutronlib.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari][vitrage]Possible deprecation of Nova's legacy notification interface

2018-09-04 Thread Balázs Gibizer



On Tue, Sep 4, 2018 at 3:04 PM, Ifat Afek  wrote:

Hi,

Vitrage also uses the Nova legacy notifications.
Unfortunately I will not attend the PTG, but I added the relevant 
information in the etherpad.


Thanks for the pad update.

Cheers,
gibi



Thanks,
Ifat

On Tue, Aug 28, 2018 at 5:31 PM Balázs Gibizer 
 wrote:

Thanks for all the responses. I collected them on the nova ptg
discussion etherpad [1] (L186 at the moment). The nova team will talk
about deprecation of the legacy interface on Friday at the PTG. If you
want to participate in the discussion but you are not planning to sit in
the nova room the whole day then let me know and I will try to ping you
over IRC when we are about to start the item.

Cheers,
gibi

[1] https://etherpad.openstack.org/p/nova-ptg-stein





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]Notification subteam meeting cancelled

2018-09-04 Thread Balázs Gibizer

Hi,

This week's and next week's notification subteam meetings have been 
cancelled. See you in Denver.


Cheers,
gibi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nominating Chris Dent for placement-core

2018-09-03 Thread Balázs Gibizer


On Fri, Aug 31, 2018 at 5:45 PM, Eric Fried  wrote:

> The openstack/placement project [1] and its core team [2] have been
> established in gerrit.
> 
> I hereby nominate Chris Dent for membership in the placement-core
> team. He has been instrumental in the design, implementation, and
> stewardship of the placement API since its inception and has shown
> clear and consistent leadership.
> 
> As we are effectively bootstrapping placement-core at this time, it
> would seem appropriate to consider +1/-1 responses from heavy
> placement contributors as well as existing cores (currently
> nova-core).
> 
> [1] https://review.openstack.org/#/admin/projects/openstack/placement
> [2] https://review.openstack.org/#/admin/groups/1936,members


+1








Re: [openstack-dev] [neutron][nova] Small bandwidth demo on the PTG

2018-08-31 Thread Balázs Gibizer



On Thu, Aug 30, 2018 at 8:13 PM, melanie witt  
wrote:

On Thu, 30 Aug 2018 12:43:06 -0500, Miguel Lavalle wrote:

Gibi, Bence,

In fact, I added the demo explicitly to the Neutron PTG agenda from
1:30 to 2, to give it visibility


I'm interested in seeing the demo too. Will the demo be shown at the 
Neutron room or the Nova room? Historically, lunch has ended at 1:30, 
so this will be during the same time as the Neutron/Nova cross 
project time. Should we just co-locate together for the demo and the 
session? I expect anyone watching the demo will want to participate 
in the Neutron/Nova session as well. Either room is fine by me.




I assume that the nova - neutron cross project session will be in the
nova room, so I propose to have the demo there as well to avoid
unnecessarily moving people around. For us it is totally OK to start
the demo at 1:30.


Cheers,
gibi



-melanie

On Thu, Aug 30, 2018 at 3:55 AM, Balázs Gibizer
<balazs.gibi...@ericsson.com> wrote:


Hi,

Based on the Nova PTG planning etherpad [1] there is a need to talk
about the current state of the bandwidth work [2][3]. Bence
(rubasov) has already planned to show a small demo to Neutron folks
about the current state of the implementation. So Bence and I are
wondering about bringing that demo close to the nova - neutron cross
project session. That session is currently planned to happen
Thursday after lunch. So we are thinking about showing the demo
right before that session starts. It would start 30 minutes before
the nova - neutron cross project session.

Are Nova folks also interested in seeing such a demo?

If you are interested in seeing the demo please drop us a line or
ping us in IRC so we know whom we should wait for.

Cheers,
gibi

[1] https://etherpad.openstack.org/p/nova-ptg-stein
[2] https://specs.openstack.org/openstack/neutron-specs/specs/rocky/minimum-bandwidth-allocation-placement-api.html
[3] https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/bandwidth-resource-provider.html





















[openstack-dev] [neutron][nova] Small bandwidth demo on the PTG

2018-08-30 Thread Balázs Gibizer

Hi,

Based on the Nova PTG planning etherpad [1] there is a need to talk 
about the current state of the bandwidth work [2][3]. Bence (rubasov) 
has already planned to show a small demo to Neutron folks about the 
current state of the implementation. So Bence and I are wondering about 
bringing that demo close to the nova - neutron cross project session. 
That session is currently planned to happen Thursday after lunch. So we 
are thinking about showing the demo right before that session starts. It 
would start 30 minutes before the nova - neutron cross project session.


Are Nova folks also interested in seeing such a demo?

If you are interested in seeing the demo please drop us a line or ping 
us in IRC so we know whom we should wait for.


Cheers,
gibi

[1] https://etherpad.openstack.org/p/nova-ptg-stein
[2] 
https://specs.openstack.org/openstack/neutron-specs/specs/rocky/minimum-bandwidth-allocation-placement-api.html
[3] 
https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/bandwidth-resource-provider.html





Re: [openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari] Possible deprecation of Nova's legacy notification interface

2018-08-28 Thread Balázs Gibizer
Thanks for all the responses. I collected them on the nova ptg 
discussion etherpad [1] (L186 at the moment). The nova team will talk 
about the deprecation of the legacy interface on Friday at the PTG. If you 
want to participate in the discussion but are not planning to sit in 
the nova room the whole day then let me know and I will try to ping you 
over IRC when we are about to start the item.


Cheers,
gibi

[1] https://etherpad.openstack.org/p/nova-ptg-stein

On Thu, Aug 9, 2018 at 11:41 AM, Balázs Gibizer 
 wrote:

Dear Nova notification consumers!


The Nova team made progress with the new versioned notification 
interface [1] and it has almost reached feature parity [2] with the 
legacy, unversioned one. So the Nova team will discuss the deprecation 
of the legacy interface at the upcoming PTG. There is a list of 
projects (that we know of) consuming the legacy interface and we would 
like to know if any of these projects plan to switch over to the new 
interface in the foreseeable future so we can make a well informed 
decision about the deprecation.



* Searchlight [3] - it is in maintenance mode so I guess the answer 
is no

* Designate [4]
* Telemetry [5]
* Mistral [6]
* Blazar [7]
* Watcher [8] - it seems Watcher uses both legacy and versioned nova 
notifications
* Masakari - I'm not sure whether Masakari depends on nova notifications 
or not


Cheers,
gibi

[1] 
https://docs.openstack.org/nova/latest/reference/notifications.html

[2] http://burndown.peermore.com/nova-notification/

[3] 
https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py
[4] 
https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py
[5] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2
[6] 
https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2
[7] 
https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst
[8] 
https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335








[openstack-dev] [nova] Notification subteam meeting cancelled

2018-08-28 Thread Balázs Gibizer

Hi,

There won't be a notification subteam meeting this week.

Cheers,
gibi






Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-08-28 Thread Balázs Gibizer



On Tue, Aug 28, 2018 at 1:20 PM, Chris Dent  
wrote:

On Mon, 27 Aug 2018, melanie witt wrote:

I think we should use the openstack review system (gerrit) for 
moving the code. We're moving a critical piece of nova to its own 
repo and I think it's worth having the review and history contained 
in the openstack review system.


This seems a reasonable enough strategy, in broad strokes. I want to
be sure that we're all actually in agreement on the details, as
we've had a few false starts and I think some of the details are
getting confused in the shuffle and the general busy-ness in progress.

Is anyone aware of anyone who hasn't commented yet that should? If
you are, please poke them so we don't surprise them.

Using smaller changes that make it easy to see import vs content 
changes might make review faster than fewer, larger changes.


I _think_ we ought to be able to use the existing commits from the
runs-throughs-to-passing-tests already done, but if we use the
strategy described below it doesn't really matter: the TDD approach
(after fixing paths and test config) is pretty fast.

The most important bit of all of this is making sure we don't break 
anything in the process for operators and users consuming nova and 
placement, and ensure the upgrade path from rocky => stein is 
tested in grenade.


This is one of the areas where pretty active support from all of
nova will be required: getting zuul, upgrade paths, and the like
clearly defined and executed.


The steps I think we should take are:

1. We copy the placement code into the openstack/placement repo and 
have it passing all of its own unit and functional tests.


To break that down to more detail, how does this look?
(note the ALL CAPS where more than acknowledgement is requested)

1.1 Run the git filter-branch on a copy of nova
1.1.1 Add missing files to the file list:
  1.1.1.1 .gitignore
  1.1.1.2 # ANYTHING ELSE?
1.2 Push -f that thing, acknowledged to be broken, to a seed repo on
github (ed's repo should be fine)
1.3 Do the repo creation bits described in
https://docs.openstack.org/infra/manual/creators.html
to seed openstack/placement
1.3.1 set zuul jobs. Either to noop-jobs, or non voting basic
func and unit # INPUT DESIRED HERE


I suggest adding a non-voting unit and functional job and iterating on 
the repo to make them green, then turning them voting.
I also think that we can add a non-voting tempest full job as well. 
Making it green depends on how hard it is to deploy placement from the 
new repo in tempest. I think as soon as the placement repo has passing 
gabbits (e.g. the functional job) and we can deploy placement in tempest 
then tempest will be green soon.



1.4 Once the repo exists with some content, incrementally bring it to
working
1.4.1 Update tox.ini to be placement oriented
1.4.2 Update setup.cfg to be placement oriented
1.4.3 Correct .stestr.conf
1.4.4 Move base of placement to "right" place
1.4.5 Move unit and functionals to right place
1.4.6 Do automated path fixings
1.4.7 Set up translation domain and i18n.py correctly
1.4.8 Trim placement/conf to just the conf settings required
  (api, base, database, keystone, paths, placement)
1.4.9 Remove database files that are not relevant (the db api is
  not used by placement)
1.4.10 Fix the Database Fixture to be just one database
1.4.11 Disable migrations that can't work (because of
   dependencies on nova code, 014 and 030 are examples)
   # INPUT DESIRED HERE AND ON SCHEMA MIGRATIONS IN GENERAL
1.4.12 Incrementally get tests working
1.4.13 Fix pep8
1.5 Make zuul pep, unit and functional voting
1.6 Create tools for db table sync/create
1.7 Concurrently go to step 2, where the harder magic happens.
1.8 Find and remove dead code (there will be some).
1.9 Tune up and confirm docs
1.10 Grep for remaining "nova" (as string and spirit) and fix


Item 1.4.12 may deserve some discussion. When I've done this the
several times before, the strategy I've used is to be test driven:
run either functional or unit tests, find and fix one of the errors
revealed, commit, move on.

This strategy has worked very well for me because of the "test
driven" part, but I'm hesitant to do it if reviewers are going to
get to a patch and say "why didn't you also change X?" The answer to
that question is "because this is incremental and test driven and
the tests didn't demand that change (yet)". Sometimes that will mean
that things of the same class of change are in different commits.

Are people okay with that and willing to commit to being okay with
that answer in reviews? To some extent we need to have some faith on
the end result: the tests work. If people are not okay with that, we
need the people who are not to determine and prove the alternate
strategy. I've had this one work and work well.


I like this test driven approach. If I start to leave comments like 
"why didn't you 

Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-08-28 Thread Balázs Gibizer



On Mon, Aug 27, 2018 at 5:31 PM, Matt Riedemann  
wrote:

On 8/24/2018 7:36 AM, Chris Dent wrote:


Over the past few days a few of us have been experimenting with
extracting placement to its own repo, as has been discussed at
length on this list, and in some etherpads:

https://etherpad.openstack.org/p/placement-extract-stein
https://etherpad.openstack.org/p/placement-extraction-file-notes

As part of that, I've been doing some exploration to tease out the
issues we're going to hit as we do it. None of this is work that
will be merged, rather it is stuff to figure out what we need to
know to do the eventual merging correctly and efficiently.

Please note that doing that is just the near edge of a large
collection of changes that will cascade in many ways to many
projects, tools, distros, etc. The people doing this are aware of
that, and the relative simplicity (and fairly immediate success) of
these experiments is not misleading people into thinking "hey, no
big deal". It's a big deal.

There's a strategy now (described at the end of the first etherpad
listed above) for trimming the nova history to create a thing which
is placement. From the first run of that Ed created a github repo
and I branched that to eventually create:

https://github.com/EdLeafe/placement/pull/2

In that, all the placement unit and functional tests are now
passing, and my placecat [1] integration suite also passes.

That work has highlighted some gaps in the process for trimming
history which will be refined to create another interim repo. We'll
repeat this until the process is smooth, eventually resulting in an
openstack/placement.


We talked about the github strategy a bit in the placement meeting 
today [1]. Without being involved in this technical extraction work 
for the past few weeks, I came in with a different perspective on the 
end-game, and it was not aligned with what Chris/Ed thought as far as 
how we get to the official openstack/placement repo.


At a high level, Ed's repo [2] is a fork of nova with large changes 
on top using pull requests to do things like remove the non-placement 
nova files, update import paths (because the import structure changes 
from nova.api.openstack.placement to just placement), and then 
changes from Chris [3] to get tests working. Then the idea was to 
just use that to seed the openstack/placement repo and rather than 
review the changes along the way*, people that care about what 
changed (like myself) would see the tests passing and be happy enough.


However, I disagree with this approach since it bypasses our 
community code review system of using Gerrit and relying on a core 
team to approve changes at the sake of expediency.


What I would like to see are the changes that go into making the seed 
repo and what gets it to passing tests done in gerrit like we do for 
everything else. There are a couple of options on how this is done 
though:


1. Seed the openstack/placement repo with the filter_git_history.sh 
script output as Ed has done here [4]. This would include moving the 
placement files to the root of the tree and dropping nova-specific 
files. Then make incremental changes in gerrit like with [5] and the 
individual changes which make up Chris's big pull request [3]. I am 
primarily interested in making sure there are not content changes 
happening, only mechanical tree-restructuring type changes, stuff 
like that. I'm asking for more changes in gerrit so they can be 
sanely reviewed (per normal).


2. Eric took a slightly different tack in that he's OK with just a 
couple of large changes (or even large patch sets within a single 
change) in gerrit rather than ~30 individual changes. So that would 
be more like at most 3 changes in gerrit for [4][5][3].


3. The 3rd option is we just don't use gerrit at all and seed the 
official repo with the results of Chris and Ed's work in Ed's repo in 
github. Clearly this would be the fastest way to get us to a new repo 
(at the expense of bucking community code review and development 
process - is an exception worth it?).




I assumed that the work on github was done to _discover_ what steps 
need to be done later to populate the new repo and make the tests 
pass. So I prefer the #1 approach.


Option 1 would clearly be a drain on at least 2 nova cores to go 
through the changes. I think Eric is on board for reviewing options 1 
or 2 in either case, but he prefers option 2. Since I'm throwing a 
wrench in the works, I also need to stand up and review the changes 
if we go with option 1 or 2. Jay said he'd review them but consider 
these reviews lower priority. I expect we could get some help from 
some other nova cores though, maybe not on all changes, but at least 
some (thinking gibi, alex_xu, sfinucan).


I will spend time reviewing the patches coming for the new placement 
repo.


Cheers,
gibi



Any CI jobs would be non-voting while going through options 1 or 2 
until we get to a point that tests should finally be 

Re: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

2018-08-22 Thread Balázs Gibizer



On Fri, Aug 17, 2018 at 5:40 PM, Eric Fried  wrote:

gibi-

 - On migration, when we transfer the allocations in either direction, a
 conflict means someone managed to resize (or otherwise change
 allocations?) since the last time we pulled data. Given the global lock
 in the report client, this should have been tough to do. If it does
 happen, I would think any retry would need to be done all the way back
 at the claim, which I imagine is higher up than we should go. So again,
 I think we should fail the migration and make the user retry.


 Do we want to fail the whole migration or just the migration step (e.g.
 confirm, revert)?
 The latter means that a failure during confirm or revert would put the
 instance back to VERIFY_RESIZE, while the former would mean that in
 case of a conflict at confirm we try an automatic revert. But for a
 conflict at revert we can only put the instance to ERROR state.


This again should be "impossible" to come across. What would the
behavior be if we hit, say, ValueError in this spot?


I might not totally follow you. I see two options to choose from for 
the revert case:


a) An allocation manipulation error during the revert of a migration 
puts the instance into ERROR state -> the end user cannot retry the 
revert; the instance needs to be deleted.


b) An allocation manipulation error during the revert of a migration 
puts the instance back to VERIFY_RESIZE state -> the end user can 
retry the revert via the API.


I see three options to choose from for the confirm case:

a) An allocation manipulation error during the confirm of a migration 
puts the instance into ERROR state -> the end user cannot retry the 
confirm; the instance needs to be deleted.


b) An allocation manipulation error during the confirm of a migration 
puts the instance back to VERIFY_RESIZE state -> the end user can 
retry the confirm via the API.


c) An allocation manipulation error during the confirm of a migration 
makes nova automatically try to revert the migration. (For a failure 
during this revert the same options are available as for the generic 
revert case, see above.)


We also need to consider live migration. It is similar in the sense that 
it also uses move_allocations. But it is different as the end user 
doesn't explicitly confirm or revert a live migration.


I'm looking for opinions about which option we should take in each 
case.


gibi



-efried






Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-22 Thread Balázs Gibizer



On Sat, Aug 18, 2018 at 2:25 PM, Chris Dent  
wrote:


So my hope is that (in no particular order) Jay Pipes, Eric Fried,
Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov,
Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to
placement whom I'm forgetting [1] would express their preference on
what they'd like to see happen.


+1 for a separate git repository
+1 for initializing placement-core with the nova-core team
+1 for talking separately about including more cores in
placement-core


I'm for taking incremental steps. So if the git repo separation can be 
done independently of the project separation then why not first do the 
step we seem to agree on.


I think allowing the placement-core team to diverge from the nova-core 
team will help in many ways with the later talk about separating the 
project:
* more core reviewers for placement -> more review bandwidth for 
placement -> less review needed from nova-cores on placement code -> more 
time for nova-cores to propose solutions for the remaining big nova 
induced placement changes (affinity, quota) and implement support in 
nova for existing placement features (consumer gen, nested RP, granular 
resource request)
* the possibility to include reviewers in the placement core team (over 
time) with a background in other placement-using modules (cinder, 
neutron, cyborg, etc.) -> fresh viewpoints about the direction of 
placement from placement API consumers
* a diverse core team will allow us to test the waters about feature 
prioritization conflicts, if any.


I'm not against taking two steps at the same time and doing the project 
separation _if_ there is some level of consensus amongst the 
interested parties. But based on this long mail thread we don't have 
that yet. So I suggest doing only the repo and core team change now and 
spending time gathering experience with the evolved placement-core team.


Cheers,
gibi







[openstack-dev] [nova] Notification subteam meeting is cancelled this week

2018-08-21 Thread Balázs Gibizer

Hi,

There won't be a subteam meeting this week.

Cheers,
gibi




Re: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

2018-08-17 Thread Balázs Gibizer



On Thu, Aug 16, 2018 at 5:34 PM, Eric Fried  wrote:

Thanks for this, gibi.


TL;DR: a).

I didn't look, but I'm pretty sure we're not caching allocations in the
report client. Today, nobody outside of nova (specifically the resource
tracker via the report client) is supposed to be mucking with instance
allocations, right? And given the global lock in the resource tracker,
it should be pretty difficult to race e.g. a resize and a delete in any
meaningful way. So short term, IMO it is reasonable to treat any
generation conflict as an error. No retries. Possible wrinkle on delete,
where it should be a failure unless forced.


Yes, today the instance_uuid and migration_uuid consumers in placement 
are only changed by nova.


Right now I don't have any examples where nova is racing with itself on 
an instance or migration consumer. We could try hitting the Nova API in 
parallel with different server lifecycle operations against the same 
server to see if we can find races. But until such a race is discovered 
we can go with option a).




Long term, I also can't come up with any scenario where it would be
appropriate to do a narrowly-focused GET+merge/replace+retry. But
implementing the above short-term plan shouldn't prevent us from adding
retries for individual scenarios later if we do uncover places where it
makes sense.



Later, when resources consumed by a server are handled outside of 
nova, like bandwidth from neutron and accelerators from cyborg, we might 
see cases where nova will not be the only module changing an 
instance_uuid consumer. Then we have to decide how to handle that. I 
think one solution could be to make sure Nova knows about the bandwidth 
and accelerator resource needs of a server even if they are provided by 
neutron or cyborg. This knowledge is anyhow necessary to support atomic 
resource claims in the scheduler. For neutron ports this will be done 
through the resource_request attribute of the port. So even if the 
resource need of a port changes nova can go back to neutron and query 
the current need. This way nova can implement the following generic 
algorithm (sketched below) for every operation where nova wants to 
change the instance_uuid consumer in placement:
* collect the server's current resource needs (might involve reading 
them from the flavor, from neutron ports, from cyborg accelerators) and 
apply the change nova wants to make (e.g. delete, move, resize).

* GET the current consumer view from placement
* merge the two and push the result back to placement
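
A rough Python sketch of that read-merge-update loop (illustrative
only; the placement client helpers and merge_allocations() are made-up
names, this is not actual nova code):

```
class ConsumerGenerationConflict(Exception):
    pass

def update_consumer_allocations(placement, consumer_uuid, needs,
                                retries=3):
    """needs: nova's own view of the server resource needs with the
    requested change (delete, move, resize) already applied."""
    for _ in range(retries):
        # GET /allocations/{consumer_uuid}; at placement 1.28 the
        # response carries the current consumer_generation
        current = placement.get_allocations(consumer_uuid)
        new = merge_allocations(current['allocations'], needs)
        try:
            # PUT /allocations/{consumer_uuid}; placement answers
            # with 409 if the generation we send is stale
            placement.put_allocations(
                consumer_uuid, new,
                consumer_generation=current['consumer_generation'])
            return
        except ConsumerGenerationConflict:
            continue  # somebody raced with us: re-read and re-merge
    # out of retries: fail the server lifecycle operation
    raise ConsumerGenerationConflict(consumer_uuid)
```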



Here's some stream-of-consciousness that led me to the above opinions:

- On spawn, we send the allocation with a consumer gen of None because
we expect the consumer not to exist. If it exists, that should be a hard
fail. (Hopefully the only way this happens is a true UUID conflict.)

- On migration, when we create the migration UUID, ditto above ^


I agree on both. I suggest returning HTTP 500 as we need a bug report 
about these cases.
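
For reference, the body of such a new-consumer PUT
/allocations/{instance_uuid} at placement 1.28 looks roughly like this
(the resource provider UUID and resource amounts below are made up):

```
body = {
    "allocations": {
        "4e8e5957-649f-477b-9e5b-f1f75b21c03c": {   # rp uuid
            "resources": {"VCPU": 1, "MEMORY_MB": 512},
        },
    },
    # None (JSON null) tells placement we expect a brand new
    # consumer; a 409 comes back if it already exists
    "consumer_generation": None,
    "project_id": project_id,   # placeholders
    "user_id": user_id,
}
```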




- On migration, when we transfer the allocations in either direction, a
conflict means someone managed to resize (or otherwise change
allocations?) since the last time we pulled data. Given the global lock
in the report client, this should have been tough to do. If it does
happen, I would think any retry would need to be done all the way back
at the claim, which I imagine is higher up than we should go. So again,
I think we should fail the migration and make the user retry.


Do we want to fail the whole migration or just the migration step (e.g. 
confirm, revert)?
The latter means that a failure during confirm or revert would put the 
instance back to VERIFY_RESIZE, while the former would mean that in 
case of a conflict at confirm we try an automatic revert. But for a 
conflict at revert we can only put the instance to ERROR state.




- On destroy, a conflict again means someone managed a resize despite
the global lock. If I'm deleting an instance and something about it
changes, I would think I want the opportunity to reevaluate my decision
to delete it. That said, I would definitely want a way to force it (in
which case we can just use the DELETE call explicitly). But neither case
should be a retry, and certainly there is no destroy scenario where I
would want a "merging" of allocations to happen.


Good idea about allowing the delete to be forced. So a simple DELETE 
/servers/{instance_uuid} could fail on a consumer conflict but a POST 
/servers/{instance_uuid}/action with a forceDelete body would use DELETE 
/allocations and therefore ignore any consumer generation.
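
In other words, roughly (an illustrative sketch with made-up placement
client helpers, not the actual nova code):

```
# normal delete: guarded by the consumer generation
current = placement.get_allocations(instance_uuid)
placement.put_allocations(      # PUT /allocations/{instance_uuid}
    instance_uuid, {},          # "allocations": {} empties the consumer
    consumer_generation=current['consumer_generation'])  # 409 -> fail

# forced delete: DELETE /allocations/{instance_uuid} sends no
# generation at all, so it cannot hit a conflict
placement.delete_allocations(instance_uuid)
```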


Cheers,
gibi



Thanks,
efried


On 08/16/2018 06:43 AM, Balázs Gibizer wrote:

 reformatted for readability, sorry:

 Hi,

 tl;dr: To properly use consumer generation (placement 1.28) in Nova 
we

 need to decide how to handle consumer generation conflict from Nova
 perspective:
 a) Nova reads the current consumer_generation before the allocation
   update operation and use that generation in the allocat

Re: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

2018-08-16 Thread Balázs Gibizer

reformatted for readability, sorry:

Hi,

tl;dr: To properly use consumer generations (placement 1.28) in Nova we
need to decide how to handle consumer generation conflicts from Nova's
perspective:
a) Nova reads the current consumer_generation before the allocation
  update operation and uses that generation in the allocation update
  operation. If the allocation is changed between the read and the
  update then nova fails the server lifecycle operation and lets the
  end user retry it.
b) Like a) but in case of a conflict nova blindly retries the
  read-and-update operation pair a couple of times and only fails
  the lifecycle operation if it runs out of retries.
c) Nova stores its own view of the allocation. When a consumer's
  allocation needs to be modified then nova reads the current state
  of the consumer from placement. Then nova combines the two
  allocations to generate the new expected consumer state. In case
  of a generation conflict nova retries the read-combine-update
  operation triplet.

Which way should we go now?

What should be our long term goal?


Details:

There are plenty of affected lifecycle operations. See the patch series
starting at [1].

For example:

The current patch [1] that handles the delete server case implements
option b). It simply reads the current consumer generation from
placement and uses that to send a PUT /allocations/{instance_uuid} with
"allocations": {} in its body.
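
Roughly, that option b) flow looks like this (an illustrative sketch
only; the helper names are made up):

```
# blindly retry the read-and-update pair a couple of times
for _ in range(RETRIES):
    # read the current consumer generation from placement...
    current = placement.get_allocations(instance_uuid)
    # ...and use it to empty the allocation; ok is False on a 409
    ok = placement.put_allocations(
        instance_uuid, {},
        consumer_generation=current['consumer_generation'])
    if ok:
        break  # no conflict, we are done
else:
    fail_lifecycle_operation()  # ran out of retries
```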

Here implementing option c) would mean that during server delete nova
needs:
1) to compile its own view of the resource need of the server
  (currently based on the flavor but in the future based on the
  attached port's resource requests as well)
2) then read the current allocation of the server from placement
3) then subtract the server resource needs from the current allocation
  and send the resulting allocation back in the update to placement

In the simple case this subtraction would result in an empty allocation
sent to placement. Also in this simple case c) has the same effect as
b) as currently implemented in [1].

However if somebody outside of nova modifies the allocation of this
consumer in a way that nova does not know about (a changed resource
need) then b) and c) will result in different placement states after
server delete.

I only know of one example, the change of a neutron port's resource
request while the port is attached. (Note, it is out of scope in the
first step of the bandwidth implementation.) In this specific example
option c) can work if nova re-reads the port's resource request during
delete when it recalculates its own view of the server resource needs.
But I don't know if every other resource (e.g. accelerators) used by a
server can be / will be handled this way.


Other examples of affected lifecycle operations:

During a server migration, moving the source host allocation from the
instance_uuid to the migration_uuid fails with a consumer generation
conflict because of the instance_uuid consumer generation. [2]

Confirming a migration fails as the deletion of the source host
allocation fails due to the consumer generation conflict of the
migration_uuid consumer that is being emptied. [3]

During scheduling of a new server, putting the allocation to the
instance_uuid fails as the scheduler assumes that it is a new consumer
and therefore uses consumer_generation: None for the allocation, but
placement reports a generation conflict. [4]

During a non-forced evacuation the scheduler tries to claim the
resources on the destination host with the instance_uuid, but that
consumer already holds the source allocation, therefore the scheduler
cannot assume that the instance_uuid is a new consumer. [4]


[1] https://review.openstack.org/#/c/591597
[2] https://review.openstack.org/#/c/591810
[3] https://review.openstack.org/#/c/591811
[4] https://review.openstack.org/#/c/583667








[openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

2018-08-16 Thread Balázs Gibizer

Hi,

tl;dr: To properly use consumer generations (placement 1.28) in Nova we
need to decide how to handle consumer generation conflicts from Nova's
perspective:
a) Nova reads the current consumer_generation before the allocation
  update operation and uses that generation in the allocation update
  operation. If the allocation is changed between the read and the
  update then nova fails the server lifecycle operation and lets the
  end user retry it.
b) Like a) but in case of a conflict nova blindly retries the
  read-and-update operation pair a couple of times and only fails the
  lifecycle operation if it runs out of retries.
c) Nova stores its own view of the allocation. When a consumer's
  allocation needs to be modified then nova reads the current state of
  the consumer from placement. Then nova combines the two allocations
  to generate the new expected consumer state. In case of a generation
  conflict nova retries the read-combine-update operation triplet.

Which way should we go now?

What should be our long term goal?


Details:

There are plenty of affected lifecycle operations. See the patch series
starting at [1].

For example:

The current patch [1] that handles the delete server case implements
option b). It simply reads the current consumer generation from
placement and uses that to send a PUT /allocations/{instance_uuid} with
"allocations": {} in its body.

Here implementing option c) would mean that during server delete nova
needs:
1) to compile its own view of the resource needs of the server
  (currently based on the flavor but in the future based on the
  attached ports' resource requests as well)
2) then read the current allocation of the server from placement
3) then subtract the server resource needs from the current allocation
  and send the resulting allocation back in the update to placement

In the simple case this subtraction would result in an empty allocation
sent to placement. Also in this simple case c) has the same effect as
b) as currently implemented in [1].

However if somebody outside of nova modifies the allocation of this
consumer in a way that nova does not know about (a changed resource
need) then b) and c) will result in different placement states after
server delete.

I only know of one example, the change of a neutron port's resource
request while the port is attached. (Note, it is out of scope in the
first step of the bandwidth implementation.) In this specific example
option c) can work if nova re-reads the port's resource request during
delete when it recalculates its own view of the server resource needs.
But I don't know if every other resource (e.g. accelerators) used by a
server can be / will be handled this way.


Other examples of affected lifecycle operations:

During a server migration, moving the source host allocation from the
instance_uuid to the migration_uuid fails with a consumer generation
conflict because of the instance_uuid consumer generation. [2]

Confirming a migration fails as the deletion of the source host
allocation fails due to the consumer generation conflict of the
migration_uuid consumer that is being emptied. [3]

During scheduling of a new server, putting the allocation to the
instance_uuid fails as the scheduler assumes that it is a new consumer
and therefore uses consumer_generation: None for the allocation, but
placement reports a generation conflict. [4]

During a non-forced evacuation the scheduler tries to claim the
resources on the destination host with the instance_uuid, but that
consumer already holds the source allocation, therefore the scheduler
cannot assume that the instance_uuid is a new consumer. [4]


Cheers,
gibi

[1] https://review.openstack.org/#/c/591597
[2] https://review.openstack.org/#/c/591810
[3] https://review.openstack.org/#/c/591811
[4] https://review.openstack.org/#/c/583667






[openstack-dev] [nova] Notification subteam meeting cancelled

2018-08-14 Thread Balázs Gibizer

Hi,

There won't be a notification subteam meeting this week.

Cheers,
gibi




[openstack-dev] [nova][searchlight][designate][telemetry][mistral][blazar][watcher][masakari] Possible deprecation of Nova's legacy notification interface

2018-08-09 Thread Balázs Gibizer

Dear Nova notification consumers!


The Nova team made progress with the new versioned notification 
interface [1] and it has almost reached feature parity [2] with the 
legacy, unversioned one. So the Nova team will discuss the deprecation 
of the legacy interface at the upcoming PTG. There is a list of projects 
(that we know of) consuming the legacy interface and we would like to 
know if any of these projects plan to switch over to the new interface 
in the foreseeable future so we can make a well informed decision about 
the deprecation.



* Searchlight [3] - it is in maintenance mode so I guess the answer is 
no

* Designate [4]
* Telemetry [5]
* Mistral [6]
* Blazar [7]
* Watcher [8] - it seems Watcher uses both legacy and versioned nova 
notifications

* Masakari - I'm not sure whether Masakari depends on nova notifications or not

Cheers,
gibi

[1] https://docs.openstack.org/nova/latest/reference/notifications.html
[2] http://burndown.peermore.com/nova-notification/

[3] 
https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py
[4] 
https://github.com/openstack/designate/blob/master/designate/notification_handler/nova.py
[5] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L2
[6] 
https://github.com/openstack/mistral/blob/master/etc/event_definitions.yml.sample#L2
[7] 
https://github.com/openstack/blazar/blob/5526ed1f9b74d23b5881a5f73b70776ba9732da4/doc/source/user/compute-host-monitor.rst
[8] 
https://github.com/openstack/watcher/blob/master/watcher/decision_engine/model/notification/nova.py#L335







[openstack-dev] [nova] Notification update week 32

2018-08-07 Thread Balázs Gibizer

Hi,

Here is the latest notification subteam update.

Bugs

No RC potential notification bug is tracked.
No new bug since last week.

Weekly meeting
--
No meeting is planned for this week.

Cheers,
gibi






[openstack-dev] [nova] Notification update week 31

2018-07-30 Thread Balázs Gibizer

Hi,

Here is the latest notification subteam update.

Bugs

No RC potential notification bug is tracked.
No new bug since last week.

Features


We hit FeatureFreeze. Every tracked bp was merged before FF except the 
versioned notification transformation. That will be re-proposed to Stein 
to finish up the remaining 7 workitems that are left on the board 
http://burndown.peermore.com/nova-notification/


Weekly meeting
--
The next meeting is planned to be held on the 31st of July on 
#openstack-meeting-4

https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180731T17


Cheers,
gibi






[openstack-dev] [nova] Notification update week 30

2018-07-23 Thread Balázs Gibizer

Hi,

Here is the latest notification subteam update.

Bugs

No new bugs tagged with notifications and no progress with the existing
ones.


Features


Versioned notification transformation
-
We have only a handful of patches left before we can finally finish 
the multi year effort of transforming every legacy notification to the 
versioned format. 3 of those patches already have a +2:

https://review.openstack.org/#/q/status:open+topic:bp/versioned-notification-transformation-rocky


Weekly meeting
--
No meeting this week. Please ping me on IRC if you have something
important to talk about.

Cheers,
gibi




Re: [openstack-dev] [nova] Notification update week 28

2018-07-10 Thread Balázs Gibizer



On Mon, Jul 9, 2018 at 12:38 PM, Balázs Gibizer 
 wrote:

Hi,

Here is the latest notification subteam update.

[...]


Weekly meeting
--
The next meeting is planned to be held on the 10th of July on
#openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180710T17


I cannot make it to the meeting today. Sorry for the short notice but 
the meeting is cancelled.

Cheers,
gibi



Cheers,
gibi



[openstack-dev] [nova] Notification update week 26

2018-06-26 Thread Balázs Gibizer

Hi,

Here is the latest notification subteam update.

Bugs


[Undecided] "IndexError: list index out of range" in 
ExceptionPayload.from_exception during resize failure 
https://bugs.launchpad.net/nova/+bug/1777540
I failed to reproduce it and based on the newly provided logs in the 
parent bug https://bugs.launchpad.net/nova/+bug/1777157 this happens in 
an environment that runs heavily forked nova code. So I marked the bug 
invalid.

[Medium] Server operations fail to complete with versioned 
notifications if payload contains unset non-nullable fields 
https://bugs.launchpad.net/nova/+bug/1739325
This bug is still open and reportedly visible in multiple independent 
environments but I failed to find the root cause. So I'm wondering if we 
can implement a nova-manage heal-instance-flavor command for these 
environments.



Features


Sending full traceback in versioned notifications
~
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications 
has been implemented. \o/



Introduce Pending VM state
~~
The spec https://review.openstack.org/#/c/554212 still does not exactly 
define what will be in the select_destination notification payload and 
it seems it is deferred to Stein.



Add the user id and project id of the user who initiated the instance 
action to the notification


https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
Work progressing in  https://review.openstack.org/#/c/536243


Introduce instance.lock and instance.unlock notifications
-
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances 
has been implemented \o/




Weekly meeting
--
No meeting this week. The next meeting is planned to be held on the 3rd 
of July on #openstack-meeting-4 
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180703T17


Cheers,
gibi




Re: [openstack-dev] [nova] Notification update week 25

2018-06-20 Thread Balázs Gibizer


On Tue, Jun 19, 2018 at 7:07 PM, Matt Riedemann  
wrote:

On 6/18/2018 10:10 AM, Balázs Gibizer wrote:

* Introduce instance.lock and instance.unlock notifications
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances


This hasn't been updated in quite awhile. I wonder if someone else 
wants to pick that up now?


I'm OK if somebody picks it up. I will try to give review support.
Cheers,
gibi



--

Thanks,

Matt






[openstack-dev] [nova] Notification update week 25

2018-06-18 Thread Balázs Gibizer

Hi,

Here is the latest notification subteam update.

Bugs


No update on bugs and we have no new bugs tagged with notifications.


Features


Sending full traceback in versioned notifications
~
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
We are really close to merging https://review.openstack.org/#/c/564092/ 
but some nits still need to be addressed.



Add notification support for trusted_certs
~~
The notification impact of the trusted_certs bp has been merged. \o/


Introduce Pending VM state
~~
The spec https://review.openstack.org/#/c/554212 still does not exactly 
define what will be in the select_destination notification payload.



Add the user id and project id of the user who initiated the instance 
action to the notification


https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
Good progress on the implementation 
https://review.openstack.org/#/c/536243



No progress:

* Versioned notification transformation
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open
* Introduce instance.lock and instance.unlock notifications
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances


Blocked:

* Add versioned notifications for removing a member from a server group
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications


Weekly meeting
--
The next meeting will be held on 19th of June on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180619T17

Cheers,
gibi






Re: [openstack-dev] [nova] review runways check-in and feedback

2018-06-15 Thread Balázs Gibizer



On Wed, Jun 13, 2018 at 10:33 PM, melanie witt  
wrote:

Howdy everyone,

We've been experimenting with a new process this cycle, Review 
Runways [1] and we're about at the middle of the cycle now as we had 
the r-2 milestone last week June 7.


I wanted to start a thread and gather thoughts and feedback from the 
nova community about how they think runways have been working or not 
working and lend any suggestions to change or improve as we continue 
on in the rocky cycle.


We decided to try the runways process to increase the chances of core 
reviewers converging on the same changes and thus increasing reviews 
and merges on approved blueprint work. As of today, we have 69 
blueprints approved and 28 blueprints completed, we just passed r-2 
June 7 and r-3 is July 26 and rc1 is August 9 [2].


Do people feel like they've been receiving more review on their 
blueprints? Does it seem like we're completing more blueprints 
earlier? Is there feedback or suggestions for change that you can 
share?


Looking at the Queens burndown chart from Matt [3] we had 11 completed 
bps at Queens milestone 2. So having 28 completed bps at R-2 means a 
really nice improvement in our bp completion rate. I think the runways 
process contributed to this improvement.


Did runways solve the problem that not every equally ready patch gets 
equal attention from reviewers? Clearly not. But I don't think this 
would be a realistic goal for runways.


I suggest that in the future we continue the runway process but we 
also revive the priority setting process. Before runways we had 3-4 
bps agreed on as priority work for a given cycle. I think we had these 
3-4 bps in our heads for Rocky as well, we just did not write them down. 
I feel this causes misunderstandings about priorities, like:
a) does reviewer X have the same 3-4 bps in her/his head as priorities 
as I do?
b) does something that I think is part of the 3-4 priority bps have 
more importance than what is in a runway slot?


Of course when I select what to review, priority is only a single factor 
and there are others, like:
* Do I have knowledge about the feature? (Did I review the related 
spec? Do I have knowledge of the domain or the impacted code path?)
* Does it seem easy to review? (e.g. low complexity feature, small 
patches, well written commit message)
* Is it something that feels important to me, regardless of the priority 
set by the community? (e.g. Do I get frequent company internal 
questions about the feature? Do I have another feature that depends on 
this feature as prerequisite work?)
So during the cycle it happened that I selected patches to review even 
if they weren't in a runway slot and ignored some patches in the 
runway slots.


Cheers,
gibi

[3] 
https://docs.google.com/spreadsheets/d/e/2PACX-1vRh5glbJ44-Ru2iARidNRa7uFfn2yjiRPjHIEQOc3Fjp5YDAlcMmXkYAEFW0WNhALl010T4rzyChuO9/pubhtml?gid=128173249=true








Thanks all,
-melanie

[1] https://etherpad.openstack.org/p/nova-runways-rocky
[2] https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule






Re: [openstack-dev] [requirements][nova] weird error on 'Validating lower constraints of test-requirements.txt'

2018-06-15 Thread Balázs Gibizer


On Fri, Jun 15, 2018 at 11:36 AM, Chen CH Ji  
wrote:
on patch [1], PS50 and PS51 are just a minor rebase but PS51 started to 
fail on requirements-check with the following error in [2]


Validating lower constraints of test-requirements.txt
*** Incompatible requirement found!
*** See http://docs.openstack.org/developer/requirements

but it doesn't provide enough info to know what's wrong, and because 
I didn't make much of a change, I'm curious why 
the job failed... can anyone provide any hint on what happened there? 
thanks


[1]https://review.openstack.org/#/c/523387
[2]http://logs.openstack.org/87/523387/51/check/requirements-check/3598ba0/job-output.txt.gz



Looking at your change and the state of the global requirements repo I 
see the following contradiction: 
https://github.com/openstack/requirements/blob/a07ef1c282a37a4bcc93166ddf4cdc97f7626d5d/lower-constraints.txt#L151 
says zVMCloudConnector===0.3.2, 
while 
https://review.openstack.org/#/c/523387/51/lower-constraints.txt@173 
says zVMCloudConnector==1.1.1.


Based on the history of the lower-constraints.txt in the global repo 
you have to manually bump the lower constraint there as well 
https://github.com/openstack/requirements/commits/master/lower-constraints.txt
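
So roughly, in addition to your nova patch, the global repo needs a
change like this (a sketch of the needed edit, based on the two
constraint lines quoted above):

```
--- a/lower-constraints.txt    (in openstack/requirements)
+++ b/lower-constraints.txt
-zVMCloudConnector===0.3.2
+zVMCloudConnector===1.1.1
```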


Cheers,
gibi


Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian 
District, Beijing 100193, PRC





[openstack-dev] [nova] Notification update week 24

2018-06-11 Thread Balázs Gibizer

Hi,

Here is the latest notification subteam update.

Bugs


[Medium] https://bugs.launchpad.net/nova/+bug/1739325 Server operations 
fail to complete with versioned notifications if payload contains unset 
non-nullable fields
This is also visible in tssurya's environment. I'm wondering if we can 
implement a nova-manage heal-instance-flavor command for these 
environments as I'm not sure I will be able to find the root cause of 
why the disabled field is missing from these flavors.


No update on other bugs and we have no new bugs tagged with 
notifications.



Features


Sending full traceback in versioned notifications
~
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
We are iterating with Kevin on the implementation and sample test in 
https://review.openstack.org/#/c/564092/.


Add notification support for trusted_certs
~~
This is part of the bp nova-validate-certificates implementation series
to extend some of the instance notifications.
I'm +2 on the notification impact in
https://review.openstack.org/#/c/563269 waiting for the rest of the
series to merge.

Introduce Pending VM state
~~
The spec https://review.openstack.org/#/c/554212 still does not exactly 
define what will be in the select_destination notification payload.


Add the user id and project id of the user who initiated the instance 
action to the notification


https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
We are iterating on the implementation in 
https://review.openstack.org/#/c/536243



No progress:

* Versioned notification transformation
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open
* Introduce instance.lock and instance.unlock notifications
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances


Blocked:

* Add versioned notifications for removing a member from a server group
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications


Weekly meeting
--
We skip the meeting this week (week 24). The next meeting will be held 
on 19th of June on #openstack-meeting-4

https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180619T17

Cheers,
gibi




Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-05-31 Thread Balázs Gibizer



On Thu, May 31, 2018 at 11:10 AM, Sylvain Bauza  
wrote:




After considering the whole approach, discussing with a couple of 
folks over IRC, here is what I feel is the best approach for a seamless 
upgrade:
 - VGPU inventory will be kept on root RP (for the first type) in 
Queens so that a compute service upgrade won't impact the DB
 - during Queens, operators can run a DB online migration script 
(like the ones we currently have in 
https://github.com/openstack/nova/blob/c2f42b0/nova/cmd/manage.py#L375) 
that will create a new resource provider for the first type and move 
the inventory and allocations to it.
 - it's the responsibility of the virt driver code to check whether a 
child RP with its name being the first type name already exists to 
know whether to update the inventory against the root RP or the child 
RP.


Does it work for folks?


+1 works for me
gibi

PS : we already have the plumbing in place in nova-manage and we're 
still managing full Nova resources. I know we plan to move Placement 
out of the nova tree, but for the Rocky timeframe, I feel we can 
consider nova-manage as the best and quickest approach for the data 
upgrade.


-Sylvain





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-05-30 Thread Balázs Gibizer



On Tue, May 29, 2018 at 3:12 PM, Sylvain Bauza  
wrote:



On Tue, May 29, 2018 at 2:21 PM, Balázs Gibizer 
 wrote:



On Tue, May 29, 2018 at 1:47 PM, Sylvain Bauza  
wrote:



Le mar. 29 mai 2018 à 11:02, Balázs Gibizer 
 a écrit :



On Tue, May 29, 2018 at 9:38 AM, Sylvain Bauza 
wrote:
>
>
> On Tue, May 29, 2018 at 3:08 AM, TETSURO NAKAMURA
>  wrote
>
>> > In that situation, say for example with VGPU inventories, that
>> would mean
>> > that the compute node would stop reporting inventories for its
>> root RP, but
>> > would rather report inventories for at least one single child 
RP.
>> > In that model, do we reconcile the allocations that were 
already

>> made
>> > against the "root RP" inventory ?
>>
>> It would be nice to see Eric and Jay comment on this,
>> but if I'm not mistaken, when the virt driver stops reporting
>> inventories for its root RP, placement would try to delete that
>> inventory inside and raise InventoryInUse exception if any
>> allocations still exist on that resource.
>>
>> ```
>> update_from_provider_tree() (nova/compute/resource_tracker.py)
>>   + _set_inventory_for_provider() 
(nova/scheduler/client/report.py)
>>   + put() - PUT /resource_providers/{uuid}/inventories with

>> new inventories (scheduler/client/report.py)
>>   + set_inventories() (placement/handler/inventory.py)
>>   + _set_inventory()
>> (placement/objects/resource_provider.py)
>>   + _delete_inventory_from_provider()
>> (placement/objects/resource_provider.py)
>>   -> raise exception.InventoryInUse
>> ```
>>
>> So we need some trick something like deleting VGPU allocations
>> before upgrading and set the allocation again for the created 
new

>> child after upgrading?
>>
>
> I wonder if we should keep the existing inventory in the root 
RP, and
> somehow just reserve the left resources (so Placement wouldn't 
pass

> that root RP for queries, but would still have allocations). But
> then, where and how to do this ? By the resource tracker ?
>

AFAIK it is the virt driver that decides to model the VGPU resource 
at a
different place in the RP tree so I think it is the responsibility 
of
the same virt driver to move any existing allocation from the old 
place

to the new place during this change.

Cheers,
gibi


Why not leave the allocation in place and instead have the virt 
driver update the root RP by modifying the reserved value to the 
total size?


That way, the virt driver wouldn't need to ask for an allocation 
but rather continue to provide inventories...


Thoughts?


Keeping the old allocation at the old RP and adding a similar sized 
reservation in the new RP feels hackish, as those are not really 
reserved GPUs but used GPUs, just from the old RP. If somebody sums 
up the total reported GPUs in this setup via the placement API then 
she will get more GPUs in total than what is physically visible to 
the hypervisor, as the GPUs that are part of the old allocation are 
reported twice in two different total values. Could we just report 
fewer GPU inventories to the new RP until the old RP has GPU 
allocations?





We could keep the old inventory in the root RP for the previous vGPU 
type already supported in Queens and just add other inventories for 
other vGPU types now supported. That looks possibly the simplest 
option, as the virt driver knows that.


That works for me. Can we somehow deprecate the previous, already 
supported vGPU types to eventually get rid of the split inventory?






Some alternatives from my jetlagged brain:

a) Implement a move inventory/allocation API in placement. Given a 
resource class and a source RP uuid and a destination RP uuid 
placement moves the inventory and allocations of that resource class 
from the source RP to the destination RP. Then the virt driver can 
call this API to move the allocation. This has an impact on the fast 
forward upgrade as it needs running virt driver to do the allocation 
move.




Instead of having the virt driver doing that (TBH, I don't like that 
given both Xen and libvirt drivers have the same problem), we could 
write a nova-manage upgrade call for that that would call the 
Placement API, sure.


The nova-manage way is another possibility, similar to my idea c), but 
there I imagined the logic in placement-manage instead of nova-manage.




b) For this I assume that live migrating an instance having a GPU 
allocation on the old RP will allocate a GPU for that instance from 
the new RP. In the virt driver, do not report GPUs to the new RP 
while there is an allocation for such GPUs in the old RP. Let the 
deployer live migrate away the instances. When the virt driver 
detects that there are no more GPU allocations on the old RP, it can 
delete the inventory from the old RP and report it to the new RP.

Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-05-29 Thread Balázs Gibizer



On Tue, May 29, 2018 at 1:47 PM, Sylvain Bauza  
wrote:



Le mar. 29 mai 2018 à 11:02, Balázs Gibizer 
 a écrit :



On Tue, May 29, 2018 at 9:38 AM, Sylvain Bauza 
wrote:
>
>
> On Tue, May 29, 2018 at 3:08 AM, TETSURO NAKAMURA
>  wrote
>
>> > In that situation, say for example with VGPU inventories, that
>> would mean
>> > that the compute node would stop reporting inventories for its
>> root RP, but
>> > would rather report inventories for at least one single child 
RP.

>> > In that model, do we reconcile the allocations that were already
>> made
>> > against the "root RP" inventory ?
>>
>> It would be nice to see Eric and Jay comment on this,
>> but if I'm not mistaken, when the virt driver stops reporting
>> inventories for its root RP, placement would try to delete that
>> inventory inside and raise InventoryInUse exception if any
>> allocations still exist on that resource.
>>
>> ```
>> update_from_provider_tree() (nova/compute/resource_tracker.py)
>>   + _set_inventory_for_provider() 
(nova/scheduler/client/report.py)

>>   + put() - PUT /resource_providers/{uuid}/inventories with
>> new inventories (scheduler/client/report.py)
>>   + set_inventories() (placement/handler/inventory.py)
>>   + _set_inventory()
>> (placement/objects/resource_provider.py)
>>   + _delete_inventory_from_provider()
>> (placement/objects/resource_provider.py)
>>   -> raise exception.InventoryInUse
>> ```
>>
>> So we need some trick something like deleting VGPU allocations
>> before upgrading and set the allocation again for the created new
>> child after upgrading?
>>
>
> I wonder if we should keep the existing inventory in the root RP, 
and

> somehow just reserve the left resources (so Placement wouldn't pass
> that root RP for queries, but would still have allocations). But
> then, where and how to do this ? By the resource tracker ?
>

AFAIK it is the virt driver that decides to model the VGPU resource 
at a

different place in the RP tree so I think it is the responsibility of
the same virt driver to move any existing allocation from the old 
place

to the new place during this change.

Cheers,
gibi


Why not leave the allocation in place and instead have the virt 
driver update the root RP by modifying the reserved value to the 
total size?


That way, the virt driver wouldn't need to ask for an allocation but 
rather continue to provide inventories...


Thoughts?


Keeping the old allocation at the old RP and adding a similar sized 
reservation in the new RP feels hackish, as those are not really reserved 
GPUs but used GPUs, just from the old RP. If somebody sums up the total 
reported GPUs in this setup via the placement API then she will get 
more GPUs in total than what is physically visible to the hypervisor, 
as the GPUs that are part of the old allocation are reported twice in two 
different total values. Could we just report fewer GPU inventories to the 
new RP until the old RP has GPU allocations?


Some alternatives from my jetlagged brain:

a) Implement a move inventory/allocation API in placement. Given a 
resource class and a source RP uuid and a destination RP uuid placement 
moves the inventory and allocations of that resource class from the 
source RP to the destination RP. Then the virt driver can call this API 
to move the allocation. This has an impact on the fast forward upgrade 
as it needs running virt driver to do the allocation move.


b) For this I assume that live migrating an instance having a GPU 
allocation on the old RP will allocate a GPU for that instance from the 
new RP. In the virt driver, do not report GPUs to the new RP while there 
is an allocation for such GPUs in the old RP. Let the deployer live 
migrate away the instances. When the virt driver detects that there are 
no more GPU allocations on the old RP, it can delete the inventory from 
the old RP and report it to the new RP.


c) For this I assume that there is no support for live migration of an 
instance having a GPU. If there is a GPU allocation in the old RP then 
the virt driver does not report GPU inventory to the new RP, just creates 
the new nested RPs. Provide a placement-manage command to do the 
inventory + allocation copy from the old RP to the new RP.
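For either a) or c), the copy could look roughly like this sketch
against the placement REST API (illustration only: the endpoint URL and
token are made up, error handling and the inventory copy are omitted,
and the PUT payload format assumes microversion 1.12):

```python
import requests

PLACEMENT = 'http://placement-api/'        # made-up endpoint
HEADERS = {'X-Auth-Token': 'admin-token',  # made-up token
           'OpenStack-API-Version': 'placement 1.12'}

def move_vgpu_allocations(source_rp, dest_rp, project_id, user_id):
    # Find the consumers that currently allocate from the source RP
    url = '%sresource_providers/%s/allocations' % (PLACEMENT, source_rp)
    consumers = requests.get(url, headers=HEADERS).json()['allocations']
    for consumer in consumers:
        # Fetch the consumer's full allocation set; it may span several RPs
        resp = requests.get('%sallocations/%s' % (PLACEMENT, consumer),
                            headers=HEADERS).json()['allocations']
        # Keep only the 'resources' key so the PUT payload stays schema-valid
        allocs = {rp: {'resources': dict(a['resources'])}
                  for rp, a in resp.items()}
        vgpu = allocs[source_rp]['resources'].pop('VGPU', None)
        if vgpu is None:
            continue
        if not allocs[source_rp]['resources']:
            del allocs[source_rp]  # nothing left on the old RP
        allocs[dest_rp] = {'resources': {'VGPU': vgpu}}
        requests.put('%sallocations/%s' % (PLACEMENT, consumer),
                     json={'allocations': allocs,
                           'project_id': project_id,
                           'user_id': user_id},
                     headers=HEADERS)
```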


Cheers,
gibi





> -Sylvain
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-05-29 Thread Balázs Gibizer



On Tue, May 29, 2018 at 11:52 AM, Sylvain Bauza 
 wrote:



2018-05-29 11:01 GMT+02:00 Balázs Gibizer 
:



On Tue, May 29, 2018 at 9:38 AM, Sylvain Bauza  
wrote:



On Tue, May 29, 2018 at 3:08 AM, TETSURO NAKAMURA 
 wrote


> In that situation, say for example with VGPU inventories, that 
would mean
> that the compute node would stop reporting inventories for its 
root RP, but

> would rather report inventories for at least one single child RP.
> In that model, do we reconcile the allocations that were already 
made

> against the "root RP" inventory ?

It would be nice to see Eric and Jay comment on this,
but if I'm not mistaken, when the virt driver stops reporting 
inventories for its root RP, placement would try to delete that 
inventory inside and raise InventoryInUse exception if any 
allocations still exist on that resource.


```
update_from_provider_tree() (nova/compute/resource_tracker.py)
  + _set_inventory_for_provider() (nova/scheduler/client/report.py)
  + put() - PUT /resource_providers/{uuid}/inventories with 
new inventories (scheduler/client/report.py)

  + set_inventories() (placement/handler/inventory.py)
  + _set_inventory() 
(placement/objects/resource_provider.py)
  + _delete_inventory_from_provider() 
(placement/objects/resource_provider.py)

  -> raise exception.InventoryInUse
```

So we need some trick something like deleting VGPU allocations 
before upgrading and set the allocation again for the created new 
child after upgrading?




I wonder if we should keep the existing inventory in the root RP, 
and somehow just reserve the left resources (so Placement wouldn't 
pass that root RP for queries, but would still have allocations). 
But then, where and how to do this ? By the resource tracker ?




AFAIK it is the virt driver that decides to model the VGPU resource 
at a different place in the RP tree so I think it is the 
responsibility of the same virt driver to move any existing 
allocation from the old place to the new place during this change.




No. Allocations are done by the scheduler or by the conductor. Virt 
drivers only provide inventories.


I understand that the allocation is made by the scheduler and the 
conductor, but today the scheduler and the conductor do not have to know 
the structure of the RP tree to make such allocations. Therefore for me 
the scheduler and the conductor are a bad place to try to move 
allocations around due to a change in the modelling of the resources in 
the RP tree. On the other hand, the virt driver knows the structure of 
the RP tree so it has the necessary information to move the existing 
allocation from the old place to the new place.


gibi





Cheers,
gibi



-Sylvain




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Notification update week 22

2018-05-29 Thread Balázs Gibizer

Hi,

Here is the latest notification subteam update.

Bugs

No new bugs, no progress on open bugs.


Features


Sending full traceback in versioned notifications
~
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
I left some comments on the implementation patch 
https://review.openstack.org/#/c/564092/


Add notification support for trusted_certs
~~
This is part of the bp nova-validate-certificates implementation series
to extend some of the instance notifications.
I'm +2 on the notification impact in 
https://review.openstack.org/#/c/563269 and am waiting for the rest of the
series to merge.


Introduce Pending VM state
~~
The spec https://review.openstack.org/#/c/554212 proposes a
notification change to signal when a VM goes to the PENDING state. We 
discussed the notification impact at the summit and agreed to transform 
the legacy scheduler.select_destinations notification and extend it if 
necessary. Detailed discussion is still ongoing in the spec review.



No progress:

* Versioned notification transformation 
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open
* Introduce instance.lock and instance.unlock notifications 
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
* Add the user id and project id of the user who initiated the instance 
action to the notification 
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications



Blocked:

* Add versioned notifications for removing a member from a server group 
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications



Weekly meeting
--
The next meeting will be held on 29th of May (Today!) on 
#openstack-meeting-4

https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180529T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-05-29 Thread Balázs Gibizer



On Tue, May 29, 2018 at 9:38 AM, Sylvain Bauza  
wrote:



On Tue, May 29, 2018 at 3:08 AM, TETSURO NAKAMURA 
 wrote


> In that situation, say for example with VGPU inventories, that 
would mean
> that the compute node would stop reporting inventories for its 
root RP, but

> would rather report inventories for at least one single child RP.
> In that model, do we reconcile the allocations that were already 
made

> against the "root RP" inventory ?

It would be nice to see Eric and Jay comment on this,
but if I'm not mistaken, when the virt driver stops reporting 
inventories for its root RP, placement would try to delete that 
inventory inside and raise InventoryInUse exception if any 
allocations still exist on that resource.


```
update_from_provider_tree() (nova/compute/resource_tracker.py)
  + _set_inventory_for_provider() (nova/scheduler/client/report.py)
  + put() - PUT /resource_providers/{uuid}/inventories with 
new inventories (scheduler/client/report.py)

  + set_inventories() (placement/handler/inventory.py)
  + _set_inventory() 
(placement/objects/resource_provider.py)
  + _delete_inventory_from_provider() 
(placement/objects/resource_provider.py)

  -> raise exception.InventoryInUse
```

So we need some trick something like deleting VGPU allocations 
before upgrading and set the allocation again for the created new 
child after upgrading?




I wonder if we should keep the existing inventory in the root RP, and 
somehow just reserve the left resources (so Placement wouldn't pass 
that root RP for queries, but would still have allocations). But 
then, where and how to do this ? By the resource tracker ?




AFAIK it is the virt driver that decides to model the VGPU resource at a 
different place in the RP tree so I think it is the responsibility of 
the same virt driver to move any existing allocation from the old place 
to the new place during this change.


Cheers,
gibi


-Sylvain




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Notification update week 20

2018-05-14 Thread Balázs Gibizer

Hi,

Here is the latest notification subteam update.

Bugs


[Low] https://bugs.launchpad.net/nova/+bug/1757407 Notification sending
sometimes hits the keystone API to get glance endpoints
Fix needs some additional work: https://review.openstack.org/#/c/564528/

[Medium] https://bugs.launchpad.net/nova/+bug/1763051 Need to audit
when notifications are sent during live migration
We need to go through the live migration codepath and make sure that
the different live migration notifications are sent at the proper time.

[Low] https://bugs.launchpad.net/nova/+bug/1764392 Avoid bandwidth
usage db query in notifications when the virt driver does not support
collecting such data

[Medium] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
No progress. We still need to understand how this problem happens to
find the proper solution.

[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/


Versioned notification transformation
-
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open
* https://review.openstack.org/#/c/403660 Transform instance.exists 
notification - lost the +2 due to a merge conflict



Introduce instance.lock and instance.unlock notifications
-
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Implementation proposed but needs some work:
https://review.openstack.org/#/c/526251/ - No progress. I've pinged the 
author but no response.



Add the user id and project id of the user who initiated the instance
action to the notification
-
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
Implementation patch exists but still needs work
https://review.openstack.org/#/c/536243/ - No progress. I've pinged the 
author but no response.

Sending full traceback in versioned notifications
-
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
The bp was reassigned to Kevin_Zheng and he proposed a WIP patch 
https://review.openstack.org/#/c/564092/



Add versioned notifications for removing a member from a server group
-
The specless bp 
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications
Based on the PoC patch https://review.openstack.org/#/c/559076/ we see 
basic problems with the overall bp. See Matt's mail on the ML 
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129804.html



Add notification support for trusted_certs
--
This is part of the bp nova-validate-certificates implementation series 
to extend some of the instance notifications. The implementation looks 
good to me in: https://review.openstack.org/#/c/563269



Introduce Pending VM state
--
The spec https://review.openstack.org/#/c/554212 proposes some 
notification change to signal when a VM goes to PENDING state. However 
this information is already available from the versioned 
instance.update notification. The discussion in the spec is ongoing.



Weekly meeting
--
I have to cancel this week's meeting and next week most of us will be 
in Vancouver. So the next meeting will be held on 29th of May on 
#openstack-meeting-4

https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180529T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db

2018-05-14 Thread Balázs Gibizer



On Mon, May 14, 2018 at 11:49 AM, Balázs Gibizer 
<balazs.gibi...@ericsson.com> wrote:



On Thu, May 10, 2018 at 8:48 PM, Dan Smith <d...@danplanet.com> wrote:


Personally, I'd just make the offending tests shut up about the 
warning
and move on, but I'm also okay with the above solution if people 
prefer.


I think that was Takashi's first suggestion as well. As in this 
particular case the value stored in the field is still a UUID, just 
not in the canonical format, I think it is reasonable to silence the 
warning for these 3 tests.




I proposed a patch to suppress those warnings: 
https://review.openstack.org/#/c/568263


Cheers,
gibi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db

2018-05-14 Thread Balázs Gibizer



On Thu, May 10, 2018 at 8:48 PM, Dan Smith  wrote:
 The oslo UUIDField emits a warning if the string used as a field 
value

 does not pass the validation of the uuid.UUID(str(value)) call
 [3]. All the offending places are fixed in nova except the 
nova-manage

 cell_v2 map_instances call [1][2]. That call uses markers in the DB
 that are not valid UUIDs.


No, that call uses markers in the DB that don't fit the canonical 
string
representation of a UUID that the oslo library is looking for. There 
are

many ways to serialize a UUID:

https://en.wikipedia.org/wiki/Universally_unique_identifier#Format

The 8-4-4-4-12 format is one of them (and the most popular). Changing
the dashes to spaces does not make it not a UUID, it makes it not the
same _string_ and it's done (for better or worse) in the 
aforementioned

code to skirt the database's UUID-ignorant _string_ uniqueness
constraint.


You are right, this is oslo specific. I think this weakens the severity 
of the warning in this particular case.
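To illustrate with a made-up value why the space-swapped form still
trips the check (plain standard library behaviour):

```python
import uuid

# the canonical 8-4-4-4-12 form parses fine (hypothetical value)
uuid.UUID('c4ee355b-6886-4984-8bbb-72e34a0f8f8c')

# map_instances swaps dashes for spaces; uuid.UUID() then raises
# ValueError, and that failure is what makes oslo's UUIDField warn
uuid.UUID('c4ee355b 6886 4984 8bbb 72e34a0f8f8c')
```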





 If we could fix this last offender then we could merge the patch [4]
 that changes this warning to an exception in the nova tests to
 avoid such future rule violations.

 However I'm not sure it is easy to fix. Replacing
 'INSTANCE_MIGRATION_MARKER' at [1] with
 '00000000-0000-0000-0000-000000000000' might work


The project_id field on the object is not a UUIDField, nor is it 36
characters in the database schema. It can't be because project ids are
not guaranteed to be UUIDs.


Correct. My bad. Then this does not cause any UUID warning.




 but I don't know what to do with instance_uuid.replace(' ', '-') [2]
 to make it a valid uuid. Also I think that if there is an unfinished
 mapping in the deployment and the marker is then changed in the code,
 that leads to inconsistencies.


IMHO, it would be bad to do anything that breaks people in the middle 
of

a mapping procedure. While I understand the desire to have fewer
spurious warnings in the test runs, I feel like doing anything to 
impact

the UX or performance of runtime code to make the unit test output
cleaner is a bad idea.


Thanks for confirming my original bad feelings about these kinds of 
solutions.





 I'm open to any suggestions.


We already store values in this field that are not 8-4-4-4-12, and the
oslo field warning is just a warning. If people feel like we need to 
do

something, I propose we just do this:

https://review.openstack.org/#/c/567669/

It is one of those "we normally wouldn't do this with object schemas,
but we know this is okay" sort of situations.


Personally, I'd just make the offending tests shut up about the 
warning
and move on, but I'm also okay with the above solution if people 
prefer.


I think that was Takashi's first suggestion as well. As in this 
particular case the value stored in the field is still a UUID, just not 
in the canonical format, I think it is reasonable to silence the warning 
for these 3 tests.


Thanks,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db

2018-05-08 Thread Balázs Gibizer

Hi,

The oslo UUIDField emits a warning if the string used as a field value 
does not pass the validation of the uuid.UUID(str(value)) call [3]. All 
the offending places are fixed in nova except the nova-manage cell_v2 
map_instances call [1][2]. That call uses markers in the DB that are 
not valid UUIDs. If we could fix this last offender then we could merge 
the patch [4] that changes this warning to an exception in the nova 
tests to avoid such future rule violations.


However I'm not sure it is easy to fix. Replacing 
'INSTANCE_MIGRATION_MARKER' at [1] with 
'00000000-0000-0000-0000-000000000000' might work but I don't know what to 
do with instance_uuid.replace(' ', '-') [2] to make it a valid uuid. 
Also I think that if there is an unfinished mapping in the deployment 
and the marker is then changed in the code, that leads to 
inconsistencies.


I'm open to any suggestions.

Cheers,
gibi


[1] 
https://github.com/openstack/nova/blob/09af976016a83288df22ac6ed1cce1676c2294cc/nova/cmd/manage.py#L1168
[2] 
https://github.com/openstack/nova/blob/09af976016a83288df22ac6ed1cce1676c2294cc/nova/cmd/manage.py#L1180
[3] 
https://github.com/openstack/oslo.versionedobjects/blob/29e643e4a9866b33965b68fc8dfb8acf30fa/oslo_versionedobjects/fields.py#L359

[4] https://review.openstack.org/#/c/540386


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Notification update week 19

2018-05-08 Thread Balázs Gibizer

Hi,

After a bit of silence here is the latest notification status.

Bugs


[Low] https://bugs.launchpad.net/nova/+bug/1757407 Notification sending
sometimes hits the keystone API to get glance endpoints
Fix has been proposed and has many +1s 
https://review.openstack.org/#/c/564528/


[Medium] https://bugs.launchpad.net/nova/+bug/1763051 Need to audit
when notifications are sent during live migration
We need to go through the live migration codepath and make sure that
the different live migration notifications are sent at the proper time.

[Low] https://bugs.launchpad.net/nova/+bug/1764392 Avoid bandwidth
usage db query in notifications when the virt driver does not support
collecting such data

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
No progress. We still need to understand how this problem happens to
find the proper solution.

[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/


Versioned notification transformation
-
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open
* https://review.openstack.org/#/c/403660 Transform instance.exists 
notification - lost the +2 due to a merge conflict
* https://review.openstack.org/#/c/410297/  Transform missing delete 
notifications - many +1s, needs core review



Introduce instance.lock and instance.unlock notifications
-
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Implementation proposed but needs some work:
https://review.openstack.org/#/c/526251/ - No progress. I've pinged the 
author but no response.



Add the user id and project id of the user who initiated the instance
action to the notification
-
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
Implementation patch exists but still needs work
https://review.openstack.org/#/c/536243/ - No progress. I've pinged the 
author but no response.



Sending full traceback in versioned notifications
-
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
The bp was reassigned to Kevin_Zheng and he proposed a WIP patch 
https://review.openstack.org/#/c/564092/



Add versioned notifications for removing a member from a server group
-
The specless bp 
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications
Based on the PoC patch https://review.openstack.org/#/c/559076/ we see 
basic problems with the overall bp. See Matt's mail on the ML 
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129804.html



Add notification support for trusted_certs
--
This is part of the bp nova-validate-certificates implementation series 
to extend some of the instance notifications: 
https://review.openstack.org/#/c/563269
I have to re-review the patch as it seems Brianna updated it based on 
my suggestions.



Introduce Pending VM state
--
The spec https://review.openstack.org/#/c/554212 proposed to introduce 
a new notification along with the new state. I have to give a detailed 
review of this proposal.



Weekly meeting
--
The next meeting will be held on 8th of May on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180508T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Next notification subteam meeting is cancelled

2018-04-27 Thread Balázs Gibizer

Hi,

I have to cancel the next notification subteam meeting as it happens to 
be on the 1st of May, which is an (inter)national holiday. So the next 
meeting is expected to be held on the 8th of May.


Cheers,
gibi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild

2018-04-24 Thread Balázs Gibizer



On Tue, Apr 24, 2018 at 9:08 AM, Alex Xu  wrote:



2018-04-24 5:51 GMT+08:00 Arvind N :

Thanks for the detailed options Matt/eric/jay.

Just few of my thoughts,

For #1, we can make the explanation very clear that we rejected the 
request because the original traits specified in the original image 
and the new traits specified in the new image do not match and hence 
rebuild is not supported.


For #2,

Other Cons:
None of the filters currently make other API requests and my 
understanding is we want to avoid reintroducing such a pattern. But it 
is definitely a workable solution.
If the user disables the image properties filter, then traits-based 
filtering will not be run in the rebuild case.

For #3,

Even though it handles the nested provider, there is a potential 
issue.


Let's say a host has two SRIOV NICs: one is a normal SRIOV NIC (VF1), 
the other has some kind of offload feature (VF2). (Described by 
Alex.)


Initial instance launch happens with VF1 allocated; rebuild 
launches with a modified request with traits=HW_NIC_OFFLOAD_X, so 
basically we want the instance to be allocated VF2.


But the original allocation happens against VF1 and since in rebuild 
the original allocations are not changed, we have wrong allocations.



Yes, that is the case I described, and none of #1, #2, #3, #4, nor the 
proposal in this thread, works either.


The problem isn't just checking the traits in the nested resource 
provider. We also need to ensure the trait is on exactly the same child 
resource provider. Or we need to adjust allocations for the child 
resource provider.


I agree that in_tree only ensures that the compute node tree has the 
required traits but it does not take into account that only some of 
those RPs from the tree provide resources for the current allocation. 
The algorithm Eric provided in a previous mail does the filtering for the 
RPs that are part of the instance allocation, so that sounds good to me.
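To make that concrete, a rough sketch (my own restatement, not Eric's
exact algorithm) of checking the required image traits only against the
RPs that actually serve the instance's allocation:

```python
def allocation_satisfies_traits(allocations, rp_traits, required_traits):
    # allocations: {rp_uuid: {'resources': {...}}} for the instance.
    # rp_traits: {rp_uuid: set of trait names}, as fetched from placement.
    offered = set()
    for rp_uuid in allocations:
        offered |= rp_traits.get(rp_uuid, set())
    # Note: as discussed above, this still cannot tell whether a trait
    # sits on the very RP a given resource came from (the VF1/VF2 case).
    return set(required_traits) <= offered
```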


I think we should not try to adjust allocations during a rebuild. 
Changing the allocation would mean it is not a rebuild any more but a 
resize.


Cheers,
gibi






for #4, there is a good amount of pushback against modifying the 
allocation_candidates API to not have resources.


Jay:
for the GET 
/resource_providers?in_tree=<rp_uuid>&required=<traits> approach, 
nested resource providers and allocations pose a problem, see #3 above.


I will investigate Eric's option and update the spec.
--
Arvind N

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Notification update week 17

2018-04-23 Thread Balázs Gibizer

Hi,

New week, new status mail.

Bugs


New bugs


[Undecided] https://bugs.launchpad.net/nova/+bug/1764927 Should send 
out notification when instance metadata get updated
Nova already sends the instance.update notification when instance.metadata 
is changed, so I marked the bug as invalid.


Still open bugs
~~~

[Low] https://bugs.launchpad.net/nova/+bug/1757407 Notification sending
sometimes hits the keystone API to get glance endpoints
As the versioned notifications does not use the glance endpoints info
we can avoid hitting the keystone API if notification_format is set to
'versioned'

[Medium] https://bugs.launchpad.net/nova/+bug/1763051 Need to audit
when notifications are sent during live migration
We need to go through the live migration codepath and make sure that
the different live migration notifications are sent at the proper time.


[Low] https://bugs.launchpad.net/nova/+bug/1764390 Replace passing
system_metadata to notification functions with instance.system_metadata
usage
Fix has been proposed in https://review.openstack.org/#/c/561724 and 
needs a final +2


[Low] https://bugs.launchpad.net/nova/+bug/1764392 Avoid bandwidth
usage db query in notifications when the virt driver does not support
collecting such data

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
No progress. We still need to understand how this problem happens to
find the proper solution.

[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/


Versioned notification transformation
-
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open
* https://review.openstack.org/#/c/403660 Transform instance.exists 
notification - needs a rebase and a final +2



Introduce instance.lock and instance.unlock notifications
-
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Implementation proposed but needs some work:
https://review.openstack.org/#/c/526251/ - No progress. I've pinged the 
author.



Add the user id and project id of the user who initiated the instance
action to the notification
-
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
Implementation patch exists but still needs work
https://review.openstack.org/#/c/536243/ - No progress. I've pinged the 
author.



Add request_id to the InstanceAction versioned notifications

https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications
The main implementation patch has been merged. The follow up patch 
https://review.openstack.org/#/c/562757 needs the final +2. Then the bp 
can be marked as implemented.



Sending full traceback in versioned notifications
-
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
I have to propose the implementation.


Add versioned notifications for removing a member from a server group
-
The specless bp
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications
is pending approval as we would like to see the POC code first. Takashi
has proposed the POC code https://review.openstack.org/#/c/559076/
so we have to look at it.


Factor out duplicated notification sample
-
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
It seems we are done with this. Every notification sample is either 
small on its own (e.g. flavor.create) or already based on common sample 
fragments. Thanks to everybody who contributed time to this effort. \o/



Weekly meeting
--
The next meeting will be held on 24th of April on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180424T17

Cheers,





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Balázs Gibizer



On Thu, Apr 19, 2018 at 2:27 PM, Eric Fried  wrote:

gibi-

 Can the proximity param specify a relationship between the un-numbered and
 the numbered groups as well, or only between numbered groups?
 Besides that I'm +1 about proximity={isolate|any}


Remembering that the resources in the un-numbered group can be spread
around the tree and sharing providers...

If applying "isolate" to the un-numbered group means that each 
resource
you specify therein must be satisfied by a different provider, then 
you

should have just put those resources into numbered groups.

If "isolate" means that *none* of the numbered groups will land on 
*any*
of the providers satisfying the un-numbered group... that could be 
hard

to reason about, and I don't know if it's useful.

So thus far I've been thinking about all of these semantics only in
terms of the numbered groups (although Jay's `can_split` was
specifically aimed at the un-numbered group).


Thanks for the explanation. Now it makes sense to me to limit the 
proximity param to the numbered groups.




That being the case (is that a bikeshed on the horizon?) perhaps
`granular_policy={isolate|any}` is a more appropriate name than 
`proximity`.


The policy term is more general than proximity, therefore the 
granular_policy=any query fragment isn't descriptive enough any more. 



gibi



-efried

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Balázs Gibizer



On Thu, Apr 19, 2018 at 12:45 AM, Eric Fried  wrote:
 I have a feeling we're just going to go back and forth on this, as 
we
 have for weeks now, and not reach any conclusion that is 
satisfactory to
 everyone. And we'll delay, yet again, getting functionality into 
this
 release that serves 90% of use cases because we are obsessing over 
the

 0.01% of use cases that may pop up later.


So I vote that, for the Rocky iteration of the granular spec, we add a
single `proximity={isolate|any}` qparam, required when any numbered
request groups are specified.  I believe this allows us to satisfy the
two NUMA use cases we care most about: "forced sharding" and "any 
fit".

And as you demonstrated, it leaves the way open for finer-grained and
more powerful semantics to be added in the future.


Can the proximity param specify a relationship between the un-numbered 
and the numbered groups as well, or only between numbered groups?

Besides that I'm +1 about proximity={isolate|any}
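For concreteness, the two requests could look like this (illustrative
only -- the qparam is still just a proposal):

```
# "isolate": the two VGPU groups must come from different providers
GET /allocation_candidates?resources=VCPU:2,MEMORY_MB:2048&resources1=VGPU:1&resources2=VGPU:1&proximity=isolate

# "any": the groups may be satisfied by the same provider
GET /allocation_candidates?resources=VCPU:2,MEMORY_MB:2048&resources1=VGPU:1&resources2=VGPU:1&proximity=any
```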

Cheers,
gibi



-efried

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Notification update week 16

2018-04-16 Thread Balázs Gibizer

Hi,

After the long silence here is the current notification status info.

Bugs


New bugs


[Low] https://bugs.launchpad.net/nova/+bug/1757407 Notification sending 
sometimes hits the keystone API to get glance endpoints
As the versioned notifications do not use the glance endpoints info 
we can avoid hitting the keystone API if notification_format is set to 
'versioned'



[Medium] https://bugs.launchpad.net/nova/+bug/1763051 Need to audit 
when notifications are sent during live migration
We need to go through the live migration codepath and make sure that 
the different live migration notifications are sent at the proper time.



[Low] https://bugs.launchpad.net/nova/+bug/1761405 impossible to 
disable notifications
The way to turn off emitting notifications from nova is to set the 
oslo_messaging_notifications.driver config option to 'noop'. We need to 
document this better in the notification devref and in the 
notification_format config option.
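For reference, the mechanism in a tiny sketch (simplified compared to
nova's actual notifier wiring; oslo.messaging also accepts the driver
name directly):

```python
from oslo_config import cfg
import oslo_messaging

transport = oslo_messaging.get_notification_transport(cfg.CONF)
# The 'noop' driver drops every notification; normally it is selected via
# the [oslo_messaging_notifications] driver config option, not in code.
notifier = oslo_messaging.Notifier(transport,
                                   publisher_id='nova-compute:host1',
                                   driver='noop',
                                   topics=['notifications'])
notifier.info({}, 'instance.update', {'payload': 'dropped silently'})
```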



There are two follow-up bugs opened based on Matt's review comments 
in https://review.openstack.org/#/c/403660:


[Low] https://bugs.launchpad.net/nova/+bug/1764390 Replace passing 
system_metadata to notification functions with instance.system_metadata 
usage


[Low] https://bugs.launchpad.net/nova/+bug/1764392 Avoid bandwidth 
usage db query in notifications when the virt driver does not support 
collecting such data



Old bugs


[High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when
sending notification during attach_interface
Fix merged to most of the stable branches. The backport for ocata is 
still open but has +2 from Tony.

https://review.openstack.org/#/c/531746/

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
No progress. We still need to understand how this problem happens to
find the proper solution.

[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/
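For context, what the bug asks for is roughly this (a sketch only; note
that _SANITIZE_KEYS is a private oslo attribute, which is part of why
the old patches never landed):

```python
from oslo_utils import strutils

def _cleanse_dict(original):
    # Reuse oslo's credential key list instead of nova's current
    # hardcoded "'_pass' not in k" substring check.
    return {k: v for k, v in original.items()
            if not any(bad.lower() in k.lower()
                       for bad in strutils._SANITIZE_KEYS)}
```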


Versioned notification transformation
-
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open

There are some patches that only need a second +2:
* https://review.openstack.org/#/c/460625 Transform 
aggregate.update_metadata notification
* https://review.openstack.org/#/c/403660 Transform instance.exists 
notification


Introduce instance.lock and instance.unlock notifications
-
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Implementation proposed but needs some work: 
https://review.openstack.org/#/c/526251/



Add the user id and project id of the user who initiated the instance
action to the notification
-
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
Implementation patch exists but still needs work 
https://review.openstack.org/#/c/536243/



Add request_id to the InstanceAction versioned notifications

https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications
Implementation needs a rebase and review 
https://review.openstack.org/#/c/553288/



Sending full traceback in versioned notifications
-
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
I have to propose the implementation.

Add versioned notifications for removing a member from a server group
-
The specless bp
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications
is pending approval as we would like to see the POC code first. Takashi 
has proposed the POC code https://review.openstack.org/#/c/559076/ 
so we have to look at it.



Factor out duplicated notification sample
-
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
Kevin proposed a lot of patches. \o/ Now I have to go and review them.

Weekly meeting
--
The next meeting will be held on 17th of April on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180417T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][xenapi] does get_all_bw_counters driver call nova-network only?

2018-04-16 Thread Balázs Gibizer

Hi,

The get_all_bw_counters() virt driver call [1] is only supported by xenapi 
today. However, Matt raised the question [2] of whether this is a 
nova-network-only feature, as in that case we can simply remove it.


Cheers,
gibi

[1] 
https://github.com/openstack/nova/blob/68afe71e26e60a3e4ad30083cc244c57540d4da9/nova/virt/xenapi/driver.py#L383
[2] 
https://review.openstack.org/#/c/403660/78/nova/compute/manager.py@6855
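Whatever the answer is, callers can cheaply skip the work for all other
drivers (a sketch; it assumes the base ComputeDriver raises
NotImplementedError for unimplemented calls, which is the usual nova
convention):

```python
def get_bw_usage(driver, instances):
    try:
        return driver.get_all_bw_counters(instances)
    except NotImplementedError:
        # every in-tree driver except xenapi today: skip the bandwidth
        # accounting (and the related DB query) entirely
        return []
```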





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposing Eric Fried for nova-core

2018-03-27 Thread Balázs Gibizer

+1

On Tue, Mar 27, 2018 at 4:00 AM, melanie witt  
wrote:

Howdy everyone,

I'd like to propose that we add Eric Fried to the nova-core team.

Eric has been instrumental to the placement effort with his work on 
nested resource providers and has been actively contributing to many 
other areas of openstack [0] like project-config, gerritbot, 
keystoneauth, devstack, os-loganalyze, and so on.


He's an active reviewer in nova [1] and elsewhere in openstack and 
reviews in-depth, asking questions and catching issues in patches and 
working with authors to help get code into merge-ready state. These 
are qualities I look for in a potential core reviewer.


In addition to all that, Eric is an active participant in the project 
in general, helping people with questions in the #openstack-nova IRC 
channel, contributing to design discussions, helping to write up 
outcomes of discussions, reporting bugs, fixing bugs, and writing 
tests. His contributions help to maintain and increase the health of 
our project.


To the existing core team members, please respond with your comments, 
+1s, or objections within one week.


Cheers,
-melanie

[0] https://review.openstack.org/#/q/owner:efried
[1] http://stackalytics.com/report/contribution/nova/90

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Notification update week 12

2018-03-20 Thread Balázs Gibizer

Hi,

Here is the status update / focus settings mail for w12.


Bugs


One new bug from last week:

[Undecided] https://bugs.launchpad.net/nova/+bug/1756360 Serializer 
strips Exception kwargs
The bug refers to an oslo.serialization change as the reason for the 
changed behavior, but I failed to reproduce the expected behavior with 
an older oslo.serialization version. Also there is a fix 
proposed that I have to look at: https://review.openstack.org/#/c/554607/



Versioned notification transformation
-
There are 3 patches that have positive feedback (but no +2 as I'm the 
author of those) and need core attention

https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open


Introduce instance.lock and instance.unlock notifications
-
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
The implementation still needs work 
https://review.openstack.org/#/c/526251/



Add the user id and project id of the user who initiated the instance
action to the notification
-
The bp has been approved 
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications 
but implementation hasn't been proposed yet.


Add request_id to the InstanceAction versioned notifications

https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications
Kevin has a WIP patch up https://review.openstack.org/#/c/553288 . I 
promised to go through it soon.



Sending full traceback in versioned notifications
-
The specless bp has been approved 
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications



Add versioned notifications for removing a member from a server group
-
The specless bp 
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications 
was discussed and, due to possible complications with looking up the 
server group when a server is deleted, we would like to see some WIP 
implementation patch proposed before the bp is approved.



Factor out duplicated notification sample
-
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
No progress.

Weekly meeting
--
The next meeting will be held on 27th of March on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180327T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Rocky PTG summary - nova/neutron

2018-03-20 Thread Balázs Gibizer



On Fri, Mar 16, 2018 at 12:04 AM, Matt Riedemann  
wrote:

On 3/15/2018 3:30 PM, melanie witt wrote:
 * We don't need to block bandwidth-based scheduling support for 
doing port creation in conductor (it's not trivial), however, if 
nova creates a port on a network with a QoS policy, nova is going 
to have to munge the allocations and update placement (from 
nova-compute) ... so maybe we should block this on moving port 
creation to conductor after all


This is not the current direction in the spec. The spec is *large* 
and detailed, and this is one of the things being discussed in there. 
For the latest on all of it, gonna need to get caught up on the spec. 
But it won't be updated for awhile because Brother Gib is on vacation.



In the current state of the spec I try to keep this case out of scope 
[1]. Having a QoS policy requires a special port or network, and nova 
server create with only a network_id is expected to work with a simple 
network and port setup. If the user wants some special port (like SRIOV) 
she has to pre-create that port in neutron anyhow.


Cheers,
gibi

[1] 
https://review.openstack.org/#/c/502306/18/specs/rocky/approved/bandwidth-resource-provider.rst@126



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][notification] Full traceback in ExceptionPayload

2018-03-13 Thread Balázs Gibizer


On Fri, Mar 9, 2018 at 5:08 PM, Balázs Gibizer 
<balazs.gibi...@ericsson.com> wrote:



On Fri, Mar 9, 2018 at 3:46 PM, Matt Riedemann <mriede...@gmail.com> 
wrote:

On 3/9/2018 6:26 AM, Balázs Gibizer wrote:
The instance-action REST API already provides the traceback to 
the user (to the admin by default) and the notifications are 
also admin-only things as they are emitted to the message bus by 
default. So I assume that security is not a bigger concern for 
the notification than for the REST API. So I think the only 
issue we have to accept is that the traceback object in the 
ExceptionPayload will not be a well-defined field but a simple 
string containing a serialized traceback.


If there is no objection then Kevin or I can file a specless bp to 
extend the ExceptionPayload.


I think that's probably fine. As you said, if we already provide 
tracebacks in instance action event details (and faults), then the 
serialized traceback in the error notification payload also seems 
fine, and is what the legacy notifications did so it's not like 
there wasn't precedent.


I don't think we need a blueprint for this, it's just a bug.


I thought about a bp because it was explicitly defined in the 
original spec not to have the traceback, so for me it does not feel 
like a bug.


I filed the bp 
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications




Cheers,
gibi



--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Notification update week 11

2018-03-12 Thread Balázs Gibizer

Hi,

Here is the status update / focus settings mail for w11.


Bugs

No new bug and no changes in the existing bugs from last week report 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/127992.html



Versioned notification transformation
-
We already have some patches proposed to the rocky bp. Let's go and 
review them.

https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open


Introduce instance.lock and instance.unlock notifications
-
The bp
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
is approved. Implementation patch exists but still needs work 
https://review.openstack.org/#/c/526251/



Add the user id and project id of the user who initiated the instance
action to the notification
-
The bp
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
is approved. Implementation patch exists but still needs work 
https://review.openstack.org/#/c/536243/



Add request_id to the InstanceAction versioned notifications

The bp 
https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications 
is approved and assigned to Kevin_Zheng. Patch has been proposed 
https://review.openstack.org/#/c/551982/ and needs review.



Sending full traceback in versioned notifications
-
Based on a short investigation it seems that it was a conscious 
decision not to include the full traceback. See details in the ML post 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128105.html
I will file a specless bp to add the full traceback if nobody objects 
in the ML thread.



Add versioned notifications for removing a member from a server group
-
The specless bp 
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications 
is proposed and it looks good to me.



Factor out duplicated notification sample
-
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
No open patches, but I would like to progress with this through the 
Rocky cycle.



Weekly meeting
--
The next meeting will be held on 13th of March on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180313T17

Cheers,
gibi




Re: [openstack-dev] [nova][notification] Full traceback in ExceptionPayload

2018-03-09 Thread Balázs Gibizer



On Fri, Mar 9, 2018 at 3:46 PM, Matt Riedemann <mriede...@gmail.com> 
wrote:

On 3/9/2018 6:26 AM, Balázs Gibizer wrote:
The instance-action REST API already provides the traceback to 
the user (to the admin by default) and the notifications are also 
admin-only things as they are emitted to the message bus by 
default. So I assume that security is not a bigger concern for the 
notification than for the REST API. So I think the only issue we 
have to accept is that the traceback object in the ExceptionPayload 
will not be a well-defined field but a simple string containing a 
serialized traceback.


If there is no objection then Kevin or I can file a specless bp to 
extend the ExceptionPayload.


I think that's probably fine. As you said, if we already provide 
tracebacks in instance action event details (and faults), then the 
serialized traceback in the error notification payload also seems 
fine, and is what the legacy notifications did so it's not like there 
wasn't precedent.


I don't think we need a blueprint for this, it's just a bug.


I thought about a bp because it was explicitly defined in the original 
spec not to have the traceback, so for me it does not feel like a bug.


Cheers,
gibi



--

Thanks,

Matt






[openstack-dev] [nova][notification] Full traceback in ExceptionPayload

2018-03-09 Thread Balázs Gibizer

Hi,

At the PTG a question was raised: why don't we have the full 
traceback in the versioned error notifications, as the legacy 
notifications have the full traceback?


I dug into the past and found out that this difference was intentional. 
During the original versioned notification spec review [2] there was a 
couple of rounds of back and forth about what to add to the 
ExceptionPayload and what not. I think the main reasons not to add the 
full traceback were that it cannot be well defined what goes in that 
field (it would have been a single serialized string) and the possible 
security implications. Then in the review we ended up agreeing on the 
ExceptionPayload structure [3] that was later implemented and merged.


The instance-action REST API already provides the traceback to the 
user (to the admin by default) and the notifications are also admin-only 
things as they are emitted to the message bus by default. So I 
assume that security is not a bigger concern for the notification than 
for the REST API. So I think the only issue we have to accept is that 
the traceback object in the ExceptionPayload will not be a well-defined 
field but a simple string containing a serialized traceback.
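
To make that trade-off concrete, here is a minimal sketch (my own 
illustration, not nova's actual payload code) of capturing a serialized 
traceback as a plain string:

import traceback


def build_error_payload(exc):
    # Minimal sketch: the traceback is captured with the stdlib and
    # carried as an opaque string field, not as a structured object.
    return {
        'exception': exc.__class__.__name__,
        'exception_message': str(exc),
        'traceback': traceback.format_exc(),
    }


try:
    raise ValueError('boom')
except ValueError as exc:
    payload = build_error_payload(exc)
    print(payload['traceback'])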


If there is no objection then Kevin or I can file a specless bp to 
extend the ExceptionPayload.


Cheers,
gibi

[1] L387 in https://etherpad.openstack.org/p/nova-ptg-rocky
[2] https://review.openstack.org/#/c/286675/
[3] 
https://review.openstack.org/#/c/286675/12/specs/newton/approved/versioned-notification-transformation.rst@405





Re: [openstack-dev] [nova][placement] PTG Summary and Rocky Priorities

2018-03-09 Thread Balázs Gibizer






- Multiple agreements about strict minimum bandwidth support feature 
in nova -  Spec has already been updated accordingly: 
https://review.openstack.org/#/c/502306/


   - For now we keep the hostname as the information connecting the 
nova-compute and the neutron-agent on the same host but we are 
aiming to have the hostname as an FQDN to avoid possible 
ambiguity.


   - We agreed not to make this feature dependent on moving the nova 
port create to the conductor. The current scope is to support 
pre-created neutron ports only.


I could rat-hole in the spec, but figured it would be good to also 
mention it here. When we were talking about this in Dublin, someone 
also mentioned that depending on the network on which nova-compute 
creates a port, the port could have a QoS policy applied to it for 
bandwidth, and then nova-compute would need to allocate resources in 
Placement for that port (with the instance as the consumer). So then 
we'd be doing allocations both in the scheduler for pre-created ports 
and in the compute for ports that nova creates. So the scope 
statement here isn't entirely true, and leaves us with some technical 
debt until we move port creation to conductor. Or am I missing 
something?




I was sloppy and did not include all the details here. The spec goes 
into a lot more detail about what needs to be supported in the first 
iteration and how [1]. I still think that moving the port creation to the 
conductor is not a hard dependency of the first iteration of this 
feature. I also feel that we agreed on this at the PTG.
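
For illustration, an allocation for a server with such a bandwidth-aware 
port could span the compute root RP and a nested agent RP roughly like 
this (my own sketch; the resource class name comes from the spec and the 
UUID values are placeholders):

COMPUTE_RP_UUID = 'uuid-of-the-root-compute-node-rp'   # placeholder
AGENT_RP_UUID = 'uuid-of-the-nested-network-agent-rp'  # placeholder

allocations = {
    'allocations': {
        COMPUTE_RP_UUID: {
            # the usual compute resources on the root provider
            'resources': {'VCPU': 2, 'MEMORY_MB': 4096},
        },
        AGENT_RP_UUID: {
            # bandwidth resource on the nested provider of the same host
            'resources': {'NET_BW_EGR_KILOBIT_PER_SEC': 1000},
        },
    },
}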


Cheers,
gibi

[1] 
https://review.openstack.org/#/c/502306/15/specs/rocky/approved/bandwidth-resource-provider.rst@111










[openstack-dev] [nova] Notification update week 10 (PTG)

2018-03-07 Thread Balázs Gibizer

Hi,

Here is the status update / focus settings mail for w10. We discussed 
a couple of new notification-related changes during the PTG. I tried to 
mention all of them below but if I missed something then please extend 
my list.


Bugs


[High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when
sending notification during attach_interface
Fix merged. The backport for ocata is still open: 
https://review.openstack.org/#/c/531746/


[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
No progress. We still need to understand how this problem happens to 
find the proper solution.


[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/

[Wishlist] https://bugs.launchpad.net/nova/+bug/1639152 Send out 
notification about server group changes when delete instances
It was discussed at the Rocky PTG and we agreed to do this. A new specless 
bp has been created to track the effort 
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications 
The bp is assigned to Takashi



Versioned notification transformation
-
We already have some patches proposed to the rocky bp. I will go and 
review them this week.

https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open


Introduce instance.lock and instance.unlock notifications
-
The bp
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
is approved. Waiting for the implementation to be proposed.


Add the user id and project id of the user initiated the instance
action to the notification
-
The bp
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
is approved. Implementation patch exists but still needs work 
https://review.openstack.org/#/c/526251/



Add request_id to the InstanceAction versioned notifications

The bp 
https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications 
is approved and assigned to Kevin_Zheng.



Sending full traceback in versioned notifications
-
At the PTG we discussed the need to send full tracebacks in error 
notifications. I will go and dig out why we decided not to send the 
full traceback when we created the versioned notifications.



Add versioned notifications for removing a member from a server group
-
The specless bp 
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications 
is proposed and it looks good to me.



Factor out duplicated notification sample
-
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
No open patches, but I would like to progress with this through the 
Rocky cycle.



Weekly meeting
--
The next meeting will be held on 13th of March on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180313T17

Cheers,
gibi






[openstack-dev] [nova] signing up as a bug tag owner

2018-02-16 Thread Balázs Gibizer

Hi,

On the weekly meeting melwitt suggested [1] to have people signed up 
for certain bug tags. I've already been trying to follow the bugs 
tagged with the 'notifications' tag so I sign up for this tag.


Cheers,
gibi

[1]http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-02-15-21.01.log.html#l-86




Re: [openstack-dev] [infra] [all] project pipeline definition should stay in project-config or project side ?

2018-02-14 Thread Balázs Gibizer



On Wed, Feb 14, 2018 at 6:45 AM, Andreas Jaeger  wrote:

On 2018-02-14 01:28, Ghanshyam Mann wrote:
 On Wed, Feb 14, 2018 at 12:06 AM, Paul Belanger 
 wrote:

 On Tue, Feb 13, 2018 at 11:05:34PM +0900, gmann wrote:

 Hi Infra Team,

 I have 1 quick question on zuulv3 jobs and their migration part. From
 the zuulv3 doc [1], it is clear how to migrate the job definitions and
 use those among cross repo pipelines etc.

 But I did not find a clear recommendation on whether a project's
 pipeline definition should stay in project-config or should move
 to the project side.

 IMO,
 the 'template' part (which has system level jobs) can stay in
 project-config. For example the below part-

 https://github.com/openstack-infra/project-config/blob/e2b82623a4ab60261b37a91e38301927b9b6/zuul.d/projects.yaml#L10507-L10523

 Other pipeline definitions - 'check', 'gate', 'experimental' etc. -
 should be moved to the project repo, mainly this list-
 https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L10524-L11019


 If we move those parts as mentioned above, then we can have a
 consolidated place to control the project pipeline for
 'irrelevant-files', specific branches etc.

 [1] https://docs.openstack.org/infra/manual/zuulv3.html

 As it works today, the pipeline stanza needs to be in a config
 project [1] (project-config) repo. So what you are suggesting will not
 work. This was done to allow zuul admins to control which pipelines
 are set up / configured.


 I am mostly curious why a project would need to modify a pipeline
 configuration or duplicate it into all projects, over having it
 centrally located in project-config.


 The pipeline stanza and configuration stay in project-config. I mean
 the list of jobs defined in each pipeline for a specific project, for
 example here [2]. Now we have the list of jobs for each pipeline in 2
 places, one in project-config [2] and a second in the project repo [3].

 Issues in having it in 2 places:
 - No single place to check what jobs a project will run with what
 conditions
 - If we need to modify the list of jobs in a pipeline or change other
 bits like irrelevant-files etc. then it has to be done in
 project-config. So no full control by the project side.




For me it is even more than two places, as project templates like 
'integrated-gate' [4] define jobs to be executed on a project that 
includes the template in the project-config. This leads to problems 
like [5], and shows that tracking down why some job runs on a change 
is fairly non-trivial from a developer perspective. Therefore I support 
defining which jobs run on a given project as close to the project as 
possible and in as few different places as possible. I even 
volunteer to help with the move from the nova perspective.




This should be explained in:
https://docs.openstack.org/infra/manual/zuulv3.html#what-to-convert

So, the standard templates/jobs - incl. PTI mandated ones - should stay
in project-config, you can move everything else in-tree,


As far as I understand this list allows us to solve [5] by simply 
moving every job from 'integrated-gate' to the respective project in 
tree as the jobs in that template are not part of the PTI.



[4] 
https://github.com/openstack-infra/openstack-zuul-jobs/blob/df8a8e8ee41c1ceb4da458a8681e39de39eafded/zuul.d/zuul-legacy-project-templates.yaml#L93

[5] https://review.openstack.org/#/c/538908

Cheers,
gibi




Re: [openstack-dev] [nova] Notification update week 7

2018-02-13 Thread Balázs Gibizer



On Mon, Feb 12, 2018 at 8:47 PM, Matt Riedemann <mriede...@gmail.com> 
wrote:

On 2/12/2018 11:11 AM, Balázs Gibizer wrote:

Add the user id and project id of the user initiated the instance
action to the notification
-
The bp 
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications 
was briefly discussed at the last nova weekly meeting; there was no 
objection but it is still pending approval.


This is approved now. We agreed to approve this in the the Feb 8 
meeting, I just forgot to do it.


Cool, thanks!
Cheers,
gibi



--

Thanks,

Matt






[openstack-dev] [nova] Notification update week 7

2018-02-12 Thread Balázs Gibizer

Hi,

Here is the status update / focus settings mail for w7.

Bugs

No new bugs. No change from last week's bug status.


Versioned notification transformation
-
The rocky bp has been created 
https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-rocky
Every open patch needs to be reproposed to this bp as soon as master 
opens for Rocky.


Introduce instance.lock and instance.unlock notifications
-
The bp 
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances 
was briefly discussed at the last nova weekly meeting and approved.


Add the user id and project id of the user initiated the instance
action to the notification
-
The bp 
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications 
was briefly discussed at the last nova weekly meeting; there was no 
objection but it is still pending approval.


Factor out duplicated notification sample
-
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
No open patches. We can expect some as soon as master opens for Rocky.

Weekly meeting
--
The next three meetings are cancelled. The next meeting will be held 
after the PTG.


Cheers,
gibi






Re: [openstack-dev] [nova] Adding Takashi Natsume to python-novaclient core

2018-02-09 Thread Balázs Gibizer



On Fri, Feb 9, 2018 at 4:01 PM, Matt Riedemann  
wrote:

I'd like to add Takashi to the python-novaclient core team.

python-novaclient doesn't get a ton of activity or review, but 
Takashi has been a solid reviewer and contributor to that project for 
quite awhile now:


http://stackalytics.com/report/contribution/python-novaclient/180

He's always fast to get new changes up for microversion support and 
help review others that are there to keep moving changes forward.


So unless there are objections, I'll plan on adding Takashi to the 
python-novaclient-core group next week.


+1

Cheers,
gibi



--

Thanks,

Matt






Re: [openstack-dev] [nova] Notification update week 6

2018-02-07 Thread Balázs Gibizer


On Tue, Feb 6, 2018 at 7:04 PM, Matt Riedemann <mriede...@gmail.com> 
wrote:

On 2/5/2018 9:32 AM, Balázs Gibizer wrote:

Introduce instance.lock and instance.unlock notifications
-
A specless bp has been proposed to the Rocky cycle
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances 


Some preliminary discussion happened in an earlier patch
https://review.openstack.org/#/c/526251/

Add the user id and project id of the user initiated the instance
action to the notification
-
A new bp has been proposed
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications 


As the user who initiates the instance action (e.g. reboot) could be
different from the user owning the instance, it would make sense to
include the user_id and project_id of the action initiator in the
versioned instance action notifications as well.


Both should be mentioned during the 'open discussion' part of the 
weekly nova meeting but at first glance I think these are both OK.


I've added them to the agenda for tomorrow.

cheers,
gibi


--

Thanks,

Matt






[openstack-dev] [nova] Notification update week 6

2018-02-05 Thread Balázs Gibizer

Hi,

Here is the status update / focus settings mail for w6.

Bugs


No new bugs and the below bug status is the same as last week.

[High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when
sending notification during attach_interface
Fix merged to master. Backports have been proposed:
* Pike: https://review.openstack.org/#/c/531745/
* Queens: https://review.openstack.org/#/c/531746/

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
We need to understand first how this can happen. Based on the comments
from the bug it seems it happens after upgrading an old deployment. So
it might be some problem with the online data migration that moves the
flavor into the instance.

[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/


Versioned notification transformation
-
The rocky bp has been created 
https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-rocky
Every open patch needs to be reproposed to this bp as soon as master 
opens for Rocky.


Introduce instance.lock and instance.unlock notifications
-
A specless bp has been proposed to the Rocky cycle
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Some preliminary discussion happened in an earlier patch
https://review.openstack.org/#/c/526251/

Add the user id and project id of the user initiated the instance
action to the notification
-
A new bp has been proposed
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
As the user who initiates the instance action (e.g. reboot) could be
different from the user owning the instance, it would make sense to
include the user_id and project_id of the action initiator in the
versioned instance action notifications as well.

Factor out duplicated notification sample
-
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
No open patches. We can expect some as soon as master opens for Rocky.

Weekly meeting
--
The next meeting will be held on 6th of February on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180206T17

Cheers,
gibi







[openstack-dev] [nova] Notification update week 5

2018-01-29 Thread Balázs Gibizer

Hi,

Here is the status update / focus settings mail for w5.

Bugs

[High] https://bugs.launchpad.net/nova/+bug/1742962 nova functional 
test does not triggered on notification sample only changes
Fix merged to master, backports are in the gate. When the backports land 
we can stop triggering the old jobs for nova by 
merging https://review.openstack.org/#/c/533608/


As a followup I did some investigation to see if other jobs are 
affected by the same problem, see ML 
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126616.html


[High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when
sending notification during attach_interface
Fix merged to master. Backports have been proposed:
* Pike: https://review.openstack.org/#/c/531745/
* Queens: https://review.openstack.org/#/c/531746/

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
We need to understand first how this can happen. Based on the comments 
from the bug it seems it happens after upgrading an old deployment. So 
it might be some problem with the online data migration that moves the 
flavor into the instance.

[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/

Versioned notification transformation
-
Feature Freeze hit but the team made a good last minute push. 
Altogether we merged 17 transformation patches in Queens. \o/ Thanks 
to everybody who contributed with code, review, or encouragement. We 
have 22 transformations left to reach feature parity which means we 
have a chance to finish this work in Rocky. I also put this up as a 
possible internship idea on the wiki: 
https://wiki.openstack.org/wiki/GSoC2018#Internship_ideas


Reno for the Queens work is up to date: 
https://review.openstack.org/#/c/518018


Introduce instance.lock and instance.unlock notifications
-
A specless bp has been proposed to the Rocky cycle
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Some preliminary discussion happened in an earlier patch
https://review.openstack.org/#/c/526251/

Add the user id and project id of the user initiated the instance
action to the notification
-
A new bp has been proposed
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
As the user who initiates the instance action (e.g. reboot) could be
different from the user owning the instance, it would make sense to
include the user_id and project_id of the action initiator in the
versioned instance action notifications as well.

Factor out duplicated notification sample
-
Now that the fix for https://bugs.launchpad.net/nova/+bug/1742962 is merged 
it is safe to look at the patches on 
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open 
again.


Weekly meeting
--
The next meeting will be held on 30th of January on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180130T17

Cheers,
gibi




Re: [openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute

2018-01-29 Thread Balázs Gibizer


On Fri, Jan 26, 2018 at 6:57 PM, James E. Blair <cor...@inaugust.com> 
wrote:

Balázs Gibizer <balazs.gibi...@ericsson.com> writes:


 Hi,

 I'm getting more and more confused about how the zuul job hierarchy
 works or is supposed to work.


Hi!

First, you (or others) may or may not have seen this already -- some 
of
it didn't exist when we first rolled out v3, and some of it has 
changed
-- but here are the relevant bits of the documentation that should 
help

explain what's going on.  It helps to understand freezing:

  https://docs.openstack.org/infra/zuul/user/config.html#job

and matching:

  https://docs.openstack.org/infra/zuul/user/config.html#matchers


Thanks for the doc references they are really helpful.




 First there was a bug in nova that some functional tests are not
 triggered although the job (re-)definition in the nova part of the
 project-config should not prevent them from running [1].

 There we figured out that the irrelevant-files parameter of the jobs is
 not something that can be overridden during re-definition or through
 the parent-child relationship. The base job openstack-tox-functional has
 an irrelevant-files attribute that lists '^doc/.*$' as a path to be
 ignored [2]. On the other hand the nova part of the project-config
 tries to make this ignore less broad by adding only '^doc/source/.*$'.
 This does not work as we expected and the job did not run on changes
 that only affected the ./doc/notification_samples path. We are fixing it
 by defining our own functional job in nova tree [4].

 [1] https://bugs.launchpad.net/nova/+bug/1742962
 [2]
 
https://github.com/openstack-infra/openstack-zuul-jobs/blob/1823e3ea20e6dfaf37786a6ff79c56cb786bf12c/zuul.d/jobs.yaml#L380

 [3]
 
https://github.com/openstack-infra/project-config/blob/1145ab1293f5fa4d34c026856403c22b091e673c/zuul.d/projects.yaml#L10509

 [4] https://review.openstack.org/#/c/533210/


This is correct.  The issue here is that the irrelevant-files definition
on openstack-tox-functional is too broad.  We need to be *extremely*
careful applying matchers to jobs like that.  Generally I think that
irrelevant-files should be reserved for the project-pipeline invocations
only.  That's how they were effectively used in Zuul v2, after all.

Essentially, when someone puts an irrelevant-files section on a job like
that, they are saying "this job will never apply to these files, ever."
That's clearly not correct in this case.

So our solutions are to acknowledge that it's over-broad, and reduce or
eliminate the list in [2] and expand it elsewhere (as in [3]).  Or we
can say "we were generally correct, but nova is extra special so it
needs its own job".  If that's the choice, then I think [4] is a fine
solution.


The [4] just got merged this morning so I think that is OK for us now.




 Then I started looking into other jobs to see if we made similar
 mistakes. I found two other examples in the nova related jobs where
 redefining the irrelevant-files of a job caused problems. In these
 examples nova tried to ignore more paths during the override than what
 was originally ignored in the job definition but that did not work
 [5][6].

 [5] https://bugs.launchpad.net/nova/+bug/1745405 (tempest-full)


As noted in that bug, the tempest-full job is invoked on nova via this
stanza:

https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10674-L10688

As expected, that did not match.  There is a second invocation of
tempest-full on nova here:

http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/zuul-legacy-project-templates.yaml#n126

That has no irrelevant-files matches, and so matches everything.  If you
drop the use of that template, it will work as expected.  Or, if you can
say with some certainty that nova's irrelevant-files set is not
over-broad, you could move the irrelevant-files from nova's invocation
into the template, or even the job, and drop nova's individual
invocation.


Thanks for the explanation, it is much clearer now. With this info I 
think I was able to propose a patch that fixes the two bugs: 
https://review.openstack.org/#/c/538908/





 [6] https://bugs.launchpad.net/nova/+bug/1745431 (neutron-grenade)


The same template invokes this job as well.


 So far the problem seemed to be consistent (i.e. override does not
 work). But then I looked into neutron-grenade-multinode. That job is
 defined in neutron tree (like neutron-grenade) but nova also refers to
 it in the nova section of the project-config with different
 irrelevant-files than their original definition. So I assumed that
 this will lead to a similar problem as in the case of neutron-grenade,
 but it doesn't.

 The neutron-grenade-multinode original definition [7] does not try to
 ignore the 'nova/tests' path but the nova side of the definition in
 the project config does try to ignore that path [8]. Interestingly a
 patch in nova 

[openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute

2018-01-26 Thread Balázs Gibizer

Hi,

I'm getting more and more confused about how the zuul job hierarchy works 
or is supposed to work.


First there was a bug in nova that some functional tests are not 
triggered although the job (re-)definition in the nova part of the 
project-config should not prevent them from running [1].


There we figured out that the irrelevant-files parameter of a job is 
not something that can be overridden during re-definition or through 
the parent-child relationship. The base job openstack-tox-functional has an 
irrelevant-files attribute that lists '^doc/.*$' as a path to be 
ignored [2]. On the other hand the nova part of the project-config 
tries to make this ignore less broad by adding only '^doc/source/.*$'. 
This does not work as we expected and the job did not run on changes 
that only affected the ./doc/notification_samples path. We are fixing it by 
defining our own functional job in nova tree [4].


[1] https://bugs.launchpad.net/nova/+bug/1742962
[2] 
https://github.com/openstack-infra/openstack-zuul-jobs/blob/1823e3ea20e6dfaf37786a6ff79c56cb786bf12c/zuul.d/jobs.yaml#L380
[3] 
https://github.com/openstack-infra/project-config/blob/1145ab1293f5fa4d34c026856403c22b091e673c/zuul.d/projects.yaml#L10509

[4] https://review.openstack.org/#/c/533210/
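
To make the matching rule concrete: zuul skips a job only when every 
file touched by the change matches some irrelevant-files pattern. A tiny 
sketch of that rule (my own illustration of the semantics, not zuul's 
code):

import re


def job_runs(changed_files, irrelevant_patterns):
    # The job runs if at least one changed file is NOT matched by any
    # irrelevant-files pattern; if all changed files are irrelevant,
    # the job is skipped.
    return any(
        not any(re.match(p, f) for p in irrelevant_patterns)
        for f in changed_files
    )


# The base job's broad '^doc/.*$' pattern skips the functional job even
# for notification sample only changes:
print(job_runs(['doc/notification_samples/instance-lock.json'],
               [r'^doc/.*$']))          # False: job skipped
print(job_runs(['doc/notification_samples/instance-lock.json'],
               [r'^doc/source/.*$']))   # True: job would run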

Then I started looking into other jobs to see if we made similar 
mistakes. I found two other examples in the nova related jobs where 
redefining the irrelevant-files of a job caused problems. In these 
examples nova tried to ignore more paths during the override than what 
was originally ignored in the job definition but that did not work 
[5][6].


[5] https://bugs.launchpad.net/nova/+bug/1745405 (tempest-full)
[6] https://bugs.launchpad.net/nova/+bug/1745431 (neutron-grenade)

So far the problem seemed to be consistent (i.e. override does not 
work). But then I looked into neutron-grenade-multinode. That job is 
defined in neutron tree (like neutron-grenade) but nova also refers to 
it in nova section of the project-config with different 
irrelevant-files than their original definition. So I assumed that this 
will lead to a similar problem as in the case of neutron-grenade, but it 
doesn't.


The neutron-grenade-multinode original definition [7] does not try to 
ignore the 'nova/tests' path but the nova side of the definition in the 
project config does try to ignore that path [8]. Interestingly a patch 
in nova that only changes under the path: nova/tests/ does not trigger 
the job [9]. So in this case overriding the irrelevant-files of a job 
works. (It seems that overriding neutron-tempest-linuxbridge 
irrelevant-files works too).


[7] 
https://github.com/openstack/neutron/blob/7e3d6a18fb928bcd303a44c1736d0d6ca9c7f0ab/.zuul.yaml#L140-L159
[8] 
https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10516-L10530

[9] https://review.openstack.org/#/c/537936/

I don't see the difference between the neutron-grenade and 
neutron-grenade-multinode job definitions from this perspective, but it 
seems that the irrelevant-files attribute behaves inconsistently in 
these two jobs. Could you please help me understand how irrelevant-files 
in overridden jobs is supposed to work?


cheers,
gibi






Re: [openstack-dev] [nova] PTL Election Season

2018-01-23 Thread Balázs Gibizer



On Tue, Jan 23, 2018 at 12:09 AM, Matt Riedemann  
wrote:

On 1/15/2018 11:04 AM, Kendall Nelson wrote:

Election details: https://governance.openstack.org/election/

Please read the stipulations and timelines for candidates and 
electorate contained in this governance documentation.


Be aware, in the PTL elections if the program only has one 
candidate, that candidate is acclaimed and there will be no poll. 
There will only be a poll if there is more than one candidate 
stepping forward for a program's PTL position.


There will be further announcements posted to the mailing list as 
action is required from the electorate or candidates. This email is 
for information purposes only.


If you have any questions which you feel affect others please reply 
to this email thread.




To anyone that cares, I don't plan on running for Nova PTL again for 
the Rocky release. Queens was my fourth tour and it's definitely time 
for someone else to get the opportunity to lead here. I don't plan on 
going anywhere and I'll be here to help with any transition needed 
assuming someone else (or a couple of people hopefully) will run in 
the election. It's been a great experience and I thank everyone that 
has had to put up with me and my obsessive paperwork and process 
disorder in the meantime.


--

Thanks,

Matt


Thank you Matt! You did an excellent job and helped the whole community 
to grow.


Cheers,
gibi





[openstack-dev] [nova] Notification update week 4

2018-01-22 Thread Balázs Gibizer

Hi,

Here is the status update / focus settings mail for w4.

Bugs

[High] https://bugs.launchpad.net/nova/+bug/1742962 nova functional
test does not triggered on notification sample only changes
During the zuul v3 migration the project-config was generated based on the
zuul v2 jobs. It contained a proper definition of when nova wants to
trigger the functional job. Unfortunately this job definition does not
override the openstack-tox-functional job definition from the
openstack-zuul-jobs repo. This caused the openstack-tox-functional
(and functional-py35) jobs not to be triggered for certain commits. The
fix is to create a nova specific tox-functional job in tree. Patches
have been proposed:
* https://review.openstack.org/#/c/533210/ Make sure that functional
test triggered on sample changes
* https://review.openstack.org/#/c/533608/ Moving nova functional test
def to in tree
In general we have to review all nova jobs in the project-config and
move those in-tree that try to override parameters of the job
definitions in openstack-zuul-jobs repo.

[High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when
sending notification during attach_interface
Fix merged to master. Backports have been proposed:
* Pike: https://review.openstack.org/#/c/531745/
* Queens: https://review.openstack.org/#/c/531746/

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
Patch has been proposed: https://review.openstack.org/#/c/529194/
Dan left feedback on it and I accept his comment that this is mostly
papering over a problem when we don't fully understand how it can happen
in the first place. On the other hand I don't know how we can figure
out what happened. So if somebody has an idea then don't hesitate to
tell me. This bug is still stuck.

[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/

Versioned notification transformation
-
Thanks to Takashi we have multiple patches needing only a second +2:
* https://review.openstack.org/#/c/482148 Transform instance-evacuate 
notification
* https://review.openstack.org/#/c/465081 Transform 
instance.resize_prep notification
* https://review.openstack.org/#/c/482557 Transform 
instance.resize_confirm notification


Also there are patches ready for cores to review:
* https://review.openstack.org/#/c/403660 Transform instance.exists
notification
* https://review.openstack.org/#/c/410297 Transform missing delete
notifications
* https://review.openstack.org/#/c/476459 Send soft_delete from context
manager

Introduce instance.lock and instance.unlock notifications
---
A specless bp has been proposed to the Rocky cycle
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Some preliminary discussion happened in an earlier patch
https://review.openstack.org/#/c/526251/

Add the user id and project id of the user initiated the instance
action to the notification

A new bp has been proposed
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
As the user who initiates the instance action (e.g. reboot) could be
different from the user owning the instance, it would make sense to
include the user_id and project_id of the action initiator in the
versioned instance action notifications as well.

Factor out duplicated notification sample
-
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
We have to be careful approving these types of commits until the
solution for https://bugs.launchpad.net/nova/+bug/1742962 is merged, as
functional tests could be broken silently.

Weekly meeting
--
The next meeting will be held on 23rd of January on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180123T17

Cheers,
gibi




[openstack-dev] Notification update week 3

2018-01-15 Thread Balázs Gibizer

Hi,

Here is the status update / focus settings mail for 2018 w3.

Bugs

[High] https://bugs.launchpad.net/nova/+bug/1742935 
TestServiceUpdateNotificationSample fails intermittently: 
u'host2' != u'host1': path: root.payload.nova_object.data.host
The openstack-tox-functional (and functional-py35) test environment was 
totally broken last Friday. Sorry for that. The patch that 
caused the break has been reverted: 
https://review.openstack.org/#/c/533190/
A follow up bug has been opened (see next) to avoid a similar break in 
the future.


[High] https://bugs.launchpad.net/nova/+bug/1742962 nova functional 
test does not triggered on notification sample only changes
During the zuul v3 migration the project-config was generated based on the 
zuul v2 jobs. It contained a proper definition of when nova wants to 
trigger the functional job. Unfortunately this job definition does not 
override the openstack-tox-functional job definition from the 
openstack-zuul-jobs repo. This caused the openstack-tox-functional 
(and functional-py35) jobs not to be triggered for certain commits. The 
fix is to create a nova specific tox-functional job in tree. Patches 
have been proposed:
* https://review.openstack.org/#/c/533210/ Make sure that functional 
test triggered on sample changes
* https://review.openstack.org/#/c/533608/ Moving nova functional test 
def to in tree
In general we have to review all nova jobs in the project-config and 
move those in-tree that try to override parameters of the job 
definitions in openstack-zuul-jobs repo.


[High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when
sending notification during attach_interface
Fix merged to master. Backports have been proposed:
* Pike: https://review.openstack.org/#/c/531745/
* Queens: https://review.openstack.org/#/c/531746/

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
Patch has been proposed: https://review.openstack.org/#/c/529194/
Dan left feedback on it and I accept his comment that this is mostly 
papering over a problem when we don't fully understand how it can happen 
in the first place. On the other hand I don't know how we can figure 
out what happened. So if somebody has an idea then don't hesitate to 
tell me.


[Low] https://bugs.launchpad.net/nova/+bug/1742688 
test_live_migration_actions notification sample test fails 
intermittently with 'notification 
instance.live_migration_rollback.start hasn't been received'
It seems that test execution in CI is a lot slower than before and it 
makes the 1 second timeout in the notification test too small. Fix is 
in the gate: https://review.openstack.org/#/c/532816


[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/

Versioned notification transformation
-
Here are the patches ready to review:
* https://review.openstack.org/#/c/385644 Transform rescue/unrescue 
instance notifications

Needs only a second +2
* https://review.openstack.org/#/c/403660 Transform instance.exists 
notification
* https://review.openstack.org/#/c/410297 Transform missing delete 
notifications
* https://review.openstack.org/#/c/476459 Send soft_delete from context 
manager


Introduce instance.lock and instance.unlock notifications
---
A specless bp has been proposed to the Rocky cycle
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Some preliminary discussion happened in an earlier patch
https://review.openstack.org/#/c/526251/

Add the user id and project id of the user initiated the instance 
action to the notification


A new bp has been proposed 
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
As the user who initiates the instance action (e.g. reboot) could be 
different from the user owning the instance, it would make sense to 
include the user_id and project_id of the action initiator in the 
versioned instance action notifications as well.


Factor out duplicated notification sample
-
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
We have to be careful approving these types of commits until the 
solution for https://bugs.launchpad.net/nova/+bug/1742962 is merged, as 
functional tests could be broken silently.


Weekly meeting
--
There will not be a meeting this week. The next meeting will be held on 
23rd of January.

https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180123T17

Cheers,
gibi



[openstack-dev] [nova] [libvirt] [scaleio] ScaleIO libvirt volume driver native mode

2018-01-08 Thread Balázs Gibizer

Hi,

Two years ago a patch merged [1] that set the AIO mode of some of the 
libvirt volume drivers to 'native' instead of the default 'threads'. 
At that time the ScaleIO driver was not modified. Recently we did some 
measurements (on a Mitaka base) and we think that the ScaleIO volume 
driver could also benefit from the 'native' mode. So in Rocky we would 
like to propose a small change to set the 'native' mode for the ScaleIO 
volume driver too. Does anybody have opposing measurements or views?
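
A minimal sketch of the kind of change we have in mind (illustrative 
only, not a reviewed patch; it assumes the usual layout of nova's 
libvirt volume drivers):

from nova.virt.libvirt.volume import volume as libvirt_volume


class LibvirtScaleIOVolumeDriver(libvirt_volume.LibvirtBaseVolumeDriver):
    def get_config(self, connection_info, disk_info):
        conf = super(LibvirtScaleIOVolumeDriver, self).get_config(
            connection_info, disk_info)
        # Request native AIO; this renders as <driver ... io='native'/>
        # in the guest disk XML instead of the threaded default.
        conf.driver_io = "native"
        return conf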


Cheers,
gibi

[1] https://review.openstack.org/#/c/251829/




[openstack-dev] [nova] Notification update week 2

2018-01-08 Thread Balázs Gibizer

Hi,

Here is the status update / focus settings mail for 2018 w2.

Bugs

[High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when 
sending notification during attach_interface

Fix merged to master. Backports have been proposed:
* Pike: https://review.openstack.org/#/c/531745/
* Queens: https://review.openstack.org/#/c/531746/

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations 
fail to complete with versioned notifications if payload contains unset 
non-nullable fields

Patch has been proposed: https://review.openstack.org/#/c/529194/

[Low] https://bugs.launchpad.net/nova/+bug/1487038 
nova.exception._cleanse_dict should use 
oslo_utils.strutils._SANITIZE_KEYS

Old abandoned patches exist:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/


Versioned notification transformation
-
Here are the patches ready to review; the rest are in merge conflict or 
failing tests:
* https://review.openstack.org/#/c/410297 Transform missing delete 
notifications
* https://review.openstack.org/#/c/476459 Send soft_delete from context 
manager
* https://review.openstack.org/#/c/403660 Transform instance.exists 
notification



Introduce instance.lock and instance.unlock notifications
---
A specless bp has been proposed to the Rocky cycle 
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Some preliminary discussion happened in an earlier patch 
https://review.openstack.org/#/c/526251/



Factor out duplicated notification sample
-
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
There are two ongoing patches to look at.

Weekly meeting
--
The first meeting of 2018 is expected to be held on 9th of January.
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180109T17

Cheers,
gibi




Re: [openstack-dev] [nova][searchlight] status of an instance on the REST API and in the instance notifications

2017-08-07 Thread Balázs Gibizer
> -Original Message-
> From: Matt Riedemann [mailto:mriede...@gmail.com]
> Sent: July 26, 2017 03:23
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova][searchlight] status of an instance on the
> REST API and in the instance notifications
> 
> On 7/19/2017 10:54 AM, Balazs Gibizer wrote:
> > On Wed, Jul 19, 2017 at 5:38 PM, McLellan, Steven
> >  wrote:
> >> Thanks Balazs for noticing and replying to my message!
> >>
> >> The Status field is quite important to us since it's the indicator of
> >> VM state that Horizon displays most prominently and the most simple
> >> description of whether a VM is currently usable or not without having
> >> to parse the various _state fields. If we can't get this change added
> >> in Pike I'll probably implement a simplified version of the mapping in
> >> [2], but it would be really good to get it into the notifications in
> >> Pike if possible. I understand though that this late in the cycle it
> >> may not be possible.
> >
> > I can create a patch to add the status to the instance notifications but
> > I don't know if nova cores will accept it this late in Pike.
> > @Cores: Do you?
> >
> > Cheers,
> > gibi
> 
> It's probably too late to be dealing with this right now in Pike. I'd
> like to defer this to Queens where we can refactor the REST API common
> view code into a better place where it could be re-used with the
> notifications code if this is something we're going to add to the
> versioned notifications, and it's probably easy enough to do that.

I opened a bp for Queens [1].
@Searchlight folks: please check it and come back with feedback and additional 
requirements.
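
For reference, a simplified version of such a status mapping could look 
like this (my own sketch of the idea, not nova's actual view code; the 
handled states are illustrative, the real table in nova is richer):

def status_from_states(vm_state, task_state):
    # Derive the user facing status from the internal states.
    if vm_state == 'building':
        return 'BUILD'
    if vm_state == 'active' and task_state == 'rebooting':
        return 'REBOOT'
    if vm_state == 'stopped':
        return 'SHUTOFF'
    if vm_state == 'error':
        return 'ERROR'
    return vm_state.upper()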

Cheers,
gibi
[1] 
https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight-queens

> 
> --
> 
> Thanks,
> 
> Matt
> 


[openstack-dev] [nova] Next notification meeting

2017-01-16 Thread Balázs Gibizer
Hi,

The next notification subteam meeting will be held on 2017.01.17 17:00 UTC [1] 
on #openstack-meeting-4.

Cheers,
gibi

[1]
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170117T17



Re: [openstack-dev] [oslo][nova]Accessing nullable, not set versioned object field

2016-12-20 Thread Balázs Gibizer
From: Dan Smith 
Sent: 16 December 2016 16:33
>> NotImplementedError: Cannot load 'nullable_string' in the base class
>>
>> Is this the correct behavior?
> 
> Yes, that's the expected behaviour.

Yes.

>> Then what is the expected behavior if the field is also defaulted to
>> None?
>>
>> fields = {
>> 'nullable_string': fields.StringField(nullable=True,
>> default=None),
>> }
>>
>> The actual behavior is still the same exception above. Is it the
>> correct behavior?
> 
> Yes. So, what the default=None does is describe the behaviour of the
> field when obj_set_defaults() is called. It does *not* describe what is
> returned if the field *value* is accessed before being populated.
>
> What you're looking for is the obj_attr_is_set() method:
>
> 
> if MyObject.obj_attr_is_set('nullable_string'):
> print my_obj.nullable_string

I think you meant s/MyObject/my_obj/ above. However, in modern times,
it's better to use:

 if 'nullable_string' in myobj

On a per-object basis, it may also be reasonable to define
obj_load_attr() to provide the default for a field if it's not set and
attempted to be loaded.

> In addition to the obj_attr_is_set() method, use the obj_set_defaults()
> method to manually set all fields that have a default=XXX value to XXX
> if those fields have not yet been manually set:

There's another wrinkle here. The default=XXX stuff was actually
introduced before we had obj_set_defaults(), and for a very different
reason. That reason was confusing and obscure, and mostly supportive of
the act of converting nova from dicts to objects. If you look in fields,
there is an obscure handling of default, where if you _set_ a field to
None that has a default and is not nullable, it will gain the default value.

It's confusing and I wish we had never done it, but.. it's part of the
contract now and I'd have to do a lot of digging to see if we can remove
it (probably can from Nova, but...).

Your use above is similar to this, so I just wanted to point it out in
case you came across it and it led you to thinking your original example
would work.

--Dan

Thank you for the answers. Following up on this: is it considered good
practice to instantiate an ovo but keep some non-lazy-loaded
fields unset?

I think the user of the ovo instance should be able to assume that the
fields declared in the ovo are accessible after the ovo is instantiated
without manually checking obj_attr_is_set().

I know that lazy-loaded fields are a special case because the user of 
the ovo instance will see that the lazy-loaded field is not set if it calls 
obj_attr_is_set(), but as soon as user code tries to access it the 
backend will fetch and return the value of the field.

However there are cases in nova where an instantiated ovo has some
not set, non-lazy-loaded field. For example Service.availability_zone is
not lazy-loaded [2] but it is allowed to be left unset by [1].
Is it considered a bug? Should the code [1] set Service.availability_zone to 
None instead of keeping it not set?

Cheers,
gibi

[1] https://github.com/openstack/nova/blob/master/nova/objects/service.py#L197
[2] https://github.com/openstack/nova/blob/master/nova/objects/service.py#L221
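
To summarize the behaviour discussed above, a small sketch (same style as 
my original example; obj_set_defaults() and the 'in' check are the 
oslo.versionedobjects ways to deal with unset fields):

from oslo_versionedobjects import base
from oslo_versionedobjects import fields


@base.VersionedObjectRegistry.register
class MyObject(base.VersionedObject):

    VERSION = '1.0'
    fields = {
        'nullable_string': fields.StringField(nullable=True, default=None),
    }


my_obj = MyObject()
print('nullable_string' in my_obj)  # False: declared but not yet set
my_obj.obj_set_defaults()           # sets every field that has default=...
print('nullable_string' in my_obj)  # True
print(my_obj.nullable_string)       # None, without raising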



[openstack-dev] [oslo][nova]Accessing nullable, not set versioned object field

2016-12-16 Thread Balázs Gibizer
Hi, 

What is the expected behavior of accessing a nullable and 
not set versioned object field?
See the following example code:

from oslo_versionedobjects import base
from oslo_versionedobjects import fields


@base.VersionedObjectRegistry.register
class MyObject(base.VersionedObject):

VERSION = '1.0'
fields = {
'nullable_string': fields.StringField(nullable=True),
}


my_obj = MyObject()
my_obj.nullable_string

#EOF

My naïve expectation would be that the value of my_obj.nullable_string
is None but the actual behavior is an exception:

Traceback (most recent call last):
  File "ovo_nullable_test.py", line 15, in 
my_obj.nullable_string
  File 
".tox/functional/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 67, in getter
self.obj_load_attr(name)
  File 
".tox/functional/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 603, in obj_load_attr
_("Cannot load '%s' in the base class") % attrname)
NotImplementedError: Cannot load 'nullable_string' in the base class

Is this the correct behavior?

Then what is the expected behavior if the field is also defaulted to None?

fields = {
    'nullable_string': fields.StringField(nullable=True, default=None),
}

The actual behavior is still the same exception above. Is it the correct 
behavior?
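
For what it's worth, obj_set_defaults() does apply the default and avoids
the exception (a sketch, assuming MyObject is redefined with the
default=None field above):

my_obj = MyObject()
my_obj.obj_set_defaults()
print(my_obj.nullable_string)  # prints None, no exception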

More real life example:
Nova has a Service object that has a nullable availability_zone field [1].
When a Service object is loaded from the db, the code allows the
availability_zone field to be left unfilled [2]. This results in a Service
object instance that will produce the above exception if the code later
tries to access the availability_zone field.

The whole problem arises when we try to send a service status notification
and want to copy the fields from the Service object blindly to the
notification payload. To avoid the above exception we added a check to the
notification payload generation [3] to see whether the given field is set.
But as a result, a field that is lazy-loaded but not yet loaded is handled
the same way as a non-lazy-loaded field that was never set. In the
lazy-load case we might want to trigger a lazy-load during the copy, but
in the non-lazy-load case accessing the field would cause an exception.
Currently I don't see a way to distinguish between the two cases without
triggering the lazy-load / exception itself.
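
A sketch of where the ambiguity bites in the payload copy (the names are
illustrative, not the actual code in [3]):

payload = {}
for field in fields_to_copy:
    if obj.obj_attr_is_set(field):
        payload[field] = getattr(obj, field)
    else:
        # could be (a) a lazy-loadable field we may want to load now, or
        # (b) a non-lazy-loaded field that was never set, where getattr()
        # would raise -- there is no API to tell (a) and (b) apart
        pass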

Cheers,
gibi

[1] https://github.com/openstack/nova/blob/master/nova/objects/service.py#L133 
[2] https://github.com/openstack/nova/blob/master/nova/objects/service.py#L197 
[3] https://github.com/openstack/nova/blob/master/nova/notifications/objects/base.py#L97



[openstack-dev] [nova] Next notification meeting in 2017

2016-12-15 Thread Balázs Gibizer
Hi, 

Due to the holiday season the next notification subteam meeting will be
held on the 3rd of January [1].

Cheers,
gibi
[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170103T17 



[openstack-dev] [nova] next notification subteam meeting

2016-11-15 Thread Balázs Gibizer
Hi, 

The next notification subteam meeting will be held on 2016.11.15 17:00 UTC [1] 
on #openstack-meeting-4. 

Cheers,
gibi

[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20161115T17



[openstack-dev] [nova] next notification subteam meeting

2016-11-03 Thread Balázs Gibizer
Hi, 

The next notification subteam meeting will be held on 2016.11.08 17:00 UTC [1] 
on #openstack-meeting-4. Also I proposed [2] to change the meeting frequency 
from biweekly to weekly until the feature freeze of Ocata.

Cheers,
Gibi

[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20161108T17
[2] https://review.openstack.org/#/c/393223/ 



Re: [openstack-dev] [nova] [searchlight] Discrepancies between nova server notifications and API response

2016-11-03 Thread Balázs Gibizer
> -Original Message-
> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> Sent: October 31, 2016 14:01
> 
> On 10/28/2016 4:32 PM, McLellan, Steven wrote:
> > Hi,
> >
> > I was unfortunately unable to make the summit but I'm told there were
> > some good discussions around possible integration with searchlight to
> > help some scaling cases with nova cells. One issue we've had with
> > Searchlight is differences between the notifications and API
> > representation of server state, and Travis asked me to file some bug
> > reports against Nova to get a conversation started. I've filed four
> > bugs at https://bugs.launchpad.net/nova/+bugs?field.tag=searchlight
> > (the reason for separating them is that some may not be
> > straightforward/possible) to that end. I am out next week, but it
> > would be great to get some time at one of the nova weekly meetings the
> > following weeks to discuss it further.
> >
> > Thanks, and safe travels for those returning home from the summit,
> >
> > Steve
> 
> Thanks, I'll start going through these. Some of them might be covered by
> specs that are being worked in Ocata.

I checked all three bug reports tagged with searchlight and left some
questions in them. We can discuss them at the next notification subteam
meeting [1] if needed.

Cheers,
gibi

[1] https://wiki.openstack.org/wiki/Meetings/NovaNotification#Next_Meeting 

> 
> Also note that others that are for resources which get proxied from nova, like
> security groups in this bug:
> 
> https://bugs.launchpad.net/nova/+bug/1637635
> 
> Won't be fixed because nova is deprecating the APIs to perform CRUD
> operations on proxy resources, like network resources. You'd get that
> information from Neutron.
> 
> --
> 
> Thanks,
> 
> Matt Riedemann
> 
> 


Re: [openstack-dev] Event notification descriptors/schemas (? swagger ?)

2016-10-14 Thread Balázs Gibizer
> -Original Message-
> From: Joshua Harlow [mailto:harlo...@fastmail.com]
> Sent: October 11, 2016 22:36
> 
> Chris Dent wrote:
> > On Tue, 11 Oct 2016, Joshua Harlow wrote:
> >
> >> Damn, that's crazy that the projects emitting events don't want to
> >> own the formats and versions (and schemas) that they emit. That is
> >> ummm, like ummm, what the, ha, words can't describe... And the fact
> >> that nothing much has changed since kilo, ya, also a what the...
> >
> > Nova started with versioning and schematizing notifications with this
> > blueprint:
> >
> > https://blueprints.launchpad.net/nova/+spec/versioned-notification-api
> >
> > That's sort of in the realm of what's being discussed here, but
> > centralized in nova for now.
> >
> > I agree that siloing the stuff in the code is bad in the long term,
> > but I guess it is good that it has started somewhere.
> >
> 
> Thanks for sharing, didn't recall that work being done.
> 
>  From glancing at it, it seems to be nova versioning its notifications (which 
> is
> good) but I'm unclear what the objectification of those notifications means to
> the outside world (the actual consumers of notifications). Said outside world
> uses more than just python so it feels like some other intermediary format
> should be exposed to consumers as the schema that various python and
> non-python languages can consume (either via auto-generation of code or
> other).
> 
> Perhaps just jsonschema is enough (though it doesn't feel like it)? Has
> anyone tried 'to_json_schema' on those objects and outputting that schema
> into a nova/schema/notifications folder (or equivalent)?

This is exactly what we are planning to do. Work is ongoing to add
to_json_schema support for every VersionedObject field [1]. Then we would
like to add a small tool to nova that makes it possible to generate the
json schemas for the versioned notifications [2]. Meanwhile we continue
to transform legacy notifications to a versioned format [3].

As soon as you have the json schema, you can find (or create) tools that
generate an object model and a parser from it in any modern language.
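
For example, a consumer-side sketch in python with the jsonschema library
(the schema below is a toy stand-in for what the planned tool would
generate, not real output):

import jsonschema

schema = {
    'type': 'object',
    'properties': {
        'availability_zone': {'type': ['string', 'null']},
    },
    'required': ['availability_zone'],
}

payload = {'availability_zone': None}  # e.g. from a received notification
jsonschema.validate(payload, schema)   # raises ValidationError on mismatch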

I hope this work in nova will serve as an example for other OpenStack
projects, and that in the end OpenStack will have well-defined and
easy-to-consume notifications.

Any feedback on our plans is highly appreciated.

Cheers,
gibi

[1] https://review.openstack.org/#/q/topic:bp/json-schema-for-versioned-object,n,z
[2] https://blueprints.launchpad.net/nova/+spec/json-schema-for-versioned-notifications
[3] https://vntburndown-gibi.rhcloud.com/index.html

> 
> -Josh
> 


[openstack-dev] [nova] Notification tranformation burndown chart

2016-10-05 Thread Balázs Gibizer
Hi, 

Notification transformation work continues in nova for Ocata [1]. I've
created a burndown chart that is automatically updated from gerrit, based
on sdague's api-ref burndown code. Please use that page [2] to follow the
work and to see what to do and what to review.

[1] https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-ocata
[2] https://vntburndown-gibi.rhcloud.com/index.html

Cheers,
gibi



[openstack-dev] [nova] next notification subteam meeting

2016-10-03 Thread Balázs Gibizer
Hi, 

The next notification subteam meeting will be held on 2016.10.04 17:00 UTC [1] 
on #openstack-meeting-4.

Cheers,
Gibi

[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20161004T17



[openstack-dev] [nova] next notification subteam meeting

2016-09-19 Thread Balázs Gibizer
Hi, 

The next notification subteam meeting will be held on 2016.09.20 17:00 UTC [1] 
on #openstack-meeting-4.

Cheers,
Gibi

[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160920T17



[openstack-dev] [nova] next notification subteam meeting

2016-09-06 Thread Balázs Gibizer
Hi, 

The next notification subteam meeting will be held on 2016.09.06 17:00 UTC [1] 
on #openstack-meeting-4.

Cheers,
Gibi

[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160906T17


