Re: [openstack-dev] [heat] api-ref gate job is active

2016-05-18 Thread Sergey Kraynev
Hi guys,

just for clarification: do I understand correctly that we now store all
API-related docs in the local (Heat) repository and have no copy of this
stuff anywhere else? So we can easily update it ourselves and don't need to
push the same commit to another repository?

Also, does the library mentioned in Sean's patch allow building and
publishing these docs, or does it do something else?

On 18 May 2016 at 17:23, Sean Dague  wrote:

> On 05/18/2016 09:58 AM, Jay Dobies wrote:
> > Just a quick note that there is a new job active called
> > gate-heat-api-ref. Our API documentation has been pulled into our tree
> > [1] and you can run it locally with `tox -e api-ref`.
> >
> > For now, it's a direct port of our existing API docs, but I'm planning
> > on taking a pass over them to double check that they are still valid.
> > Feel free to ping me if you have any questions/issues.
> >
> > [1] https://review.openstack.org/#/c/312712/
>
> Very cool. I proposed a review which switches over to the extracted
> library today - https://review.openstack.org/#/c/318019/ which passes
> with all your data.
>
> Thanks for digging in so early in this process.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Sergey.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][oslo.messaging] stable/mitaka is broken by oslo.messaging 5.0.0

2016-05-18 Thread Renat Akhmerov
https://github.com/openstack/requirements/blame/master/README.rst#L95-L100 


Doug, is it still relevant? I'm just trying to understand the best way to
enforce upper-constraints.txt for our jobs like py27, py34, etc.
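
(A minimal sketch of the pattern I have in mind, assuming the jobs should
install with the requirements repo's constraints file applied; the env
variable name and URL below are assumptions, not a settled convention:

    [testenv]
    install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}

pip's -c flag applies the file as a constraints list without installing
anything from it directly.)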

Renat Akhmerov
@Nokia

> On 19 May 2016, at 12:15, Renat Akhmerov  wrote:
> 
>> 
>> On 18 May 2016, at 22:51, Joshua Harlow <harlo...@fastmail.com> wrote:
>> 
>> Roman Dobosz wrote:
>>> On Tue, 17 May 2016 21:41:11 -0700
>>> Joshua Harlow <harlo...@fastmail.com> wrote:
>>> 
>> Options I see:
>> Constrain oslo.messaging in global-requirements.txt for
>> stable/mitaka with 4.6.1. Hard to do since it requires wide
>> cross-project coordination.
>> Remove that hack in stable/mitaka as we did with master. It may
>> be bad because this was wanted very much by some of the users
>> 
>> Not sure what else we can do.
> You could set up your test jobs to use the upper-constraints.txt
> file in
> the requirements repo.
> 
> What was the outcome of the discussion about adding the
> at-least-once
> semantics to oslo.messaging?
 So there are a few options I am seeing so far (there might be more
 that I don't see also), others can hopefully correct me if they are
 wrong (which they might be) ;)
 
 Option #1
 
 Oslo.messaging (and the dispatcher part that does this) stays as is,
 doing at-most-once for RPC (notifications are in a different
 category here so let's not discuss them) and doing at-most-once well
 and battle-hardened (its current goal) across the various backend
 drivers it supports.
 
 At that point at-least-once will have to be done via some other library
 where this kind of semantics can be placed, that might be tooz via
 https://review.openstack.org/#/c/260246/ (which has similar
 semantics, but is not based on a kind of RPC, instead it's more like
 a job-queue).
 
 Option #2
 
 Oslo.messaging (and the dispatcher part that does this) changes
 (possibly allowing it to be replaced with a different type of
 dispatcher, ie like in https://review.openstack.org/#/c/314732/);
 the default class continues being great for RPC (notifications
 are in a different category here so let's not discuss them) and
 doing at-most-once well and battle-hardened (its current goal)
 across the various backend drivers it supports. If people want to
 provide an alternate class with different semantics they are
 somewhat on their own (but at least they can do this).
 
 Issues raised: this though may not be wanted, as some of the
 oslo.messaging folks do not want the dispatcher class to be exposed
 at all (and would prefer to make it totally private, so exposing it
 would be against that goal); though people are already 'hacking'
 this kind of functionality in, so it might be the best we can get at
 the current time?
 
 Option #3
 
 Do nothing.
 
 Issues raised: every time oslo.messaging changes this *mostly*
 internal dispatcher API a project will have to make a new 'hack' to
 replace it and hope that the semantics that it has 'hacked' in will
 continue to be compatible with the various drivers in
 oslo.messaging. Not IMHO a sustainable way to keep on working (and
 I'd be wary of doing this in a project if I was the owner of one,
 because it's ummm, 'dirty').
 
 My thoughts on what could work:
 
 What I'd personally like to see is a mix of option #1 and #2, where
 we have commitment from folks (besides myself, lol) to work on
 option #1 and we temporarily move forward with option #2 with a
 strict-statement that the functionality we would be enabling will
 only exist for say a single release (and then it will be removed).
 
 Thoughts from others?
>>> 
>>> Option #4
>>> 
>>> (Which might be obvious) Directly use RabbitMQ driver, like
>>> pika/kombu, which can expose all the message queue features to the
>>> project.
>>> 
>>> Issues raised: Pushback from the community due to not using
>>> oslo.messaging and the potential necessity of implementing it for other
>>> drivers/transports, or forcing the use of a particular message queue/driver
>>> in every project.
>>> 
>> 
>> Isn't this similar/the same as option #1? Nothing about option #1 says (from
>> my understanding) that it must be implemented via oslo.messaging (and in all
>> likelihood it can't be).
> 
> 
> We’ll most likely proceed with #1 (special case: #4) for now, just for
> progress' sake.
> 
> It seems to me that we as a community may just need to accumulate more
> data/experience/well-realized patterns so that we can explain the value of
> certain patterns more clearly. Through our endeavours and research, hopefully
> we’ll be able to communicate our thoughts better. As for the semantic
> differences between RPC vs. Messaging vs. Jobs vs. Notifications vs. a
> concrete implementation of any of those, it's simply a matter of how we want
> to do it, less a matter of what is right. I'm just saying it's OK for now that
> we can't come to a consensus. We need to keep in touch and exchange ideas.
> Excuse the lyrical digression.

Re: [openstack-dev] [kolla] Vagrant environment for kolla-kubernetes

2016-05-18 Thread Michal Rostecki
OK, looks like we have consensus on implementing that in the main kolla 
repo.


Thanks,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [AodhClient] "composite alarm" unit test missing in aodhclient ?

2016-05-18 Thread Ryota Mibu
Hi,


The test case you pointed to is for a threshold alarm, so it is OK and expected
that "composite_rule" is None.

Checking the test code is a good idea. You can add new tests when you find
something missing by posting a new patch.


BR,
Ryota

> -Original Message-
> From: li.yuanz...@zte.com.cn [mailto:li.yuanz...@zte.com.cn]
> Sent: Thursday, May 19, 2016 12:27 PM
> To: openstack-dev@lists.openstack.org
> Cc: aji.zq...@gmail.com; ildiko.van...@ericsson.com; lianhao...@intel.com; 
> liusheng2...@gmail.com; Mibu Ryota(壬生 亮
> 太); Julien Danjou
> Subject: [Openstack] [AodhClient] "composite alarm" unit test missing in 
> aodhclient ?
> 
> Hi All,
> In aodhclient/tests/unit/test_alarm_cli.py [1], the "composite_rule" is None.
> Is the composite_rule test missing, and should we add it?
> 
> [1] 
> https://github.com/openstack/python-aodhclient/blob/master/aodhclient/tests/unit/test_alarm_cli.py
> 
> 
> Rajen(liyuanzhen)
> 
> 
> 
> ZTE Information Security Notice: The information contained in this mail (and 
> any attachment transmitted herewith) is
> privileged and confidential and is intended for the exclusive use of the 
> addressee(s).  If you are not an intended recipient,
> any disclosure, reproduction, distribution or other dissemination or use of 
> the information contained is strictly prohibited.
> If you have received this mail in error, please delete it and notify us 
> immediately.
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][oslo.messaging] stable/mitaka is broken by oslo.messaging 5.0.0

2016-05-18 Thread Renat Akhmerov

> On 18 May 2016, at 22:51, Joshua Harlow  wrote:
> 
> Roman Dobosz wrote:
>> On Tue, 17 May 2016 21:41:11 -0700
>> Joshua Harlow  wrote:
>> 
> Options I see:
> Constrain oslo.messaging in global-requirements.txt for
> stable/mitaka with 4.6.1. Hard to do since it requires wide
> cross-project coordination.
> Remove that hack in stable/mitaka as we did with master. It may
> be bad because this was wanted very much by some of the users
> 
> Not sure what else we can do.
 You could set up your test jobs to use the upper-constraints.txt
 file in
 the requirements repo.
 
 What was the outcome of the discussion about adding the
 at-least-once
 semantics to oslo.messaging?
>>> So there are a few options I am seeing so far (there might be more
>>> that I don't see also), others can hopefully correct me if they are
>>> wrong (which they might be) ;)
>>> 
>>> Option #1
>>> 
>>> Oslo.messaging (and the dispatcher part that does this) stays as is,
>>> doing at-most-once for RPC (notifications are in a different
>>> category here so let's not discuss them) and doing at-most-once well
>>> and battle-hardened (its current goal) across the various backend
>>> drivers it supports.
>>> 
>>> At that point at-least-once will have to be done via some other library
>>> where this kind of semantics can be placed, that might be tooz via
>>> https://review.openstack.org/#/c/260246/ (which has similar
>>> semantics, but is not based on a kind of RPC, instead it's more like
>>> a job-queue).
>>> 
>>> Option #2
>>> 
>>> Oslo.messaging (and the dispatcher part that does this) changes
>>> (possibly allowing it to be replaced with a different type of
>>> dispatcher, ie like in https://review.openstack.org/#/c/314732/);
>>> the default class continues being great for RPC (notifications
>>> are in a different category here so let's not discuss them) and
>>> doing at-most-once well and battle-hardened (its current goal)
>>> across the various backend drivers it supports. If people want to
>>> provide an alternate class with different semantics they are
>>> somewhat on their own (but at least they can do this).
>>> 
>>> Issues raised: this though may not be wanted, as some of the
>>> oslo.messaging folks do not want the dispatcher class to be exposed
>>> at all (and would prefer to make it totally private, so exposing it
>>> would be against that goal); though people are already 'hacking'
>>> this kind of functionality in, so it might be the best we can get at
>>> the current time?
>>> 
>>> Option #3
>>> 
>>> Do nothing.
>>> 
>>> Issues raised: every time oslo.messaging changes this *mostly*
>>> internal dispatcher API a project will have to make a new 'hack' to
>>> replace it and hope that the semantics that it has 'hacked' in will
>>> continue to be compatible with the various drivers in
>>> oslo.messaging. Not IMHO a sustainable way to keep on working (and
>>> I'd be wary of doing this in a project if I was the owner of one,
>>> because it's ummm, 'dirty').
>>> 
>>> My thoughts on what could work:
>>> 
>>> What I'd personally like to see is a mix of option #1 and #2, where
>>> we have commitment from folks (besides myself, lol) to work on
>>> option #1 and we temporarily move forward with option #2 with a
>>> strict-statement that the functionality we would be enabling will
>>> only exist for say a single release (and then it will be removed).
>>> 
>>> Thoughts from others?
>> 
>> Option #4
>> 
>> (Which might be obvious) Directly use RabbitMQ driver, like
>> pika/kombu, which can expose all the message queue features to the
>> project.
>> 
>> Issues raised: Pushback from the community due to not using
>> oslo.messaging and the potential necessity of implementing it for other
>> drivers/transports, or forcing the use of a particular message queue/driver
>> in every project.
>> 
> 
> Isn't this similar/the same as option #1? Nothing about option #1 says (from
> my understanding) that it must be implemented via oslo.messaging (and in all
> likelihood it can't be).


We’ll most likely proceed with #1 (special case: #4) for now, just for
progress' sake.

It seems to me that we as a community may just need to accumulate more
data/experience/well-realized patterns so that we can explain the value of
certain patterns more clearly. Through our endeavours and research, hopefully
we’ll be able to communicate our thoughts better. As for the semantic
differences between RPC vs. Messaging vs. Jobs vs. Notifications vs. a
concrete implementation of any of those, it's simply a matter of how we want
to do it, less a matter of what is right. I'm just saying it's OK for now that
we can't come to a consensus. We need to keep in touch and exchange ideas.
Excuse the lyrical digression.

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [mistral][oslo.messaging] stable/mitaka is broken by oslo.messaging 5.0.0

2016-05-18 Thread Renat Akhmerov

> On 19 May 2016, at 01:46, Doug Hellmann  wrote:
> 
> Excerpts from Renat Akhmerov's message of 2016-05-18 11:07:29 +0700:
>> 
>>> On 17 May 2016, at 21:50, Doug Hellmann  wrote:
>>> 
>>> Excerpts from Renat Akhmerov's message of 2016-05-17 19:10:55 +0700:
 Team,
 
 Our stable/mitaka branch is now broken by oslo.messaging 5.0.0. Global 
 requirements for stable/mitaka has oslo.messaging>=4.0.0 so it can fetch 
 5.0.0.
 
 Just reminding that it breaks us because we intentionally modified 
 RPCDispatcher like in [1]. It was needed for “at-least-once” delivery. In 
 master we already agreed to remove that hack and work towards having a 
 decent solution (there are options). The patch is [2]. But we need to 
 handle it in mitaka somehow.
 
 Options I see:
 Constrain oslo.messaging in global-requirements.txt for stable/mitaka with 
 4.6.1. Hard to do since it requires wide cross-project coordination.
 Remove that hack in stable/mitaka as we did with master. It may be bad 
 because this was wanted very much by some of the users
 
 Not sure what else we can do.
>>> 
>>> You could set up your test jobs to use the upper-constraints.txt file in
>>> the requirements repo.
>> 
>> Yes, it’s an option. I’m just thinking from a regular user perspective. 
>> There will be a lot of people who don’t know about upper-constraints.txt and 
>> they will stumble on it just using our requirements.txt. My question
>> here is: is upper-constraints.txt something that’s officially promoted and
>> should be used by everyone, or is it mostly introduced for our internal
>> OpenStack gating system?
> 
> Anyone installing from PyPI should use the constraints list. Anyone
> installing from system packages won't need it.


Ok, thanks Doug.
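
For the record, applying the constraints list when installing from PyPI is
just pip's -c flag (a sketch; the exact raw-file URL and the ?h= branch
parameter are assumptions about where the stable/mitaka list lives):

    pip install -c 'https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/mitaka' mistral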

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]The backend-group concept in Cinder

2016-05-18 Thread John Griffith
On Wed, May 18, 2016 at 9:45 PM, chenying  wrote:

> Hi all:
> I want to know whether the backend-group concept has been
> discussed, or whether there are other recommendations for us.
> The backend-group concept can be regarded as a mechanism to
> manage cinder backends of the same type.
>

Sorry, I'm not quite following the analogy here.  Cinder allows multiple
backends and always has (similar to compute nodes in Nova), and we also
allow you to run them on a single node (particularly for cases like
external storage backends, where there is no need to deploy a node to
configure them).


> (The backend-group concept is like a nova Aggregate; the Flavor in Nova
> corresponds to the VolumeType in Cinder.)
> While backends are visible to users, backend-groups are only visible to
> the admin.
>

So actually, in Cinder backends are not visible to regular users.  We
abstract any/all devices from the user.  Not quite sure I follow.


>
> We can use this mechanism dynamically to add/delete one backend
> from a backend-group without restarting volume services.
>
>User case 1:
>The backends in backend-group-1 have SSD disks and more memory;
> backend-group-1 can provide higher performance to the user.
>   The other backends in backend-group-2 have HDD disks and more
> capacity; backend-group-2 can provide more storage space to the user.
>
Not sure, but we sort of do some of this already via the filter
scheduler.  An Admin can define various types (they may be set up based on
performance, ssd, spinning-rust etc).  Those types are then given arbitrary
definitions via a type (again details hidden from end user) and he/she can
create volumes of a specific type.


>User case 2:
>  The backend-group is set with specific metadata/extra-specs
> (capabilities). Each node can have multiple backend-groups, each backend-group
> can have multiple key-value pairs, and the same key-value pair can be
> assigned to multiple backend-groups. This information can be used in
> the scheduler to enable advanced scheduling;
>  the scheduler will select backends from the backend-group only.
>

We have the capability to do this already, at least to an extent.  Perhaps
if you provide more details in this use case I can better understand.  It
is possible today to group multiple backends into a single Volume Type.  So
for example you could say "I want all backends with capability XYZ" and the
filter scheduler will handle that for you already (well, there are some
details on what those capabilities are currently).
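
For example, roughly (a sketch; the type and backend names are made up, but
this is the usual way to group several backends under one type):

    # Define a type and key it to every backend that reports the same
    # volume_backend_name (several backends may share that name).
    cinder type-create gold
    cinder type-key gold set volume_backend_name=ssd-cluster

    # Users then just ask for the type; the filter scheduler picks a backend.
    cinder create --volume-type gold 10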


>
>
>
> Thanks
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
I'd be interested in hearing more about what you're thinking.  The concept
of dynamically adding/removing backends I think I kinda get; the risk there
is dealing with the existing data (volumes) on the backend when you remove
it.  We could do a migration, but that gets kinda ugly sometimes.  One
thing I have always wanted to see is a way to dynamically add/remove
backends, by dynamically I mean without restarting the c-vol services.  I'm
not sure there's a great use case or need for it though so I've never
really spent much time on it.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-18 Thread Hong Hui Xiao
Thanks for the clarification. If we are going to have dhcp service in
every segment separately, then I think the current behavior is reasonable.
The remaining segments can use dhcp via the dhcp ports in their own
segments.

HongHui Xiao(肖宏辉)




From:   Carl Baldwin 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   05/19/2016 05:34
Subject:Re: [openstack-dev] [Neutron][ML2][Routed Networks]



On Wed, May 18, 2016 at 5:24 AM, Hong Hui Xiao  
wrote:
> I updated [1] to auto-delete the dhcp port if there are no other ports. But
> after the dhcp port is deleted, the dhcp service is not usable. I can

I think this is what I expect.

> resume the dhcp service by adding another subnet, but I don't think it is
> a good way. Do we need to consider binding the dhcp port to another segment
> when deleting the existing one?

Where would you bind the port?  DHCP requires L2 connectivity to the
segment which it serves.  But, you deleted the segment.  So, it makes
sense that it wouldn't work.

Brandon is working on DHCP scheduling which should take care of this.
DHCP should be scheduled to all of the segments with DHCP enabled
subnets.  It should have a port for each of these segments.  So, if a
segment (and its ports) are deleted, I think the right thing to do is
to make sure that DHCP scheduling removes DHCP from that segment.  I
would expect this to happen automatically when the subnet is deleted.
We should check with Brandon to make sure this works (or will work
when his work merges).

Carl

> [1] https://review.openstack.org/#/c/317358

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [aodhclient] "composite alarm" unit test missing in aodhclient ?

2016-05-18 Thread li . yuanzhen
Hi All,
In aodhclient/tests/unit/test_alarm_cli.py [1], the
"composite_rule" is None.
Is the composite_rule test missing, and should we add it?

[1] 
https://github.com/openstack/python-aodhclient/blob/master/aodhclient/tests/unit/test_alarm_cli.py

Rajen(liyuanzhen)

ZTE Information Security Notice: The information contained in this mail (and 
any attachment transmitted herewith) is privileged and confidential and is 
intended for the exclusive use of the addressee(s).  If you are not an intended 
recipient, any disclosure, reproduction, distribution or other dissemination or 
use of the information contained is strictly prohibited.  If you have received 
this mail in error, please delete it and notify us immediately.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder]The backend-group concept in Cinder

2016-05-18 Thread chenying
Hi all:
I want to know whether the backend-group concept has been discussed,
or whether there are other recommendations for us.
The backend-group concept can be regarded as a mechanism to
manage cinder backends of the same type.
(The backend-group concept is like a nova Aggregate; the Flavor in Nova
corresponds to the VolumeType in Cinder.)
While backends are visible to users, backend-groups are only visible to the admin.
 
We can use this mechanism dynamically to add/delete one backend from a
backend-group without restarting volume services.

   User case 1:
   The backends in backend-group-1 have SSD disks and more memory;
backend-group-1 can provide higher performance to the user.
   The other backends in backend-group-2 have HDD disks and more
capacity; backend-group-2 can provide more storage space to the user.
   User case 2:
 The backend-group is set with specific metadata/extra-spec 
(capabilities): each node can have multiple backend-groups, each backend-group
can have multiple key-value pairs, and the same key-value pair can be assigned
to multiple backend-groups. This information can be used in
the scheduler to enable advanced scheduling;
the scheduler will select backends from the backend-group only.

   

Thanks
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova] Gap between host cpu features and guest cpu's

2016-05-18 Thread 金运通
If this is a real gap, it sounds more like a bug in the cpu capability, and the
fix won't change any API.
Will this fix depend on the host capability framework?

BR,
YunTongJin

2016-05-11 15:28 GMT+08:00 Sylvain Bauza :

>
>
> On 11/05/2016 05:12, Jin, Yuntong wrote:
>
> Hi everyone,
>
> Currently nova exposes all the host CPU instruction set extensions
> available
> on the compute node in the host state, and there is a scheduler filter
> `ComputeCapabilitiesFilter` which looks at these.
>
> But the limitation of this is:
> the CPU instruction set in ComputeCapabilitiesFilter should be the guest's view
> instead of the host's.
>
> An admin may use a specific set of CPU instructions to deploy an instance to make
> it migratable in a heterogeneous cloud.
> This is actually by design in nova, as nova is using baselineCPU
> and is allowed to pass/configure guest CPU instruction features for an instance.
>
> Shall we add a string “guest_features” in ``ComputeNode`` object as
> ``ComputeNode:cpu_info:guest_features``
> And let ComputeCapabilitiesFilter use guest_features instead of host
> features here?
>
> Is this a real gap ? and the above easy fix is the right way ?
>
>
>
> FWIW, we had a discussion during the Design Summit on the scheduler Nova
> design session about host capabilities and what we call "qualitative
> resources" [1]
>
> A first step for helping our users to discover the CPU capabilities is to
> provide a Nova abstraction between all our related hypervisor driver
> features and you can comment on a proposal [2]
>
> HTH,
> -Sylvain
>
> [1] https://etherpad.openstack.org/p/newton-nova-scheduler
>
> [2] https://review.openstack.org/#/c/309762/
>
> Thanks
> -yuntongjin
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [API] os-api-ref HTTP Response code formating

2016-05-18 Thread Qiming Teng


> Having some generic language for each error message would help. Typing "User
> must authenticate" every time there is a 401 is tiresome, and will just
> mean typos. Ideally even generating links to an overview of what each
> code means in general would be nice. I was assuming we were going to
> write up a dedicated page about error codes.
> 
> There are times when a 409 means something pretty specific, and I
> wonder if we want to reference it there instead of elsewhere.
> 
> I kind of wonder if:
> 
> .. rest_status_code:: 400, 401, 403, 405, 409, 500
> 
>   - 409: The fizbuz is locked and can't be updated until unlock is
> performed.
> 
> And generic messages for everything without the extra entry would work.
> 

While I do like this generalization because it will help kill typos, I'm
a little worried that we are introducing more and more REST stanzas into
the docs. It would be desirable for the docs to serve both human
readers and programs that can parse them for API checking/validation.
Team, please consider this aspect when improving the doc formatting.

Regards,
  Qiming


>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fixtures] Re: [Neutron][Stable][Liberty][CLI] Gate broken

2016-05-18 Thread Robert Collins
On 18 May 2016 at 19:55, Ihar Hrachyshka  wrote:
>
>> On 18 May 2016, at 05:31, Darek Smigiel  wrote:
>>
>> Hello Stable Maint Team,
>> It seems that python-neutronclient gate for Liberty is broken[1][2] by 
>> update for keystoneclient. OpenStack proposal bot already sent update to 
>> requirements [3], but it needs to be merged.
>> If you have enough power, please unblock gate.
>>
>> Thanks,
>> Darek
>>
>> [1] https://review.openstack.org/#/c/296580/
>> [2] https://review.openstack.org/#/c/296576/
>> [3] https://review.openstack.org/#/c/258336/
>
> Right.
>
> I actually looked at the requirements update yesterday, and the problem is 
> that it also fails in gate due to fixtures 2.0.0 being used in client gate, 
> and apparently misbehaving for python3:

FWIW there is actually a fairly deep bug in MonkeyPatch that we'd
simply been fortunate not to hit, though folk had been unknowingly
working around edge cases for some time. We applied what looked like a
shallow fix leading to 2.0.0, but investigating the fallout from that
got us to understand the actual depth of the rabbit hole, and Andrew
and I have been collaborating on a comprehensive, fix-it-forever patch
set since.

-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk] Request for volunteer trainers at PyCon

2016-05-18 Thread David F Flanders
The Foundation has been given an opportunity to do a training for
Application Developers at PyCon on Saturday, May 28th, at 11am for 90 minutes.

This is a request for those familiar with the OpenStack SDK to help work
through the "My 1st App Guide" with the SDK of your choice.

This is a great opportunity to road test the SDKs with our main user
audience: application developers.

As many of you will have discussed in Austin, we are yet to reach consensus
on a standard Python SDK, with the community split over Shade, LibCloud and
the Python SDK which the CLI is built upon, let alone SDKs purpose-built
for single clouds:

https://wiki.openstack.org/wiki/SDKs#Python

This training is a great 'early doors' opportunity for us to work with
Application Developers at the coal face, to find some signal from noise.

Naturally, some consensus is needed so we can encourage OpenStack clouds to
have a shared SDK for Application Developers.

One question which AppDevs from outside our community continue to ask:
"why would I use an SDK which only works with OpenStack, I don't want to be
locked into working with just one cloud?"

If you are interested in helping out please contact me ASAP.  I'm working
on wrangling some Portland Stackers to provide on-hand support as well, so
you won't be alone.

Any consideration, guidance, and/or bravery to go into the breach is greatly
appreciated!

Kind Regards,

Flanders
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc]Leadership training sign-up available for entire community

2016-05-18 Thread Colette Alexander
On Wed, May 11, 2016 at 5:02 AM, Colette Alexander
 wrote:


> [0] 
> http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-05-10-20.02.log.html
> [1] https://etherpad.openstack.org/p/Leadershiptraining

Hey all,

Just a quick update and housekeeping on leadership training:

Steve Martinelli had to drop out, so his spot can be taken by anyone
in the community who is interested at this point. Please see the sign-up
sheet, and add your name if you can make it!

Also -
1. If you are listed as tentative (which at this point is only Carol
Barrett, I think), please be kind and update your information as
your travel plans solidify - I'd love to have anyone marked
'tentative' become a final sign-up by this Friday, May 20th.
2. If you signed up and haven't done so already, please mark next to
your name how many days you're staying so we know who will be around
on Thursday for the optional 'debrief' day.

Thanks again!

-colette/gothicmindfood

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]PTL election conclusion

2016-05-18 Thread joehuang
Hi, Shinobu,

'Contribution' is exactly the meaning I would like to express. Thank you for
the correction:
" So I would like to serve as the PTL of Tricircle in Newton release, thanks 
for your contribution, let's move Tricircle forward together."

Cheers,

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Shinobu Kinjo [mailto:shinobu...@gmail.com] 
Sent: Wednesday, May 18, 2016 6:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle]PTL election conclusion

Hi Chaoyi,

Great!

On Wed, May 18, 2016 at 5:31 PM, joehuang  wrote:
> Hello, Team,
>
> From last Monday to today, there was only a single candidate (myself; if I missed
> someone's nomination, please point it out) for the PTL election of Tricircle for
> the Newton release, so according to the community guidelines, no election is
> needed.
>
> So I would like to serve as the PTL of Tricircle in Newton release, thanks 
> for your support, let's move Tricircle forward together.

But let me modify your message a bit because I'm picky, as you might have
noticed already -; We do not support this project but contribute to it, at
least I do.
Anyhow, we're ready to become awesome!!

Cheers,
Shinobu

>
> Best Regards
> Chaoyi Huang ( Joe Huang )
>
> -Original Message-
> From: joehuang
> Sent: Monday, May 09, 2016 10:08 AM
> To: 'OpenStack Development Mailing List (not for usage questions)'
> Subject: RE: [openstack-dev][tricircle]PTL election of Tricircle for 
> Newton release
>
> Hi, team,
>
> If you want to be the PTL for the Newton release of Tricircle, please send your
> self-nomination letter to the mailing list this week.
>
> Best Regards
> Chaoyi Huang ( Joe Huang )
>
> -Original Message-
> From: joehuang
> Sent: Thursday, May 05, 2016 9:44 AM
> To: 'ski...@redhat.com'; OpenStack Development Mailing List (not for 
> usage questions)
> Subject: [openstack-dev][tricircle]PTL election of Tricircle for 
> Newton release
>
> Hello,
>
> As discussed in yesterday's weekly meeting, the PTL nomination period runs
> from May 9 ~ May 13, with an election from May 16 ~ May 20 if there is more
> than one nomination. If you want to be the PTL for the Newton release of
> Tricircle, please send your self-nomination letter to the mailing list.
> You can refer to the nomination letters of other projects, for example,
> Kuryr[1], Glance[2], Neutron[3]; others can also be found in [4]
>
>
> [1]http://git.openstack.org/cgit/openstack/election/plain//candidates/
> newton/Kuryr/Gal_Sagie.txt 
> [2]http://git.openstack.org/cgit/openstack/election/plain//candidates/
> newton/Glance/Nikhil_Komawar.txt 
> [3]http://git.openstack.org/cgit/openstack/election/plain//candidates/
> newton/Neutron/Armando_Migliaccio.txt
> [4]https://wiki.openstack.org/wiki/PTL_Elections_March_2016
>
> Best Regards
> Chaoyi Huang ( Joe Huang )
>
>
> -Original Message-
> From: Shinobu Kinjo [mailto:shinobu...@gmail.com]
> Sent: Wednesday, May 04, 2016 5:35 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [tricircle] Requirements for becoming 
> approved official project
>
> Hi Team,
>
> There is additional work to become an official (approved) project.
> Once we complete the PTL election with everyone's consensus, we need to update
> projects.yaml. [1] I think that the OSPF (shortest path) to becoming an
> approved project is to elect a PTL, then talk to the PTLs of other projects.
>
> [1] 
> https://github.com/openstack/governance/blob/master/reference/projects
> .yaml
>
> Cheers,
> Shinobu
>
>
> On Mon, May 2, 2016 at 10:40 PM, joehuang  wrote:
>> Hi, Shinobu,
>>
>> Many thanks for the check for Tricircle to be an OpenStack project, and 
>> Thierry for the clarification. glad to know that we are close to OpenStack 
>> offical project criteria.
>>
>> Let's discuss the initial PTL election in weekly meeting, and start initial 
>> PTL election after that if needed.
>>
>> Best Regards
>> Chaoyi Huang ( joehuang )
>> 
>> From: Shinobu Kinjo [shinobu...@gmail.com]
>> Sent: 02 May 2016 18:48
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [tricircle] Requirements for becoming 
>> approved official project
>>
>> Hi Thierry,
>>
>> On Mon, May 2, 2016 at 5:45 PM, Thierry Carrez  wrote:
>>> Shinobu Kinjo wrote:

 I guess, it's usable. [1] [2] [3], probably and more...

 The reason why I can still only guess is that there is a bunch of
 documentation!!
 It's a great piece of work, but too much.
>>>
>>>
>>> We have transitioned most of the documentation off the wiki, but 
>>> there are still a number of pages that are not properly deprecated.
>>>
 [1] https://wiki.openstack.org/wiki/PTL_Guide
>>>
>>>
>>> This is now mostly replaced by the project team guide, so I marked 
>>> this one as deprecated.
>>
>> Honestly, we should clean up the deprecated pages, since there
>> is a bunch of documentation -;

Re: [openstack-dev] [neutron][stable] proposing Brian Haley for neutron-stable-maint

2016-05-18 Thread Tony Breeds
On Tue, May 17, 2016 at 01:07:48PM +0200, Ihar Hrachyshka wrote:
> Hi stable-maint-core and all,
> 
> I would like to propose Brian for neutron specific stable team.

+1

It'd be nice to see some comments on the reviews to indicate that the various
aspects of the policy have been thought about.

I know it gets a little repetitive but it's hard to assess the quality of
reviews without it :/

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] Refresher on how to use global requirements process

2016-05-18 Thread Robert Collins
On 19 May 2016 at 01:40, Ihar Hrachyshka  wrote:
> Great write-up. Do you plan to capture it e.g. in project guide?

It's already fully documented in the requirements repo README; if
there is no link to that from the project-guide, we should add one,
but I would not duplicate the content.

-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-18 Thread Eric Larson


Dmitry Tantsur writes:

This is pretty subjective, I would say. I personally don't find
Go (especially its approach to error handling) at all natural (at
least no more than Rust or Scala, for example). If familiarity
for Python developers is an argument here, mastering Cython or
making OpenStack run on PyPy must be much easier for a random
Python developer out there who wants to seriously bump the
performance. And it would not require introducing a completely
new language to the picture.


In one sense you are correct. It is easier for a Pythonista to
pick up Cython and use that for performance-specific areas of
code. At the same time, I'd argue that OpenStack as a community is
not the same as Python at large. There are packaging requirements
and cross-project standards that also come into play, not to
mention operators that end up bearing the brunt of those
decisions. For example, Debian will likely not package a PyPy-only
version of Designate along with all its requirements. Similarly,
while 50% of operators use packaged versions, the other 50% work
from source control to build, test, and release OpenStack
projects.


You are correct that my position is subjective, but it is based on 
my experiences trying to operate and deploy OpenStack in addition 
to writing code. The draw of Go, in my experience, has been easily 
deploying a single binary I've been able to build and test
consistently. The target system doesn't require Go installed
at all, it works on old distros, and it has been much faster.


Coming from Python, the reason Go has been easy to get started
with is that it offers some useful protections, such as
memory management. Features such as slices are extremely similar
to Python, and goroutines/channels allow supporting more complex
patterns such as generators. Yes, you are correct, error handling
is controversial, but at the same time, it is no better in C.


I'm not an expert in Go, but from what I've seen, Go has been 
easier to build and deploy than Python, while being 
faster. Picking it up has been trivial and becoming reasonably 
proficient has been a quick process. When considered within the 
scope of OpenStack, it adds a minimal overhead for testing, 
packaging and deployment, especially when compared to C 
extensions, PyPy or Cython.


I hope that contextualizes my opinion a bit to make clear the 
subjective aspects are based on OpenStack specific constraints.


--

Eric Larson | eric.lar...@rackspace.com
Software Developer | Cloud DNS | OpenStack Designate
Rackspace Hosting | Austin, Texas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] driver (authentication issue)

2016-05-18 Thread Tim Hinrichs
When you're doing testing like this, the easiest solution is to disable
authentication.  Here are the instructions from the previous email:

To use curl, the easiest thing to do is go into /etc/congress/congress.conf
and change the auth_strategy from keystone to noauth, and restart the
congress server.  Then Congress won't ask for authentication credentials.

auth_strategy = noauth
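
With noauth in place, a push along the lines of the curl below should then go
through without a token (a sketch only; the datasource id, table path, and
row format are illustrative placeholders, not verified against the Congress
API):

    curl -g -i -X PUT http://localhost:1789/v1/data-sources/<datasource-id>/tables/<table-id>/rows \
      -H "Content-Type: application/json" \
      -d '[["col1-value", "col2-value"]]'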

Tim

On Tue, May 17, 2016 at 8:36 PM Yue Xin  wrote:

> Hi Tim and all,
>
> Thank you very much. I had congress filed under a tag, so I missed your email
> and replied late. Sorry about that.
>
> My problem here is that I want to use the command line to put data
> into the congress datasource.
> I used the command openstack congress datasource create test test (I have a
> test_driver), and succeeded in creating a table.
> Then I wanted to check the table using the command (openstack congress
> datasource list table test); the error is (internal server error 501).
> Then I tried to push data to the test table, using
>
> curl -g -i -X PUT http://localhost:1789/v1/data-sources/id/tables/
>
>
> the response is "authentication required", which means I can't push data into
> the congress datasource. I have no idea how to fix it. Can you give me some hints?
>
> Thank you very much.
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-18 Thread Carl Baldwin
On Wed, May 18, 2016 at 5:24 AM, Hong Hui Xiao  wrote:
> I update [1] to auto delete dhcp port if there is no other ports. But
> after the dhcp port is deleted, the dhcp service is not usable. I can

I think this is what I expect.

> resume the dhcp service by adding another subnet, but I don't think it is
> a good way. Do we need to consider binding the dhcp port to another segment when
> deleting the existing one?

Where would you bind the port?  DHCP requires L2 connectivity to the
segment which it serves.  But, you deleted the segment.  So, it makes
sense that it wouldn't work.

Brandon is working on DHCP scheduling which should take care of this.
DHCP should be scheduled to all of the segments with DHCP enabled
subnets.  It should have a port for each of these segments.  So, if a
segment (and its ports) are deleted, I think the right thing to do is
to make sure that DHCP scheduling removes DHCP from that segment.  I
would expect this to happen automatically when the subnet is deleted.
We should check with Brandon to make sure this works (or will work
when his work merges).

Carl

> [1] https://review.openstack.org/#/c/317358

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] upgrade connection_info when Ceph mon IP changed

2016-05-18 Thread Matt Riedemann

On 5/18/2016 3:05 PM, melanie witt wrote:

On Wed, 18 May 2016 14:30:00 -0500, Matt Riedemann wrote:

While convenient as a workaround, I'm not in favor of the idea of adding
something to the REST API so a user can force refresh the connection
info - this is a bug and leaks information out of the API about how the
cloud is configured. If you didn't have volumes attached to the instance
at all then this wouldn't matter.

I think in an earlier version of the patch it was reloading and checking
the connection info every time the BDM list was retrieved for an
instance, which was a major issue for normal operations where this isn't
a problem.

Since it's been scoped to just start/reboot operations, it's better, and
there are comments in the patch to make it a bit more efficient also
(avoid calling the DB multiple times for the same information).

I'm not totally opposed to doing the refresh on start/reboot. We could
make it configurable, so if you're using a storage server backend where
the IP might change, then set this flag, but that's a bit clunky. And a
periodic task wouldn't help us out.

I'm open to other ideas if anyone has them.



I was thinking it may be possible to do something similar to how network
info is periodically refreshed in _heal_instance_info_cache [1]. The
task interval is configurable (defaults to 60 seconds) and works on a
queue of instances such that one is refreshed per period, to control the
load on the host. To avoid doing anything for storage backends that
can't change IP, maybe we could make the task return immediately after
calling a driver method that would indicate whether the storage backend
can be affected by an IP change.

There would be some delay until the task runs on an affected instance,
though.

-melanie


[1]
https://github.com/openstack/nova/blob/9a05d38/nova/compute/manager.py#L5549


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I like this idea. Sure it's a delay, but it resolves the problem 
eventually and doesn't add overhead to the start/reboot operations, 
which should mostly be unnecessary if things are working.


I like the short-circuit idea too, although that's a nice-to-have. A 
deployer can always disable the periodic task if they don't want that 
running.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Re: [Neutron][L2GW] Mitaka release of L2 Gateway now available

2016-05-18 Thread Doug Hellmann
Excerpts from Armando M.'s message of 2016-05-17 14:34:22 -0700:
> On 17 May 2016 at 03:25, Ihar Hrachyshka  wrote:
> 
> >
> > > On 16 May 2016, at 21:16, Armando M.  wrote:
> > >
> > >
> > >
> > > On 16 May 2016 at 05:15, Ihar Hrachyshka  wrote:
> > >
> > > > On 11 May 2016, at 22:05, Sukhdev Kapur 
> > wrote:
> > > >
> > > >
> > > > Folks,
> > > >
> > > > I am happy to announce that Mitaka release for L2 Gateway is released
> > and now available at https://pypi.python.org/pypi/networking-l2gw.
> > > >
> > > > You can install it by using "pip install networking-l2gw"
> > > >
> > > > This release has several enhancements and fixes for issues discovered
> > in liberty release.
> > >
> > > How do you release it? I think the way to release new deliverables as of
> > Newton dev cycle is thru openstack/releases repository, as documented in
> > https://review.openstack.org/#/c/308984/
> > >
> > > Have you pushed git tag manually?
> > >
> > > I can only see the stable branch, tags can only be pushed by the
> > neutron-release team.
> >
> > 2016.1.0 tag is in the repo, and is part of stable/mitaka branch.
> >
> 
> Weren't we supposed to use semver?

Yes, all releases should be using semver now.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-18 Thread John McDowall
Ryan,

OK, all three repos are now aligned with their masters. I have done some simple 
system-level tests and I can steer traffic to a single VNF.  Note: some 
additional changes to networking-sfc were needed to catch up with their changes.

https://github.com/doonhammer/networking-sfc
https://github.com/doonhammer/networking-ovn
https://github.com/doonhammer/ovs

The next tasks I see are:



  1.  Decouple networking-sfc and networking-ovn. I am thinking that I will
pass a nested port-chain dictionary holding
port-pairs/port-pair-groups/flow-classifiers from networking-sfc to
networking-ovn (see the sketch after this list).
  2.  Align the interface between networking-ovn and ovs/ovn to match the
nested dictionary in 1.
  3.  Modify the ovn-nb schema and ovn-northd.c to match the port-chain model.
  4.  Add the ability to support a chain of port-pairs.
  5.  Think about flow-classifiers and how best to map them; today I just map
the logical-port and ignore everything else.
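
For item 1, a rough illustration of the kind of nested dictionary I have in
mind (a sketch only; the field names are placeholders, not a settled schema):

    port_chain = {
        'id': '<chain-uuid>',
        'port_pair_groups': [{
            'id': '<group-uuid>',
            'port_pairs': [
                {'ingress': '<port-uuid>', 'egress': '<port-uuid>'},
            ],
        }],
        'flow_classifiers': [
            {'logical_source_port': '<port-uuid>'},
        ],
    }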

Any other suggestions/feedback?

Regards

John

From: Ryan Moats <rmo...@us.ibm.com>
Date: Wednesday, May 11, 2016 at 1:39 PM
To: John McDowall <jmcdow...@paloaltonetworks.com>
Cc: "disc...@openvswitch.org" <disc...@openvswitch.org>, OpenStack
Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN


John McDowall <jmcdow...@paloaltonetworks.com> wrote
on 05/11/2016 12:37:40 PM:

> From: John McDowall <jmcdow...@paloaltonetworks.com>
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: "disc...@openvswitch.org" <disc...@openvswitch.org>, "OpenStack
> Development Mailing List" <openstack-dev@lists.openstack.org>
> Date: 05/11/2016 12:37 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> I have done networking-sfc the files that I see as changed/added are:
>
> devstack/settings   Modified Runtime setting to pick up OVN Driver
> networking_sfc/db/migration/alembic_migrations/versions/mitaka/
> expand/5a475fc853e6_ovs_data_model.py Hack to work around
> flow_classifier issue - need to resolve with SFC team.
> networking_sfc/services/sfc/drivers/ovn/__init__.py   Added for OVN Driver
> networking_sfc/services/sfc/drivers/ovn/driver.py Added ovn driver file
> setup.cfg Inserted OVN driver entry
>
> I am currently working to clean up ovs/ovn.
>
> Regards
>
> John

I can confirm that the networking-sfc rebase goes in clean against
master for me :) - Looking forward to ovs ...

Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-18 Thread Zane Bitter

On 17/05/16 20:27, Crag Wolfe wrote:

Now getting very Heat-specific. W.r.t.
https://review.openstack.org/#/c/303692/ , the goal is to de-duplicate
raw_template.files (this is a dict of template filename to contents),
both in the DB and in RAM. The approach this patch is taking is that,
when one template is created by reference to another, we just re-use the
original template's files (ultimately in a new table,
raw_template_files). In the case of nested stacks, this saves on quite a
bit of duplication.

If we follow the 3-step pattern discussed earlier in this thread, we
would be looking at P release as to when we start seeing DB storage
improvements. As far as RAM is concerned, we would see improvement in
the O release since that is when we would start reading from the new
column location (and could cache the template files object by its ID).
It also means that for the N release, we wouldn't see any RAM or DB
improvements, we'll just start writing template files to the new
location (in addition to the old location). Is this acceptable, or do we
impose some sort of downtime restriction on the next Heat upgrade?

A compromise could be to introduce a little bit of downtime:

For the N release:


There's also a step 0, which is to run the DB migrations for Newton.


  1. Add the new column (no need to shut down heat-engine).
  2. Shut down all heat-engine's.
  3. Upgrade code base to N throughout cluster.
  4. Start all heat engine's. Read from new and old template files
locations, but only write to the new one.

For the O release, we could perform a rolling upgrade with no downtime
where we are only reading and writing to the new location, and then drop
the old column as a post-upgrade migration (i.e., the typical N+2 pattern
[1] that Michal referenced earlier and I'm re-referencing :-).

The advantage to the compromise is we would immediately start seeing RAM
and DB improvements with the N-release.
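
To make the N-release behaviour concrete, here is a minimal, self-contained
sketch of the dual-location pattern (plain dicts stand in for the old
raw_template.files column and the new raw_template_files table; the function
names are illustrative, not Heat's actual API):

    import itertools

    old_column = {}   # stand-in for raw_template.files (per-template copy)
    new_table = {}    # stand-in for raw_template_files (shared, keyed by id)
    _ids = itertools.count(1)

    def store_files(files, reuse_files_id=None):
        # N release: write only to the new location; a template created by
        # reference to another simply reuses the original's files_id.
        if reuse_files_id is not None:
            return reuse_files_id
        files_id = next(_ids)
        new_table[files_id] = files
        return files_id

    def load_files(tmpl_id, files_id):
        # Prefer the new location; fall back to the old column for rows
        # written before the upgrade.
        if files_id is not None:
            return new_table[files_id]
        return old_column.get(tmpl_id, {})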


+1, and in fact this has been the traditional way of doing it. To be 
able to stop recommending that to operators, we need a solution both to 
the DB problem we're discussing here and to the problem of changes to 
the RPC API parameters. (Before anyone asks, and I know someone will... 
NO, versioned objects do *not* solve either of those problems.)


I've already personally made one backwards-incompatible change to the 
RPC in this version:


https://review.openstack.org/#/c/315275/

So we won't be able to recommend rolling updates from Mitaka->Newton anyway.

I suggest that as far as this patch is concerned, we should implement 
the versioning that allows the VO to decide whether to write old or new 
data and leave it at that. That way, if someone manages to implement 
rolling upgrade support in Newton we'll have it, and if we don't we'll 
just fall back to the way we've done it in the past.


cheers,
Zane.


[1]
http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Etherpad with review priorities

2016-05-18 Thread Vitaly Gridnev
Dear Sahara team,

There is an etherpad [0] for collecting 'hot' changes that we need to
review with top priority. All changes are grouped by branch right now;
maybe it would be a good idea to indicate top bugfixes/features and so on.
Feel free to add important changes to this etherpad.

[0] https://etherpad.openstack.org/p/sahara-review-priorities

-- 
Best Regards,
Vitaly Gridnev,
Project Technical Lead of OpenStack DataProcessing Program (Sahara)
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] upgrade connection_info when Ceph mon IP changed

2016-05-18 Thread melanie witt

On Wed, 18 May 2016 14:30:00 -0500, Matt Riedemann wrote:

While convenient as a workaround, I'm not in favor of the idea of adding
something to the REST API so a user can force refresh the connection
info - this is a bug and leaks information out of the API about how the
cloud is configured. If you didn't have volumes attached to the instance
at all then this wouldn't matter.

I think in an earlier version of the patch it was reloading and checking
the connection info every time the BDM list was retrieved for an
instance, which was a major issue for normal operations where this isn't
a problem.

Since it's been scoped to just start/reboot operations, it's better, and
there are comments in the patch to make it a bit more efficient also
(avoid calling the DB multiple times for the same information).

I'm not totally opposed to doing the refresh on start/reboot. We could
make it configurable, so if you're using a storage server backend where
the IP might change, then set this flag, but that's a bit clunky. And a
periodic task wouldn't help us out.

I'm open to other ideas if anyone has them.



I was thinking it may be possible to do something similar to how network 
info is periodically refreshed in _heal_instance_info_cache [1]. The 
task interval is configurable (defaults to 60 seconds) and works on a 
queue of instances such that one is refreshed per period, to control the 
load on the host. To avoid doing anything for storage backends that 
can't change IP, maybe we could make the task return immediately after 
calling a driver method that would indicate whether the storage backend 
can be affected by an IP change.
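
To sketch what I mean (all names below are illustrative, not existing 
Nova APIs; the only real reference point is _heal_instance_info_cache [1]):

    from oslo_service import periodic_task

    @periodic_task.periodic_task(spacing=60)  # would be a config option
    def _heal_bdm_connection_info(self, context):
        # Return immediately for backends whose IP cannot change.
        if not getattr(self.driver, 'connection_info_can_change', False):
            return
        # Like _heal_instance_info_cache: refresh one instance per
        # period, from a rotating queue, to bound the load on the host.
        instance = self._pop_bdm_heal_queue(context)
        if instance is not None:
            self._refresh_bdm_connection_info(context, instance)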


There would be some delay until the task runs on an affected instance, 
though.


-melanie


[1] 
https://github.com/openstack/nova/blob/9a05d38/nova/compute/manager.py#L5549


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]Libvirt error code for failing volume in nova

2016-05-18 Thread Radhakrishnan, Siva
Hi All!
Currently I am working on this bug https://bugs.launchpad.net/nova/+bug/1168011 
which says we have to change the error message displayed when attaching a volume 
fails. Currently the code catches all operation errors that libvirt can raise and 
assumes that all of them mean the device is busy. You can find the 
source of this code here 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1160.
 I have a few questions on this bug:
 
1. What kind of error message and other required info should we include in 
the exception to make it more generalized than the current one?
 
2. Should we raise a separate exception for "device is busy", or would a 
single general exception work fine?
 
3. If we need a separate exception for the device being busy, what would be 
the equivalent libvirt error code for that?
 
I would like to have your feedback/suggestions to proceed further with this bug.
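
To make (2) and (3) concrete, the rough shape of change I have in mind in 
the driver's attach path is below (a sketch; whether 
VIR_ERR_OPERATION_FAILED is the right code for the busy case is exactly 
the open question in (3)):

    import libvirt

    from nova import exception

    try:
        guest.attach_device(conf, persistent=True, live=live)
    except libvirt.libvirtError as ex:
        # Placeholder mapping: only treat a specific error code as
        # "device busy" instead of assuming it for every failure.
        if ex.get_error_code() == libvirt.VIR_ERR_OPERATION_FAILED:
            raise exception.DeviceIsBusy(device=conf.target_dev)
        # Anything else: re-raise so a generic message can be shown.
        raise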
 
Regards,
Siva

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] DHCP Agent Scheduling for Segments

2016-05-18 Thread Kevin Benton
>I may have wrongly assumed that segments MAY have the possibility of being
l2 adjacent, even if the entire network they are in is not, which would
mean that viewing and scheduling these in the context of a segment could be
useful.

Segments could be L2 adjacent, but I think it would be pretty uncommon for
a DHCP agent to have access to multiple L2 adjacent segments for the same
network. But even if that happens, the main use case I see for the
scheduler API is taking networks off of dead agents, agents going under
maintenance, or agents under heavy load. With the introduction of segments,
all of those are still possible via the network-based API.

>Do you feel like it'd be beneficial to show what segment a dhcp agent is
bound to in the API?

Probably useful in some cases. This will already be possible by showing the
port details for the DHCP agent's port, but it might be worth adding in
just to eliminate the extra steps.

On Wed, May 18, 2016 at 12:11 PM, Brandon Logan  wrote:

> On Tue, 2016-05-17 at 13:29 -0700, Kevin Benton wrote:
> > I'm leaning towards option A because it keeps things cleanly
> > separated. Also, if a cloud is using a plugin that supports segments,
> > an operator could use the new API for everything (including single
> > segment networks) so it shouldn't be that unfriendly.
> >
> >
> > However...
> >
> >
> > >If there's some other option that I somehow missed please suggest it.
> >
> >
> > The other option is to not make an API for this at all. In a
> > multi-segment use-case, a DHCP agent will normally have access to only
> > one segment of a network. By using the current API we can still
> > assign/un-assign an agent from a network and leave the segment
> > selection details to the scheduler. What is the use case for exposing
> > this all of the way up to the operator?
>
> I may have wrongly assumed that segments MAY have the possibility of
> being l2 adjacent, even if the entire network they are in is not, which
> would mean that viewing and scheduling these in the context of a segment
> could be useful.  However, if that is not the case I fully agree that
> those calls are not needed and it can just be left up to the scheduler
> to do that.  Do you feel like it'd be beneficial to show what segment a
> dhcp agent is bound to in the API?  I have no use case, but I wonder if
> operators may want that knowledge since they will be able to list
> segments.
>
> >
> >
> >
> > On Tue, May 17, 2016 at 1:07 PM, Brandon Logan
> >  wrote:
> > As part of the routed networks work [1], the DHCP agent and
> > scheduling
> > needs to be segment aware.  Right now, the dhcpagentscheduler
> > extension
> > exposes API resources to manage networks:
> >
> > - List networks hosted by an agent
> > - GET /agents/{agent_id}/dhcp-networks
> > - Response Body: {"networks": [{...}]}
> >
> > - List agents hosting a network - GET /network
> > - GET /networks/{network_id}/dhcp-agents
> > - Response Body: {"agents": [{...}]}
> >
> > - Add a network to an agent
> > - POST /agents/{agent_id}/dhcp-networks
> > - Request Body: {"network_id": "NETWORK_UUID"}
> >
> > - Remove a network from an agent
> > - DELETE /agents/{agent_id}/dhcp-networks/{network_id}
> >
> > This same functionality needs to also be exposed for working
> > with
> > segments.  We need some opinions on the best way to do this.
> > The
> > options are:
> >
> > A) Expose new resources for segment dhcp agent manipulation
> > - GET /agents/{agent_id}/dhcp-segments
> > - GET /segments/{segment_id}/dhcp-agents
> > - POST /agents/{agent_id}/dhcp-segments
> > - DELETE /agents/{agent_id}/dhcp-segments/{segment_id}
> >
> > B) Allow segment info gathering and manipulation via the
> > current network
> > dhcp agent API resources. No new API resources.
> >
> > C) Some combination of A and B.
> >
> > My current opinion is that option C shouldn't even be an
> > option but I
> > just put it on here just in case someone has a strong
> > argument.  If
> > we're going to add new resources, we may as well do it all the
> > way,
> > which is what C implies would happen.
> >
> > Option B would be great to use if segment support could easily
> > be added
> > in while maintaining backwards compatibility.  I'm not sure if
> > that is
> > going to be possible in a clean way.  Regardless, an extension
> > will have
> > to be created for this.
> >
> > Option A is the cleanest strategy IMHO.  It may not be the
> > most user
> > friendly though because some networks may have multiple
> > segments while
> >

Re: [openstack-dev] [nova] upgrade connection_info when Ceph mon IP changed

2016-05-18 Thread Matt Riedemann

On 5/16/2016 8:39 PM, zhou.b...@zte.com.cn wrote:

Hi all:

  I got a problem described in
https://bugs.launchpad.net/cinder/+bug/1452641,
and my colleague got another similar problem described in
https://bugs.launchpad.net/nova/+bug/1581367.
It's all about the storage backend ip change. With the storage backend,
not only Ceph but also IPSAN,
when the backend's ip changed, the related volumes attached to VMs would
not be available.  Previously
I proposed to auto-check the consistency of IP record in nova's bdm
table and storage backend, which was
submitted in https://review.openstack.org/#/c/289813/.
reviewers pointed out that it's a waste of performance in the normal
case and it's not a good scenario
to do this checking in a regular function. I agree with this suggestion
and the bug troubled me and my
colleagues all the time.
 I think if we can just add an option in nova api, such as "nova
reboot --refresh-conn"
to manually modify the VM's bdm info when the bug happens. The
"--refresh-conn" was parsed and passed to
"reboot_instance" function in nova-compute. Without auto-checking, it
would be more flexible and efficient.
And I need all of your valued opinions and appreciate for hearing from
you soon.

The fake code is like this in nova-compute:
    def reboot_instance(self, context, instance, block_device_info,
                        reboot_type, refresh_conn=False):
        """Reboot an instance on this host."""
        ...
        block_device_info = self._get_instance_block_device_info(
            context, instance, refresh_conn)

Thank you.

related links are as follows:
https://bugs.launchpad.net/cinder/+bug/1452641
https://bugs.launchpad.net/nova/+bug/1581367
https://review.openstack.org/#/c/289813/

 ZTE Information
Security Notice: The information contained in this mail (and any
attachment transmitted herewith) is privileged and confidential and is
intended for the exclusive use of the addressee(s). If you are not an
intended recipient, any disclosure, reproduction, distribution or other
dissemination or use of the information contained is strictly
prohibited. If you have received this mail in error, please delete it
and notify us immediately.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



While convenient as a workaround, I'm not in favor of the idea of adding 
something to the REST API so a user can force refresh the connection 
info - this is a bug and leaks information out of the API about how the 
cloud is configured. If you didn't have volumes attached to the instance 
at all then this wouldn't matter.


I think in an earlier version of the patch it was reloading and checking 
the connection info every time the BDM list was retrieved for an 
instance, which was a major issue for normal operations where this isn't 
a problem.


Since it's been scoped to just start/reboot operations, it's better, and 
there are comments in the patch to make it a bit more efficient also 
(avoid calling the DB multiple times for the same information).


I'm not totally opposed to doing the refresh on start/reboot. We could 
make it configurable, so if you're using a storage server backend where 
the IP might change, then set this flag, but that's a bit clunky. And a 
periodic task wouldn't help us out.


I'm open to other ideas if anyone has them.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-05-18 Thread Kevin Benton
No worries. If the slot isn't available maybe we can get infra to add
another channel.

On Wed, May 18, 2016 at 12:57 AM, Miguel Angel Ajo Pelayo <
majop...@redhat.com> wrote:

> Hey,
>
>    Finally we took over the channel for 1h, mostly because the time was
> already agreed between many people in opposing timezones and it was a bit
> late to cancel it.
>
>The first point was finding a suitable timeslot for a biweekly meeting
> -for some time- and alternatively following up on email. We should not take
> over the neutron channel for these meetings anymore, I'm sorry for the
> inconvenience.
>
>   Please find the summary here:
>
>
> http://eavesdrop.openstack.org/meetings/network_common_flow_classifier/2016/network_common_flow_classifier.2016-05-17-17.02.html
>
> On Tue, May 17, 2016 at 8:10 PM, Kevin Benton  wrote:
>
>> Yeah, no meetings in #openstack-neutron please. It leaves us nowhere to
>> discuss development stuff during that hour.
>>
>> On Tue, May 17, 2016 at 2:54 AM, Miguel Angel Ajo Pelayo <
>> majop...@redhat.com> wrote:
>>
>>> I agree, let's try to find a timeslot that works.
>>>
>>> using #openstack-neutron with the meetbot works, but it's going to
>>> generate a lot of noise.
>>>
>>> On Tue, May 17, 2016 at 11:47 AM, Ihar Hrachyshka 
>>> wrote:
>>>

 > On 16 May 2016, at 15:47, Takashi Yamamoto 
 wrote:
 >
 > On Mon, May 16, 2016 at 10:25 PM, Takashi Yamamoto
 >  wrote:
 >> hi,
 >>
 >> On Mon, May 16, 2016 at 9:00 PM, Ihar Hrachyshka <
 ihrac...@redhat.com> wrote:
 >>> +1 for earlier time. But also, have we booked any channel for the
 meeting? Hijacking #openstack-neutron may not work fine during such a busy
 (US) time. I suggest we propose a patch for
 https://github.com/openstack-infra/irc-meetings
 >>
 >> i agree and submitted a patch.
 >> https://review.openstack.org/#/c/316830/
 >
 > oops, unfortunately there seems no meeting channel free at the time
 slot.

 This should be solved either by changing the slot, or by getting a new
 channel registered for meetings. Using unregistered channels, especially
 during busy hours, is not effective, and is prone to overlaps for relevant
 meetings. The meetings will also not get a proper slot at
 eavesdrop.openstack.org.

 Ihar

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-18 Thread Kevin Benton
>I update [1] to auto delete dhcp port if there is no other ports. But after
the dhcp port is deleted, the dhcp service is not usable.

You mean in the case where the segments are in the same L2 domain, right?
If not, I don't understand why we wouldn't expect a segment that was
deleted to stop working.

Have the DHCP agent scheduler subscribe to segment delete and it can
determine if the network needs to be hosted on any more agents.
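
Roughly like this (a sketch, assuming a SEGMENT resource gets registered
with the callbacks framework; the rescheduling helper is illustrative):

    from neutron.callbacks import events
    from neutron.callbacks import registry
    from neutron.callbacks import resources

    def _segment_deleted(resource, event, trigger, **kwargs):
        segment = kwargs['segment']
        # Decide whether each hosting agent still needs the network
        # now that this segment is gone.
        reschedule_network_for_segment(segment['network_id'],
                                       segment['id'])

    registry.subscribe(_segment_deleted, resources.SEGMENT,
                       events.AFTER_DELETE)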

On Wed, May 18, 2016 at 4:24 AM, Hong Hui Xiao  wrote:

> I update [1] to auto delete dhcp port if there is no other ports. But
> after the dhcp port is deleted, the dhcp service is not usable. I can
> resume the dhcp service by adding another subnet, but I don't think it is
> a good way. Do we need to consider bind dhcp port to another segment when
> deleting the existing one?
>
> [1] https://review.openstack.org/#/c/317358
>
> HongHui Xiao(肖宏辉) PMP®
>
>
> From:   Carl Baldwin 
> To: OpenStack Development Mailing List
> 
> Date:   05/18/2016 11:50
> Subject:Re: [openstack-dev] [Neutron][ML2][Routed Networks]
>
>
>
>
> On May 17, 2016 2:18 PM, "Kevin Benton"  wrote:
> >
> > >I kind of think it makes sense to require evacuating a segment of
> its ports before deleting it.
> >
> > Ah, I left out an important assumption I was making. We also need to
> auto delete the DHCP port as the segment is deleted. I was thinking this
> will be basically be like the delete_network case where we will auto
> remove the network owned ports.
> I can go along with that. Thanks for the clarification.
> Carl
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] disabling deprecated APIs by config?

2016-05-18 Thread Sean Dague
On 05/18/2016 11:56 AM, Dean Troyer wrote:
> On Wed, May 18, 2016 at 10:20 AM, Sean Dague  > wrote:
> 
> nova-net is now deprecated - https://review.openstack.org/#/c/310539/
> 
> 
> Woot! Now to make it not-the-default in DevStack...

Sean Collins is working on it. :) It is basically the #1 DevStack goal
this cycle.

> One idea was a "big red switch" in the format of a config option
> ``disable_deprecated_apis=True`` (defaults to False). Which would set
> all deprecated APIs to 404 routes.
> 
> One of the nice ideas here is this would allow some API servers to have
> this set, and others not. So users could point to the "clean" API
> server, figure out that they will break, but the default API server
> would still support these deprecated APIs. Or, conversely, the default
> could be the clean API server, and a legacy API server endpoint could be
> provided for projects that really needed it that included these
> deprecated things for now. Either way it would allow some site assisted
> transition. And be something like the -Werror flag in gcc.
> 
> 
> With an app-dev/end-user hat on, I really like the idea of there being
> some switch I can control (alternate endpoint or service catalog?
>  account/project toggle? don't know exactly what yet) to do testing with
> the deprecated APIs disabled.
> 
> This is really a problem of migrating/prompting the client developers to
> do the right thing.  Often times they (including myself) respond faster
> to their users and not the upstream projects, so giving the users the
> ability to test and provide feedback would be huge here.
> 
> With my OSC hat on, I need to still support those APIs for a couple of
> years at least, so I would love to have a way to discover that those
> APIs have been disabled without requiring a round trip to get a 404. 
> This seems like a useful thing for the '/' route version information
> when no '/capabilities'-like endpoint is available.

We're working towards capabilities on the Nova side, but because of the
matrix with policy permissions, it's going to still be a couple cycles.
We will get there. It just takes time.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] DHCP Agent Scheduling for Segments

2016-05-18 Thread Brandon Logan
On Tue, 2016-05-17 at 13:29 -0700, Kevin Benton wrote:
> I'm leaning towards option A because it keeps things cleanly
> separated. Also, if a cloud is using a plugin that supports segments,
> an operator could use the new API for everything (including single
> segment networks) so it shouldn't be that unfriendly. 
> 
> 
> However...
> 
> 
> >If there's some other option that I somehow missed please suggest it.
> 
> 
> The other option is to not make an API for this at all. In a
> multi-segment use-case, a DHCP agent will normally have access to only
> one segment of a network. By using the current API we can still
> assign/un-assign an agent from a network and leave the segment
> selection details to the scheduler. What is the use case for exposing
> this all of the way up to the operator?

I may have wrongly assumed that segments MAY have the possibility of
being l2 adjacent, even if the entire network they are in is not, which
would mean that viewing and scheduling these in the context of a segment
could be useful.  However, if that is not the case I fully agree that
those calls are not needed and it can just be left up to the scheduler
to do that.  Do you feel like it'd be beneficial to show what segment a
dhcp agent is bound to in the API?  I have no use case, but I wonder if
operators may want that knowledge since they will be able to list
segments.

> 
> 
> 
> On Tue, May 17, 2016 at 1:07 PM, Brandon Logan
>  wrote:
> As part of the routed networks work [1], the DHCP agent and
> scheduling
> needs to be segment aware.  Right now, the dhcpagentscheduler
> extension
> exposes API resources to manage networks:
> 
> - List networks hosted by an agent
> - GET /agents/{agent_id}/dhcp-networks
> - Response Body: {"networks": [{...}]}
> 
> - List agents hosting a network - GET /network
> - GET /networks/{network_id}/dhcp-agents
> - Response Body: {"agents": [{...}]}
> 
> - Add a network to an agent
> - POST /agents/{agent_id}/dhcp-networks
> - Request Body: {"network_id": "NETWORK_UUID"}
> 
> - Remove a network from an agent
> - DELETE /agents/{agent_id}/dhcp-networks/{network_id}
> 
> This same functionality needs to also be exposed for working
> with
> segments.  We need some opinions on the best way to do this.
> The
> options are:
> 
> A) Expose new resources for segment dhcp agent manipulation
> - GET /agents/{agent_id}/dhcp-segments
> - GET /segments/{segment_id}/dhcp-agents
> - POST /agents/{agent_id}/dhcp-segments
> - DELETE /agents/{agent_id}/dhcp-segments/{segment_id}
> 
> B) Allow segment info gathering and manipulation via the
> current network
> dhcp agent API resources. No new API resources.
> 
> C) Some combination of A and B.
> 
> My current opinion is that option C shouldn't even be an
> option but I
> just put it on here just in case someone has a strong
> argument.  If
> we're going to add new resources, we may as well do it all the
> way,
> which is what C implies would happen.
> 
> Option B would be great to use if segment support could easily
> be added
> in while maintaining backwards compatibility.  I'm not sure if
> that is
> going to be possible in a clean way.  Regardless, an extension
> will have
> to be created for this.
> 
> Option A is the cleanest strategy IMHO.  It may not be the
> most user
> friendly though because some networks may have multiple
> segments while
> others may not.  If a network is made up of just a single
> segment then
> the current network dhcp agent calls will be fine.  However,
> once a
> network is made up of multiple segments, it wouldn't make
> sense for the
> current network dhcp agent calls to be valid, they'd need to
> be made to
> the new segment resources.  This same line of thinking would
> probably
> have to be considered with Option B as well, so it may be a
> problem for
> both.
> 
> Anyway, I'd like to gather suggestions and opinions on this.
> If there's
> some other option that I somehow missed please suggest it.
> 
> Thanks,
> Brandon
> 
> [1]
> 
> https://specs.openstack.org/openstack/neutron-specs/specs/newton/routed-networks.html#dhcp-scheduling
> 
> 
> __
> OpenStack Development Mailing List (not for usage que

Re: [openstack-dev] [API] os-api-ref HTTP Response code formatting

2016-05-18 Thread Sean Dague
On 05/18/2016 02:42 PM, Hayes, Graham wrote:
> I was moving Designate to the os-api-ref extension, and I spotted
> something I thought we could do to improve the readability.
> 
> Currently we have the HTTP Response Codes as a single
> line on the page - I thought a table might be handy as well.
> 
> So, I had a go at it - and put up a POC[0]
> 
> It outputs a table with the code and the project reason for the code[1]
> 
> Example syntax is (used to generate [1]):
> 
>  Response Codes
>  --
> 
>  Normal
>  ^^
> 
>  .. rest_response::
> 
> - 200: OK
> - 100: Foo
> - 201: Zone Created
> 
> 
> 
>  Error
>  ^
> 
>  .. rest_response::
> 
> - 405: Method *must* be POST
> - 403: Policy does not allow current user to create zone
> - 401: User must authenticate
> - 400: Supplied data is malformed.
> - 500: Something went wrong
> 
> Is this something worth pursuing? Should it be laid out differently?

There are a lot to like here. I think "response" is an overloaded word
and I'd use rest_status_code instead for clarity.

When I think about the Nova side there are a few things I think would
make this better.

Having some generic language for each error message would help. Typing "User
must authenticate" every time there is a 401 is tiresome, and will just
mean typos. Ideally even generating links to an overview of what each
code means in general would be nice. I was assuming we were going to
write up a dedicated page about error codes.

There are times when a 409 means something pretty specific, and I
wonder if we want to reference it there instead of elsewhere.

I kind of wonder if:

.. rest_status_code:: 400, 401, 403, 405, 409, 500

  - 409: The fizbuz is locked and can't be updated until unlock is
performed.

And generic messages for everything without the extra entry would work.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Weekly Trove meeting minutes 2016-05-18

2016-05-18 Thread Amrith Kumar
Minutes of the weekly Trove meeting held today can be found at

http://eavesdrop.openstack.org/meetings/trove/2016/trove.2016-05-18-18.00.html

Meeting summary

Action items from last week's meeting (amrith, 18:03:03)
Trove pulse update (amrith, 18:03:28)
http://bit.ly/1VQyg00 (amrith, 18:03:38)
https://gist.github.com/amrith/abab5d5553864b1562d7c99ab88351fc 
(amrith, 18:03:47)

Announcements (amrith, 18:08:22)
Specs that are up for review (amrith, 18:10:48)
RPC API Versioning https://review.openstack.org/#/c/315079/ (amrith, 
18:11:40)
[peterstac] Locality https://review.openstack.org/#/c/298994/ (amrith, 
18:28:12)
https://wiki.openstack.org/wiki/Trove-rpc-versioning (SlickNik, 
18:29:16)
ACTION: [all] review https://review.openstack.org/#/c/298994/5 (amrith, 
18:37:30)

Persist Error Messages https://review.openstack.org/#/c/313780/ (amrith, 
18:38:15)
AGREED: that we will persist stack traces and only show them to admins 
(amrith, 18:44:55)
AGREED: that we will sanitize the stack traces and error messages that 
we persist using mask_password (amrith, 18:45:10)
https://review.openstack.org/#/c/227870/ (peterstac, 18:47:40)

Open Discussion (amrith, 18:52:24)
https://review.openstack.org/#/c/318191/1 (amrith, 18:53:36)

Thanks,

-amrith

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral][Zaqar] Triggering Mistral workflows from Zaqar messages

2016-05-18 Thread Zane Bitter
I've been lobbying the Mistral developers for $SUBJECT since, basically, 
forever.[1][2][3] I like to think after a couple of years I succeeded in 
changing their view on it from "crazy" to merely "unrealistic".[4] In 
the last few months I've had a couple of realisations though:


1) The 'pull' model I've been suggesting is the wrong one, 
architecturally speaking. It's asking Mistral to do too much to poll 
Zaqar queues.
2) A 'push' model is the correct architecture and it already exists in 
the form of Zaqar's Notifications, which suddenly makes this goal look 
very realistic indeed.


I've posted a Zaqar spec for this here:

https://review.openstack.org/#/c/318202/

Not being super familiar with either project myself, I think this needs 
close scrutiny from Mistral developers as well as Zaqar developers to 
make sure I haven't got any of the details wrong. I'd also welcome any 
volunteers interested in implementing this :)
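
For anyone who wants to poke at the push mechanism itself, creating a 
webhook subscription on a queue is a single HTTP call today (the 
endpoint, queue name and subscriber URL here are placeholders):

    import json
    import uuid

    import requests

    resp = requests.post(
        'http://zaqar.example:8888/v2/queues/my-triggers/subscriptions',
        headers={'Client-ID': str(uuid.uuid4()),
                 'X-Auth-Token': 'TOKEN',
                 'Content-Type': 'application/json'},
        data=json.dumps({'subscriber': 'http://consumer.example/hook',
                         'ttl': 3600,
                         'options': {}}))
    resp.raise_for_status()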



One more long-term thing that I did *not* mention in the spec: there are 
both Zaqar notifications and Mistral actions for sending email and 
hitting webhooks. These are two of the hardest things for a cloud 
operator to secure. It would be highly advantageous if there were only 
_one_ place in OpenStack where these were implemented. Either project 
would potentially work - Zaqar notifications could call a simple, 
operator defined workflow behind the scenes for email/webhook 
notifications; alternatively the Mistral email/webhook actions could 
drop a message on a Zaqar queue connected to a notification - although 
the former sounds easier to me. (And of course clouds with only one of 
the services available could fall back to the current plugins.) 
Something to think about for the future...


cheers,
Zane.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-April/062617.html

[2] http://lists.openstack.org/pipermail/openstack-dev/2015-May/063884.html
[3] Also in-person at every summit since at least Juno :)
[4] http://lists.openstack.org/pipermail/openstack-dev/2015-May/063904.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][oslo.messaging] stable/mitaka is broken by oslo.messaging 5.0.0

2016-05-18 Thread Doug Hellmann
Excerpts from Renat Akhmerov's message of 2016-05-18 11:07:29 +0700:
> 
> > On 17 May 2016, at 21:50, Doug Hellmann  wrote:
> > 
> > Excerpts from Renat Akhmerov's message of 2016-05-17 19:10:55 +0700:
> >> Team,
> >> 
> >> Our stable/mitaka branch is now broken by oslo.messaging 5.0.0. Global 
> >> requirements for stable/mitaka has oslo.messaging>=4.0.0 so it can fetch 
> >> 5.0.0.
> >> 
> >> Just reminding that it breaks us because we intentionally modified 
> >> RPCDispatcher like in [1]. It was needed for “at-least-once” delivery. In 
> >> master we already agreed to remove that hack and work towards having a 
> >> decent solution (there are options). The patch is [2]. But we need to 
> >> handle it in mitaka somehow.
> >> 
> >> Options I see:
> >> Constrain oslo.messaging in global-requirements.txt for stable/mitaka with 
> >> 4.6.1. Hard to do since it requires wide cross-project coordination.
> >> Remove that hack in stable/mitaka as we did with master. It may be bad 
> >> because this was wanted very much by some of the users
> >> 
> >> Not sure what else we can do.
> > 
> > You could set up your test jobs to use the upper-constraints.txt file in
> > the requirements repo.
> 
> Yes, it’s an option. I’m just thinking from a regular user perspective. There 
> will be a lot of people who don’t know about upper-constraints.txt and they 
> will be stumbling on it just using our requirements.txt. My question here is: 
> is upper-constraints.txt something that’s officially promoted and should be 
> used by everyone or it’s mostly introduced for our internal OpenStack gating 
> system?

Anyone installing from PyPI should use the constraints list. Anyone
installing from system packages won't need it.
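
Concretely, for a pip-based install or test job that means something like
(branch and URL illustrative):

    pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/mitaka -r requirements.txt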

Doug

> 
> > What was the outcome of the discussion about adding the at-least-once
> > semantics to oslo.messaging?
> 
> 
> No outcome yet, we’re still discussing. I expect that more people join the 
> thread since some stakeholders were off after the summit.
> 
> Renat Akhmerov
> @Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [API] os-api-ref HTTP Response code formatting

2016-05-18 Thread Hayes, Graham
I was moving Designate to the os-api-ref extension, and I spotted
something I thought we could do to improve the readability.

Currently we have the HTTP Response Codes as a single
line on the page - I thought a table might be handy as well.

So, I had a go at it - and put up a POC[0]

It outputs a table with the code and the project reason for the code[1]

Example syntax is (used to generate [1]):

 Response Codes
 --

 Normal
 ^^

 .. rest_response::

- 200: OK
- 100: Foo
- 201: Zone Created



 Error
 ^

 .. rest_response::

- 405: Method *must* be POST
- 403: Policy does not allow current user to create zone
- 401: User must authenticate
- 400: Supplied data is malformed.
- 500: Something went wrong

Is this something worth pursuing? Should it be laid out differently?

Thanks,

-- Graham

0 - https://review.openstack.org/#/c/318281/1
1 - http://i.imgur.com/onsRFtI.png

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] disabling deprecated APIs by config?

2016-05-18 Thread John Griffith
On Wed, May 18, 2016 at 9:20 AM, Sean Dague  wrote:

> nova-net is now deprecated - https://review.openstack.org/#/c/310539/
>
> And we're in the process in Nova of doing some spring cleaning and
> deprecating the proxies to other services -
> https://review.openstack.org/#/c/312209/
>
> At some point in the future after deprecation the proxy code is going to
> stop working. Either accidentally, because we're not going to test or
> fix this forever (and we aren't going to track upstream API changes to
> the proxy targets), or intentionally when we decide to delete it to make
> it easier to address core features and bugs that everyone wants addressed.
>
> However, the world moves forward slowly. Consider the following scenario.
>
> We delete nova-net & the network proxy entirely in Peru (a not entirely
> unrealistic idea). At that release there are a bunch of people just
> getting around to Newton. Their deployments allow all these things to
> happen which are going to 100% break when they upgrade, and people are
> writing more and more OpenStack software every cycle.
>
> How do we signal to users this kind of deprecation? Can we give sites
> tools to help prevent new software being written to deprecated (and
> scheduled for deletion) APIs?
>
> One idea was a "big red switch" in the format of a config option
> ``disable_deprecated_apis=True`` (defaults to False). Which would set
> all deprecated APIs to 404 routes.
>
> One of the nice ideas here is this would allow some API servers to have
> this set, and others not. So users could point to the "clean" API
> server, figure out that they will break, but the default API server
> would still support these deprecated APIs. Or, conversely, the default
> could be the clean API server, and a legacy API server endpoint could be
> provided for projects that really needed it that included these
> deprecated things for now. Either way it would allow some site assisted
> transition. And be something like the -Werror flag in gcc.
>
> In the Nova case the kinds of things ending up in this bucket are going
> to be interfaces that people *really* shouldn't be using any more. Many
> of them data back to when OpenStack was only 2 projects, and the concept
> of splitting out function wasn't really thought about (note: we're
> getting ahead of this one for the 'placement' rest API, so it won't have
> any of these issues). At some point this house cleaning was going to
> have to happen, and now seems to be the time to do get it rolling.
>
> Feedback on this idea would be welcomed. We're going to deprecate the
> proxy APIs regardless, however disable_deprecated_apis is it's own idea
> and consequences, and we really want feedback before pushing forward on
> this.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I like the idea of a switch in the config file.  To Dean's point, would it
also be worth considering a "list-deprecated-calls" that could give him a
list without having to do the roundtrip every time?  That might not
actually solve anything for him, but perhaps something along those lines
would help?
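
Something along these lines is what I'm picturing (a sketch; neither the
option nor the helper exists today):

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.BoolOpt('disable_deprecated_apis', default=False,
                    help='Return 404 for deprecated API routes.'),
    ])

    def route_if_enabled(route, deprecated=False):
        # The "big red switch": deprecated routes simply vanish, so
        # clients get a 404 instead of a working proxy call.
        if deprecated and CONF.disable_deprecated_apis:
            return None
        return route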
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Reason for installing devstack in CI pipeline

2016-05-18 Thread Jeremy Stanley
On 2016-05-18 08:50:38 -0400 (-0400), Paul Belanger wrote:
[...]
> So taking what you have explained and looking at the code, I think
> there can be some optimizations made to actually switch out
> devstack-gate and using zuul-cloner directly. Currently, using
> devstack-gate add about 10mins to the start of a job and 10GB+ of
> data onto the server.
[...]

That sounds likely. At one point in time devstack-gate's multirepo
Git dependency handling was the only easy and available solution for
that, but eventually its logic got reimplemented in and replaced by
zuul-cloner precisely because projects might want to do those things
outside the context of devstack-gate.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ODP: [Monasca] Sporadic metrics

2016-05-18 Thread Erickson
Hi Tomasz,

Yes, I'll be able to follow up your implementation in both monasca-ui and 
python-monascaclient. If I have any doubts, I'll not hesitate to contact you. 
Thanks for your support. :)

Best Regards,
Erickson Santos.

- Original Message -
From: "Tomasz Trebski" 
To: "witold bedyk" , "Erickson" 
, openstack-dev@lists.openstack.org
Sent: Wednesday, May 18, 2016 7:36:56 AM
Subject: RE: [Monasca] Sporadic metrics

Hi there,

Erickson, do you know if you will be able to follow up my current 
implementation in either of components Witek has mentioned ?
If you would need some better background, let me know all your questions.

Currently the most crucial part would be the monasca-ui. This change would have 
to be created on top of the change for non-deterministic alarms, because only 
that way does it make sense. The idea I have for it is to automatically propose 
the type of an alarm for the user based on the picked metric:
- non-deterministic for sporadic metrics
- deterministic for non-sporadic metrics
Of course that is only a proposition and the user/operator can still make their 
own choices about that.

As for the python-monascaclient, most of the changes would be around metric 
commands.
I think there's actually nothing to be done about other commands (especially 
related to alarm definitions). But just
recently I've started using this CLI client, so I am no expert about that.

PS. Please let me know, if you have any doubts.

Regards,
Tomasz


From: Witek Bedyk 
Sent: May 13, 2016 17:44
To: Erickson; openstack-dev@lists.openstack.org
Subject: [Monasca] Sporadic metrics

Hi Erickson,

thanks for your interest in our project and sorry for replying so late.

We are happy that you're willing to contribute to the development of
sporadic metrics.

The incentive for this development is to introduce the possibility to
create alarms in Monasca based on event-like metrics. E.g. a metric is
generated every time an error appears in the given log file. Such
metric is different from the standard Monasca metric in the way that it
does not appear on the regular, periodic basis. You can read a little
more about it here [1], but the blueprint itself is outdated and the
implementation changed.

The main logic for alarm creation has been moved to
"deterministic_alarm" topic [2] and Tomasz works currently on that. Your
reviews are very welcome.

The changes in "sporadic_metric" topic [3] are meant to give the user
information that a given metric is of sporadic (non-periodic) character.
It will have no influence on monasca-thresh. Tomasz has put the code
that he already had in this topic but he must focus on
"deterministic_alarm" now, so please feel free to submit your patch-sets
if you want. Apart from the existing changes also changes in monasca-ui
and python-monascaclient repos are needed to display the 'sporadic'
attribute of the metric for the user. You could compare this change in
monasca-ui [4].

If you have questions to the implementation of "sporadic metrics", I
think the existing changes in gerrit [3] are the best place for exchange
and communication at the moment. Actually I should have updated the
blueprint for that. Sorry, my fault.

So, have fun with the code :)


Cheers
Witek


[1] https://blueprints.launchpad.net/monasca/+spec/alarmsonlogs
[2] https://review.openstack.org/#/q/topic:deterministic_alarm
[3] https://review.openstack.org/#/q/topic:sporadic_metric
[4]
https://review.openstack.org/#/c/314420/3/monitoring/alarmdefs/templates/alarmdefs/_detail.html


P.S. Please note, I will be in holidays for the next two weeks.


--
FUJITSU Enabling Software Technology GmbH
Schwanthalerstr. 75a, 80336 München

Phone: +49 89 360908-547
Fax: +49 89 360908-8547
COINS: 7941-6547

Registered office: Munich
Munich Local Court (AG München), HRB 143325
Managing Directors: Dr. Yuji Takada, Hans-Dieter Gatzka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] [glance] Answers to some questions about Glance

2016-05-18 Thread Nikhil Komawar
Thank you for stating all the right things Brian.


And I want to add that Glance too takes it to heart.


We needed a major version change and that's been communicated in many
ways. If more documentation, etc., is required, you are welcome to
provide that feedback. We are more than happy to help where required.


NOTE FOR ALL: We've spent 170,020,783 Calories on this version change
topic. It's done and we will have to live with it.


That's it. Time to pack up here.


On 5/18/16 8:44 AM, Brian Rosmaita wrote:
> On 5/18/16, 2:15 AM, "Clint Byrum"  wrote:
>
>> Excerpts from Robert Collins's message of 2016-05-18 14:57:05 +1200:
>>> On 18 May 2016 at 00:54, Brian Rosmaita 
>>> wrote:
>>>
> Couple of examples:
> 1. switching from "is_public=true" to "visibility=public"

 This was a major version change in the Images API.  The 'is_public'
>>> boolean
 is in the original Images v1 API, 'visibility' was introduced with the
 Images v2 API in the Folsom release.  You just need an awareness of
>>> which
 version of the API you're talking to.
>>> So I realise this is ancient history, but this is really a good
>>> example of why Monty has been pushing on 'never break our APIs': API
>>> breaks hurt users, major versions or not. Keeping the old attribute as
>>> an alias to the new one would have avoided the user pain for a very
>>> small amount of code.
>>>
>>> We are by definition an API - doesn't matter that its HTTP vs Python -
>>> when we break compatibility, there's a very long tail of folk that
>>> will have to spend time updating their code; 'Microversions' are a
>>> good answer to this, as long as we never raise the minimum version we
>>> support. glibc does a very similar thing with versioned symbols - and
>>> they support things approximately indefinitely.
>> +1, realy well said. As Nikhil said, assumptions are bad, and assuming
>> that nobody's using that, or that they'll just adapt, is not really a
>> great way to establish a relationship with the users.
> I agree with the general sentiment, and I sincerely hope that all
> OpenStack projects take it to heart.
>
> I also want to note for the record that the Images API really did need a
> major version change.
>
> cheers,
> brian
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Ryu 4.2 breaking python34 jobs

2016-05-18 Thread Ihar Hrachyshka

> On 18 May 2016, at 18:33, Armando M.  wrote:
> 
> 
> 
> On 18 May 2016 at 09:25, Tidwell, Ryan  wrote:
> I just wanted to give a heads-up to everyone that a bug in Ryu 4.2 which was 
> just recently pushed to PyPI seems to be causing issues in the python34 jobs in 
> neutron-dynamic-routing.  This issue will likely also cause problems for 
> backports to stable/mitaka in the main neutron repository.
> 
> 
> Ryu is locked at 4.0 in upper-constraints for stable/mitaka so I think we 
> should be good as I don't see the bump being backported.
> 
> https://github.com/openstack/requirements/blob/stable/mitaka/upper-constraints.txt#L328

Right. It’s still a good idea to != the bad version in all branches for 
openstack/requirements since not everyone follows upper-constraints.txt.
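
i.e. a global-requirements entry along the lines of (bounds illustrative):

    ryu!=4.2,>=3.30  # exclude the py3-broken release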

>  
> I have filed https://bugs.launchpad.net/neutron/+bug/1583011 to track the 
> issue.  The short version of the problem is that an incompatibility with 
> python 3 was briefly introduced. Note that master in the neutron repository 
> is probably not affected as we have spun out the BGP code that exercises the 
> portions of Ryu affected by the python 3 incompatibility. Please also note 
> that a fix for the issue merged in Ryu several hours ago.
> 
> 
> We should pull the newer version in global requirements and bump 
> upper-constraints again.
> 
> Cheers,
> Armando
> 
> 
>  
> 
> -Ryan
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] [glance] Answers to some questions about Glance

2016-05-18 Thread Nikhil Komawar


On 5/18/16 11:55 AM, Clint Byrum wrote:
> Excerpts from Nikhil Komawar's message of 2016-05-18 11:03:45 -0400:
>> On 5/18/16 2:15 AM, Clint Byrum wrote:
>>> Excerpts from Robert Collins's message of 2016-05-18 14:57:05 +1200:
 On 18 May 2016 at 00:54, Brian Rosmaita  
 wrote:

>> Couple of examples:
>> 1. switching from "is_public=true" to "visibility=public"
> This was a major version change in the Images API.  The 'is_public' 
> boolean
> is in the original Images v1 API, 'visibility' was introduced with the
> Images v2 API in the Folsom release.  You just need an awareness of which
> version of the API you're talking to.
 So I realise this is ancient history, but this is really a good
 example of why Monty has been pushing on 'never break our APIs': API
 breaks hurt users, major versions or not. Keeping the old attribute as
 an alias to the new one would have avoided the user pain for a very
 small amount of code.

 We are by definition an API - doesn't matter that its HTTP vs Python -
 when we break compatibility, there's a very long tail of folk that
 will have to spend time updating their code; 'Microversions' are a
 good answer to this, as long as we never raise the minimum version we
 support. glibc does a very similar thing with versioned symbols - and
 they support things approximately indefinitely.
>>> +1, realy well said. As Nikhil said, assumptions are bad, and assuming
>> You have only conveniently picked up one thing I've said in my email;
>> why not choose the other parts about resolving those assumptions correctly?
>> Please do not paraphrase me when the propaganda is orthogonal to what I
>> proposed.
>>
>>> that nobody's using that, or that they'll just adapt, is not really a
>>> great way to establish a relationship with the users.
>> It's not the assumption that users are not using it:
>>
>> The very _assumption_ of people who are using it is that glance v1 is OK to
>> be a public-facing API, which it was never designed to be. So, that's the
>> assumption you need to take into account, and not the one you like to
>> pick. That's the part about being engaged that is missing in your
>> message.
>>
> It doesn't really matter what it was designed for, once something is
> released as a public facing API, and users build on top of it, there
> are consequences for breaking it.
>
> There's nothing personal about this problem. It happens. My message is
> simple, and consistent with the other thread (and I do see how they are
> linked): We don't get to pick how people consume the messages we send.
> Whether in docs, code, mailing list threads, or even a room with humans
> face to face, mistakes will be made.
>
> So lets be frank about that, and put aside our egos, and accept that

Again, like I keep mentioning in various emails, catching intent is hard
on the ML. If you read the full email, and not just pieces of it, you will
see the point I make about being rational in the feedback.

The problem is about speculation, not ego; demonstrating that was something
I wanted to achieve with my thread, and it has been.

The fallout is me merely making sure that the Glance team stays happy. If a
few glance cores leave just because one user is unhappy, that's a big
loss for the entire community.

Again, let's avoid speculation and keep maintaining rationality in our
approach. Stay on topic, and read the comments fully before replying. What
someone consumes is not within our control, but whether we as a community
appreciate such approaches is. Let's keep that in mind.

Being judicious in our communication is far more important than being
frank; wise feedback always improves things, frank feedback may not.

> mistakes were made _by all parties_ in this specific case, and nobody is
> "mad" about this, but we'd all like to avoid making them again.

If you have to work on something in OpenStack for a few years and it
still remains unresolved for not necessarily the most convincing
reasons, a large portion of the community is going to be mad. What you
see is merely an oversight.

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-18 Thread Ian Cordasco
 

-Original Message-
From: Matt Fischer 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: May 17, 2016 at 19:29:10
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [all] Deprecated options in sample configs?

> > If config sample files are being used as a living document then that would
> > be a reason to leave the deprecated options in there. In my experience as a
> > cloud deployer I never once used them in that manner so it didn't occur to
> > me that people might, hence my question to the list.
> >
> > This may also indicate that people aren't checking release notes as we
> > hope they are. A release note is where I would expect to find this
> > information aggregated with all the other changes I should be aware of.
> > That seems easier to me than aggregating that data myself by checking
> > various sources.
> >
>  
>  
> One way to think about this is that the config file has to be accurate or
> the code won't work, but release notes can miss things with no consequences
> other than perhaps an annoyed operator. So they are sources of truth about
> the state options on of a release or branch.

Further, prior to reno, I think our release notes were more or less `git log 
--oneline` collections. That's rarely useful for an operator when the number of 
commits is very high. Hopefully with teams starting to enforce usage of reno 
(with the exception of a couple) operators will be able to use release notes 
with greater confidence and stop relying so heavily on config files. That said, 
old habits will be hard to break and we've encouraged/made necessary a lot of 
those habits over the years.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

2016-05-18 Thread Michał Jastrzębski
+1 :)

On 18 May 2016 at 10:02, Ryan Hallisey  wrote:
> +1 nice work mlima!
>
> -Ryan
>
> - Original Message -
> From: "Vikram Hosakote (vhosakot)" 
> To: openstack-dev@lists.openstack.org
> Sent: Wednesday, May 18, 2016 9:45:53 AM
> Subject: Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core   
> reviewer
>
> Yes, +1 for sure!
>
> Thanks a lot Mauricio for all the great work especially for adding Manila to
> kolla and also updating the cleanup scripts and documentation!
>
>
> Regards,
> Vikram Hosakote
> IRC: vhosakot
>
> From: "Steven Dake (stdake)" < std...@cisco.com >
> Reply-To: " openstack-dev@lists.openstack.org " < 
> openstack-dev@lists.openstack.org >
> Date: Tuesday, May 17, 2016 at 3:00 PM
> To: " openstack-dev@lists.openstack.org " < openstack-dev@lists.openstack.org 
> >
> Subject: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer
>
> Hello core reviewers,
>
> I am proposing Mauricio (mlima on irc) for the core review team. He has done 
> a fantastic job reviewing, appearing in the middle of the pack for 90 days [1] 
> and appearing as #2 in 45 days [2]. His IRC participation is also fantastic 
> and does a good job on technical work including implementing Manila from zero 
> experience :) as well as code cleanup all over the code base and 
> documentation. Consider my proposal a +1 vote.
>
> I will leave voting open for 1 week until May 24th. Please vote +1 (approve), 
> or –2 (veto), or abstain. I will close voting early if there is a veto vote, 
> or a unanimous vote is reached.
>
> Thanks,
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla/90
> [2] http://stackalytics.com/report/contribution/kolla/45
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Ryu 4.2 breaking python34 jobs

2016-05-18 Thread Armando M.
On 18 May 2016 at 09:25, Tidwell, Ryan  wrote:

> I just wanted to give a heads-up to everyone that a bug in Ryu 4.2, which
> was just recently pushed to pypi, seems to be causing issues in the python34
> jobs in neutron-dynamic-routing.  This issue will likely also cause
> problems for backports to stable/mitaka in the main neutron repository.
>

Ryu is locked at 4.0 in upper-constraints for stable/mitaka, so I think we
should be good, as I don't see the bump being backported.

https://github.com/openstack/requirements/blob/stable/mitaka/upper-constraints.txt#L328


> I have filed https://bugs.launchpad.net/neutron/+bug/1583011 to track the
> issue.  The short version of the problem is that an incompatibility with
> python 3 was briefly introduced. Note that master in the neutron repository
> is probably not affected as we have spun out the BGP code that exercises
> the portions of Ryu affected by the python 3 incompatibility. Please also
> note that a fix for the issue merged in Ryu several hours ago.
>

We should pull the newer version into global-requirements and bump
upper-constraints again.
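
For reference, the pin is a single line in upper-constraints.txt, and the
bump would be just as small (the target version below is illustrative,
pending whichever Ryu release carries the fix):

    ryu===4.0   # current stable/mitaka pin
    ryu===4.3   # hypothetical bumped pin on master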

Cheers,
Armando


>
> -Ryan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] [glance] Answers to some questions about Glance

2016-05-18 Thread Nikhil Komawar
All great points. But I just want us to read between the lines.


On 5/17/16 10:57 PM, Robert Collins wrote:
> On 18 May 2016 at 00:54, Brian Rosmaita  wrote:
>
>>> Couple of examples:
>>> 1. switching from "is_public=true" to "visibility=public"
>>
>> This was a major version change in the Images API.  The 'is_public' boolean
>> is in the original Images v1 API, 'visibility' was introduced with the
>> Images v2 API in the Folsom release.  You just need an awareness of which
>> version of the API you're talking to.
> So I realise this is ancient history, but this is really a good

Thank you for realizing that! Whatever keeps us from reopening this
Pandora's box again!

> example of why Monty has been pushing on 'never break our APIs': API

Sure, but we need to consider when that was proposed, how OpenStack was
operating before, who the major stakeholders were, and why certain things
needed to be changed. I am not arguing against your point, rather arguing
for the sake of supporting an API that will remain maintainable long term.
Have we considered all the things that are important, like security, before
saying that we need backward compatibility, or are we willing to
compromise on that? I guess not.

> breaks hurt users, major versions or not. Keeping the old attribute as
> an alias to the new one would have avoided the user pain for a very
> small amount of code.

Again, we need to be careful about this and keep compatibility when it
makes sense and is possible. Having to maintain hacky code is a pretty
bad idea, I'd say.
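
For illustration, the alias approach quoted above could be as small as a
property shim (a hypothetical sketch, not Glance's actual code):

    class Image(object):
        def __init__(self, visibility='private'):
            self.visibility = visibility

        @property
        def is_public(self):
            # Legacy v1-style attribute kept as an alias of the v2 field.
            return self.visibility == 'public'

        @is_public.setter
        def is_public(self, value):
            self.visibility = 'public' if value else 'private'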

>
> We are by definition an API - doesn't matter that its HTTP vs Python -
> when we break compatibility, there's a very long tail of folk that
> will have to spend time updating their code; 'Microversions' are a
> good answer to this, as long as we never raise the minimum version we
> support. glibc does a very similar thing with versioned symbols - and
> they support things approximately indefinitely.

All good points and we are going in that direction. Thanks for stating
them explicitly.

>
> -Rob
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] disabling deprecated APIs by config?

2016-05-18 Thread Brant Knudson
On Wed, May 18, 2016 at 10:20 AM, Sean Dague  wrote:

> nova-net is now deprecated - https://review.openstack.org/#/c/310539/
>
> And we're in the process in Nova of doing some spring cleaning and
> deprecating the proxies to other services -
> https://review.openstack.org/#/c/312209/
>
> At some point in the future after deprecation the proxy code is going to
> stop working. Either accidentally, because we're not going to test or
> fix this forever (and we aren't going to track upstream API changes to
> the proxy targets), or intentionally when we decide to delete it to make
> it easier to address core features and bugs that everyone wants addressed.
>
> However, the world moves forward slowly. Consider the following scenario.
>
> We delete nova-net & the network proxy entirely in Peru (a not entirely
> unrealistic idea). At that release there are a bunch of people just
> getting around to Newton. Their deployments allow all these things to
> happen which are going to 100% break when they upgrade, and people are
> writing more and more OpenStack software every cycle.
>
> How do we signal to users this kind of deprecation? Can we give sites
> tools to help prevent new software being written to deprecated (and
> scheduled for deletion) APIs?
>
> One idea was a "big red switch" in the format of a config option
> ``disable_deprecated_apis=True`` (defaulting to False), which would set
> all deprecated APIs to 404 routes.
>
> One of the nice ideas here is this would allow some API servers to have
> this set, and others not. So users could point to the "clean" API
> server, figure out that they will break, but the default API server
> would still support these deprecated APIs. Or, conversely, the default
> could be the clean API server, and a legacy API server endpoint could be
> provided for projects that really needed it that included these
> deprecated things for now. Either way it would allow some site assisted
> transition. And be something like the -Werror flag in gcc.
>
> In the Nova case the kinds of things ending up in this bucket are going
> to be interfaces that people *really* shouldn't be using any more. Many
> of them date back to when OpenStack was only 2 projects, and the concept
> of splitting out function wasn't really thought about (note: we're
> getting ahead of this one for the 'placement' rest API, so it won't have
> any of these issues). At some point this house cleaning was going to
> have to happen, and now seems to be the time to get it rolling.
>
> Feedback on this idea would be welcomed. We're going to deprecate the
> proxy APIs regardless; however, disable_deprecated_apis is its own idea
> with its own consequences, and we really want feedback before pushing forward on
> this.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


oslo.log's fatal_deprecations configuration option [1] will raise an
exception rather than log a deprecation warning. So if nova used
oslo.log's versionutils to indicate that the functionality is deprecated,
then setting this switch would make the deprecated functionality unusable.
I'm not sure if anybody has ever set this option to true.
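
For illustration, pairing the two looks roughly like this (a sketch only;
the function name is hypothetical, not Nova's actual code):

    from oslo_log import versionutils

    @versionutils.deprecated(as_of=versionutils.deprecated.MITAKA,
                             in_favor_of='the networking (neutron) API')
    def list_floating_ips(context):
        # Logs a deprecation warning on each call by default; with
        # fatal_deprecations = True, oslo.log raises instead.
        pass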

There could be a gate job that sets deprecations to fatal which would
ensure that there's nothing running that's using deprecated functionality.

[1]
http://docs.openstack.org/developer/oslo.log/opts.html#DEFAULT.fatal_deprecations

- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][stable] Ryu 4.2 breaking python34 jobs

2016-05-18 Thread Tidwell, Ryan
I just wanted to give a heads-up to everyone that a bug in Ryu 4.2, which was 
just recently pushed to pypi, seems to be causing issues in the python34 jobs in 
neutron-dynamic-routing.  This issue will likely also cause problems for 
backports to stable/mitaka in the main neutron repository. I have filed 
https://bugs.launchpad.net/neutron/+bug/1583011 to track the issue.  The short 
version of the problem is that an incompatibility with python 3 was briefly 
introduced. Note that master in the neutron repository is probably not affected 
as we have spun out the BGP code that exercises the portions of Ryu affected by 
the python 3 incompatibility. Please also note that a fix for the issue merged 
in Ryu several hours ago.

-Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron plugins that do not have L3 agents

2016-05-18 Thread Jay Pipes

On 05/18/2016 10:42 AM, Sean M. Collins wrote:

We could do a variation on what was done in 
https://review.openstack.org/#/c/317754/1/lib/tempest

Something like https://review.openstack.org/#/c/318145/ ?

That way, no more
IS_SOMETHING_ENABLED_THAT_WE_COULD_DISCOVER_VIA_THE_API variables?


Heck yeah. ++

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] disabling deprecated APIs by config?

2016-05-18 Thread Kevin L. Mitchell
On Wed, 2016-05-18 at 10:56 -0500, Dean Troyer wrote:
> With my OSC hat on, I need to still support those APIs for a couple of
> years at least, so I would love to have a way to discover that those
> APIs have been disabled without requiring a round trip to get a 404.
> This seems like a useful thing for the '/' route version information
> when no '/capabilities'-like endpoint is available.

This thread started me thinking about ways to indicate a deprecation to
consumers of the API.  Given that most of our APIs return a top-level
dictionary with a single key, would it make any sense at all to add
additional keys?  We could add a 'warnings' key in that top-level
dictionary that could document deprecations, among other things.
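
For example, a deprecated call's response might carry something like this
(a purely hypothetical shape, shown as a Python literal):

    response_body = {
        "server": {"id": "...", "status": "ACTIVE"},  # the usual payload
        "warnings": [{
            "code": "deprecated",
            "message": "This API is deprecated and scheduled for removal; "
                       "see the release notes.",
        }],
    }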

(That said, /capabilities or equivalent would probably be superior for
this particular case…)
-- 
Kevin L. Mitchell 


signature.asc
Description: This is a digitally signed message part
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [searchlight] Searchlight Core Nomination - Lei Zhang

2016-05-18 Thread David Lyle
+1

On Tue, May 17, 2016 at 2:58 PM, McLellan, Steven
 wrote:
> +1, Lei's made some great contributions.
>
> On 5/17/16, 3:56 PM, "Brian Rosmaita"  wrote:
>
>>+1
>>
>>I second the motion!
>>
>>On 5/17/16, 3:42 PM, "Tripp, Travis S"  wrote:
>>
>>>I am nominating Lei Zhang from Intel (lei-zh on IRC) to join the
>>>Searchlight core reviewers team. He has been actively participating with
>>>thoughtful patches and reviews demonstrating his depth of understanding
>>>in a variety of areas. He also participates in meetings regularly,
>>>despite a difficult time zone. You may review his Searchlight activity
>>>reports below [0] [1].
>>>
>>>[1] (~Mitaka + Newton)
>>>  http://stackalytics.com/report/contribution/searchlight-group/200
>>>[0] (~Newton)
>>>  
>>> http://stackalytics.com/report/contribution/searchlight-group/40
>>>
>>>Please vote for this change in reply to this message.
>>>
>>>Thank you,
>>>Travis
>>>
>>>
>>>_
>>>_
>>>OpenStack Development Mailing List (not for usage questions)
>>>Unsubscribe:
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] disabling deprecated APIs by config?

2016-05-18 Thread Dean Troyer
On Wed, May 18, 2016 at 10:20 AM, Sean Dague  wrote:

> nova-net is now deprecated - https://review.openstack.org/#/c/310539/


Woot! Now to make it not-the-default in DevStack...


> One idea was a "big red switch" in the format of a config option
> ``disable_deprecated_apis=True`` (defaulting to False), which would set
> all deprecated APIs to 404 routes.
>
> One of the nice ideas here is this would allow some API servers to have
> this set, and others not. So users could point to the "clean" API
> server, figure out that they will break, but the default API server
> would still support these deprecated APIs. Or, conversely, the default
> could be the clean API server, and a legacy API server endpoint could be
> provided for projects that really needed it that included these
> deprecated things for now. Either way it would allow some site assisted
> transition. And be something like the -Werror flag in gcc.
>

With an app-dev/end-user hat on, I really like the idea of there being some
switch I can control (alternate endpoint or service catalog?
 account/project toggle? don't know exactly what yet) to do testing with
the deprecated APIs disabled.

This is really a problem of migrating/prompting the client developers to do
the right thing.  Oftentimes they (myself included) respond faster to
their users than to the upstream projects, so giving the users the ability
to test and provide feedback would be huge here.

With my OSC hat on, I need to still support those APIs for a couple of
years at least, so I would love to have a way to discover that those APIs
have been disabled without requiring a round trip to get a 404.  This seems
like a useful thing for the '/' route version information when no
'/capabilities'-like endpoint is available.

OSC already detects when the network service type is available and just
uses it whenever possible. I have not yet completely thought through whether
this is a good enough general solution, i.e., what we want clients to do when
they must live in both worlds for a time.

> Feedback on this idea would be welcomed. We're going to deprecate the
> proxy APIs regardless; however, disable_deprecated_apis is its own idea
> with its own consequences, and we really want feedback before pushing forward on
> this.
>

Operators need something to help this process along, and the config
option seems really clean to me.  That leaves how to enable the users to
make use of that choice.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] [glance] Answers to some questions about Glance

2016-05-18 Thread Clint Byrum
Excerpts from Nikhil Komawar's message of 2016-05-18 11:03:45 -0400:
> 
> On 5/18/16 2:15 AM, Clint Byrum wrote:
> > Excerpts from Robert Collins's message of 2016-05-18 14:57:05 +1200:
> >> On 18 May 2016 at 00:54, Brian Rosmaita  
> >> wrote:
> >>
>  Couple of examples:
>  1. switching from "is_public=true" to "visibility=public"
> >>>
> >>> This was a major version change in the Images API.  The 'is_public' 
> >>> boolean
> >>> is in the original Images v1 API, 'visibility' was introduced with the
> >>> Images v2 API in the Folsom release.  You just need an awareness of which
> >>> version of the API you're talking to.
> >> So I realise this is ancient history, but this is really a good
> >> example of why Monty has been pushing on 'never break our APIs': API
> >> breaks hurt users, major versions or not. Keeping the old attribute as
> >> an alias to the new one would have avoided the user pain for a very
> >> small amount of code.
> >>
> >> We are by definition an API - doesn't matter that its HTTP vs Python -
> >> when we break compatibility, there's a very long tail of folk that
> >> will have to spend time updating their code; 'Microversions' are a
> >> good answer to this, as long as we never raise the minimum version we
> >> support. glibc does a very similar thing with versioned symbols - and
> >> they support things approximately indefinitely.
> > +1, really well said. As Nikhil said, assumptions are bad, and assuming
> 
> You have only conveniently picked up on one thing I've said in my email;
> why not choose the other parts about resolving those assumptions correctly?
> Please do not paraphrase me when the paraphrase is orthogonal to what I
> proposed.
> 
> > that nobody's using that, or that they'll just adapt, is not really a
> > great way to establish a relationship with the users.
> 
> It's not the assumption that users are not using it:
> 
> The very _assumption_ of people who are using it is that glance v1 is OK to
> be a public-facing API, which it was never designed to be. So that's the
> assumption you need to take into account, not the one you like to
> pick. That's the part about being engaged that I mentioned as missing in your
> message.
> 

It doesn't really matter what it was designed for: once something is
released as a public-facing API, and users build on top of it, there
are consequences for breaking it.

There's nothing personal about this problem. It happens. My message is
simple, and consistent with the other thread (and I do see how they are
linked): We don't get to pick how people consume the messages we send.
Whether in docs, code, mailing list threads, or even a room with humans
face to face, mistakes will be made.

So lets be frank about that, and put aside our egos, and accept that
mistakes were made _by all parties_ in this specific case, and nobody is
"mad" about this, but we'd all like to avoid making them again.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][oslo.messaging] stable/mitaka is broken by oslo.messaging 5.0.0

2016-05-18 Thread Joshua Harlow

Roman Dobosz wrote:

On Tue, 17 May 2016 21:41:11 -0700
Joshua Harlow  wrote:


Options I see:
Constrain oslo.messaging in global-requirements.txt for
stable/mitaka with 4.6.1. Hard to do since it requires wide
cross-project coordination.
Remove that hack in stable/mitaka as we did with master. It may
be bad because this was wanted very much by some of the users

Not sure what else we can do.

You could set up your test jobs to use the upper-constraints.txt
file in
the requirements repo.

What was the outcome of the discussion about adding the
at-least-once
semantics to oslo.messaging?

So there are a few options I am seeing so far (there might be more
that I don't see also), others can hopefully correct me if they are
wrong (which they might be) ;)

Option #1

Oslo.messaging (and the dispatcher part that does this) stays as is,
doing at-most-once for RPC (notifications are in a different
category here so let's not discuss them) and doing at-most-once well
and battle-hardened (its current goal) across the various backend
drivers it supports.

At that point at-least-once will have to be done via some other library
where this kind of semantics can be placed, that might be tooz via
https://review.openstack.org/#/c/260246/ (which has similar
semantics, but is not based on a kind of RPC, instead it's more like
a job-queue).

Option #2

Oslo.messaging (and the dispatcher part that does this) changes
(possibly allowing it to be replaced with a different type of
dispatcher, i.e. like in https://review.openstack.org/#/c/314732/);
the default class continues being great for RPC (notifications
are in a different category here so let's not discuss them) and
doing at-most-once well and battle-hardened (its current goal)
across the various backend drivers it supports. If people want to
provide an alternate class with different semantics they are
somewhat on there own (but at least they can do this).

Issues raised: this may not be wanted, as some of the
oslo.messaging folks do not want the dispatcher class to be exposed
at all (and would prefer to make it totally private, so exposing it
would work against that goal); but people are already 'hacking'
this kind of functionality in, so it might be the best we can get at
the current time.

Option #3

Do nothing.

Issues raised: every time oslo.messaging changes this *mostly*
internal dispatcher API, a project will have to make a new 'hack' to
replace it and hope that the semantics it has 'hacked' in will
continue to be compatible with the various drivers in
oslo.messaging. Not, IMHO, a sustainable way to keep on working (and
I'd be wary of doing this in a project if I were the owner of one,
because it's, ummm, 'dirty').

My thoughts on what could work:

What I'd personally like to see is a mix of option #1 and #2, where
we have commitment from folks (besides myself, lol) to work on
option #1 and we temporarily move forward with option #2 with a
strict statement that the functionality we would be enabling will
only exist for, say, a single release (and then it will be removed).

Thoughts from others?


Option #4

(Which might be obvious) Directly use a RabbitMQ driver, like
pika/kombu, which can expose all the message queue features to the
project.

Issues raised: Pushback from the community due to not using
oslo.messaging, and the potential necessity of implementing it for other
drivers/transports, or forcing the use of a particular message queue/driver
in every project.



Isn't this similar to or the same as option #1? Nothing about option #1 says 
(from my understanding) that it must be implemented via oslo.messaging 
(and in all likelihood it can't be).


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-05-18 Thread David Medberry
+1 james bottomley

On Sun, Jan 17, 2016 at 11:55 PM, James Bottomley <
james.bottom...@hansenpartnership.com> wrote:

> On Fri, 2016-01-15 at 15:38 +, Daniel P. Berrange wrote:
> > On Fri, Jan 15, 2016 at 08:48:21PM +0800, Thomas Goirand wrote:
> > > This isn't the first time I'm calling for it. Let's hope this time,
> > > I'll
> > > be heard.
> > >
> > > Randomly, contributors put their company names into source code.
> > > When
> > > they do, then effectively, this says that a given source file's
> > > copyright
> > > holder is whatever is claimed, even though someone from another
> > > company
> > > may have patched it.
> > >
> > > As a result, we have a huge mess. It's impossible for me, as a
> > > package
> > > maintainer, to accurately set the copyright holder names in the
> > > debian/copyright file, which is a required by the Debian FTP
> > > masters.
> >
> > I don't think OpenStack is in a different situation to the vast
> > majority of open source projects I've worked with or seen. Except
> > for those projects requiring copyright assignment to a single
> > entity, it is normal for source files to contain an unreliable
> > random splattering of Copyright notices. This hasn't seemed to
> > create a blocking problem for their maintenance in Debian. Looking
> > at the debian/copyright files I see most of them have just done a
> > grep for the 'Copyright' statements & included as is - IOW just
> > ignored the fact that this is essentially worthless info and included
> > it regardless.
> >
> > > I see 2 ways forward:
> > > 1/ Require everyone to give-up copyright holding, and give it to
> > > the
> > > OpenStack Foundation.
> > > 2/ Maintain a copyright-holder file in each project.
> >
> > 3/ Do nothing, just populate debian/copyright with the random
> >set of 'Copyright' lines that happen to be the source files,
> >as appears to be common practice across many debian packages
> >
> >eg the kernel package
> >
> > http://metadata.ftp-master.debian.org/changelogs/main/l/linux/lin
> > ux_3.16.7-ckt11-1+deb8u3_copyright
> >
> > "Copyright: 1991-2012 Linus Torvalds and many others"
> >
> >if its good enough for the Debian kernel package, it should be
> >good enough for openstack packages too IMHO.
>
> This is what I'd vote for.  It seems to be enough to satisfy the Debian
> policy on copyrights, and it means nothing has to change in OpenStack.
>
> James
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [javascript] Seeking contributors, js-generator-openstack

2016-05-18 Thread Michael Krotscheck
Hello everyone!

The js-generator-openstack project has been incubated under
openstack-infra, and is seeking contributors (and cores). The purpose of
the project is as follows:

   - Help manage common project configuration aspects, such as licenses,
   gerrit, authors, and more.
   - Assist in keeping dependencies up-to-date and synchronized across
   javascript projects (JS equivalent of global requirements).
   - Provide all the necessary hooks for OpenStack's JavaScript Common
   Testing Interface.
   - Suggest common tools to use for tasks such as linting, unit testing,
   functional testing, and more.
   - (Newton Stretch) Provide a quick way of bootstrapping a new
   CORS-consuming OpenStack UI.

I'm looking for help: firstly, because right now I'm the only person who's
willing to review JavaScript amongst the various infra cores, and I'd
really like more eyeballs on this project. Secondly, because I know that
I'm not the only person who has opinions about how we should be doing
JavaScript things.

Come on over to
https://review.openstack.org/#/q/project:openstack-infra/js-generator-openstack+status:open
and
help me out, would ya? If you've got questions, I'm active in the
#openstack-javascript channel.

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] disabling deprecated APIs by config?

2016-05-18 Thread Sean Dague
nova-net is now deprecated - https://review.openstack.org/#/c/310539/

And we're in the process in Nova of doing some spring cleaning and
deprecating the proxies to other services -
https://review.openstack.org/#/c/312209/

At some point in the future, after deprecation, the proxy code is going to
stop working: either accidentally, because we're not going to test or
fix this forever (and we aren't going to track upstream API changes to
the proxy targets), or intentionally, when we decide to delete it to make
it easier to address core features and bugs that everyone wants addressed.

However, the world moves forward slowly. Consider the following scenario.

We delete nova-net & the network proxy entirely in Peru (a not entirely
unrealistic idea). At that release there are a bunch of people just
getting around to Newton. Their deployments allow all these things to
happen which are going to 100% break when they upgrade, and people are
writing more and more OpenStack software every cycle.

How do we signal to users this kind of deprecation? Can we give sites
tools to help prevent new software being written to deprecated (and
scheduled for deletion) APIs?

One idea was a "big red switch" in the format of a config option
``disable_deprecated_apis=True`` (defaulting to False), which would set
all deprecated APIs to 404 routes.

One of the nice ideas here is this would allow some API servers to have
this set, and others not. So users could point to the "clean" API
server, figure out that they will break, but the default API server
would still support these deprecated APIs. Or, conversely, the default
could be the clean API server, and a legacy API server endpoint could be
provided for projects that really needed it that included these
deprecated things for now. Either way it would allow some site assisted
transition. And be something like the -Werror flag in gcc.
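
To make the mechanics concrete, a minimal sketch of the switch with
oslo.config and a route guard might look like this (illustrative wiring
only, not the actual Nova implementation):

    from oslo_config import cfg
    import webob.exc

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.BoolOpt('disable_deprecated_apis', default=False,
                    help='Return 404 for every deprecated API route.'),
    ])

    def deprecated_route(handler):
        # Wrap a deprecated handler so it 404s when the switch is on.
        def wrapper(*args, **kwargs):
            if CONF.disable_deprecated_apis:
                raise webob.exc.HTTPNotFound()
            return handler(*args, **kwargs)
        return wrapper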

In the Nova case the kinds of things ending up in this bucket are going
to be interfaces that people *really* shouldn't be using any more. Many
of them date back to when OpenStack was only 2 projects, and the concept
of splitting out function wasn't really thought about (note: we're
getting ahead of this one for the 'placement' rest API, so it won't have
any of these issues). At some point this house cleaning was going to
have to happen, and now seems to be the time to get it rolling.

Feedback on this idea would be welcomed. We're going to deprecate the
proxy APIs regardless; however, disable_deprecated_apis is its own idea
with its own consequences, and we really want feedback before pushing forward on
this.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

2016-05-18 Thread Ryan Hallisey
+1 nice work mlima!

-Ryan

- Original Message -
From: "Vikram Hosakote (vhosakot)" 
To: openstack-dev@lists.openstack.org
Sent: Wednesday, May 18, 2016 9:45:53 AM
Subject: Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core   reviewer

Yes, +1 for sure! 

Thanks a lot Mauricio for all the great work especially for adding Manila to 
kolla and also updating the cleanup scripts and documentation! 


Regards, 
Vikram Hosakote 
IRC: vhosakot 

From: "Steven Dake (stdake)" < std...@cisco.com > 
Reply-To: " openstack-dev@lists.openstack.org " < 
openstack-dev@lists.openstack.org > 
Date: Tuesday, May 17, 2016 at 3:00 PM 
To: " openstack-dev@lists.openstack.org " < openstack-dev@lists.openstack.org > 
Subject: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer 

Hello core reviewers, 

I am proposing Mauricio (mlima on irc) for the core review team. He has done a 
fantastic job reviewing, appearing in the middle of the pack for 90 days [1] and 
as #2 for 45 days [2]. His IRC participation is also fantastic, and he does 
good technical work, including implementing Manila from zero experience :) 
as well as code cleanup all over the code base and documentation. 
Consider my proposal a +1 vote. 

I will leave voting open for 1 week until May 24th. Please vote +1 (approve), 
or –2 (veto), or abstain. I will close voting early if there is a veto vote, or 
a unanimous vote is reached. 

Thanks, 
-steve 

[1] http://stackalytics.com/report/contribution/kolla/90 
[2] http://stackalytics.com/report/contribution/kolla/45 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] [glance] Answers to some questions about Glance

2016-05-18 Thread Nikhil Komawar


On 5/18/16 2:15 AM, Clint Byrum wrote:
> Excerpts from Robert Collins's message of 2016-05-18 14:57:05 +1200:
>> On 18 May 2016 at 00:54, Brian Rosmaita  wrote:
>>
 Couple of examples:
 1. switching from "is_public=true" to "visibility=public"
>>>
>>> This was a major version change in the Images API.  The 'is_public' boolean
>>> is in the original Images v1 API, 'visibility' was introduced with the
>>> Images v2 API in the Folsom release.  You just need an awareness of which
>>> version of the API you're talking to.
>> So I realise this is ancient history, but this is really a good
>> example of why Monty has been pushing on 'never break our APIs': API
>> breaks hurt users, major versions or not. Keeping the old attribute as
>> an alias to the new one would have avoided the user pain for a very
>> small amount of code.
>>
>> We are by definition an API - doesn't matter that its HTTP vs Python -
>> when we break compatibility, there's a very long tail of folk that
>> will have to spend time updating their code; 'Microversions' are a
>> good answer to this, as long as we never raise the minimum version we
>> support. glibc does a very similar thing with versioned symbols - and
>> they support things approximately indefinitely.
> +1, really well said. As Nikhil said, assumptions are bad, and assuming

You have only conveniently picked up on one thing I've said in my email;
why not choose the other parts about resolving those assumptions correctly?
Please do not paraphrase me when the paraphrase is orthogonal to what I
proposed.

> that nobody's using that, or that they'll just adapt, is not really a
> great way to establish a relationship with the users.

It's not the assumption that users are not using it:

The very _assumption_ of people who are using it is that glance v1 is OK to
be a public-facing API, which it was never designed to be. So that's the
assumption you need to take into account, not the one you like to
pick. That's the part about being engaged that I mentioned as missing in your
message.

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [diskimage-builder] LVM in diskimage-builder

2016-05-18 Thread Gregory Haynes
On Tue, May 17, 2016, at 02:32 PM, Andre Florath wrote:
> Hi All!
> 
> AFAIK the diskimage-builder started as a testing tool, but it looks
> like it is evolving more and more into a general-purpose tool for creating
> docker and VM disk images.
> 
> Currently there are ongoing efforts to add LVM [1]. But because some
> features that I need are missing, I created my own prototype to get a
> 'feeling' for the complexity and a possible way of doing things [2]. I
> contacted Yolanda (the author of the original patch) and we agreed to
> join forces here to implement a patch that fits both our needs.
> 

Glad to hear this. I'd recommend first making sure the public interfaces
defined in [1] don't conflict with the features you'd like to add (or
even might potentially like to add). AFAICT this only matters for the LVM
config passed in via DIB_LVM_SETTINGS. The other features should be
doable in a follow-on patch without breaking backwards compatibility, and
this is probably the best path forward (rather than adding them in to
[1]). Obviously, if there are any other potentially conflicting public
interfaces I am missing, we should fix those before [1] goes in.

As for the DIB_LVM_SETTINGS variable - I don't entirely understand the
issues being raised but we can continue that conversation on [1] since
it is a lot easier to discuss there.
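
To make that concrete, one purely hypothetical shape such a setting could
take (this is not the format proposed in [1], just an illustration of the
moving parts) would be:

    DIB_LVM_SETTINGS = {
        'pvs': ['/dev/vda1'],               # partitions used as PVs
        'vgs': {'vg_root': ['/dev/vda1']},  # VGs built from those PVs
        'lvs': [
            {'name': 'lv_root', 'vg': 'vg_root', 'size': '4G', 'mount': '/'},
            {'name': 'lv_var', 'vg': 'vg_root', 'size': '2G', 'mount': '/var'},
        ],
    }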

> Yolanda made the proposal before starting to implement things, so we
> could contact OpenStack developers via this mailing list and ask
> about possible additional requirements and comments.
> 
> Here is a short list of my requirements - and as far as I understood
> Yolanda, her requirements are a subset:
> 
> MUST be able to
> o use one partition as PV
> o use one VG
> o use many LV (up to 10)
> o specify the mount point for each of the LVs
> o specify mount points that 'overlap', e.g.
>   /, /var, /var/log, /var/spool
> o use the default file system (options) as specified via command line
> o survive in everyday life - and not only in a dedicated test
>   environment: must be robust and handle error scenarios
> o use '/' (root fs) as LV
> o run within different distributions - like Debian, Ubuntu, Centos7.
> 
> NICE TO HAVE
> o Multiple partitions as PVs
> o Multiple VGs
> o LVM without any partition
>   Or: why do we need partitions these days? ;-)
>   Problem: I have no idea how to boot from a pure LVM image.
> 
> Every idea or comment will help!  Please feel invited to have a
> (short) look / review at the implementation [1] and the design study
> [2].
> 
> Kind regards
> 
> Andre


Thanks,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron plugins that do not have L3 agents

2016-05-18 Thread Sean M. Collins
We could do a variation on what was done in 
https://review.openstack.org/#/c/317754/1/lib/tempest

Something like https://review.openstack.org/#/c/318145/ ?

That way, no more
IS_SOMETHING_ENABLED_THAT_WE_COULD_DISCOVER_VIA_THE_API variables?
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.service] Lifecycle Hooks

2016-05-18 Thread Doug Hellmann
Excerpts from Kanagaraj Manickam's message of 2016-05-18 19:48:13 +0530:
> > Dims,
> >>
> >> A use case could be anything needed by either an operator or the
> >> community who wants to perform a required task before and after a
> >> service is started. This requirement is very generic by nature, and I
> >> believe it will be very useful.
> >>
> >> I would like to give sample use cases from the operator and OpenStack
> >> community sides as below.
> >> On the operator side, any pre/post actions could be hooked, which is a
> >> requirement for them. A simple example would be someone who wants to
> >> create a journal of start/stop details like time, number of workers,
> >> configurations, etc. in a common portal; this life-cycle hook would help.
> 
> > Is that information not available in the logs?
> 
> [Kanagaraj M] It's available in the logs, but for someone who wants to
> collect this info in a centralized portal,
> it would help.
> 
> >
> > On the OpenStack community side, sample use cases would be:
> > 1. Most of the OpenStack components start TextGuruMeditation and logging
> > while those components are getting started. These tasks could be provided
> > as life-cycle hooks, and all OpenStack components could start to leverage it.
> 
> >All of those are things that need to be built into the app in a way that
> >they are started at the right time, rather than at an arbitrary point
> >defined by the plugin order a deployer has specified.
> 
> [Kanagaraj M] When I investigated the OpenStack service commands, they
> mostly follow a similar pattern in using these utils, so I thought it would
> be good to provide a plugin which takes care of it instead of every
> service's code taking care of it. It helps to reduce maintenance effort.

Except that we don't want deployers to turn them off, and we need to
control the initialization order, and so we don't want them to be
specified through a configuration option.

> 
> >> 2. For automatically discovering the OpenStack deployment, these hooks
> >> will be very useful. An auto-discover hook would report to pre-defined
> >> destinations while starting/stopping the service.
> 
> >Doing that usefully is going to require passing information to the hook
> >so it knows where it is running (what service, what port, etc.). None of
> >the APIs for doing this have been described yet. Do you have any plans
> >put together?
> 
> [Kanagaraj M] I am trying to get all of this information from the
> oslo.config global config variable. As we discussed about namos during the
> Austin summit, namos does collect these details:
> https://github.com/openstack/os-namos/blob/master/os_namos/sync.py#L124

There are 2 issues with having a plugin access config values directly.

Configuration options are owned by the code that defines them, and
are not considered a public "API" for other parts of the code. That
means an application or library developer is free to change the
name or location of a configuration option, without regard to code
that might be trying to use it from outside of the owning module.
oslo.config has support for renames so that *deployers* aren't
broken, but we don't do anything to prevent code that accesses
private values from breaking.  So, you don't want to build any
assumptions into the plugins that they will be able to see configuration
values.

Second, options do not "exist" as far as oslo.config is concerned
until they are registered.  The registration step may happen at
runtime in code that has not executed before the plugin is loaded. So
even if we ignore the "private" nature of the option, there is a timing
issue.
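
A small sketch of that timing issue (illustrative names, not oslo.service
code):

    from oslo_config import cfg

    CONF = cfg.CONF

    def pre_start_hook(conf):
        # Raises cfg.NoSuchOptError: 'workers' has not been registered
        # at the time the plugin runs.
        return conf.workers

    # The owning service registers its (effectively private) option
    # later, at runtime:
    CONF.register_opts([cfg.IntOpt('workers', default=1)])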

> 
> >It feels very much like you're advocating that we create a thing like
> >paste-deploy for non-WSGI apps: allow anyone to insert anything into
> >the executing process for any purpose and with little control on the
> >application authors' part. That introduces potential stability issues,
> >and a lot of questions that haven't been answered. For example:
> 
> >Does service startup block until all of the plugins are done? If not,
> >do we need to have a timeout management system built in or do we run
> >the plugins in their own thread/subprocess so they can run in the
> >background?
> 
> >Can a plugin change the execution of the service in some way (you
> >mentioned having a plugin download config files when we spoke at the
> >summit, is that still something you want to slip in this way instead of
> >putting it into oslo.config)?
> 
> >Can a plugin cause the service to not start at all by exiting?
> 
> >What happens if one plugin fails but others succeed? Do we keep running?
> 
> >What information about the service can a plugin see? It's running in the
> >same process, so it could see the configuration object, for example.
> >It would only be able to see configuration options it adds (yes, that
> >would work) or that were registered by the application before the plugin
> >is run. So, not necessarily all options but potentially many, with more
> >and more as apps shift to pre-registering all of their

Re: [openstack-dev] [TripleO][CI] Tempest on periodic jobs

2016-05-18 Thread Dan Prince
On Tue, 2016-05-17 at 20:03 +0300, Sagi Shnaidman wrote:
> Hi,
> raising again the question about tempest running on TripleO CI as it
> was discussed in the last TripleO meeting.
> 
> I'd like to draw your attention to the fact that in these tests, which I
> ran just to ensure they work, there were bugs discovered, and these weren't
> corner cases but real failures of the TripleO installation. Like this one
> for Sahara: https://review.openstack.org/#/c/309042/
> I'm sorry, I should have prepared these bugs for the meeting as
> proof of the testing's value.

Would it be reasonable for us to add optional Sahara coverage to our
ping test by using the Heat OS::Sahara resources? I feel like this
would give us lightweight Sahara coverage which, if enabled, costs us
next to nothing in extra CI wall time but gives us the extra coverage.

> 
> The second issue that was a blocker before is wall time, and now, as
> we can see from job lengths after the HW upgrade of CI, it is not an issue
> anymore. We can run tempest without any fear of getting into the timeout
> problem, on the "nonha" job for sure, as it is the shortest of all.

It isn't just the 3 hour timeout that matters I think. This was an
upstream constraint that for good reason was an upper limit on our job
times. It is about how long it takes to land code upstream and the
length and consistency of the CI time is a big part of that. Thanks to
some recent optimization on both the hardware and software side we are
now running jobs maybe 20-30 minutes faster. So our HA and upgrades
jobs take around 2 hours or so. This is certainly a welcome improvement
but we've still got much work to do. I'm not willing to hand 10-15
minutes of time back just so we can run Tempest across the board.
Rather, I would see us further optimize the job times to get them
faster still. I don't think the point of investing in a major hardware
upgrade was so we could make some time to run Tempest, it was more to
help us get stability and consistency in our upstream CI test results.

> 
> So I'd insist on running tempest exactly on the promoting job in order
> not to promote images with bugs, especially critical ones like a
> whole service not being available at all. The pingtest is not enough for
> this purpose, as we can see from the bugs above: it checks very basic
> things, and not all services are covered. I think we aren't interested
> just in seeing the jobs green, but in sticking to basic working
> functionality and quality of promotion. Maybe it's the influence of my
> previous QA roles, but I don't see any value in promoting something
> with bugs.

Promoting and being able to consume new packages from TripleO is a very
important part of our pipeline. I think the feedback the core team
initially gave with regards to Tempest was that running it in a
periodic job was a good place for it. This was a few weeks back. It
would allow us to keep things generally working with Tempest but not
block our pipelines should a one-off regression occur.

What changed in the meantime was that we started using the existing
periodic jobs for package promotion directly. So if we add Tempest
there now it is my understanding we depend on it to be working in order
to promote packages and be able to consume the latest sources. While
this may seem ideal, I don't think that is what we want, or need and it
would slow us down. When we need to promote packages we often need
something fixed quickly... and having to chase down Tempest failures at
that time isn't ideal. So ideally if Tempest is going to be running on
our periodic job that blocks package promotion, we would also want it
running on all of our CI jobs so that we aren't surprised by Tempest
failures at the last minute (before package promotion). But this brings
us back to the time it takes to run Tempest across the board.

So my suggestion would be that we have a separate, independent periodic
job (or even a downstream job) that monitors and runs the CI testing of
TripleO with tempest. This job will not block package promotion. We can
monitor the Tempest results and fix them accordingly. And, If there are
any important services that we can test quickly in ping test to add
extra coverage add them in ping test directly to have up to the minute
coverage for TripleO.

Call it a tiered testing approach.


> 
> On the point about CI stability - the latest issues that CI faces now are
> not so connected to tempest tests or CI code at all; they are bugs in
> underlying projects, and whether tempest will run or not doesn't
> really matter in this case. These issues fail everything before
> any testing even starts. Flagging such issues before they leak into
> TripleO is a different topic and approach.
> 
> So my main point for running tempest tests on "nonha" periodic jobs
> is:
> Quality and guaranteed basic functionality of installed overcloud
> services. At least all of them are up and can accept connections.
> Avoiding, and discovering early, critical bugs that are not seen in pingtest.
> I remind you that we're going to run only the smoke 

Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

2016-05-18 Thread Jeff Peeler
+1!

On Tue, May 17, 2016 at 3:00 PM, Steven Dake (stdake)  wrote:
> Hello core reviewers,
>
> I am proposing Mauricio (mlima on irc) for the core review team.  He has
> done a fantastic job reviewing, appearing in the middle of the pack for 90
> days [1] and as #2 for 45 days [2].  His IRC participation is also
> fantastic, and he does good technical work, including implementing
> Manila from zero experience :) as well as code cleanup all over the code
> base and documentation.  Consider my proposal a +1 vote.
>
> I will leave voting open for 1 week until May 24th.  Please vote +1
> (approve), or –2 (veto), or abstain.  I will close voting early if there is
> a veto vote, or a unanimous vote is reached.
>
> Thanks,
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla/90
> [2] http://stackalytics.com/report/contribution/kolla/45

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] api-ref gate job is active

2016-05-18 Thread Sean Dague
On 05/18/2016 09:58 AM, Jay Dobies wrote:
> Just a quick note that there is a new job active called
> gate-heat-api-ref. Our API documentation has been pulled into our tree
> [1] and you can run it locally with `tox -e api-ref`.
> 
> For now, it's a direct port of our existing API docs, but I'm planning
> on taking a pass over them to double check that they are still valid.
> Feel free to ping me if you have any questions/issues.
> 
> [1] https://review.openstack.org/#/c/312712/

Very cool. I proposed a review which switches over to the extracted
library today - https://review.openstack.org/#/c/318019/ which passes
with all your data.

Thanks for digging in so early in this process.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.service] Lifecycle Hooks

2016-05-18 Thread Kanagaraj Manickam
> Dims,
>>
>> A use case could be anything needed by either an operator or the
>> community who wants to perform a required task before and after a
>> service is started. This requirement is very generic by nature, and I
>> believe it will be very useful.
>>
>> I would like to give sample use cases from the operator and OpenStack
>> community sides as below.
>> On the operator side, any pre/post actions could be hooked, which is a
>> requirement for them. A simple example would be someone who wants to
>> create a journal of start/stop details like time, number of workers,
>> configurations, etc. in a common portal; this life-cycle hook would help.

> Is that information not available in the logs?

[Kanagaraj M] It's available in the logs, but for someone who wants to
collect this info in a centralized portal,
it would help.

>
> On the OpenStack community side, sample use cases would be:
> 1. Most of the OpenStack components start TextGuruMeditation and logging
> while those components are getting started. These tasks could be provided
> as life-cycle hooks, and all OpenStack components could start to leverage it.

>All of those are things that need to be built into the app in a way that
>they are started at the right time, rather than at an arbitrary point
>defined by the plugin order a deployer has specified.

[Kanagaraj M] When I investigated the OpenStack service commands, they
mostly follow a similar pattern in using these utils, so I thought it would
be good to provide a plugin which takes care of it instead of every
service's code taking care of it. It helps to reduce maintenance effort.

>> 2. For automatically discovering the OpenStack deployment, these hooks
>> will be very useful. An auto-discover hook would report to pre-defined
>> destinations while starting/stopping the service.

>Doing that usefully is going to require passing information to the hook
>so it knows where it is running (what service, what port, etc.). None of
>the APIs for doing this have been described yet. Do you have any plans
>put together?

[Kanagaraj M] I am trying to get all of this information from the
oslo.config global config variable. As we discussed about namos during the
Austin summit, namos does collect these details:
https://github.com/openstack/os-namos/blob/master/os_namos/sync.py#L124

>It feels very much like you're advocating that we create a thing like
>paste-deploy for non-WSGI apps: allow anyone to insert anything into
>the executing process for any purpose and with little control on the
>application authors' part. That introduces potential stability issues,
>and a lot of questions that haven't been answered. For example:

>Does service startup block until all of the plugins are done? If not,
>do we need to have a timeout management system built in or do we run
>the plugins in their own thread/subprocess so they can run in the
>background?

>Can a plugin change the execution of the service in some way (you
>mentioned having a plugin download config files when we spoke at the
>summit, is that still something you want to slip in this way instead of
>putting it into oslo.config)?

>Can a plugin cause the service to not start at all by exiting?

>What happens if one plugin fails but others succeed? Do we keep running?

>What information about the service can a plugin see? It's running in the
>same process, so it could see the configuration object, for example.
>It would only be able to see configuration options it adds (yes, that
>would work) or that were registered by the application before the plugin
>is run. So, not necessarily all options but potentially many, with more
>and more as apps shift to pre-registering all of their options in one
>place. Assuming these are things the deployer has selectively installed,
>maybe that's OK. OTOH, it does open another security surface.

>What happens when a service is told to restart its workers? Do the
>plugins run again?

>Can a plugin start listening for network connections on its own? Connect
>to the message bus?  Provide an RPC endpoint? Start processes? Threads?

[KanagarajM] This gives me a lot of insight into the different problems
that would come up, and I really thank you.
Hooks will be provided by the community and/or deployers. When the
community provides them, those hooks will be well documented and tested,
and configuration would be provided, so all of the above-mentioned aspects
would be taken care of by the community based on the hooks' functionality.
And deployers would also take similar safety measures before using their
own hooks, similar to how they would take care when using startup scripts.



>> Regards
>> Kanagaraj M
>>
>> On Wed, May 11, 2016 at 7:05 PM, Davanum Srinivas 
>> wrote:
>>
>> > Kanagaraj,
>> >
>> > Who is the first consumer? For what specific purpose?
>> >
>> > Thanks,
>> > Dims
>> >
>> > On Wed, May 11, 2016 at 9:27 AM, Kanagaraj Manickam 
>> > wrote:
>> > > Hi,
>> > >
>> > > When OpenStack service components are started/stopped,
>> > > operators or OpenStack services want to execute some activities
>> > > before and/

[openstack-dev] [gate] fernet tokens by default in devstack has been reverted

2016-05-18 Thread Matt Riedemann
I've reverted the devstack change from 4/29 that uses fernet tokens by 
default [1]. This was the cause of the top non-infra-related CI failure 
since it landed about 3 weeks ago. There are some patches up from Lance 
Bragstad and Adam Young in Keystone to try and fix it, but until those 
make better progress we need to revert the change to get things flowing 
again.


[1] https://review.openstack.org/#/c/318116/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] api-ref gate job is active

2016-05-18 Thread Jay Dobies
Just a quick note that there is a new job active called 
gate-heat-api-ref. Our API documentation has been pulled into our tree 
[1] and you can run it locally with `tox -e api-ref`.


For now, it's a direct port of our existing API docs, but I'm planning 
on taking a pass over them to double check that they are still valid. 
Feel free to ping me if you have any questions/issues.


[1] https://review.openstack.org/#/c/312712/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

2016-05-18 Thread Vikram Hosakote (vhosakot)
Yes,  +1 for sure!

Thanks a lot Mauricio for all the great work especially for adding Manila to
kolla and also updating the cleanup scripts and documentation!


Regards,
Vikram Hosakote
IRC: vhosakot

From: "Steven Dake (stdake)" mailto:std...@cisco.com>>
Reply-To: 
"openstack-dev@lists.openstack.org" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, May 17, 2016 at 3:00 PM
To: 
"openstack-dev@lists.openstack.org" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

Hello core reviewers,

I am proposing Mauricio (mlima on irc) for the core review team.  He has done a
fantastic job reviewing, appearing in the middle of the pack over 90 days [1] and
as #2 over 45 days [2].  His IRC participation is also fantastic, and he does
good technical work, including implementing Manila support from zero
experience :) as well as code cleanup all over the code base and documentation.
 Consider my proposal a +1 vote.

I will leave voting open for 1 week until May 24th.  Please vote +1 (approve), 
or -2 (veto), or abstain.  I will close voting early if there is a veto vote, 
or a unanimous vote is reached.

Thanks,
-steve

[1] http://stackalytics.com/report/contribution/kolla/90
[2] http://stackalytics.com/report/contribution/kolla/45
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] Refresher on how to use global requirements process

2016-05-18 Thread Ihar Hrachyshka
Great write-up. Do you plan to capture it, e.g. in the project guide?

> On 18 May 2016, at 13:43, Davanum Srinivas  wrote:
> 
> Team,
> 
> Here's a primer on how to make sure your project is subject to the
> requirements process.
> 
> Step #1: Submit a patch to this file at the following locations to
> add a "check-requirements" job (Example is for Monasca)
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n7386
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n7414
> 
> This step adds a CI job to your review process, when the
> requirements.txt in your
> project differs from the one in global-requirements.txt it will complain 
> loudly
> and fail your review. This ensures that your project requirements.txt will not
> have anything other than the entries in global-requirements.txt
> 
> Step #2: Submit a patch to this file
> http://git.openstack.org/cgit/openstack/requirements/tree/projects.txt
> 
> This step ensures that when the nightly proposal bot/script runs, it will
> create a fresh review in your project queue with any changes that were made
> to global-requirements.txt.
> 
> We won't accept a patch for projects.txt in step #2 without seeing a review
> for step #1. Once you have Step #1 and Step #2, your project is essentially
> subscribed to the global requirements process. If we break you, then we
> revert stuff or work with you to move forward. If you see the list of projects
> in projects.txt, essentially we are guaranteeing that we won't break
> any of them.
> 
> Step #3, is adding your python packages to global-requirements.txt. If a
> project has done Step #1 and #2, the requirements team is obligated to revert
> any breaking changes immediately.
> 
> Alternatively,
> if the requirements process is onerous, then you should consider making sure
> that none of the jobs you run are subject to constraints, and that entries
> for your project do not appear in any of the files under the requirements
> team's purview.
> 
> Please note that as we speak, a new team is being formed to take care of 
> things.
> So we'll do better with quicker reviews and reverts if needed. We'll also try 
> to
> figure out ways and means not to break projects inadvertently.
> 
> Thanks,
> Dims
> 
> -- 
> Davanum Srinivas :: https://twitter.com/dims
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Reason for installing devstack in CI pipeline

2016-05-18 Thread Paul Belanger
On Wed, May 18, 2016 at 11:23:53AM +0100, Derek Higgins wrote:
> On 16 May 2016 at 18:16, Paul Belanger  wrote:
> > Greetings,
> >
> > Over the last few weeks I've been head deep into understanding the TripleO 
> > CI
> > pipeline.  For the most part, I am happy that we have merged centos-7 DIB
> > support and I'm working to migrate the jobs to it.
> >
> > Something I have been trying to figure out, is why does the pipeline install
> > devstack?  I cannot see anything currently in the toci_gate.sh script that 
> > is
> > referencing devstack.  Everything seems to be related to launching the 
> > external
> > node.
> >
> > So, my question is, what is devstack doing?
> 
> Can you elaborate what you mean when you say we're installing
> devstack, we currently make use of some of the devstack-gate scripts.
> The main need for this is so that it clones the correct version of each
> project to /opt/stack/new, the tripleo ci job then looks at
> ZUUL_CHANGES[1] to get a list of projects being tested and uses those
> git repositories to build rpm packages from them. The rest of the CI
> job then uses these RPMs (layered on top of RDO trunk repositories) so
> they end up installed where appropriate.
>
Right, I should have said devstack-gate, not just devstack.

So taking what you have explained and looking at the code, I think there can be
some optimizations made to actually switch out devstack-gate and use
zuul-cloner directly. Currently, using devstack-gate adds about 10 minutes to
the start of a job and puts 10GB+ of data onto the server.

When I have some spare cycles, I plan on adding an experimental job to see how
we can better optimize the jobs. Mostly a make-work project to better understand
how the CI pipeline works.
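
For reference, what the job actually needs from devstack-gate here is quite
small: ZUUL_CHANGES encodes the changes under test as caret-separated
project:branch:ref entries. Assuming that format, extracting the project
list is only a few lines of Python:

    import os

    def projects_under_test():
        # ZUUL_CHANGES looks roughly like:
        #   openstack/tripleo-ci:master:refs/changes/..^openstack/heat:...
        changes = os.environ.get('ZUUL_CHANGES', '')
        return [entry.split(':')[0] for entry in changes.split('^') if entry]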

> [1] - 
> http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/toci_instack.sh#n103
> >
> > ---
> > Paul
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] [glance] Answers to some questions about Glance

2016-05-18 Thread Brian Rosmaita
On 5/18/16, 2:15 AM, "Clint Byrum"  wrote:

>Excerpts from Robert Collins's message of 2016-05-18 14:57:05 +1200:
>> On 18 May 2016 at 00:54, Brian Rosmaita 
>>wrote:
>> 
>> >> Couple of examples:
>> >> 1. switching from "is_public=true" to "visibility=public"
>> >
>> >
>> > This was a major version change in the Images API.  The 'is_public'
>>boolean
>> > is in the original Images v1 API, 'visibility' was introduced with the
>> > Images v2 API in the Folsom release.  You just need an awareness of
>>which
>> > version of the API you're talking to.
>> 
>> So I realise this is ancient history, but this is really a good
>> example of why Monty has been pushing on 'never break our APIs': API
>> breaks hurt users, major versions or not. Keeping the old attribute as
>> an alias to the new one would have avoided the user pain for a very
>> small amount of code.
>> 
>> We are by definition an API - doesn't matter that its HTTP vs Python -
>> when we break compatibility, there's a very long tail of folk that
>> will have to spend time updating their code; 'Microversions' are a
>> good answer to this, as long as we never raise the minimum version we
>> support. glibc does a very similar thing with versioned symbols - and
>> they support things approximately indefinitely.
>
>+1, really well said. As Nikhil said, assumptions are bad, and assuming
>that nobody's using that, or that they'll just adapt, is not really a
>great way to establish a relationship with the users.

I agree with the general sentiment, and I sincerely hope that all
OpenStack projects take it to heart.

I also want to note for the record that the Images API really did need a
major version change.

cheers,
brian
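
For readers wondering how small "keeping the old attribute as an alias"
can be, a hypothetical sketch (illustrative only, not Glance's actual
model) is something like:

    class Image(object):
        def __init__(self, visibility='private'):
            self.visibility = visibility

        @property
        def is_public(self):
            # Legacy v1-style attribute kept as a read-only alias so that
            # old callers keep working after the rename.
            return self.visibility == 'public'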





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

2016-05-18 Thread Michal Rostecki

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Austin summit - session recap/summary

2016-05-18 Thread Paul Belanger
On Wed, May 18, 2016 at 12:22:55PM +0100, Derek Higgins wrote:
> On 6 May 2016 at 14:18, Paul Belanger  wrote:
> > On Tue, May 03, 2016 at 05:34:55PM +0100, Steven Hardy wrote:
> >> Hi all,
> >>
> >> Some folks have requested a summary of our summit sessions, as has been
> >> provided for some other projects.
> >>
> >> I'll probably go into more detail on some of these topics either via
> >> subsequent more focussed threads and/or some blog posts, but what follows is
> >> an overview of our summit sessions[1] with notable actions or decisions
> >> highlighted.  I'm including some of my own thoughts and conclusions, folks
> >> are welcome/encouraged to follow up with their own clarifications or
> >> different perspectives :)
> >>
> >> TripleO had a total of 5 sessions in Austin; I'll cover them one by one:
> >>
> >> -
> >> Upgrades - current status and roadmap
> >> -
> >>
> >> In this session we discussed the current state of upgrades - initial
> >> support for full major version upgrades has been implemented, but the
> >> implementation is monolithic, highly coupled to pacemaker, and inflexible
> >> with regard to third-party extraconfig changes.
> >>
> >> The main outcomes were that we will add support for more granular
> >> definition of the upgrade lifecycle to the new composable services format,
> >> and that we will explore moving towards the proposed lightweight HA
> >> architecture to reduce the need for so much pacemaker specific logic.
> >>
> >> We also agreed that investigating use of mistral to drive upgrade workflows
> >> was a good idea - currently we have a mixture of scripts combined with Heat
> >> to drive the upgrade process, and some refactoring into discrete mistral
> >> workflows may provide a more maintainable solution.  Potential for using
> >> the existing SoftwareDeployment approach directly via mistral (outside of
> >> the heat templates) was also discussed as something to be further
> >> investigated and prototyped.
> >>
> >> We also touched on the CI implications of upgrades - we've got an upgrades
> >> job now, but we need to ensure coverage of full release-to-release upgrades
> >> (not just commit to commit).
> >>
> >> ---
> >> Containerization status/roadmap
> >> ---
> >>
> >> In this session we discussed the current status of containers in TripleO
> >> (which is to say, the container-based compute node which deploys containers
> >> via Heat onto an Atomic host node that is also deployed via Heat), and
> >> what strategy is most appropriate to achieve a fully containerized TripleO
> >> deployment.
> >>
> >> Several folks from Kolla participated in the session, and there was
> >> significant focus on where work may happen such that further collaboration
> >> between communities is possible.  To some extent this discussion on where
> >> (as opposed to how) proved a distraction and prevented much discussion on
> >> supportable architectural implementation for TripleO, thus what follows is
> >> mostly my perspective on the issues that exist:
> >>
> >> Significant uncertainty exists wrt integration between Kolla and TripleO -
> >> there's largely consensus that we want to consume the container images
> >> defined by the Kolla community, but much less agreement that we can
> >> feasibly switch to the ansible-orchestrated deployment/config flow
> >> supported by Kolla without breaking many of our primary operator interfaces
> >> in a fundamentally unacceptable way, for example:
> >>
> >> - The Mistral based API is being implemented on the expectation that the
> >>   primary interface to TripleO deployments is a parameters schema exposed
> >>   by a series of Heat templates - this is no longer true in a "split stack"
> >>   model where we have to hand off to an alternate service orchestration 
> >> tool.
> >>
> >> - The tripleo-ui (based on the Mistral based API) consumes heat parameter
> >>   schema to build its UI, and Ansible doesn't support the necessary
> >>   parameter schema definition (such as types and descriptions) to enable
> >>   this pattern to be replicated.  Ansible also doesn't provide an HTTP API,
> >>   so we'd still have to maintain an API surface for the (non-python) UI to
> >>   consume.
> >>
> >> We also discussed ideas around integration with kubernetes (a hot topic on
> >> the Kolla track this summit), but again this proved inconclusive beyond
> >> that yes someone should try developing a PoC to stimulate further
> >> discussion.  Again, significant challenges exist:
> >>
> >> - We still need to maintain the Heat parameter interfaces for the API/UI,
> >>   and there is also a strong preference to maintain puppet as a tool for
> >>   generating service configuration (so that existing operator integrations
> >>   via puppet continue to function) - this is a barrier to directly
> >>   consuming the kolla-kubernetes effort directly.
> >>

[openstack-dev] [all][requirements] Refresher on how to use global requirements process

2016-05-18 Thread Davanum Srinivas
Team,

Here's a primer on how to make sure your project is subject to the
requirements process.

Step #1: Submit a patch to this file at the following locations to
add a "check-requirements" job (Example is for Monasca)
http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n7386
http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n7414

This step adds a CI job to your review process, when the
requirements.txt in your
project differs from the one in global-requirements.txt it will complain loudly
and fail your review. This ensures that your project requirements.txt will not
have anything other than the entries in global-requirements.txt
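
As a mental model only (the real check-requirements job handles environment
markers, extras and comments much more carefully), the comparison is
conceptually a per-line check along these lines:

    import re

    def check_requirements(project_lines, global_reqs):
        # global_reqs: dict of package name -> full requirement line,
        # parsed from global-requirements.txt. Returns offending lines.
        failures = []
        for line in project_lines:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            name = re.split(r'[<>=!;\[ ]', line, 1)[0]
            if global_reqs.get(name) != line:
                failures.append(line)
        return failures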

Step #2: Submit a patch to this file
http://git.openstack.org/cgit/openstack/requirements/tree/projects.txt

This step ensures that when the nightly proposal bot/script runs, it will
create a fresh review in your project queue with any changes that were made
to global-requirements.txt.

We won't accept a patch for projects.txt in step #2 without seeing a review
for step #1. Once you have Step #1 and Step #2, your project is essentially
subscribed to the global requirements process. If we break you, then we
revert stuff or work with you to move forward. If you see the list of projects
in projects.txt, essentially we are guaranteeing that we won't break
any of them.

Step #3, is adding your python packages to global-requirements.txt. If a
project has done Step #1 and #2, the requirements team is obligated to revert
any breaking changes immediately.

Alternatively,
if the requirements process is onerous, then you should consider making sure
that none of the jobs you run are subject to constraints, and that entries for
your project do not appear in any of the files under the requirements team's
purview.

Please note that as we speak, a new team is being formed to take care of things.
So we'll do better with quicker reviews and reverts if needed. We'll also try to
figure out ways and means not to break projects inadvertently.

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-18 Thread Hong Hui Xiao
I updated [1] to auto-delete the dhcp port if there are no other ports. But
after the dhcp port is deleted, the dhcp service is not usable. I can
resume the dhcp service by adding another subnet, but I don't think that is
a good way. Do we need to consider binding the dhcp port to another segment
when deleting the existing one?

[1] https://review.openstack.org/#/c/317358

HongHui Xiao(肖宏辉) PMP®


From:   Carl Baldwin 
To: OpenStack Development Mailing List 

Date:   05/18/2016 11:50
Subject:Re: [openstack-dev] [Neutron][ML2][Routed Networks]




On May 17, 2016 2:18 PM, "Kevin Benton"  wrote:
>
> >I kind of think it makes sense to require evacuating a segment of 
its ports before deleting it.
>
> Ah, I left out an important assumption I was making. We also need to
auto-delete the DHCP port as the segment is deleted. I was thinking this
will basically be like the delete_network case, where we auto-remove
the network-owned ports.
I can go along with that. Thanks for the clarification.
Carl
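
For clarity, the agreed behaviour amounts to something like the following
sketch (function names are illustrative, not Neutron's actual internals):

    def delete_segment(ports_on_segment, delete_port, remove_segment,
                       segment_id):
        for port in list(ports_on_segment):
            if port['device_owner'].startswith('network:'):
                # Network-owned ports (e.g. device_owner 'network:dhcp')
                # are removed automatically, mirroring delete_network.
                delete_port(port['id'])
            else:
                # Any remaining user port blocks the segment delete.
                raise RuntimeError('segment %s still has user ports'
                                   % segment_id)
        remove_segment(segment_id)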
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Austin summit - session recap/summary

2016-05-18 Thread Derek Higgins
On 6 May 2016 at 14:18, Paul Belanger  wrote:
> On Tue, May 03, 2016 at 05:34:55PM +0100, Steven Hardy wrote:
>> Hi all,
>>
>> Some folks have requested a summary of our summit sessions, as has been
>> provided for some other projects.
>>
>> I'll probably go into more detail on some of these topics either via
>> subsequent more focussed threads and/or some blog posts, but what follows is
>> an overview of our summit sessions[1] with notable actions or decisions
>> highlighted.  I'm including some of my own thoughts and conclusions, folks
>> are welcome/encouraged to follow up with their own clarifications or
>> different perspectives :)
>>
>> TripleO had a total of 5 sessions in Austin; I'll cover them one by one:
>>
>> -
>> Upgrades - current status and roadmap
>> -
>>
>> In this session we discussed the current state of upgrades - initial
>> support for full major version upgrades has been implemented, but the
>> implementation is monolithic, highly coupled to pacemaker, and inflexible
>> with regard to third-party extraconfig changes.
>>
>> The main outcomes were that we will add support for more granular
>> definition of the upgrade lifecycle to the new composable services format,
>> and that we will explore moving towards the proposed lightweight HA
>> architecture to reduce the need for so much pacemaker specific logic.
>>
>> We also agreed that investigating use of mistral to drive upgrade workflows
>> was a good idea - currently we have a mixture of scripts combined with Heat
>> to drive the upgrade process, and some refactoring into discrete mistral
>> workflows may provide a more maintainable solution.  Potential for using
>> the existing SoftwareDeployment approach directly via mistral (outside of
>> the heat templates) was also discussed as something to be further
>> investigated and prototyped.
>>
>> We also touched on the CI implications of upgrades - we've got an upgrades
>> job now, but we need to ensure coverage of full release-to-release upgrades
>> (not just commit to commit).
>>
>> ---
>> Containerization status/roadmap
>> ---
>>
>> In this session we discussed the current status of containers in TripleO
>> (which is to say, the container-based compute node which deploys containers
>> via Heat onto an Atomic host node that is also deployed via Heat), and
>> what strategy is most appropriate to achieve a fully containerized TripleO
>> deployment.
>>
>> Several folks from Kolla participated in the session, and there was
>> significant focus on where work may happen such that further collaboration
>> between communities is possible.  To some extent this discussion on where
>> (as opposed to how) proved a distraction and prevented much discussion on
>> supportable architectural implementation for TripleO, thus what follows is
>> mostly my perspective on the issues that exist:
>>
>> Significant uncertainty exists wrt integration between Kolla and TripleO -
>> there's largely consensus that we want to consume the container images
>> defined by the Kolla community, but much less agreement that we can
>> feasibly switch to the ansible-orchestrated deployment/config flow
>> supported by Kolla without breaking many of our primary operator interfaces
>> in a fundamentally unacceptable way, for example:
>>
>> - The Mistral based API is being implemented on the expectation that the
>>   primary interface to TripleO deployments is a parameters schema exposed
>>   by a series of Heat templates - this is no longer true in a "split stack"
>>   model where we have to hand off to an alternate service orchestration tool.
>>
>> - The tripleo-ui (based on the Mistral based API) consumes heat parameter
>>   schema to build its UI, and Ansible doesn't support the necessary
>>   parameter schema definition (such as types and descriptions) to enable
>>   this pattern to be replicated.  Ansible also doesn't provide an HTTP API,
>>   so we'd still have to maintain an API surface for the (non-python) UI to
>>   consume.
>>
>> We also discussed ideas around integration with kubernetes (a hot topic on
>> the Kolla track this summit), but again this proved inconclusive beyond
>> that yes someone should try developing a PoC to stimulate further
>> discussion.  Again, significant challenges exist:
>>
>> - We still need to maintain the Heat parameter interfaces for the API/UI,
>>   and there is also a strong preference to maintain puppet as a tool for
>>   generating service configuration (so that existing operator integrations
>>   via puppet continue to function) - this is a barrier to directly
>>   consuming the kolla-kubernetes effort directly.
>>
>> - A COE layer like kubernetes is a poor fit for deployments where operators
>>   require strict control of service placement (e.g exactly which nodes a 
>> service
>>   runs on, IP address assignments to specific nodes etc) - this is already
>>

Re: [openstack-dev] [Freezer] Replace Gnu Tar with DAR

2016-05-18 Thread Fausto Marzi
Hi Deklan,
what happens if the extract is executed without the --listed-incremental or
--incremental options?

Does the issue still happen?

Thanks,
Fausto

On Sat, May 14, 2016 at 12:56 AM, Dieterly, Deklan 
wrote:

> When using incremental backups, tar will not handle removing a dir and
> then renaming another dir to the removed dir.
>
>
> dek@dek-HP-Z620-Workstation:~/backup-test$ tar --extract
> --listed-incremental=/dev/null --file backup.2.tar
> tar: Cannot rename 'backup/dir1' to 'backup/dir2': Directory not empty
> tar: Exiting with failure status due to previous errors
>
>
>
> Here are the steps to reproduce.
>
> mkdir backup
> mkdir backup/dir1
> mkdir backup/dir2
> echo "aa" > backup/dir1/dir1-file1
> echo "aa" > backup/dir2/dir2-file1
> tar --create --file=backup.tar --listed-incremental=./listed-incr backup
> rm -rf backup/dir2
> mv backup/dir1 backup/dir2
> tar --create --file=backup.2.tar --listed-incremental=./listed-incr backup
> tar --extract --listed-incremental=/dev/null --file backup.tar
> tar --extract --listed-incremental=/dev/null --file backup.2.tar
>
>
> This seems to be a well known, long-standing issue with tar.
> --
> Deklan Dieterly
>
> Senior Systems Software Engineer
> HPE
>
>
>
>
> On 5/13/16, 4:33 PM, "Fox, Kevin M"  wrote:
>
> >What's the issue?
> >
> >From: Dieterly, Deklan [deklan.diete...@hpe.com]
> >Sent: Friday, May 13, 2016 3:07 PM
> >To: openstack-dev@lists.openstack.org
> >Subject: [openstack-dev] [Freezer] Replace Gnu Tar with DAR
> >
> >Does anybody see any issues if Freezer used DAR instead of Gnu Tar? DAR
> >seems to handle a particular use case that Freezer has while Gnu Tar does
> >not.
> >--
> >Deklan Dieterly
> >
> >Senior Systems Software Engineer
> >HPE
> >
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HA] About next HA Team meeting (VoIP)

2016-05-18 Thread Sam P
Hi All,

 Thank you all for attending the meeting.
 You can find the meeting notes at the following etherpad, in the section
[Notes from meeting 2016/05/18 Wed]
 https://etherpad.openstack.org/p/newton-instance-ha

 Next meeting will be on 5/23 Monday (normal IRC meeting).
 http://eavesdrop.openstack.org/#High_Availability_Meeting

--- Regards,
Sampath



On Tue, May 17, 2016 at 8:03 PM, Sam P  wrote:
> Hi All,
>
>  Since no objections were raised, the HA team VoIP meeting will shift to 9am
> UTC on 18th May 2016.
>
> Here are the gotomeeting details for the meeting.
> 1.  Please join the meeting, May 18, 2016 at 18:00 GMT+9 (9am UTC).
> https://global.gotomeeting.com/join/424496645
> 2. You will be connected to audio using your computer's microphone and
> speakers (VoIP).  A headset is recommended.
> Meeting ID: 424-496-645
>
>
>
>
> I would like to add following topic to agenda.
> ---
> Instance HA API use case
>
>   We consider that the following use cases need APIs to manage instance HA
> in operation.
>   Detailed specs and database schema can be found at following link.
>   https://github.com/ntt-sic/masakari/wiki/Masakari-API-Design
>
> [Failover Segment]
> The system can be zoned from top down into Regions, Availability
> Zones and Host Aggregates (or Cells). Within those zones, one or
> more pacemaker/pacemaker-remote clusters may exist. In addition to
> those boundaries, the shared storage boundary is also important for
> deciding the optimal host for failover. OpenStack zone boundaries
> (such as Regions, AZs, Host Aggregates, etc.) can be managed by the
> nova scheduler. However, shared storage boundaries are difficult to
> manage. Moreover, the operator may want to use other types of
> boundary, such as rack layout and power distribution. Therefore, the
> operator may want to define segments of hypervisor hosts and assign
> the failover host(s) for each of them. Those segments can be defined
> based on shared storage boundaries or any other limitations that may
> be critical for selection of the failover host.
>
> [Capacity Reservation]
> Service provider who ensures an uptime of VM instance to their
> customer needs to make sure that the certain amount of host capacity
> are reserved to prepare a failure event. If the host capacity of
> system is full and the host failure happens, the VM on the failure
> host cannot be evacuated to other host. The system capacity is
> typically fragmented into segments due to underlying component’s
> scalability and each segment has a limited capacity. To increase
> resource efficiency, high utilization of host capacity is preferred.
> However, as any user consume resources on demand, the host capacity of
> each segment tends to reach the full if the system doesn’t provides
> the way to reserve the portion of host capacity to the operators.
> Therefore, the function to reserve host capacity for failover event is
> important to ensure the high availability of VM instance.
>
> [Host Maintenance]
> A host has to be temporarily and safely removed from the system for
> maintenance, such as for a hardware failure, a firmware update and so on.
> During the maintenance, the monitoring function on the host should be
> disabled, and monitoring alerts from the host should be ignored so as not
> to trigger any recovery action for the VM instances on the host if it is
> running. The host should be excluded from the reserved hosts as well.
> ---
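
To make the segment idea above a bit more concrete, a failover segment could
be modelled roughly like the sketch below (field names are illustrative only;
see the linked Masakari API design for the actual schema):

    class FailoverSegment(object):
        """A group of hypervisor hosts sharing a failover boundary,
        e.g. shared storage, rack layout or power distribution."""

        def __init__(self, name, description=None):
            self.name = name
            self.description = description
            self.hosts = []           # hosts monitored in this segment
            self.reserved_hosts = []  # spare capacity kept for evacuation

        def add_host(self, hostname, reserved=False, on_maintenance=False):
            host = {'name': hostname,
                    'reserved': reserved,
                    'on_maintenance': on_maintenance}
            (self.reserved_hosts if reserved else self.hosts).append(host)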
> --- Regards,
> Sampath
>
>
>
> On Mon, May 9, 2016 at 8:45 PM, Adam Spiers  wrote:
>> Sam P  wrote:
>>> Hi All,
>>>
>>>  In today's ( 9th May 2016) meeting we agree to skip the next IRC
>>> meeting (which is 16th May 2016)  in favour of a gotomeeting VoIP on
>>> 18th May 2016 (Wednesday).
>>>  Today's meeting logs and summary can be found here.
>>>  http://eavesdrop.openstack.org/meetings/ha/2016/ha.2016-05-09-08.04.html
>>>
>>>  About the meeting Time:
>>>  Everyone was fine with 8:00am UTC.
>>>  However due to some resource allocation issues, I would like to shift
>>> this VoIP meeting to
>>>  9am UTC 18th May 2016
>>>
>>>  Please let me know whether this time slot works for you.
>>
>> That later time is fine for me :)  Thanks!
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]PTL election conclusion

2016-05-18 Thread Shinobu Kinjo
Hi Chaoyi,

Great!

On Wed, May 18, 2016 at 5:31 PM, joehuang  wrote:
> Hello, Team,
>
> From last Monday to today, there was only a single candidate (myself; if I
> missed someone's nomination, please point it out) for the PTL election of
> Tricircle for the Newton release, so according to the community guidelines,
> no election is needed.
>
> So I will serve as the PTL of Tricircle for the Newton release. Thanks
> for your support; let's move Tricircle forward together.

But let me modify your wording a bit, because I'm picky, as you might
have noticed already -;
We do not merely support this project, we contribute to it; at least I do.
Anyhow, we're ready to become awesome!!

Cheers,
Shinobu

>
> Best Regards
> Chaoyi Huang ( Joe Huang )
>
> -Original Message-
> From: joehuang
> Sent: Monday, May 09, 2016 10:08 AM
> To: 'OpenStack Development Mailing List (not for usage questions)'
> Subject: RE: [openstack-dev][tricircle]PTL election of Tricircle for Newton 
> release
>
> Hi, team,
>
> If you want to be the PTL for the Newton release of Tricircle, please send
> your self-nomination letter to the mailing list this week.
>
> Best Regards
> Chaoyi Huang ( Joe Huang )
>
> -Original Message-
> From: joehuang
> Sent: Thursday, May 05, 2016 9:44 AM
> To: 'ski...@redhat.com'; OpenStack Development Mailing List (not for usage 
> questions)
> Subject: [openstack-dev][tricircle]PTL election of Tricircle for Newton 
> release
>
> Hello,
>
> As discussed in yesterday's weekly meeting, the PTL nomination period runs
> from May 9 ~ May 13, with an election from May 16 ~ May 20 if there is more
> than one nomination. If you want to be the PTL for the Newton release of
> Tricircle, please send your self-nomination letter to the mailing list. You
> can refer to the nomination letters of other projects, for example Kuryr[1],
> Glance[2] and Neutron[3]; others can be found in [4]
>
>
> [1]http://git.openstack.org/cgit/openstack/election/plain//candidates/newton/Kuryr/Gal_Sagie.txt
> [2]http://git.openstack.org/cgit/openstack/election/plain//candidates/newton/Glance/Nikhil_Komawar.txt
> [3]http://git.openstack.org/cgit/openstack/election/plain//candidates/newton/Neutron/Armando_Migliaccio.txt
> [4]https://wiki.openstack.org/wiki/PTL_Elections_March_2016
>
> Best Regards
> Chaoyi Huang ( Joe Huang )
>
>
> -Original Message-
> From: Shinobu Kinjo [mailto:shinobu...@gmail.com]
> Sent: Wednesday, May 04, 2016 5:35 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [tricircle] Requirements for becoming approved 
> official project
>
> Hi Team,
>
> There is additional work to become an official (approved) project.
> Once we complete the PTL election with everyone's consensus, we need to update
> projects.yaml. [1] I think that the OSPF (shortest path) to becoming an
> approved project is to elect a PTL, then talk to the PTLs of other projects.
>
> [1] 
> https://github.com/openstack/governance/blob/master/reference/projects.yaml
>
> Cheers,
> Shinobu
>
>
> On Mon, May 2, 2016 at 10:40 PM, joehuang  wrote:
>> Hi, Shinobu,
>>
>> Many thanks for the check on Tricircle becoming an OpenStack project, and
>> to Thierry for the clarification. Glad to know that we are close to the
>> OpenStack official project criteria.
>>
>> Let's discuss the initial PTL election in the weekly meeting, and start the
>> initial PTL election after that if needed.
>>
>> Best Regards
>> Chaoyi Huang ( joehuang )
>> 
>> From: Shinobu Kinjo [shinobu...@gmail.com]
>> Sent: 02 May 2016 18:48
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [tricircle] Requirements for becoming
>> approved official project
>>
>> Hi Thierry,
>>
>> On Mon, May 2, 2016 at 5:45 PM, Thierry Carrez  wrote:
>>> Shinobu Kinjo wrote:

 I guess it's usable. [1] [2] [3], and probably more...

 The reason I can still only guess is that there is a huge amount of
 documentation!!
 It's great work, but too much.
>>>
>>>
>>> We have transitioned most of the documentation off the wiki, but
>>> there are still a number of pages that are not properly deprecated.
>>>
 [1] https://wiki.openstack.org/wiki/PTL_Guide
>>>
>>>
>>> This is now mostly replaced by the project team guide, so I marked
>>> this one as deprecated.
>>
>> Honestly, frankly, we should clean up the deprecated material, since there
>> is a huge amount of documentation -; It's really hard to read every single
>> piece...
>>
>> Anyway better than nothing though.
>>
>>>
>>> As far as initial election goes, if there is a single candidate no
>>> need to organize a formal election. If you need to run one, you can
>>> use CIVS
>>> (http://civs.cs.cornell.edu/) since that is what we use for the
>>> official
>>> elections:
>>> https://wiki.openstack.org/wiki/Election_Officiating_Guidelines
>>
>> Thank you for pointing it out.
>> That is really good advice.
>>
>>>
>>> Regards,
>>>
>>> --
>>> Thierry Carrez (ttx)
>>>
>>>
>>> __

Re: [openstack-dev] [vitrage] Normalized Resource State

2016-05-18 Thread Malin, Eylon (Nokia - IL)
I like your idea Ifat. 

-Original Message-
From: Afek, Ifat (Nokia - IL) [mailto:ifat.a...@nokia.com] 
Sent: Wednesday, May 18, 2016 1:27 PM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: Amir, Tomer (Nokia - IL) 
Subject: Re: [openstack-dev] [vitrage] Normalized Resource State

> -Original Message-
> From: EXT Weyl, Alexey (Nokia - IL) [mailto:alexey.w...@nokia.com]
> 
> Sounds good Ifat and Eylon.
> 
> Maybe we only need to think of a more general name for TERMINATING 
> state (something that will include all end transient states).
> 
> > -Original Message-
> > From: Afek, Ifat (Nokia - IL) [mailto:ifat.a...@nokia.com]
> > Sent: Monday, May 16, 2016 6:29 PM
> >
> > > -Original Message-
> > > From: EXT Malin, Eylon (Nokia - IL) [mailto:eylon.ma...@nokia.com]
> > > Sent: Monday, May 16, 2016 6:09 PM
> > >
> > > Hi all,
> > >
> > > Aggregated_state can have only values from NormalizedResourceState,
> > > which is a constant enum.
> > > Currently the NormalizedResourceState has these values :
> TERMINATED,
> > > ERROR, UNRECOGNIZED ,  SUSPENDED ,   RESCUED ,  RESIZED ,
>  TRANSIENT
> > ,
> > >  SUBOPTIMAL,  RUNNING ,  UNDEFINED.
> > >
> > > I think that 2 states are missing:
> > > 1. IN_MAINTENANCE - which represents that this resource is under
> > > maintenance 2. TERMINATING - which represents a transient state in
> > > which the resource is being terminated.
> >
> > MAINTENANCE state is relevant also for OPNFV Doctor workflow (which
> is
> > currently in discussion), so I think we should add it. We can add 
> > TERMINATING as well.
> >
> > >
> > > And regarding the RUNNING state, maybe ACTIVE is more appropriate.
> > > Horizon also shows an active state for instances, networks and ports.
> >
> > Fine with me to replace RUNNING with ACTIVE.
> >
> > >
> > > What do you think ?
> > >
> > > Eylon Malin
> >
> > Ifat.

Thinking about it again, I'm not sure the normalized state is the right 
direction.
Today it makes sense to add "terminating" and "maintenance", tomorrow it will 
make sense to add two other states, and we will end up with a "unified state" 
that includes all states of all datasources.

If the main purpose of the normalized state is for visualization, then how 
about we have the following:
- AggregatedState, which is the aggregation of the different states. It can 
hold any value, depending on the datasource configuration file.
- SimplifiedState or StateCategory, which will be one of {error, warning, ok, 
unknown}, or in other words red/yellow/green/gray. 
- No normalized state

What do you say?
Ifat.









__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] Sporadic metrics

2016-05-18 Thread tomasz.treb...@ts.fujitsu.com
Hi there,

Erickson, do you know if you will be able to follow up on my current
implementation in either of the components Witek has mentioned?
If you need more background, just send me your questions.

Currently the most crucial part would be the monasca-ui. This change would have
to be created on top of the change for non-deterministic alarms, because it
only makes sense that way. The idea I have for it is to automatically propose
the type of an alarm for the user based on the picked metric:
- non-deterministic for sporadic metrics
- deterministic for regularly reported (non-sporadic) metrics
Of course that is only a proposal, and the user/operator can still make their
own choice about it.

As for the python-monascaclient, most of the changes would be around the metric
commands.
I think there's actually nothing to be done about other commands (especially
those related to alarm definitions). But I have only recently started using
this CLI client, so I am no expert on it.

PS. Please let me know if you have any doubts.

Regards,
Tomasz


From: Witek Bedyk 
Sent: 13 May 2016 17:44
To: Erickson; openstack-dev@lists.openstack.org
Subject: [Monasca] Sporadic metrics

Hi Erickson,

thanks for your interest in our project and sorry for replying so late.

We are happy that you're willing to contribute to the development of
sporadic metrics.

The incentive for this development is to introduce the possibility to
create alarms in Monasca based on event-like metrics. E.g. a metric is
generated every time an error appears in a given log file. Such a
metric is different from a standard Monasca metric in that it
does not appear on a regular, periodic basis. You can read a little
more about it here [1], but the blueprint itself is outdated and the
implementation has changed.

The main logic for alarm creation has been moved to the
"deterministic_alarm" topic [2], and Tomasz is currently working on that. Your
reviews are very welcome.

The changes in "sporadic_metric" topic [3] are meant to give the user
information that a given metric is of sporadic (non-periodic) character.
It will have no influence on monasca-thresh. Tomasz has put the code
that he already had in this topic but he must focus on
"deterministic_alarm" now, so please feel free to submit your patch-sets
if you want. Apart from the existing changes, changes in the monasca-ui
and python-monascaclient repos are also needed to display the 'sporadic'
attribute of the metric for the user. You could compare this change in
monasca-ui [4].

If you have questions to the implementation of "sporadic metrics", I
think the existing changes in gerrit [3] are the best place for exchange
and communication at the moment. Actually I should have updated the
blueprint for that. Sorry, my fault.

So, have fun with the code :)


Cheers
Witek


[1] https://blueprints.launchpad.net/monasca/+spec/alarmsonlogs
[2] https://review.openstack.org/#/q/topic:deterministic_alarm
[3] https://review.openstack.org/#/q/topic:sporadic_metric
[4]
https://review.openstack.org/#/c/314420/3/monitoring/alarmdefs/templates/alarmdefs/_detail.html


P.S. Please note that I will be on holiday for the next two weeks.


--
FUJITSU Enabling Software Technology GmbH
Schwanthalerstr. 75a, 80336 München

Telefon: +49 89 360908-547
Telefax: +49 89 360908-8547
COINS: 7941-6547

Sitz der Gesellschaft: München
AG München, HRB 143325
Geschäftsführer: Dr. Yuji Takada, Hans-Dieter Gatzka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] Normalized Resource State

2016-05-18 Thread Afek, Ifat (Nokia - IL)
> -Original Message-
> From: EXT Weyl, Alexey (Nokia - IL) [mailto:alexey.w...@nokia.com]
> 
> Sounds good Ifat and Eylon.
> 
> Maybe we only need to think of a more general name for TERMINATING
> state (something that will include all end transient states).
> 
> > -Original Message-
> > From: Afek, Ifat (Nokia - IL) [mailto:ifat.a...@nokia.com]
> > Sent: Monday, May 16, 2016 6:29 PM
> >
> > > -Original Message-
> > > From: EXT Malin, Eylon (Nokia - IL) [mailto:eylon.ma...@nokia.com]
> > > Sent: Monday, May 16, 2016 6:09 PM
> > >
> > > Hi all,
> > >
> > > Aggregated_state can have only values from NormalizedResourceState,
> > > which is a constant enum.
> > > Currently the NormalizedResourceState has these values :
> TERMINATED,
> > > ERROR, UNRECOGNIZED ,  SUSPENDED ,   RESCUED ,  RESIZED ,
>  TRANSIENT
> > ,
> > >  SUBOPTIMAL,  RUNNING ,  UNDEFINED.
> > >
> > > I think that 2 states are missing:
> > > 1. IN_MAINTENANCE - which represents that this resource is under
> > > maintenance 2. TERMINATING - which represents a transient state in
> > > which the resource is being terminated.
> >
> > MAINTENANCE state is relevant also for OPNFV Doctor workflow (which
> is
> > currently in discussion), so I think we should add it. We can add
> > TERMINATING as well.
> >
> > >
> > > And regarding the RUNNING state, maybe ACTIVE is more appropriate. Horizon
> > > also shows an active state for instances, networks and ports.
> >
> > Fine with me to replace RUNNING with ACTIVE.
> >
> > >
> > > What do you think ?
> > >
> > > Eylon Malin
> >
> > Ifat.

Thinking about it again, I'm not sure the normalized state is the right 
direction.
Today it makes sense to add "terminating" and "maintenance", tomorrow it will 
make sense to add two other states, and we will end up with a "unified state" 
that includes all states of all datasources.

If the main purpose of the normalized state is for visualization, then how 
about we have the following:
- AggregatedState, which is the aggregation of the different states. It can 
hold any value, depending on the datasource configuration file.
- SimplifiedState or StateCategory, which will be one of {error, warning, ok, 
unknown}, or in other words red/yellow/green/gray. 
- No normalized state

What do you say?
Ifat.
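
As a rough illustration of this two-level split, the category could stay a
small fixed set while the aggregated state remains free-form (hypothetical
names, not Vitrage's actual code):

    class StateCategory(object):
        """Fixed, visualization-oriented set (red/yellow/green/gray)."""
        ERROR = 'error'
        WARNING = 'warning'
        OK = 'ok'
        UNKNOWN = 'unknown'

    def categorize(aggregated_state, datasource_conf):
        # aggregated_state may hold any datasource-specific value; the
        # datasource configuration maps it onto one of four categories.
        return datasource_conf.get(aggregated_state, StateCategory.UNKNOWN)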









__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Reason for installing devstack in CI pipeline

2016-05-18 Thread Derek Higgins
On 16 May 2016 at 18:16, Paul Belanger  wrote:
> Greetings,
>
> Over the last few weeks I've been head deep into understanding the TripleO CI
> pipeline.  For the most part, I am happy that we have merged centos-7 DIB
> support and I'm working to migrate the jobs to it.
>
> Something I have been trying to figure out, is why does the pipeline install
> devstack?  I cannot see anything currently in the toci_gate.sh script that is
> referencing devstack.  Everything seems to be related to launching the 
> external
> node.
>
> So, my question is, what is devstack doing?

Can you elaborate what you mean when you say we're installing
devstack, we currently make use of some of the devstack-gate scripts.
The main need for this is so that it clones the correct version of each
project to /opt/stack/new, the tripleo ci job then looks at
ZUUL_CHANGES[1] to get a list of projects being tested and uses those
git repositories to build rpm packages from them. The rest of the CI
job then uses these RPMs (layered on top of RDO trunk repositories) so
they end up installed where appropriate.

[1] - 
http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/toci_instack.sh#n103
>
> ---
> Paul
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Aodh upgrades - Request backport exception for stable/liberty

2016-05-18 Thread Jiří Stránský

On 16.5.2016 23:54, Pradeep Kilambi wrote:

On Mon, May 16, 2016 at 3:33 PM, James Slagle 
wrote:


On Mon, May 16, 2016 at 10:34 AM, Pradeep Kilambi  wrote:

Hi Everyone:

I wanted to start a discussion around considering backporting Aodh to
stable/liberty for upgrades. We have been discussing quite a bit what's
the best way for our users to upgrade ceilometer alarms to Aodh when moving
from liberty to Mitaka. A quick refresher on what changed: in Mitaka,
ceilometer alarms were replaced by Aodh, so the only way to get alarms
functionality is to use aodh. Now when the user kicks off upgrades from
liberty to Mitaka, we want to make sure alarms continue to function as
expected during the process, which could take multiple days. To accomplish
this I propose the following approach:

* Backport Aodh functionality to stable/liberty. Note, Aodh functionality is
backwards compatible, so with Aodh running, ceilometer api and client will
redirect requests to the Aodh api. So this should not impact existing users
who are using the ceilometer api or client.

* As part of Aodh deployed via heat stack update, ceilometer alarms services
will be replaced by openstack-aodh-*. This will be done by the puppet apply
as part of the stack convergence phase.

* Add checks in the Mitaka pre-upgrade steps when overcloud install kicks
off to check and warn the user to update to liberty + aodh, to ensure aodh is
running. This will ensure heat stack update is run and, if alarming is used,
Aodh is running as expected.

The upgrade scenarios between various releases would work as follows:

Liberty -> Mitaka

* Upgrade starts with ceilometer alarms running
* A pre-flight check will kick in to make sure Liberty is upgraded to
liberty + aodh with stack update
* Run heat stack update to upgrade to aodh
* Now ceilometer alarms should be removed and Aodh should be running
* Proceed with mitaka upgrade
* End result: Aodh continues to run as expected

Liberty + aodh -> Mitaka:

* Upgrade starts with Aodh running
* A pre-flight check will kick in to make sure Liberty is upgraded to Aodh
with stack update
* Confirming Aodh is indeed running, proceed with the Mitaka upgrade with
Aodh running
* End result: Aodh continues to run as expected
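
As a sketch of what such a pre-flight check could boil down to (purely
illustrative; the endpoint handling and pass criteria here are assumptions,
not the proposed implementation):

    import requests

    def aodh_api_reachable(aodh_url, token):
        # Consider it safe to proceed only if the Aodh API answers.
        try:
            resp = requests.get(aodh_url,
                                headers={'X-Auth-Token': token},
                                timeout=10)
            return resp.status_code < 500
        except requests.RequestException:
            return False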


This seems to be a good way to get the upgrades working for aodh. Other less
effective options I can think of are:

1. Let the Mitaka upgrade kick off and do "yum update", which replaces aodh
during migration; alarm functionality will be down until puppet converge
runs and configures Aodh. This means alarms will be down during the upgrade,
which is not ideal.

2. During Mitaka upgrades, replace with Aodh and add a bash script that
fully configures Aodh and ensures aodh is functioning. This will involve
significant work and result in duplicating everything puppet does today.


How much duplication would this really be? Why would it have to be in bash?



Well, pretty much the entire aodh configuration will need to happen. Here is
what we do in devstack, something along these lines [1]. So in short, we'll
need to install packages, create users, configure db and coordination backends,
and configure the api to run under mod_wsgi. Sure, it doesn't have to be bash;
I assumed that would be easiest to invoke during upgrades.





Could it be:

Liberty -> Mitaka

* Upgrade starts with ceilometer alarms running
* Add a new hook for the first step of Mitaka upgrade that does:
  ** sets up mitaka repos
  ** migrates from ceilometer alarms to aodh, can use puppet
  ** ensures aodh is running
* Proceed with rest of mitaka upgrade

At most, it seems we'd have to surround the puppet apply with some
pacemaker commands to possibly set maintenance mode and migrate
constraints.

The puppet manifest itself would just be the includes and classes for aodh.




Yeah, I guess we could do something like this; I'm not fully clear on the
details of how and when this would be called. But with the below caveat you
mentioned already.



Yes, this is a possibility. It's still not fully utilizing the Puppet we
have for deployment; we'd have at least a custom manifest, but hopefully
it wouldn't be too big.


In case the AODH classes include some other Puppet classes from their 
code, we could end up applying more config changes than desired in this 
phase and break something. I'm hoping that this is more of a theoretical 
concern rather than practical, but probably deserves some verification.







One complication might be that the aodh packages from Mitaka might
pull in new deps that require updating other OpenStack services,
which we wouldn't yet want to do. That is probably worth confirming
though.



Yeah, we will be pulling in at least some new oslo deps and client libraries
for sure. But wouldn't yum update during the upgrades do that anyway? Or
would aodh setup run before the yum update phase of the upgrade process?


Good question :) We could probably also do it in the middle of the
controller update phase, between step 1 (stop services + package update)
and step 2 (start services).

Re: [openstack-dev] [glance] Cross project and all other liaisons

2016-05-18 Thread Erno Kuvaja
If there are changes needed and you haven't been editing the wiki pages
before, let us know. Infra does not accept new accounts at the moment due
to a huge amount of spamming, so if you haven't registered before, you can't
do it now.

- Erno

On Tue, May 17, 2016 at 4:45 PM, Nikhil Komawar 
wrote:

> Team,
>
>
> Please make sure the Cross project liaisons page [1] is up to date with
> your Newton commitments.
>
>
> * If you are planning to stick around with certain duties and your name
> is already on the wiki page [1] , please make sure the contact
> information is up-to-date. Also, please note any changes that may have
> happened on the responsibilities mentioned in the wiki.
>
>
> * If you no longer plan to contribute to a specific duty that you
> signed up for earlier, or if you wish to change your duties (for
> example if you wish to be a QA liaison and you have been a release
> liaison until now), please let me know first and, after the ack, update
> the page [1] -- this can be racy (two or more people interested in the
> same role), so we are trying to avoid conflicts of interest.
>
>
> * If you are interested in contributing to any of the roles or are
> interested in signing up for more than one role, you are welcome to do
> so. You do not need to be a core in the project for this sign-up but
> cores are encouraged to sign-up. If you are looking to make your way
> into the core group, this could be a good opportunity as well. The
> responsibilities for each role have been mentioned in the wiki [1]. But
> the first and foremost responsibility is being the primary point of
> contact of Glance for that cross-group and keep a regular sync with the
> PTL about the updates. You are strongly encouraged to mention updates at
> our weekly meetings (except in some cases like security liaisons where
> you will send updates privately via email to the glance-core-sec group
> or sometimes to the VMT team).
>
>
> All the names against these roles will be considered final tomorrow
> (Wednesday May 18th 23:59 UTC).
>
>
> Let me know if you have concerns.
>
>
> [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons
>
>
>
>
> --
>
> Thanks,
> Nikhil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Neutron plugins that do not have L3 agents

2016-05-18 Thread Gary Kotton
Hi,
The patch [i] broke all of the Neutron plugins that do not run L3 agents. This
was due to the fact that different plugins expected different values for
Q_L3_ENABLED. The revert for that patch has been blocked [ii]. In order to
get out of this pickle we have proposed the following:

1.   Decouple tempest from neutron flags [iii]

2.   Enable plugins to define whether they support L3 APIs (this does not mean
that they need to run L3 agents) [iv]
I hope that 1 and 2 will suffice to enable us to get the CIs back up and
running again.
Thanks
Gary


i. https://review.openstack.org/#/c/315995/

ii.   https://review.openstack.org/#/c/316660/

iii.  https://review.openstack.org/#/c/317754

iv.  https://review.openstack.org/#/c/317877/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rdo-list] [tripleo] How to setup hyper-converged nodes (compute+ceph)?

2016-05-18 Thread Giulio Fidente

On 05/18/2016 04:13 AM, Gerard Braad wrote:

Hi all,


Here is an update on the status of this message:

On Mon, May 16, 2016 at 9:55 AM, Gerard Braad  wrote:

we would like
to deploy Compute nodes with Ceph installed on them. This will
probably be a change to the tripleo-heat-templates and the compute,
and cephstorage resources


I noticed a review enabling the deployment of Ceph OSDs on the compute
node: https://review.openstack.org/#/c/273754/5
At the moment, it is marked as Workflow -1 due to possible
implementation of this feature by composable roles.


hi, that change is most probably working fine as-is, but we are in the
process of migrating service definitions into more isolated roles, and
that would supersede the existing submission


there is some WIP (changes Ie92b25a9c68a76b6d92abedef31e8039b16d9863 and 
I1921115cb6218c7554348636c404245c79937673) to migrate the Ceph 
mon/client/osd services into isolated roles, but it's far from landing at 
this point because we are missing some additional machinery in the 
templates to distribute roles across more nodes.


one way or another this should be possible in Newton though, so I suggest 
keeping track of all three submissions; we could fall back on the initial 
submission should the Ceph migration prove problematic.

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-18 Thread Michał Dulko
On 05/17/2016 09:40 PM, Crag Wolfe wrote:
 


> That helps a lot, thanks! You are right, it would have to be a 3-step
> upgrade to avoid the issue you mentioned in 6.
>
> Another thing I am wondering about: if my particular object is not
> exposed over RPC, is it worth making it a full-blown o.vo or not? I.e., I
> can do the 3 steps over 3 releases just in the object's .py file -- what
> additional value do I get from o.vo?

Unfortunately Zane's right - none. For DB schema compatibility you can
benefit from o.vo if you have something like nova-conductor: a service
that is upgraded atomically and is able to backport objects to previous
versions while reading data from the newer DB schema. It also assumes
there are no direct DB accesses in your n-cpu-like services.

o.vo are mostly useful in Cinder to model dictionaries sent over RPC
(like request_spec), which we backport if there are older versions of
services in the deployment. Versioning and well-defining these dict blobs
is essential to control compatibility. Sending a whole o.vo instead of a
plain id in RPC methods can also give you more flexibility in complicated
compatibility situations, but it turns out that in Cinder we haven't yet
hit a case where that would be useful.
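
For illustration, a minimal sketch of modelling such a dict blob with o.vo
(the class and fields below are hypothetical, not taken from Heat or
Cinder):

    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields


    @base.VersionedObjectRegistry.register
    class FakeRequestSpec(base.VersionedObject):
        # Version 1.0: initial version
        # Version 1.1: added 'retry_count'
        VERSION = '1.1'

        fields = {
            'volume_id': fields.UUIDField(),
            'retry_count': fields.IntegerField(default=0),
        }

        def obj_make_compatible(self, primitive, target_version):
            super(FakeRequestSpec, self).obj_make_compatible(
                primitive, target_version)
            # A peer that only understands 1.0 must not receive the
            # field it does not know about.
            if target_version == '1.0':
                primitive.pop('retry_count', None)

The sender then calls obj_to_primitive(target_version='1.0') whenever an
older service is detected in the deployment, and the newer field is dropped
on the way out.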

> I'm also shying away from the idea of allowing for config-driven
> upgrades. The reason is, suppose an operator updates a config, then does
> a rolling restart to go from X to X+1. Then again (and probably again)
> as needed. Everything works great, run a victory lap. A few weeks later,
> some ansible or puppet automation accidentally blows away the config
> value saying that heat-engine should be running at the X+3 version for
> my_object. Ouch. Probably unlikely, but more likely than say
> accidentally deploying a .py file from three releases ago.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gate] Re: [fixtures] [Neutron][Stable][Liberty][CLI] Gate broken

2016-05-18 Thread Ihar Hrachyshka

> On 18 May 2016, at 09:55, Ihar Hrachyshka  wrote:
> 
> 
>> On 18 May 2016, at 05:31, Darek Smigiel  wrote:
>> 
>> Hello Stable Maint Team,
>> It seems that the python-neutronclient gate for Liberty is broken [1][2] by 
>> an update to keystoneclient. The OpenStack proposal bot already sent an 
>> update to the requirements [3], but it needs to be merged.
>> If you have enough power, please unblock the gate.
>> 
>> Thanks,
>> Darek
>> 
>> [1] https://review.openstack.org/#/c/296580/
>> [2] https://review.openstack.org/#/c/296576/
>> [3] https://review.openstack.org/#/c/258336/
> 
> Right.
> 
> I actually looked at the requirements update yesterday, and the problem is 
> that it also fails in gate due to fixtures 2.0.0 being used in client gate, 
> and apparently misbehaving for python3:
> 
> http://logs.openstack.org/36/258336/4/check/gate-python-neutronclient-python34/668ca60/testr_results.html.gz
> 
> This failure occurs only when the failing tests are executed by the same 
> python thread after any CLITestV20Base based test (like 
> CLITestV20ExtensionJSONChildResource). The base class mocks out 
> neutronclient.v2_0.client.Client.get_attr_metadata using 
> fixtures.MonkeyPatch, and apparently the patch is not cleaned up properly by 
> fixtures 2.0.0.
> 
> The reason why it fails for Liberty only is that in Mitaka+, we don’t call 
> this patched method in the course of the failing test runs, and hence don’t 
> trigger the issue.
> 
> The easiest way to solve that is by switching from fixtures to mock for the 
> monkey patch. It indeed solves the issue. If we go this route, ideally, we 
> would probably do the same for all branches starting from master, even if the 
> issue is not currently triggered there.
> 
> I would like to hear from client folks whether it’s a reasonable approach 
> here to just switch to mock and backport, or we want to stick to fixtures and 
> bring the issue with fixtures authors. Note that in neutron, we were already 
> hit by the release monkey patch breakage before, and switched to using mock 
> in base test class:
> 
> https://review.openstack.org/#/c/302997/
> 
> ===
> 
> Now, the question is why the new fixtures release broke us. In Liberty, we 
> already have constraints in place, right? Not really. For clients, we have 
> not applied them (even in master). I made initial attempt to do it, by adding 
> -c… to install_command in the repo, but was hit by an issue:
> 
> Obtaining file:///home/vagrant/git/python-neutronclient
> Could not satisfy constraints for 'python-neutronclient': installation from 
> path or url cannot be constrained to a version
> 
> This happens because we have usedevelop = True in tox.ini, so it tries to 
> install the client from the repo path. But since upper-constraints.txt also 
> contains the client version pin, pip detects version conflict and fails. This 
> does not happen for neutron where we also use usedevelop = True because 
> neutron package version is not tracked in openstack/requirements and is not 
> pinned in upper-constraints.txt.
> 
> In devstack, before installing a library from git, we modify the provided 
> constraints file, by replacing the library version pin with file:// 
> definition:
> 
> http://git.openstack.org/cgit/openstack/requirements/tree/README.rst#n160
> 
> To make it work for tox based jobs, we would need to apply the same strategy 
> as part of install_command for all clients. Meaning, we would need a hack 
> similar to tox_install.sh found in neutron-*aas repos. We would also need to 
> install openstack/requirements as part of the process to get access to 
> edit-constraint tool.

UPD: this patch implements the approach: https://review.openstack.org/317909

I suspect other clients will need to handle similar issues, so I wrote the 
script in a way that should be easy to adopt for other repos.

> 
> Ihar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

2016-05-18 Thread Paul Bourke

+1 thanks for all the work Mauricio

On 17/05/16 20:00, Steven Dake (stdake) wrote:

Hello core reviewers,

I am proposing Mauricio (mlima on irc) for the core review team.  He has
done a fantastic job reviewing, appearing in the middle of the pack over
90 days [1] and at #2 over 45 days [2].  His IRC participation is also
fantastic, and he does a good job on technical work, including
implementing Manila from zero experience :) as well as code cleanup all
over the code base and documentation.  Consider my proposal a +1 vote.

I will leave voting open for 1 week, until May 24th.  Please vote +1
(approve), –2 (veto), or abstain.  I will close voting early if there is
a veto vote or a unanimous vote is reached.

Thanks,
-steve

[1] http://stackalytics.com/report/contribution/kolla/90
[2] http://stackalytics.com/report/contribution/kolla/45


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]weekly meeting of May. 18

2016-05-18 Thread joehuang
Hi,

IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting, every 
Wednesday starting at 13:00 UTC.

After several rounds of review of 'cross pod L2 networking' and 'dynamic pod 
binding', the implementation of these features is much clearer. So today we 
will have a short discussion and progress review on these two features, and 
also check and discuss the recently reported bugs if needed.

Agenda:
# implementation discussion of 'cross pod L2 networking' and 'dynamic pod 
binding'
# bugs review

If you have other topics to be discussed in the weekly meeting, please reply 
to this mail.

Best Regards
Chaoyi Huang ( Joe Huang )

From: joehuang
Sent: Wednesday, May 11, 2016 3:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev][tricircle]weekly meeting of May. 11

Hi,

IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting, every 
Wednesday starting at 13:00 UTC.

Agenda:
# cross pod L2 networking, Local Network/Shared VLAN network
# dynamic pod binding
# features required for tempest

If you have other topics to be discussed in the weekly meeting, please reply 
to this mail.



Best Regards
Chaoyi Huang ( Joe Huang )

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]PTL election conclusion

2016-05-18 Thread joehuang
Hello, Team,

From last Monday to today there has been only a single candidate (myself; if I 
missed someone's nomination, please point it out) for the PTL election of 
Tricircle for the Newton release, so according to the community guidelines no 
election is needed.

So I will serve as the PTL of Tricircle for the Newton release. Thanks for 
your support; let's move Tricircle forward together.

Best Regards
Chaoyi Huang ( Joe Huang )

-Original Message-
From: joehuang 
Sent: Monday, May 09, 2016 10:08 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
Subject: RE: [openstack-dev][tricircle]PTL election of Tricircle for Newton 
release

Hi, team,

If you want to be the PTL for the Newton release of Tricircle, please send your 
self-nomination letter to the mailing list this week.

Best Regards
Chaoyi Huang ( Joe Huang )

-Original Message-
From: joehuang
Sent: Thursday, May 05, 2016 9:44 AM
To: 'ski...@redhat.com'; OpenStack Development Mailing List (not for usage 
questions)
Subject: [openstack-dev][tricircle]PTL election of Tricircle for Newton release

Hello,

As discussed in yesterday's weekly meeting, the PTL nomination period runs 
from May 9 to May 13, with an election from May 16 to May 20 if there is more 
than one nomination. If you want to be the PTL for the Newton release of 
Tricircle, please send your self-nomination letter to the mailing list. You 
can refer to the nomination letters of other projects, for example Kuryr [1], 
Glance [2], and Neutron [3]; others can be found in [4].


[1]http://git.openstack.org/cgit/openstack/election/plain//candidates/newton/Kuryr/Gal_Sagie.txt
[2]http://git.openstack.org/cgit/openstack/election/plain//candidates/newton/Glance/Nikhil_Komawar.txt
[3]http://git.openstack.org/cgit/openstack/election/plain//candidates/newton/Neutron/Armando_Migliaccio.txt
[4]https://wiki.openstack.org/wiki/PTL_Elections_March_2016

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Shinobu Kinjo [mailto:shinobu...@gmail.com]
Sent: Wednesday, May 04, 2016 5:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle] Requirements for becoming approved 
official project

Hi Team,

There is additional work to do to become an official (approved) project.
Once we complete the PTL election with everyone's consensus, we need to update 
projects.yaml. [1] I think that the shortest path (OSPF, if you will) to 
becoming an approved project is to elect a PTL, then talk to the PTLs of 
other projects.

[1] https://github.com/openstack/governance/blob/master/reference/projects.yaml

Cheers,
Shinobu


On Mon, May 2, 2016 at 10:40 PM, joehuang  wrote:
> Hi, Shinobu,
>
> Many thanks for checking Tricircle's readiness to be an OpenStack project, 
> and to Thierry for the clarification. Glad to know that we are close to the 
> OpenStack official project criteria.
>
> Let's discuss the initial PTL election in the weekly meeting, and start the 
> initial PTL election after that if needed.
>
> Best Regards
> Chaoyi Huang ( joehuang )
> 
> From: Shinobu Kinjo [shinobu...@gmail.com]
> Sent: 02 May 2016 18:48
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [tricircle] Requirements for becoming 
> approved official project
>
> Hi Thierry,
>
> On Mon, May 2, 2016 at 5:45 PM, Thierry Carrez  wrote:
>> Shinobu Kinjo wrote:
>>>
>>> I guess it's usable. [1] [2] [3], and probably more...
>>>
>>> The reason why I still can only guess is that there is a huge amount of 
>>> documentation!!
>>> It's great work, but there is too much of it.
>>
>>
>> We have transitioned most of the documentation off the wiki, but 
>> there are still a number of pages that are not properly deprecated.
>>
>>> [1] https://wiki.openstack.org/wiki/PTL_Guide
>>
>>
>> This is now mostly replaced by the project team guide, so I marked 
>> this one as deprecated.
>
> Honestly, we should clean up the deprecated pages, since there is so much 
> documentation; it's really hard to read every single piece...
>
> Anyway better than nothing though.
>
>>
>> As far as initial election goes, if there is a single candidate no 
>> need to organize a formal election. If you need to run one, you can 
>> use CIVS
>> (http://civs.cs.cornell.edu/) since that is what we use for the 
>> official
>> elections: 
>> https://wiki.openstack.org/wiki/Election_Officiating_Guidelines
>
> Thank you for pointing it out.
> That is really good advice.
>
>>
>> Regards,
>>
>> --
>> Thierry Carrez (ttx)
>>
>>
>> _
>> _ OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Email:
> shin...@linux.com
> GitHub:
> shinobu-x
> Blog:
> Life with Distributed Computational System based on OpenSource
>

[openstack-dev] [vitrage] Yaml file validation for Static Physical datasource

2016-05-18 Thread Weyl, Alexey (Nokia - IL)
Hi Liat,

I have noticed that you are working on the validation of the template YAML 
files.

Can you please write the validation for the static physical YAML files as well?
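
In case it helps as a starting point, here is a minimal sketch of schema-based
YAML validation in Python (the schema, keys and file name are invented for
illustration, not Vitrage's actual definitions):

    import yaml
    from jsonschema import validate

    STATIC_PHYSICAL_SCHEMA = {
        'type': 'object',
        'properties': {
            'entities': {'type': 'array'},
        },
        'required': ['entities'],
    }

    with open('static_physical.yaml') as f:
        # Raises jsonschema.ValidationError on a malformed file.
        validate(yaml.safe_load(f), STATIC_PHYSICAL_SCHEMA)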

Alexey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][oslo.messaging] stable/mitaka is broken by oslo.messaging 5.0.0

2016-05-18 Thread Roman Dobosz
On Tue, 17 May 2016 21:41:11 -0700
Joshua Harlow  wrote:

> >> Options I see:
> >> Constrain oslo.messaging in global-requirements.txt for
> >> stable/mitaka with 4.6.1. Hard to do since it requires wide
> >> cross-project coordination.
> >> Remove that hack in stable/mitaka as we did with master. It may
> >> be bad because this was wanted very much by some of the users
> >>
> >> Not sure what else we can do.
> >
> > You could set up your test jobs to use the upper-constraints.txt
> > file in
> > the requirements repo.
> >
> > What was the outcome of the discussion about adding the
> > at-least-once
> > semantics to oslo.messaging?
>
> So there are a few options I am seeing so far (there might be more
> that I don't see also), others can hopefully correct me if they are
> wrong (which they might be) ;)
>
> Option #1
>
> Oslo.messaging (and the dispatcher part that does this) stays as is,
> doing at-most-once for RPC (notifications are in a different
> category here so let's not discuss them) and doing at-most-once well
> and battle-hardened (its current goal) across the various backend
> drivers it supports.
>
> At that point at-least-once will have to be done via some other library
> where this kind of semantics can be placed, that might be tooz via
> https://review.openstack.org/#/c/260246/ (which has similar
> semantics, but is not based on a kind of RPC, instead it's more like
> a job-queue).
>
> Option #2
>
> Oslo.messaging (and the dispatcher part that does this) changes
> (possibly allowing it to be replaced with a different type of
> dispatcher, ie like in https://review.openstack.org/#/c/314732/);
> the default class continues being great for RPC (notifications
> are in a different category here so let's not discuss them) and
> doing at-most-once well and battle-hardened (its current goal)
> across the various backend drivers it supports. If people want to
> provide an alternate class with different semantics they are
> somewhat on their own (but at least they can do this).
>
> Issues raised: this though may not be wanted, as some of the
> oslo.messaging folks do not want the dispatcher class to be exposed
> at all (and would prefer to make it totally private, so exposing it
> would be against that goal); though people are already 'hacking'
> this kind of functionality in, so it might be the best we can get at
> the current time?
>
> Option #3
>
> Do nothing.
>
> Issues raised: every time oslo.messaging changes this *mostly*
> internal dispatcher API, a project will have to make a new 'hack' to
> replace it and hope that the semantics it has 'hacked' in will
> continue to be compatible with the various drivers in
> oslo.messaging. Not, IMHO, a sustainable way to keep on working (and
> I'd be wary of doing this in a project if I were the owner of one,
> because it's, ummm, 'dirty').
>
> My thoughts on what could work:
>
> What I'd personally like to see is a mix of option #1 and #2, where
> we have commitment from folks (besides myself, lol) to work on
> option #1 and we temporarily move forward with option #2 with a
> strict statement that the functionality we would be enabling will
> only exist for, say, a single release (and then it will be removed).
>
> Thoughts from others?

Option #4

(Which might be obvious) Directly use a RabbitMQ driver, like
pika/kombu, which can expose all of the message queue features to the
project.

Issues raised: pushback from the community due to not using
oslo.messaging, and the potential necessity of implementing this for
other drivers/transports, or forcing a particular message queue/driver
on every project.
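
As a rough illustration of what option #4 enables, here is an
at-least-once consumer written directly against kombu (the broker URL,
queue names and do_work() are placeholders, not code from any project):

    from kombu import Connection, Exchange, Queue

    task_exchange = Exchange('tasks', type='direct')
    task_queue = Queue('tasks', task_exchange, routing_key='tasks')

    def do_work(body):
        print('processing %r' % (body,))  # stand-in for real work

    def handle(body, message):
        do_work(body)    # process the payload first ...
        message.ack()    # ... and ack only afterwards; if the worker
                         # dies mid-processing, the broker redelivers.

    with Connection('amqp://guest:guest@localhost//') as conn:
        with conn.Consumer(task_queue, callbacks=[handle]):
            while True:
                conn.drain_events()

Acking only after the work succeeds is the at-least-once behaviour this
thread is about; oslo.messaging's RPC dispatcher does not expose that
choice, hence the hacks discussed above.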

-- 
Cheers,
Roman Dobosz

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fixtures] [Neutron][Stable][Liberty][CLI] Gate broken

2016-05-18 Thread Ihar Hrachyshka

> On 18 May 2016, at 09:55, Ihar Hrachyshka  wrote:
> 
> 
>> On 18 May 2016, at 05:31, Darek Smigiel  wrote:
>> 
>> Hello Stable Maint Team,
>> It seems that the python-neutronclient gate for Liberty is broken [1][2] by 
>> an update to keystoneclient. The OpenStack proposal bot already sent an 
>> update to the requirements [3], but it needs to be merged.
>> If you have enough power, please unblock the gate.
>> 
>> Thanks,
>> Darek
>> 
>> [1] https://review.openstack.org/#/c/296580/
>> [2] https://review.openstack.org/#/c/296576/
>> [3] https://review.openstack.org/#/c/258336/
> 
> Right.
> 
> I actually looked at the requirements update yesterday, and the problem is 
> that it also fails in gate due to fixtures 2.0.0 being used in client gate, 
> and apparently misbehaving for python3:
> 
> http://logs.openstack.org/36/258336/4/check/gate-python-neutronclient-python34/668ca60/testr_results.html.gz
> 
> This failure occurs only when the failing tests are executed by the same 
> python thread after any CLITestV20Base based test (like 
> CLITestV20ExtensionJSONChildResource). The base class mocks out 
> neutronclient.v2_0.client.Client.get_attr_metadata using 
> fixtures.MonkeyPatch, and apparently the patch is not cleaned up properly by 
> fixtures 2.0.0.
> 
> The reason why it fails for Liberty only is that in Mitaka+, we don’t call 
> this patched method in the course of the failing test runs, and hence don’t 
> trigger the issue.
> 
> The easiest way to solve that is by switching from fixtures to mock for the 
> monkey patch. It indeed solves the issue. If we go this route, ideally, we 
> would probably do the same for all branches starting from master, even if the 
> issue is not currently triggered there.
> 
> I would like to hear from client folks whether it’s a reasonable approach 
> here to just switch to mock and backport, or we want to stick to fixtures and 
> bring the issue with fixtures authors. Note that in neutron, we were already 
> hit by the release monkey patch breakage before, and switched to using mock 
> in base test class:
> 
> https://review.openstack.org/#/c/302997/

UPD: I pushed some patches for master that should be merged and backported back 
to stable/liberty to fix python3 gate there: 
https://review.openstack.org/#/q/topic:bug/1583029 [assuming we are fine with 
switching to mock there, for which I don’t see any reason not to].
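
For reference, the switch is essentially of this shape (a sketch only; the
patched method name comes from the analysis quoted above, and the real base
class code may differ):

    import mock
    import testtools

    def fake_get_attr_metadata(self):
        return {}  # stand-in for the test double actually used

    class CLITestV20Base(testtools.TestCase):
        def setUp(self):
            super(CLITestV20Base, self).setUp()
            # Previously: self.useFixture(fixtures.MonkeyPatch(...)),
            # which fixtures 2.0.0 does not appear to restore properly.
            patcher = mock.patch(
                'neutronclient.v2_0.client.Client.get_attr_metadata',
                fake_get_attr_metadata)
            patcher.start()
            # mock restores the original reliably when stop() is
            # registered as a cleanup:
            self.addCleanup(patcher.stop)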

> 
> ===
> 
> Now, the question is why the new fixtures release broke us. In Liberty, we 
> already have constraints in place, right? Not really. For clients, we have 
> not applied them (even in master). I made initial attempt to do it, by adding 
> -c… to install_command in the repo, but was hit by an issue:
> 
> Obtaining file:///home/vagrant/git/python-neutronclient
> Could not satisfy constraints for 'python-neutronclient': installation from 
> path or url cannot be constrained to a version
> 
> This happens because we have usedevelop = True in tox.ini, so it tries to 
> install the client from the repo path. But since upper-constraints.txt also 
> contains the client version pin, pip detects version conflict and fails. This 
> does not happen for neutron where we also use usedevelop = True because 
> neutron package version is not tracked in openstack/requirements and is not 
> pinned in upper-constraints.txt.
> 
> In devstack, before installing a library from git, we modify the provided 
> constraints file, by replacing the library version pin with file:// 
> definition:
> 
> http://git.openstack.org/cgit/openstack/requirements/tree/README.rst#n160
> 
> To make it work for tox based jobs, we would need to apply the same strategy 
> as part of install_command for all clients. Meaning, we would need a hack 
> similar to tox_install.sh found in neutron-*aas repos. We would also need to 
> install openstack/requirements as part of the process to get access to 
> edit-constraint tool.
> 
> Ihar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

