Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Sean Dague
On 11/04/2015 03:27 PM, Clark Boylan wrote:
> On Wed, Nov 4, 2015, at 09:14 AM, Sean Dague wrote:
>> On 11/04/2015 12:10 PM, Jeremy Stanley wrote:
>>> On 2015-11-04 08:43:27 -0600 (-0600), Matthew Thode wrote:
>>>> On 11/04/2015 06:47 AM, Sean Dague wrote:
>>> [...]
>>>>> Is there a nodepool cache strategy where we could pre build these? A 25%
>>>>> performance win comes out the other side if there is a strategy here.
>>>>
>>>> python wheel repo could help maybe?
>>>
>>> That's along the lines of how I expect we'd need to solve it.
>>> Basically add a new DIB element to openstack-infra/project-config in
>>> nodepool/elements (or extend the cache-devstack element already
>>> there) to figure out which version(s) it needs to prebuild and then
>>> populate a wheelhouse which can be leveraged by the jobs running on
>>> the resulting diskimage. The test scripts in the
>>> openstack/requirements repo may already have much of this logic
>>> implemented for the purpose of testing that we can build sane wheels
>>> of all our requirements.
>>>
>>> This of course misses situations where the requirements change and
>>> the diskimages haven't been rebuilt or in jobs testing proposed
>>> changes which explicitly alter these requirements, but could be
>>> augmented by similar mechanisms in devstack itself to avoid building
>>> them more than once.
>>
>> Ok, so given that pip automatically builds a local wheel cache now when
>> it installs this... is it as simple as
>> https://review.openstack.org/#/c/241692/ ?
> It is not that simple and this change will probably need to be reverted.
> We don't install the build deps for these packages during the dib run.
> We only add them to the appropriate apt/yum caches. This means that the
> image builds will start to fail because lxml won't find libxml2-dev and
> whatever other headers packages it needs in order to link against the
> appropriate libs.
> 
> The issue here is we do our best to force devstack to do the work at run
> time to make sure that devstack-gate or our images aren't masking some
> bug or become a required part of the devstack process. This means that
> none of these packages are installed and won't be available to the pip
> install.

This seems like incorrect logic. We should test that devstack can do all the
things on a devstack change, not on every neutron / trove / nova change.
I'm fine if we want to have a slow version of this for devstack testing
which starts from a massively stripped-down state, but for the 99% of
patches that aren't devstack changes, this seems like overkill.

> We have already had to revert a similar change in the past and at the
> time the basic agreement was we should go back to building wheel package
> mirrors that jobs could take advantage of. That work floundered due to a
> lack of reviews, but I still think that is the correct way to solve this
> problem. Basic idea for that is to have some periodic jobs build a
> distro/arch/release specific wheel cache then rsync that over to all our
> pypi mirrors for use by the jobs.
> 
> Clark
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread John Belamaric
If you have custom data you want to keep for your driver, you should create 
your own database tables to track that information. For example, the reference 
driver creates its own tables to track its data in ipam* tables.
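To make that concrete, here is a rough sketch of what a driver-owned table
could look like with plain SQLAlchemy. The table name, columns, and standalone
declarative base are assumptions for illustration only, not the reference
driver's actual schema or Neutron's model base classes.

# Illustrative sketch only: a driver-specific table kept alongside the
# ipam* tables. Names and columns here are hypothetical.
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class MyIpamAllocationPoolTag(Base):
    """Driver-owned metadata for an allocation pool (hypothetical)."""

    __tablename__ = 'my_ipam_allocation_pool_tags'

    id = sa.Column(sa.String(36), primary_key=True)
    # The driver's view of which allocation pool this row annotates.
    allocation_pool_id = sa.Column(sa.String(36), nullable=False, index=True)
    # e.g. 'static' vs 'extra', or whatever purpose tag the driver needs.
    tag = sa.Column(sa.String(64), nullable=False)


if __name__ == '__main__':
    # Throwaway SQLite engine just to show the model is self-contained;
    # a real driver would use Neutron's session/engine facades.
    engine = sa.create_engine('sqlite://')
    Base.metadata.create_all(engine)

The point is that the extra data lives in a table the driver owns and can
index and query properly, instead of an opaque blob in the shared ipam tables.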

John

On Nov 4, 2015, at 3:46 PM, Shraddha Pandhe wrote:

Hi folks,

I have a small question/suggestion about IPAM.

With IPAM, we are allowing users to have their own IPAM drivers so that they 
can manage IP allocation. The problem is, the new ipam tables in the database 
have the same columns as the old tables. So, as a user, if I want to have my 
own logic for ip allocation, I can't actually get any help from the database. 
Whereas, if we had an arbitrary json blob in the ipam tables, I could put any 
useful information/tags there, that can help me for allocation.

Does this make sense?

e.g. If I want to create multiple allocation pools in a subnet and use them for 
different purposes, I would need some sort of tag for each allocation pool for 
identification. Right now, there is no scope for doing something like that.

Any thoughts? If there is any other way to solve the problem, please let me
know.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Vilobh Meshram
I will be working on adding the Consul driver to Tooz [1].

-Vilobh
[1] https://blueprints.launchpad.net/python-tooz/+spec/add-consul-driver
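For context on why the driver work matters: Tooz picks its backend from a
connection URL, so the calling code shouldn't have to change when a deployment
swaps ZooKeeper for Consul or etcd. A minimal locking sketch follows (the URLs,
member id, and worker function are placeholders, and the backend you name has
to actually have an installed driver):

# Minimal Tooz locking sketch; backend URL and member id are placeholders.
from tooz import coordination


def do_exclusive_work():
    print('holding the lock')


# e.g. 'consul://127.0.0.1:8500' or 'etcd://127.0.0.1:2379' once those
# drivers are available; the rest of the code stays the same.
coordinator = coordination.get_coordinator('zookeeper://127.0.0.1:2181',
                                           b'my-service-member-1')
coordinator.start()

lock = coordinator.get_lock(b'resource-42')
with lock:
    # Only one member holding 'resource-42' runs this at a time.
    do_exclusive_work()

coordinator.stop()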

On Wed, Nov 4, 2015 at 2:05 PM, Mark Voelker  wrote:

> On Nov 4, 2015, at 4:41 PM, Gregory Haynes  wrote:
> >
> > Excerpts from Clint Byrum's message of 2015-11-04 21:17:15 +:
> >> Excerpts from Joshua Harlow's message of 2015-11-04 12:57:53 -0800:
> >>> Ed Leafe wrote:
>  On Nov 3, 2015, at 6:45 AM, Davanum Srinivas
> wrote:
> > Here's a Devstack review for zookeeper in support of this initiative:
> >
> > https://review.openstack.org/241040
> >
> > Thanks,
> > Dims
> 
>  I thought that the operators at that session made it very clear that
> they would *not* run any Java applications, and that if OpenStack required
> a Java app to run, they would no longer use it.
> 
>  I like the idea of using Zookeeper as the DLM, but I don't think it
> should be set up as a default, even for devstack, given the vehement
> opposition expressed.
> 
> >>>
> >>> What should be the default then?
> >>>
> >>> As for 'vehement opposition' I didn't see that as being there, I saw a
> >>> small set of people say 'I don't want to run java or I can't run java',
> >>> some comments about requiring using Oracle's JVM (which isn't correct,
> >>> OpenJDK works for folks that I have asked in the zookeeper community and
> >>> elsewhere) and the rest of the folks were ok with it...
> >>>
> >>> If people want an alternate driver, propose it IMHO...
> >>>
> >>
> >> The few operators who stated this position are very much appreciated
> >> for standing up and making it clear. It has helped us not step into a
> >> minefield with a native ZK driver!
> >>
> >> Consul is the most popular second choice, and should work fine for the
> >> use cases we identified. It will not be sufficient if we ever have
> >> a use case where many agents must lock many resources, since Consul
> >> does not offer a way to grant lock access in a fair manner (ZK does,
> >> and we're not aware of any others that do actually). Using Consul or
> >> etcd for this case would result in situations where lock waiters may
> >> wait _forever_, and will likely wait longer than they should at times.
> >> Hopefully we can simply avoid the need for this in OpenStack all
> together.
> >>
> >> I do _not_ think we should wait for constrained operators to scream
> >> at us about ZK to write a Consul driver. It's important enough that we
> >> should start documenting all of the issues we expect to see with Consul
> >> (it's not widely packaged, for instance) and writing a driver with its
> >> own devstack plugin.
> >>
> >> If there are Consul experts who did not make it to those sessions,
> >> it would be greatly appreciated if you can spend some time on this.
> >>
> >> What I don't want to see happen is we get into a deadlock where there's
> >> a large portion of users who can't upgrade and no driver to support
> them.
> >> So lets stay ahead of the problem, and get a set of drivers that works
> >> for everybody!
> >>
> >
> > One additional note - out of the three possible options I see for tooz
> > drivers in production (zk, consul, etcd) we currently only have drivers
> > for ZK. This means that unless new drivers are created, when we depend
> > on tooz we will be requiring folks deploy zk.
> >
> > It would be *awesome* if some folks stepped up to create and support at
> > least one of the alternate backends.
> >
> > Although I am a fan of the ZK solution, I have an old WIP patch for
> > creating an etcd driver. I would like to revive and maintain it, but I
> > would also need one more maintainer per the new rules for in tree
> > drivers…
>
> For those following along at home, said WIP etcd driver patch is here:
>
> https://review.openstack.org/#/c/151463/
>
> And said rules are at:
>
> https://review.openstack.org/#/c/240645/
>
> And FWIW, I too am personally fine with ZK as a default for devstack.
>
> At Your Service,
>
> Mark T. Voelker
>
> >
> > Cheers,
> > Greg
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Davanum Srinivas
Graham,

Agree. Hence the Tooz as the abstraction layer. Folks are welcome to
write new drivers or fix existing drivers for Tooz where needed.

-- Dims

On Wed, Nov 4, 2015 at 3:04 PM, Hayes, Graham  wrote:
> On 04/11/15 20:04, Ed Leafe wrote:
>> On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
>>>
>>> Here's a Devstack review for zookeeper in support of this initiative:
>>>
>>> https://review.openstack.org/241040
>>>
>>> Thanks,
>>> Dims
>>
>> I thought that the operators at that session made it very clear that they 
>> would *not* run any Java applications, and that if OpenStack required a Java 
>> app to run, they would no longer use it.
>>
>> I like the idea of using Zookeeper as the DLM, but I don't think it should 
>> be set up as a default, even for devstack, given the vehement opposition 
>> expressed.
>>
>>
>> -- Ed Leafe
>>
>
> I got the impression that there were *some* operators that wouldn't run
> java.
>
> I do not see an issue with having ZooKeeper as the default, as long as
> there is an alternate solution that also works for the operators that do
> not want to use it.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Sean Dague
On 11/04/2015 03:57 PM, Joshua Harlow wrote:
> Ed Leafe wrote:
>> On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
>>> Here's a Devstack review for zookeeper in support of this initiative:
>>>
>>> https://review.openstack.org/241040
>>>
>>> Thanks,
>>> Dims
>>
>> I thought that the operators at that session made it very clear that
>> they would *not* run any Java applications, and that if OpenStack
>> required a Java app to run, they would no longer use it.
>>
>> I like the idea of using Zookeeper as the DLM, but I don't think it
>> should be set up as a default, even for devstack, given the vehement
>> opposition expressed.
>>
> 
> What should be the default then?
> 
> As for 'vehement opposition' I didn't see that as being there, I saw a
> small set of people say 'I don't want to run java or I can't run java',
> some comments about requiring using Oracle's JVM (which isn't correct,
> OpenJDK works for folks that I have asked in the zookeeper community and
> elsewhere) and the rest of the folks were ok with it...
> 
> If people want an alternate driver, propose it IMHO...

Zookeeper has previously been used by a number of projects, so I think it
makes a sensible default to start with. We even had it in the gate on the
unit test jobs for a while. We can add a plug point in devstack later,
once we see some kinds of jobs running on the zookeeper base and what
semantics would make sense for plugging more stuff in.

Kind of like the MQ path in devstack right now. One default, and a plug
point for people trying other stuff.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Fox, Kevin M
As an Op, I can say the only time I've really wanted to change the config file 
and felt pain restarting a service was when I needed to adjust the logging 
level. If that one thing could be done, or it could be done in a completely 
different way (mgmt unix socket?), I think that would go a very long way.
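To illustrate the mgmt-socket idea (this is not how any OpenStack service does
it today; the socket path and wire format below are invented for the example),
something as small as the following already covers the log-level case:

# Rough stdlib-only sketch: a background thread that listens on a unix
# socket and applies a new log level on demand.
import logging
import os
import socket
import threading

SOCK_PATH = '/tmp/myservice-log-level.sock'  # made-up path for illustration


def _level_listener():
    """Accept 'DEBUG', 'INFO', etc. on the socket and apply it."""
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCK_PATH)
    server.listen(1)
    while True:
        conn, _addr = server.accept()
        try:
            level_name = conn.recv(64).decode().strip().upper()
            level = getattr(logging, level_name, None)
            if isinstance(level, int):
                logging.getLogger().setLevel(level)
                logging.getLogger(__name__).warning(
                    'log level changed to %s', level_name)
        finally:
            conn.close()


listener = threading.Thread(target=_level_listener)
listener.daemon = True  # don't block service shutdown
listener.start()
# From a shell: echo DEBUG | socat - UNIX-CONNECT:/tmp/myservice-log-level.sock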

Thanks,
Kevin

From: gord chung [g...@live.ca]
Sent: Wednesday, November 04, 2015 9:43 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic 
Reconfiguration of OpenStack Services

we actually had a solution implemented in Ceilometer to handle this[1].

that said, based on the results of our survey[2], we found that most operators 
*never* update configuration files after the initial setup and if they did it 
was very rarely (monthly updates). the question related to Ceilometer and its 
pipeline configuration file so the results might be specific to Ceilometer. I 
think you should definitely query operators before undertaking any work. the 
last thing you want to do is implement a feature no one really needs/wants.

[1] 
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html

On 04/11/2015 10:00 AM, Marian Horban wrote:
Hi guys,

Unfortunately I wasn't at the Tokyo summit, but I know that there was
discussion about dynamic reloading of configuration.
Etherpad refs:
https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services,
https://etherpad.openstack.org/p/mitaka-oslo-security-logging

In this thread I want to discuss the agreements reached at the summit and the
implementation details.

Some notes taken from etherpad and my remarks:

1. "Adding "mutable" parameter for each option."
"Do we have an option mutable=True on CfgOpt? Yes"
-
As I understood it, the 'mutable' parameter must indicate whether the service
contains code responsible for reloading this option or not.
This parameter should be one of the arguments of the cfg.Opt constructor.
Problems:
1. Library options.
SSL options ca_file, cert_file, key_file taken from the oslo.service library
could be reloaded in nova-api, so these options should be mutable...
But for projects that don't need SSL support, reloading SSL options
doesn't make sense. For such projects these options should be non-mutable.
The problem is that oslo.service is a single library and there are many
different projects which use it in different ways.
The same options could be mutable and non-mutable in different contexts.
2. Support of config options on some platforms.
Parameter "mutable" could be different for different platforms. Some options
make sense only for specific platforms. If we mark such options as mutable
it could be misleading on some platforms.
3. Dependency of options.
There are many 'workers' options (osapi_compute_workers, ec2_workers,
metadata_workers, workers). These options specify the number of workers for
OpenStack API services.
If the value of the 'workers' option is greater than '1', an instance of
ProcessLauncher is created; otherwise an instance of ServiceLauncher is created.
When ProcessLauncher receives SIGHUP it reloads its own configuration,
gracefully terminates its children and respawns new children.
This mechanism allows many config options to be reloaded implicitly.
But if the value of the 'workers' option equals '1', an instance of
ServiceLauncher is created.
ServiceLauncher starts everything in a single process, and in this case we
don't have such implicit reloading.

I think that mutability of options is a complicated feature, and adding a
'mutable' parameter to the cfg.Opt constructor could just add mess.

2. "oslo.service catches SIGHUP and calls oslo.config"
-
From my point of view, every service should register a list of hooks to reload
config options. oslo.service should catch SIGHUP and call the registered hooks
one by one in a specified order (a bare-bones sketch of this idea follows at
the end of these notes).
Discussion of such an implementation was started on the ML:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074558.html.
Raw reviews:
https://review.openstack.org/#/c/228892/,
https://review.openstack.org/#/c/223668/.

3. "oslo.config is responsible to log changes which were ignored on SIGHUP"
-
Some config options can be changed using the API (for example quotas); that's
why oslo.config doesn't know the actual configuration of the service and can't
log configuration changes.
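
Bare-bones sketch of the hook-registration idea from note 2, using only the
stdlib signal module. The names used here (register_reload_hook and friends)
are hypothetical, since where exactly this would live in oslo.service is what
is being discussed:

# Hypothetical hook registry; a real version would live in oslo.service.
import signal

_reload_hooks = []


def register_reload_hook(hook):
    """A service registers callables to run when a reload is requested."""
    _reload_hooks.append(hook)


def _handle_sighup(signum, frame):
    # Call the hooks in registration order; each hook re-reads whatever
    # piece of configuration it owns (log levels, SSL files, ...).
    for hook in _reload_hooks:
        hook()


signal.signal(signal.SIGHUP, _handle_sighup)


def _reload_logging_options():
    # placeholder: re-read and re-apply the [DEFAULT] logging options
    pass


register_reload_hook(_reload_logging_options)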

Regards, Marian Horban



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Fox, Kevin M
Ah, I hadn't considered the rabbit_hosts (discovery) case. yeah, that would be 
a useful thing to be able to tweak live. I haven't needed that feature yet, but 
could see how that flexibility could come in handy.

Thanks,
Kevin

From: Joshua Harlow [harlo...@fastmail.com]
Sent: Wednesday, November 04, 2015 11:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic 
Reconfiguration of OpenStack Services

Along this line, things like the following are likely more changeable
(and my guess is operators would want to change them when things start
going badly), for example from a nova.conf that I have laying around...

[DEFAULT]

rabbit_hosts=...
rpc_response_timeout=...
default_notification_level=...
default_log_levels=...

[glance]

api_servers=...

(and more)

Some of those I think should have higher priority as being
reconfigurable, but I think operators should be asked what they think
would be useful and prioritize those.

Some of those really are service discovery 'types' (rabbit_hosts,
glance/api_servers, keystone/api_servers) but fixing this is likely a
longer term goal (see conversations in keystone).

Joshua Harlow wrote:
> gord chung wrote:
>> we actually had a solution implemented in Ceilometer to handle this[1].
>>
>> that said, based on the results of our survey[2], we found that most
>> operators *never* update configuration files after the initial setup and
>> if they did it was very rarely (monthly updates). the question related
>> to Ceilometer and its pipeline configuration file so the results might
>> be specific to Ceilometer. I think you should definitely query operators
>> before undertaking any work. the last thing you want to do is implement
>> a feature no one really needs/wants.
>>
>> [1]
>> http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html
>>
>> [2]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html
>>
>
> So my general thought on the above is yes, definitely consult operators
> to see if they would use this, although if a feature doesn't exist and
> has never existed (say outside of ceilometer) then it's sort of hard to
> get an accurate survey result from a group of people that have never had
> the feature in the first place... Either way it should be done, just to
> get more knowledge...
>
> I know operators (at yahoo!) want to be able to dynamically change the
> logging level, and that's not a monthly task, but more of an 'as-needed'
> one that would be very helpful when things start going badly... So
> perhaps the set of reloadable configuration should start out small and
> not encompass all the things...
>
>>
>> On 04/11/2015 10:00 AM, Marian Horban wrote:
>>> Hi guys,
>>>
>>> Unfortunately I haven't been on Tokio summit but I know that there was
>>> discussion about dynamic reloading of configuration.
>>> Etherpad refs:
>>> https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services,
>>>
>>>
>>> https://etherpad.openstack.org/p/mitaka-oslo-security-logging
>>>
>>> In this thread I want to discuss agreements reached on the summit and
>>> discuss
>>> implementation details.
>>>
>>> Some notes taken from etherpad and my remarks:
>>>
>>> 1. "Adding "mutable" parameter for each option."
>>> "Do we have an option mutable=True on CfgOpt? Yes"
>>> -
>>> As I understood 'mutable' parameter must indicate whether service
>>> contains
>>> code responsible for reloading of this option or not.
>>> And this parameter should be one of the arguments of cfg.Opt
>>> constructor.
>>> Problems:
>>> 1. Library's options.
>>> SSL options ca_file, cert_file, key_file taken from oslo.service library
>>> could be reloaded in nova-api so these options should be mutable...
>>> But for some projects that don't need SSL support reloading of SSL
>>> options
>>> doesn't make sense. For such projects this option should be non mutable.
>>> Problem is that oslo.service - single and there are many different
>>> projects
>>> which use it in different way.
>>> The same options could be mutable and non mutable in different contexts.
>>> 2. Support of config options on some platforms.
>>> Parameter "mutable" could be different for different platforms. Some
>>> options
>>> make sense only for specific platforms. If we mark such options as
>>> mutable
>>> it could be misleading on some platforms.
>>> 3. Dependency of options.
>>> There are many 'workers' options(osapi_compute_workers, ec2_workers,
>>> metadata_workers, workers). These options specify number of workers for
>>> OpenStack API services.
>>> If value of the 'workers' option is greater than '1' instance of
>>> ProcessLauncher is created otherwise instance of ServiceLauncher is
>>> created.
>>> When ProcessLauncher receives SIGHUP it reloads it own configuration,
>>> 

Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Joshua Harlow
Agreed, it'd be nice to have an 'audit' of the various projects' configs 
and try to categorize which ones should be reloadable (and the priority 
of making them reloadable), which ones are service discovery configs (and 
probably shouldn't be in config in the first place), and which ones are 
nice-to-haves to be configurable... (like rpc_response_timeout).


The side-effects of making a few things configurable will likely cause a 
whole bunch of other issues anyway (like how an application/library... 
gets notified that a config has changed and it may need to re-adjust 
itself to those new values which may include for example unloading a 
driver, stopping a thread, starting a new driver...), so I'm thinking we 
should start small and grow as needed (based on above prioritization).


Log levels are high on my known list.

Fox, Kevin M wrote:

Ah, I hadn't considered the rabbit_hosts (discovery) case. yeah, that would be 
a useful thing to be able to tweak live. I haven't needed that feature yet, but 
could see how that flexibility could come in handy.

Thanks,
Kevin

From: Joshua Harlow [harlo...@fastmail.com]
Sent: Wednesday, November 04, 2015 11:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic 
Reconfiguration of OpenStack Services

Along this line, things like the following are likely more changeable
(and my guess is operators would want to change them when things start
going badly), for example from a nova.conf that I have laying around...

[DEFAULT]

rabbit_hosts=...
rpc_response_timeout=...
default_notification_level=...
default_log_levels=...

[glance]

api_servers=...

(and more)

Some of those I think should have higher priority as being
reconfigurable, but I think operators should be asked what they think
would be useful and prioritize those.

Some of those really are service discovery 'types' (rabbit_hosts,
glance/api_servers, keystone/api_servers) but fixing this is likely a
longer term goal (see conversations in keystone).

Joshua Harlow wrote:

gord chung wrote:

we actually had a solution implemented in Ceilometer to handle this[1].

that said, based on the results of our survey[2], we found that most
operators *never* update configuration files after the initial setup and
if they did it was very rarely (monthly updates). the question related
to Ceilometer and its pipeline configuration file so the results might
be specific to Ceilometer. I think you should definitely query operators
before undertaking any work. the last thing you want to do is implement
a feature no one really needs/wants.

[1]
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html

[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html


So my general thought on the above is yes, definitely consult operators
to see if they would use this, although if a feature doesn't exist and
has never existed (say outside of ceilometer) then it's sort of hard to
get an accurate survey result from a group of people that have never had
the feature in the first place... Either way it should be done, just to
get more knowledge...

I know operators (at yahoo!) want to be able to dynamically change the
logging level, and that's not a monthly task, but more of an 'as-needed'
one that would be very helpful when things start going badly... So
perhaps the set of reloadable configuration should start out small and
not encompass all the things...


On 04/11/2015 10:00 AM, Marian Horban wrote:

Hi guys,

Unfortunately I haven't been on Tokio summit but I know that there was
discussion about dynamic reloading of configuration.
Etherpad refs:
https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services,


https://etherpad.openstack.org/p/mitaka-oslo-security-logging

In this thread I want to discuss agreements reached on the summit and
discuss
implementation details.

Some notes taken from etherpad and my remarks:

1. "Adding "mutable" parameter for each option."
"Do we have an option mutable=True on CfgOpt? Yes"
-
As I understood 'mutable' parameter must indicate whether service
contains
code responsible for reloading of this option or not.
And this parameter should be one of the arguments of cfg.Opt
constructor.
Problems:
1. Library's options.
SSL options ca_file, cert_file, key_file taken from oslo.service library
could be reloaded in nova-api so these options should be mutable...
But for some projects that don't need SSL support reloading of SSL
options
doesn't make sense. For such projects this option should be non mutable.
Problem is that oslo.service - single and there are many different
projects
which use it in different way.
The same options could be mutable and non mutable in different contexts.
2. Support of config options on some platforms.
Parameter 

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Shraddha Pandhe
On Wed, Nov 4, 2015 at 1:38 PM, Armando M.  wrote:

>
>
> On 4 November 2015 at 13:21, Shraddha Pandhe wrote:
>
>> Hi Salvatore,
>>
>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>> make IPAM much more powerful. Some other projects already do things like
>> this.
>>
>> e.g. In Ironic, node has driver_info, which is JSON. it also has an
>> 'extras' arbitrary JSON field. This allows us to put any information in
>> there that we think is important for us.
>>
>
> I personally feel that relying on json blobs is not only dangerously
> affecting portability, but it causes us to bloat the business logic, and
> forces us to be less efficient when querying/filtering data
>

> Most importantly though, I feel it's like abdicating our responsibility to
> do a good design job.
>


How does it affect portability?

I don't think it forces us to do anything. 'Allows'? Maybe. But that can be
solved. Before making any design decisions for internal feature-requests,
we should first check with the community if it's a wider use-case. If it is
a wider use-case, we should collaborate and fix it upstream the right way.

I feel that it's impossible for the community to know all the use-cases.
Even if they knew, it would be impossible to incorporate all of them. I
filed a bug a few months ago about multiple gateway support for subnets.

https://bugs.launchpad.net/neutron/+bug/1464361

It was marked as 'Won't Fix' because nobody else had this use-case. Adding
and maintaining a patch to support this is super risky as it breaks the
APIs. A JSON blob would have helped me here.

I have another use-case. For multi-ip support for Ironic, we want to divide
the IP allocation ranges into two: Static IPs and extra IPs. The static IPs
are pre-configured IPs for Ironic inventory whereas extra IPs are the
multi-ips. Nobody else in the community has this use-case.

If we add our own database for internal stuff, we go back to the same
problem of allowing bad design.



> Ultimately, we should be able to identify how to model these extensions
> you're thinking of both conceptually and logically.
>

I would agree with that. If there's an effort going on in this direction,
I'll be happy to join. Without this, people like us with unique use-cases
are stuck with having patches.



>
> I couldn't care less if other projects use it, but we ended up using it in
> Neutron too, and since I lost this battle time and time again, all I am
> left with is this rant :)
>
>
>>
>>
>> Hoping to get some positive feedback from API and DB lieutenants too.
>>
>>
>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando > > wrote:
>>
>>> Arbitrary blobs are a powerful tool to circumvent limitations of an
>>> API, as well as other constraints which might be imposed for versioning or
>>> portability purposes.
>>> The parameters that should end up in such blob are typically specific
>>> for the target IPAM driver (to an extent they might even identify a
>>> specific driver to use), and therefore an API consumer who knows what
>>> backend is performing IPAM can surely leverage it.
>>>
>>> Therefore this would make a lot of sense, assuming API portability and
>>> not leaking backend details are not a concern.
>>> The Neutron team API & DB lieutenants will be able to provide more input
>>> on this regard.
>>>
>>> In this case other approaches such as a vendor specific extension are
>>> not a solution - assuming your granularity level is the allocation pool;
>>> indeed allocation pools are not first-class neutron resources, and it is
>>> not therefore possible to have APIs which associate vendor specific
>>> properties to allocation pools.
>>>
>>> Salvatore
>>>
>>> On 4 November 2015 at 21:46, Shraddha Pandhe <
>>> spandhe.openst...@gmail.com> wrote:
>>>
 Hi folks,

 I have a small question/suggestion about IPAM.

 With IPAM, we are allowing users to have their own IPAM drivers so that
 they can manage IP allocation. The problem is, the new ipam tables in the
 database have the same columns as the old tables. So, as a user, if I want
 to have my own logic for ip allocation, I can't actually get any help from
 the database. Whereas, if we had an arbitrary json blob in the ipam tables,
 I could put any useful information/tags there, that can help me for
 allocation.

 Does this make sense?

 e.g. If I want to create multiple allocation pools in a subnet and use
 them for different purposes, I would need some sort of tag for each
 allocation pool for identification. Right now, there is no scope for doing
 something like that.

 Any thoughts? If there are any other way to solve the problem, please
 let me know





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 

Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Doug Hellmann
Excerpts from Joshua Harlow's message of 2015-11-04 14:13:33 -0800:
> Agreed, it'd be nice to have an 'audit' of the various projects configs 
> and try to categorize which ones should be reloadable (and the priority 
> to make it reloadable) and which ones are service discovery configs (and 
> probably shouldn't be in config in the first place) and which ones are 
> nice to haves to be configurable... (like rpc_response_timeout).
> 
> The side-effects of making a few things configurable will likely cause a 
> whole bunch of other issues anyway (like how an application/library... 
> gets notified that a config has changed and it may need to re-adjust 
> itself to those new values which may include for example unloading a 
> driver, stopping a thread, starting a new driver...), so I'm thinking we 
> should start small and grow as needed (based on above prioritization).
> 
> Log levels are high on my known list.

Right, anything that is going to take significant application rework to
support should wait. The session identified a few relatively simple
options that help with debugging when they are changed, including
logging, and we should start with those.

Doug

> 
> Fox, Kevin M wrote:
> > Ah, I hadn't considered the rabbit_hosts (discovery) case. yeah, that would 
> > be a useful thing to be able to tweak live. I haven't needed that feature 
> > yet, but could see how that flexibility could come in handy.
> >
> > Thanks,
> > Kevin
> > 
> > From: Joshua Harlow [harlo...@fastmail.com]
> > Sent: Wednesday, November 04, 2015 11:34 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic 
> > Reconfiguration of OpenStack Services
> >
> > Along this line, things like the following are likely more changeable
> > (and my guess is operators would want to change them when things start
> > going badly), for example from a nova.conf that I have laying around...
> >
> > [DEFAULT]
> >
> > rabbit_hosts=...
> > rpc_response_timeout=...
> > default_notification_level=...
> > default_log_levels=...
> >
> > [glance]
> >
> > api_servers=...
> >
> > (and more)
> >
> > Some of those I think should have higher priority as being
> > reconfigurable, but I think operators should be asked what they think
> > would be useful and prioritize those.
> >
> > Some of those really are service discovery 'types' (rabbit_hosts,
> > glance/api_servers, keystone/api_servers) but fixing this is likely a
> > longer term goal (see conversations in keystone).
> >
> > Joshua Harlow wrote:
> >> gord chung wrote:
> >>> we actually had a solution implemented in Ceilometer to handle this[1].
> >>>
> >>> that said, based on the results of our survey[2], we found that most
> >>> operators *never* update configuration files after the initial setup and
> >>> if they did it was very rarely (monthly updates). the question related
> >>> to Ceilometer and its pipeline configuration file so the results might
> >>> be specific to Ceilometer. I think you should definitely query operators
> >>> before undertaking any work. the last thing you want to do is implement
> >>> a feature no one really needs/wants.
> >>>
> >>> [1]
> >>> http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html
> >>>
> >>> [2]
> >>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html
> >>>
> >> So my general thought on the above is yes, definitely consult operators
> >> to see if they would use this, although if a feature doesn't exist and
> >> has never existed (say outside of ceilometer) then it's sort of hard to
> >> get an accurate survey result from a group of people that have never had
> >> the feature in the first place... Either way it should be done, just to
> >> get more knowledge...
> >>
> >> I know operators (at yahoo!) want to be able to dynamically change the
> >> logging level, and that's not a monthly task, but more of an 'as-needed'
> >> one that would be very helpful when things start going badly... So
> >> perhaps the set of reloadable configuration should start out small and
> >> not encompass all the things...
> >>
> >>> On 04/11/2015 10:00 AM, Marian Horban wrote:
>  Hi guys,
> 
>  Unfortunately I haven't been on Tokio summit but I know that there was
>  discussion about dynamic reloading of configuration.
>  Etherpad refs:
>  https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services,
> 
> 
>  https://etherpad.openstack.org/p/mitaka-oslo-security-logging
> 
>  In this thread I want to discuss agreements reached on the summit and
>  discuss
>  implementation details.
> 
>  Some notes taken from etherpad and my remarks:
> 
>  1. "Adding "mutable" parameter for each option."
>  "Do we have an option mutable=True on CfgOpt? Yes"
>  

[openstack-dev] [trove][release] python-troveclient 1.4.0 release

2015-11-04 Thread Craig Vyvial
Hello everyone,

We have released the 1.4.0 version of the python-troveclient.

In Liberty, Trove added more datastores that support clustering, but the
client was missing an attribute to allow you to set the network and
availability zone for each of the instances in the cluster. This
troveclient release adds the az and nic parameters.

Thanks,
Craig Vyvial


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Clark Boylan
On Wed, Nov 4, 2015, at 09:14 AM, Sean Dague wrote:
> On 11/04/2015 12:10 PM, Jeremy Stanley wrote:
> > On 2015-11-04 08:43:27 -0600 (-0600), Matthew Thode wrote:
> >> On 11/04/2015 06:47 AM, Sean Dague wrote:
> > [...]
> >>> Is there a nodepool cache strategy where we could pre build these? A 25%
> >>> performance win comes out the other side if there is a strategy here.
> >>
> >> python wheel repo could help maybe?
> > 
> > That's along the lines of how I expect we'd need to solve it.
> > Basically add a new DIB element to openstack-infra/project-config in
> > nodepool/elements (or extend the cache-devstack element already
> > there) to figure out which version(s) it needs to prebuild and then
> > populate a wheelhouse which can be leveraged by the jobs running on
> > the resulting diskimage. The test scripts in the
> > openstack/requirements repo may already have much of this logic
> > implemented for the purpose of testing that we can build sane wheels
> > of all our requirements.
> > 
> > This of course misses situations where the requirements change and
> > the diskimages haven't been rebuilt or in jobs testing proposed
> > changes which explicitly alter these requirements, but could be
> > augmented by similar mechanisms in devstack itself to avoid building
> > them more than once.
> 
> Ok, so given that pip automatically builds a local wheel cache now when
> it installs this... is it as simple as
> https://review.openstack.org/#/c/241692/ ?
It is not that simple and this change will probably need to be reverted.
We don't install the build deps for these packages during the dib run.
We only add them to the appropriate apt/yum caches. This means that the
image builds will start to fail because lxml won't find libxml2-dev and
whatever other header packages it needs in order to link against the
appropriate libs.

The issue here is we do our best to force devstack to do the work at run
time to make sure that devstack-gate or our images aren't masking some
bug or becoming a required part of the devstack process. This means that
none of these packages are installed and won't be available to the pip
install.

We have already had to revert a similar change in the past and at the
time the basic agreement was we should go back to building wheel package
mirrors that jobs could take advantage of. That work floundered due to a
lack of reviews, but I still think that is the correct way to solve this
problem. Basic idea for that is to have some periodic jobs build a
distro/arch/release specific wheel cache then rsync that over to all our
pypi mirrors for use by the jobs.
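
For anyone not familiar with the mechanics, the two halves of that boil down
to roughly the following (the paths, file names, and mirror URL are
placeholders, not the real project-config job definitions):

# Rough sketch of the two halves of the wheel-mirror idea.
import subprocess

# 1) Periodic job: build distro/arch/release specific wheels into a local
#    wheelhouse directory from the requirements list.
subprocess.check_call([
    'pip', 'wheel',
    '-r', 'global-requirements.txt',  # placeholder input file
    '-w', '/opt/wheelhouse',          # placeholder output directory
])

# (The wheelhouse would then be rsynced onto the pypi mirrors; omitted here.)

# 2) Job side: point pip at the mirror's wheelhouse so packages like lxml
#    install from prebuilt wheels instead of compiling against libxml2-dev.
subprocess.check_call([
    'pip', 'install',
    '--find-links', 'http://mirror.example.org/wheelhouse/',  # placeholder
    '-r', 'requirements.txt',
])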

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Robert Collins
On 5 November 2015 at 09:02, Ed Leafe  wrote:
> On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
>>
>> Here's a Devstack review for zookeeper in support of this initiative:
>>
>> https://review.openstack.org/241040
>>
>> Thanks,
>> Dims
>
> I thought that the operators at that session made it very clear that they 
> would *not* run any Java applications, and that if OpenStack required a Java 
> app to run, they would no longer use it.
>
> I like the idea of using Zookeeper as the DLM, but I don't think it should be 
> set up as a default, even for devstack, given the vehement opposition 
> expressed.

There was no option suggested that all the operators would run happily.

Thus it doesn't matter what the 'default' is - we know only some
operators will run it.

In the session we were told that zookeeper is already used in CI jobs
for ceilometer (was this wrong?) and that's why we figured it made a
sane default for devstack.

We can always change the default later.

What is important is that folk step up and write the consul and etcd
drivers for the non-Java-happy operators to consume.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Mitaka Design Summit Recap

2015-11-04 Thread Sean McGinnis
Cinder Mitaka Design Summit Summary

Will the Real Block Storage Service Please Stand Up
===================================================
Should Cinder be usable outside of a full OpenStack environment?
There are several solutions out there for providing a Software
Defined Storage service with plugins for various backends. Most
of the functionality used for these is already done by Cinder.
So the question is, should Cinder try to be that ubiquitous SDS
interface?

The concern is that Cinder should either try to address this
broader use case or be left behind. Especially since there is
already a lot of overlap in functionality, and end users already
asking about it.

Some concern about doing this is whether it will be a distraction
from our core purpose - to be a solid and useful service for
providing block storage in an OpenStack cloud.

On the other hand, some folks have played around with doing this
already and found there really are only a few key issues with
being able to use Cinder without something like Keystone. Based on
this, it was decided we will spend some time looking into doing
this, but at a lower priority than our core work.

Availability Zones in Cinder

Recently it was highlighted that there are issues between AZs
used in Cinder versus AZs used in Nova. When Cinder was originally
branched out of the Nova code base we picked up the concept of
Availability Zones, but the ideas was never fully implemented and
isn't exactly what some expect it to be in its current state.

Speaking with some of the operators in the room, there were two
main desires for AZ interaction with Nova - either the AZ specified
in Nova needs to match one to one with the AZ in Cinder, or there
is no connection between the two and the Nova AZ doesn't matter on
the Cinder side.

There is currently a workaround in Cinder. If the config file
value for allow_availability_zone_fallback is set to True and a
request for a new volume comes in with a Nova AZ that is not present,
the default Cinder AZ will be used instead.

A few options for improving AZ support were suggested. At least for
those present, the current "dirty fix" workaround is sufficient. If
further input makes it clear that this is not enough, we can look
in to one of the proposed alternatives to address those needs.

API Microversions
=================
Some projects, particularly Nova and Manila, have already started
work on supporting API microversions. We plan on leveraging their
work to add support in Cinder. Scott D'Angelo has done some work
porting that framework from Manila into a spec and proof of concept
in Cinder.

API microversions would allow us to make breaking API changes while
still providing backward compatibility to clients that expect the
existing behavior. It may also allow us to remove functionality
more easily.

We still want to be restrictive about modifying the API. Just
because this will make it slightly easier to do, it still has
an ongoing maintenance cost, and a slightly higher one at that,
which we will want to limit as much as possible.

A great explanation of the microversions concept was written up by
Sean Dague here:

https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/
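
At the client level the mechanism is just a version header on each request.
The sketch below is illustrative only: the endpoint and token are placeholders,
and the header name shown for Cinder is hypothetical (Nova's real equivalent
is X-OpenStack-Nova-API-Version).

# Illustrative only: a client opting in to a specific microversion by header.
import requests

resp = requests.get(
    'http://cloud.example.org:8776/v2/<tenant_id>/volumes',  # placeholder
    headers={
        'X-Auth-Token': '<token>',                 # placeholder
        'X-OpenStack-Cinder-API-Version': '2.1',   # hypothetical header name
    },
)
resp.raise_for_status()
print(resp.json())

If the requested version is higher than the server supports, the server can
reject the request instead of silently behaving differently, which is what
makes breaking changes tractable.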

Experimental APIs
=================
Building on the work with microversions, we would use that to expose
experimental APIs and make it explicit that they are experimental
only and could be removed at any time, without the normal window
provided with deprecating other features.

Although there were certainly some very valid concerns raised about
doing this, and whether it would be useful or not, general consensus
was that it would be good to support it.

After further discussion, it was pointed out that there really isn't
anything in the works that needs this right now, so it may be delayed.
The issue there being that if we wait to do it, when we actually do
need to use it for something it won't be ready to go.

Cinder Nova Interaction
=======================
Great joint session with some of the Nova folks. Talked through some
of the issues we've had with the interaction between Nova and Cinder
and areas where we need to improve it.

Some of the decisions were:
- Working on support for multiattach. Will delay encryption support
  until non-encrypted issues get worked out.
- Rootwrap issues with the use of os-brick. Priv-sep sounds like it
  is the better answer. Will need to wait for that to mature before
  we can switch away from rootwrap though.
- API handling changes. A lot of cases where an API call is made and
  it is assumed to succeed. Will use event notifications to report
  results back to Nova. Requires changes on both sides.
- Specs will be submitted for coordinated handling for extending
  volumes.
- Volume related Nova bugs were highlighted. Cinder team will try to
  help triage and resolve some of those.
  https://bugs.launchpad.net/nova/+bugs?field.tag=volumes

Volume Manager Locks

Covered in 

Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Ed Leafe
On Nov 4, 2015, at 3:17 PM, Clint Byrum  wrote:

> What I don't want to see happen is we get into a deadlock where there's
> a large portion of users who can't upgrade and no driver to support them.
> So lets stay ahead of the problem, and get a set of drivers that works
> for everybody!

I think that this is a great idea, but we also need some people familiar with 
Consul to do this work. Otherwise, ZK (and hence Java) is a de facto dependency.


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Mid-cycle meetup for Mitaka

2015-11-04 Thread Armando M.
Hi folks,

After some consideration, I am proposing a change for the Mitaka release
cycle in relation to the mid-cycle meetup event.

My proposal is to defer the gathering to later in the release cycle [1],
and assess whether we have it or not based on the course of events in the
cycle. If we feel that a last push closer to the end will help us hit some
critical targets, then I am all in for arranging it.

Based on our latest experiences, I have not seen a strong correlation
between progress made during the cycle and progress made during the meetup,
so we might as well save ourselves the trouble of travelling close to Christmas.

I'd like to thank Kyle, Miguel Lavalle and Doug for looking into the
logistics. We may still need their services later in the new year, but as
of now all I can say is:

Happy (distributed) hacking!

Cheers,
Armando

[1] https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry][ceilometer][aodh][gnocchi] Tokyo summit roundup

2015-11-04 Thread gord chung

hi folks,

i want to start off by thanking everyone for joining the telemetry 
related discussions at the Tokyo summit -- we got some great feedback 
and ideas. similar to the last summit, here's a rundown of items that we 
talked about[1] and will be tracking in the upcoming cycle. as before, 
this is a (chaotic) brain dump and does not necessarily reflect any 
prioritisation.


note: the following is split into different sections, each representing 
a service provided under the telemetry umbrella. these projects are 
discretely managed but with tight collaboration.



-- Aodh (alarming service) --

- client - since we split alarming functionality from Ceilometer into 
its own unique service[2], aodhclient support will be added so 
ceilometerclient functionality does not become overloaded with unrelated 
alarming code.
- user interface - to improve usability, support for CRUD operations of 
alarms will be added to horizon [3]
- horizontal scaling - the existing event alarm support added in 
Liberty[4] handles a single evaluator. multiple worker support will be 
added to enable better scaling
- simplifying combination alarm - combination alarms allowed flexibility 
of reusing threshold alarms but limited the use case to AND conditions 
and added evaluation ordering complexity. these drawbacks will be 
addressed by a new composite alarm [5]
- testing - tempest and grenade plugin testing support to be added in 
addition to existing unit/functional tests



-- Ceilometer (data collection service) --

- example reference architecture - to improve the consumption of 
Ceilometer, performance study will be done to build reference 
architecture. additional example configurations will be added to enable 
easier entry based on use case.
- housekeeping - alarming and rpc functionality were deprecated in 
Kilo/Liberty[6]. to ensure a tidy code base, it's time to clean house or 
as some devs like to put it: burn it down.[*]

- rolling upgrades - document the process of upgrading
- refined polling - the polling agent(s) now exclusively poll data and 
defer processing to notification agent(s). because of this, we can 
create a more tailored polling and pipeline configuration experience.
- improved polling distribution - currently, the cache is shared within 
a process. to better enable task distribution, we should enable sharing 
the cache across processes.
- polling metadata caching - we improved the caching mechanism in 
Liberty to minimise the load caused by Ceilometer polling. further 
improvements can be made to minimise the number of calls in general.[7]
- resource caching - to improve write speed, a cache will be implemented 
in the collector to minimise unnecessary writes[8]
- batch db writing - to minimise writes, batched writing of data will be 
added to collector[9]
- componentisation part 2 - Ceilometer handles meters[10] and 
events[11]. we need to make it pluggable to offer better flexibility.
- testing - tempest and grenade plugin testing support to be added in 
addition to existing unit/functional tests. additionally, multi-node 
testing to test upgrade path.



-- Gnocchi (time-series database and resource indexing service) --

- metric archive sharding - to improve performance of very large data 
sets (ie. second-by-second granularity), we can split the archive when 
updating data to avoid transferring entire archive set.
- dynamic resource creation - currently to create a new resource type, a 
new model and migration needs to be added. we need to make this creation 
dynamic to allow for more resource types.
- proliferate gnocchiclient adoption - gnocchiclient is now available. 
to ensure consistent usage, it should be adopted in Ceilometer and Aodh.
- testing - tempest and grenade plugin testing support to be added in 
addition to existing unit/functional tests



you can sign up for work items on the etherpad[12]. as always, you're 
more than welcome to propose addition ideas on irc:#openstack-ceilometer 
(to be #openstack-telemetry) and to openstack-dev using  
[telemetry]/[ceilometer]/[aodh]/[gnocchi] in subject.


as always we will continue to work with externally managed projects[13].

[1] 
https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Ceilometer
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073897.html
[3] 
https://blueprints.launchpad.net/openstack-ux/+spec/horizon-alarm-management
[4] 
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/event-alarm-evaluator.html

[5] https://review.openstack.org/#/c/208786/
[6] 
https://wiki.openstack.org/wiki/ReleaseNotes/Liberty#OpenStack_Telemetry_.28Ceilometer.29

[7] https://review.openstack.org/#/c/209799/
[8] https://review.openstack.org/#/c/203109/
[9] https://review.openstack.org/#/c/234831/
[10] http://docs.openstack.org/admin-guide-cloud/telemetry-measurements.html
[11] http://docs.openstack.org/admin-guide-cloud/telemetry-events.html
[12] 

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Shraddha Pandhe
Hi Salvatore,

Thanks for the feedback. I agree with you that arbitrary JSON blobs will
make IPAM much more powerful. Some other projects already do things like
this.

e.g. In Ironic, node has driver_info, which is JSON. it also has an
'extras' arbitrary JSON field. This allows us to put any information in
there that we think is important for us.


Hoping to get some positive feedback from API and DB lieutenants too.


On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando wrote:

> Arbitrary blobs are a powerful tool to circumvent limitations of an API,
> as well as other constraints which might be imposed for versioning or
> portability purposes.
> The parameters that should end up in such blob are typically specific for
> the target IPAM driver (to an extent they might even identify a specific
> driver to use), and therefore an API consumer who knows what backend is
> performing IPAM can surely leverage it.
>
> Therefore this would make a lot of sense, assuming API portability and not
> leaking backend details are not a concern.
> The Neutron team API & DB lieutenants will be able to provide more input
> on this regard.
>
> In this case other approaches such as a vendor specific extension are not
> a solution - assuming your granularity level is the allocation pool; indeed
> allocation pools are not first-class neutron resources, and it is not
> therefore possible to have APIs which associate vendor specific properties
> to allocation pools.
>
> Salvatore
>
> On 4 November 2015 at 21:46, Shraddha Pandhe 
> wrote:
>
>> Hi folks,
>>
>> I have a small question/suggestion about IPAM.
>>
>> With IPAM, we are allowing users to have their own IPAM drivers so that
>> they can manage IP allocation. The problem is, the new ipam tables in the
>> database have the same columns as the old tables. So, as a user, if I want
>> to have my own logic for ip allocation, I can't actually get any help from
>> the database. Whereas, if we had an arbitrary json blob in the ipam tables,
>> I could put any useful information/tags there, that can help me for
>> allocation.
>>
>> Does this make sense?
>>
>> e.g. If I want to create multiple allocation pools in a subnet and use
>> them for different purposes, I would need some sort of tag for each
>> allocation pool for identification. Right now, there is no scope for doing
>> something like that.
>>
>> Any thoughts? If there are any other way to solve the problem, please let
>> me know
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Role for Fuel Master Node

2015-11-04 Thread Javeria Khan
Thanks Igor, Alex. Guess there isn't any support for running tasks directly
on the Fuel Master node for now.

I did try moving to deployment_tasks.yaml, however it leads to other issues
such as "/etc/fuel/plugins// does not exist" failing on
deployments.

I'm trying to move back to using the former tasks.yaml, but the
fuel-plugin-builder keeps looking for deployment_tasks.yaml now. Is there
some build source list I can remove?


--
Javeria

On Wed, Nov 4, 2015 at 12:44 PM, Aleksandr Didenko 
wrote:

> Hi,
>
> please note that such tasks are executed inside 'mcollective' docker
> container, not on the Fuel master host system.
>
> Regards,
> Alex
>
> On Tue, Nov 3, 2015 at 10:41 PM, Igor Kalnitsky 
> wrote:
>
>> Hi Javeria,
>>
>> Try to use 'master' in 'role' field. Example:
>>
>> - role: 'master'
>>   stage: pre_deployment
>>   type: shell
>>   parameters:
>>     cmd: echo all > /tmp/plugin.all
>>     timeout: 42
>>
>> Let me know if you need additional help.
>>
>> Thanks,
>> Igor
>>
>> P.S: Since Fuel 7.0 it's recommended to use deployment_tasks.yaml
>> instead of tasks.yaml. Please see Fuel Plugins wiki page for details.
>>
>> On Tue, Nov 3, 2015 at 10:26 PM, Javeria Khan 
>> wrote:
>> > Hey everyone,
>> >
>> > I've been working on a fuel plugin and for some reason just cant figure
>> out
>> > how to run a task on the fuel master node through the tasks.yaml. Is
>> there
>> > even a role for it?
>> >
>> > Something similar to what ansible does with localhost would work.
>> >
>> > Thanks,
>> > Javeria
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Salvatore Orlando
Arbitrary blobs are a powerful tool to circumvent limitations of an API,
as well as other constraints which might be imposed for versioning or
portability purposes.
The parameters that should end up in such a blob are typically specific to
the target IPAM driver (to an extent they might even identify a specific
driver to use), and therefore an API consumer who knows which backend is
performing IPAM can surely leverage it.

Therefore this would make a lot of sense, assuming API portability and not
leaking backend details are not a concern.
The Neutron team API & DB lieutenants will be able to provide more input in
this regard.

In this case other approaches such as a vendor specific extension are not a
solution - assuming your granularity level is the allocation pool; indeed
allocation pools are not first-class neutron resources, and it is not
therefore possible to have APIs which associate vendor specific properties
to allocation pools.

Salvatore

On 4 November 2015 at 21:46, Shraddha Pandhe 
wrote:

> Hi folks,
>
> I have a small question/suggestion about IPAM.
>
> With IPAM, we are allowing users to have their own IPAM drivers so that
> they can manage IP allocation. The problem is, the new ipam tables in the
> database have the same columns as the old tables. So, as a user, if I want
> to have my own logic for ip allocation, I can't actually get any help from
> the database. Whereas, if we had an arbitrary json blob in the ipam tables,
> I could put any useful information/tags there, that can help me for
> allocation.
>
> Does this make sense?
>
> e.g. If I want to create multiple allocation pools in a subnet and use
> them for different purposes, I would need some sort of tag for each
> allocation pool for identification. Right now, there is no scope for doing
> something like that.
>
> Any thoughts? If there are any other way to solve the problem, please let
> me know
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Shraddha Pandhe
Hi folks,

I have a small question/suggestion about IPAM.

With IPAM, we are allowing users to have their own IPAM drivers so that
they can manage IP allocation. The problem is, the new ipam tables in the
database have the same columns as the old tables. So, as a user, if I want
to have my own logic for ip allocation, I can't actually get any help from
the database. Whereas, if we had an arbitrary json blob in the ipam tables,
I could put any useful information/tags there that could help me with
allocation.

Does this make sense?

e.g. If I want to create multiple allocation pools in a subnet and use them
for different purposes, I would need some sort of tag for each allocation
pool for identification. Right now, there is no scope for doing something
like that.

Any thoughts? If there is any other way to solve the problem, please let
me know.
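
To make the suggestion concrete, here is a hypothetical sketch (not an
existing Neutron model; the table and column names are made up for
illustration) of an allocation pool table carrying a driver-specific
JSON blob:

    # Hypothetical illustration only -- not an existing Neutron model.
    import json

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class ExampleIpamAllocationPool(Base):
        __tablename__ = 'example_ipam_allocation_pools'
        id = sa.Column(sa.String(36), primary_key=True)
        subnet_id = sa.Column(sa.String(36), nullable=False)
        first_ip = sa.Column(sa.String(64), nullable=False)
        last_ip = sa.Column(sa.String(64), nullable=False)
        # free-form, driver-interpreted metadata, e.g. a purpose tag
        driver_metadata = sa.Column(sa.Text(), nullable=True)

    # the driver would serialize whatever tags it needs, for example:
    pool_tags = json.dumps({'purpose': 'management', 'rack': 'r12'})
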
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Joshua Harlow

Ed Leafe wrote:

On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:

Here's a Devstack review for zookeeper in support of this initiative:

https://review.openstack.org/241040

Thanks,
Dims


I thought that the operators at that session made it very clear that they would 
*not* run any Java applications, and that if OpenStack required a Java app to 
run, they would no longer use it.

I like the idea of using Zookeeper as the DLM, but I don't think it should be 
set up as a default, even for devstack, given the vehement opposition expressed.



What should be the default then?

As for 'vehement opposition', I didn't see that as being there. I saw a 
small set of people say 'I don't want to run java or I can't run java', 
some comments about requiring Oracle's JVM (which isn't correct; 
OpenJDK works for the folks that I have asked in the zookeeper community and 
elsewhere), and the rest of the folks were ok with it...


If people want an alternate driver, propose it IMHO...



-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Hayes, Graham
On 04/11/15 20:04, Ed Leafe wrote:
> On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
>>
>> Here's a Devstack review for zookeeper in support of this initiative:
>>
>> https://review.openstack.org/241040
>>
>> Thanks,
>> Dims
> 
> I thought that the operators at that session made it very clear that they 
> would *not* run any Java applications, and that if OpenStack required a Java 
> app to run, they would no longer use it.
> 
> I like the idea of using Zookeeper as the DLM, but I don't think it should be 
> set up as a default, even for devstack, given the vehement opposition 
> expressed.
> 
> 
> -- Ed Leafe
> 

I got the impression that there were *some* operators that wouldn't run
java.

I do not see an issue with having ZooKeeper as the default, as long as
there is an alternate solution that also works for the operators that do
not want to use it.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Monty Taylor

On 11/04/2015 04:09 PM, Davanum Srinivas wrote:

Graham,

Agree. Hence the Tooz as the abstraction layer. Folks are welcome to
write new drivers or fix existing drivers for Tooz where needed.


Yes. This is correct. We cannot grow a hard depend on a Java thing, but 
optional depends are ok - and it turns out the semantics needed from 
DLMs and DKVSs are sufficiently abstractable for it to make sense.


That said - the only usable tooz backend at the moment is zookeeper - so 
someone who cares about the not-Java use case will have to step up and 
write a consul backend. The main thing is that we allow that to happen 
and don't do things that would prevent such a thing from being written.
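
For reference, the locking API tooz exposes is backend-agnostic; a minimal
sketch (assuming a ZooKeeper server reachable at 127.0.0.1:2181, purely for
illustration) looks roughly like this:

    # Minimal tooz lock sketch; only the connection string is
    # backend-specific (zookeeper://, etcd://, ... as drivers appear).
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'example-member-id')
    coordinator.start()

    lock = coordinator.get_lock(b'example-resource-lock')
    with lock:
        # critical section: only one member holds the lock at a time
        print('lock acquired, doing work')

    coordinator.stop()

Swapping backends should then only be a matter of changing the connection
string, which is exactly what keeping the abstraction in tooz buys us.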


Reasons for making ZK the default are:

- It exists in tooz today
- It's easily installable in all the distros
- It has devstack support already

None of those three are true of consul, although none are terribly hard 
to achieve.



On Wed, Nov 4, 2015 at 3:04 PM, Hayes, Graham  wrote:

On 04/11/15 20:04, Ed Leafe wrote:

On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:


Here's a Devstack review for zookeeper in support of this initiative:

https://review.openstack.org/241040

Thanks,
Dims


I thought that the operators at that session made it very clear that they would 
*not* run any Java applications, and that if OpenStack required a Java app to 
run, they would no longer use it.

I like the idea of using Zookeeper as the DLM, but I don't think it should be 
set up as a default, even for devstack, given the vehement opposition expressed.


-- Ed Leafe



I got the impression that there was *some* operators that wouldn't run
java.

I do not see an issue with having ZooKeeper as the default, as long as
there is an alternate solution that also works for the operators that do
not want to use it.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tripleo] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-04 Thread Michal Rostecki

On 11/03/2015 10:27 PM, Zane Bitter wrote:

I think we all agree that using something _like_ Kubernetes would be
extremely interesting for controller services, where you have a bunch of
heterogeneous services with scheduling constraints (HA), that may need
to be scaled out at different rates,  

IMHO it's not interesting at all for compute nodes though, where the
scheduling is not only fixed but well-defined in advance. (It's... one
compute node per compute node. Duh.)

e.g. I could easily imagine a future containerised TripleO where the
controller services were deployed with Magnum but the compute nodes were
configured directly with Heat software deployments.

In such a scenario the fact that you can't use Kubernetes for compute
nodes diminishes its value not at all. So while I'm guessing net=host is
still a blocker (for Neutron services on the controller - although
another message in this thread suggests that K8s now supports it
anyway), I don't think pid=host needs to be since AFAICT it appears to
be required only for libvirt.

Something to think about...



One of the goals of Kolla (and of the idea of containerizing OpenStack services 
in general) is to simplify upgrades. Scaling and scheduling are 
obviously important points of Kolla, but they are not the only ones.


The model of upgrade where images of nova-compute, Neutron agents etc. 
are built once, pushed to a registry and then pulled on compute nodes 
looks much better to me than a traditional upgrade of packages. It may also 
decrease the probability of breaking some common dependency during upgrades.


Regards,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Fox, Kevin M
To clarify that statement a little more,

Speaking only for myself as an op, I don't want to support yet one more 
snowflake in a sea of snowflakes that works differently than all the rest, 
without a very good reason.

Java has its own set of issues associated with the JVM. Care and feeding sorts 
of things. If we are to invest time/money/people in learning how to properly 
maintain it, it's easier to justify if it's not just a one-off for DLM.

So I wouldn't go so far as to say we're vehemently opposed to java, just that 
DLM on its own is probably not a strong enough feature to 
justify requiring pulling in java. It's been only a very recent thing that you 
could convince folks that DLM was needed at all. So either make java optional, 
or find some other use case that needs java badly enough that you can make 
java a required component. I suspect some day searchlight might be compelling 
enough for that, but not today.

As for the default, the default should be a good reference. If most sites would 
run with etcd or something else since java isn't needed, then don't default 
zookeeper on.

Thanks,
Kevin 


From: Ed Leafe [e...@leafe.com]
Sent: Wednesday, November 04, 2015 12:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] Outcome of distributed lock manager  
discussion @ the summit

On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
>
> Here's a Devstack review for zookeeper in support of this initiative:
>
> https://review.openstack.org/241040
>
> Thanks,
> Dims

I thought that the operators at that session made it very clear that they would 
*not* run any Java applications, and that if OpenStack required a Java app to 
run, they would no longer use it.

I like the idea of using Zookeeper as the DLM, but I don't think it should be 
set up as a default, even for devstack, given the vehement opposition expressed.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] acceptance: run WSGI for API services

2015-11-04 Thread Sergii Golovatiuk
Hi,

mod_wsgi has some limitations when it comes to ordering WSGI services. Reloading
and restarting processes is well documented at [1]. Creating an ordering is a
problem. There are several options:
1. Use uwsgi instead of mod_wsgi
2. Use several apache instances (one instance for one service)
3. Ignore ordering as processes start quite fast

[1] https://code.google.com/p/modwsgi/wiki/ReloadingSourceCode

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Nov 4, 2015 at 4:31 PM, Jason Guiditta  wrote:

> On 14/08/15 09:45 -0400, Emilien Macchi wrote:
>
>> So far we have WSGI support for puppet-keystone & pupper-ceilometer.
>> I'm currently working on other components to easily deploy OpenStack
>> running API services using apache/wsgi instead of eventlet.
>>
>> I would like to propose some change in our beaker tests:
>>
>> stable/kilo:
>> * puppet-{ceilometer,keystone}: test both cases so we validate the
>> upgrade with beaker
>> * puppet-*: no wsgi support now, but eventually could be backported from
>> master (liberty) once pushed.
>>
>> master (future stable/liberty):
>> * puppet-{ceilometer,keystone}: keep only WSGI scenario
>> * puppet-*: push WSGI support in manifests, test them in beaker,
>> eventually backport them to stable/kilo, and if on time (before
>> stable/libery), drop non-WSGI scenario.
>>
>> The goal here is to:
>> * test upgrade from non-WSGI to WSGI setup in stable/kilo for a maximum
>> of modules
>> * keep WSGI scenario only for Liberty
>>
>> Thoughts?
>> --
>> Emilien Macchi
>>
>>
> Sorry for the late reply, but I am wondering if anyone knows how we
> (via puppet, pacemaker, or whatever else - even the services
> themselves, within apache) would handle start order if all these
> services become wsgi apps running under apache?  In other words, for
> an HA deployment, as an example, we typically set ceilometer-central
> to start _after_ keystone.  If they are both in apache, how could this
> be done?  Is it truly not needed? If not, is this something new, or
> have those of us working on deployments with the pacemaker
> architecture been misinformed all this time?
>
> -j
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2015-11-04 12:57:53 -0800:
> Ed Leafe wrote:
> > On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
> >> Here's a Devstack review for zookeeper in support of this initiative:
> >>
> >> https://review.openstack.org/241040
> >>
> >> Thanks,
> >> Dims
> >
> > I thought that the operators at that session made it very clear that they 
> > would *not* run any Java applications, and that if OpenStack required a 
> > Java app to run, they would no longer use it.
> >
> > I like the idea of using Zookeeper as the DLM, but I don't think it should 
> > be set up as a default, even for devstack, given the vehement opposition 
> > expressed.
> >
> 
> What should be the default then?
> 
> As for 'vehement opposition' I didn't see that as being there, I saw a 
> small set of people say 'I don't want to run java or I can't run java', 
> some comments about requiring using oracles JVM (which isn't correct, 
> OpenJDK works for folks that I have asked in the zookeeper community and 
> else where) and the rest of the folks were ok with it...
> 
> If people want a alternate driver, propose it IMHO...
> 

The few operators who stated this position are very much appreciated
for standing up and making it clear. It has helped us not step into a
minefield with a native ZK driver!

Consul is the most popular second choice, and should work fine for the
use cases we identified. It will not be sufficient if we ever have
a use case where many agents must lock many resources, since Consul
does not offer a way to grant lock access in a fair manner (ZK does,
and we're not aware of any others that do actually). Using Consul or
etcd for this case would result in situations where lock waiters may
wait _forever_, and will likely wait longer than they should at times.
Hopefully we can simply avoid the need for this in OpenStack all together.

I do _not_ think we should wait for constrained operators to scream
at us about ZK to write a Consul driver. It's important enough that we
should start documenting all of the issues we expect to see with Consul
(it's not widely packaged, for instance) and writing a driver with its
own devstack plugin.

If there are Consul experts who did not make it to those sessions,
it would be greatly appreciated if you can spend some time on this.

What I don't want to see happen is we get into a deadlock where there's
a large portion of users who can't upgrade and no driver to support them.
So lets stay ahead of the problem, and get a set of drivers that works
for everybody!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Jeremy Stanley
On 2015-11-04 15:34:26 -0500 (-0500), Sean Dague wrote:
> This seems like incorrect logic. We should test devstack can do all the
> things on a devstack change, not on every neutron / trove / nova change.
> I'm fine if we want to have a slow version of this for devstack testing
> which starts from a massively stripped down state, but for the 99% of
> patches that aren't devstack changes, this seems like overkill.

We are, however, trying to get away from preinstalling additional
distro packages on our job workers (in favor of providing a warm
local cache) and leaving it up to the individual projects/jobs to
define the packages they'll need to be able to run. I'll save the
lengthy list of whys, it's been in progress for a while and we're
finally close to making it a reality.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Gregory Haynes
Excerpts from Clint Byrum's message of 2015-11-04 21:17:15 +:
> Excerpts from Joshua Harlow's message of 2015-11-04 12:57:53 -0800:
> > Ed Leafe wrote:
> > > On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
> > >> Here's a Devstack review for zookeeper in support of this initiative:
> > >>
> > >> https://review.openstack.org/241040
> > >>
> > >> Thanks,
> > >> Dims
> > >
> > > I thought that the operators at that session made it very clear that they 
> > > would *not* run any Java applications, and that if OpenStack required a 
> > > Java app to run, they would no longer use it.
> > >
> > > I like the idea of using Zookeeper as the DLM, but I don't think it 
> > > should be set up as a default, even for devstack, given the vehement 
> > > opposition expressed.
> > >
> > 
> > What should be the default then?
> > 
> > As for 'vehement opposition' I didn't see that as being there, I saw a 
> > small set of people say 'I don't want to run java or I can't run java', 
> > some comments about requiring using oracles JVM (which isn't correct, 
> > OpenJDK works for folks that I have asked in the zookeeper community and 
> > else where) and the rest of the folks were ok with it...
> > 
> > If people want a alternate driver, propose it IMHO...
> > 
> 
> The few operators who stated this position are very much appreciated
> for standing up and making it clear. It has helped us not step into a
> minefield with a native ZK driver!
> 
> Consul is the most popular second choice, and should work fine for the
> use cases we identified. It will not be sufficient if we ever have
> a use case where many agents must lock many resources, since Consul
> does not offer a way to grant lock access in a fair manner (ZK does,
> and we're not aware of any others that do actually). Using Consul or
> etcd for this case would result in situations where lock waiters may
> wait _forever_, and will likely wait longer than they should at times.
> Hopefully we can simply avoid the need for this in OpenStack all together.
> 
> I do _not_ think we should wait for constrained operators to scream
> at us about ZK to write a Consul driver. It's important enough that we
> should start documenting all of the issues we expect to see with Consul
> (it's not widely packaged, for instance) and writing a driver with its
> own devstack plugin.
> 
> If there are Consul experts who did not make it to those sessions,
> it would be greatly appreciated if you can spend some time on this.
> 
> What I don't want to see happen is we get into a deadlock where there's
> a large portion of users who can't upgrade and no driver to support them.
> So lets stay ahead of the problem, and get a set of drivers that works
> for everybody!
> 

One additional note - out of the three possible options I see for tooz
drivers in production (zk, consul, etcd) we currently only have drivers
for ZK. This means that unless new drivers are created, when we depend
on tooz we will be requiring folks to deploy zk.

It would be *awesome* if some folks stepped up to create and support at
least one of the alternate backends.

Although I am a fan of the ZK solution, I have an old WIP patch for
creating an etcd driver. I would like to revive and maintain it, but I
would also need one more maintainer per the new rules for in tree
drivers...

Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Armando M.
On 4 November 2015 at 13:21, Shraddha Pandhe 
wrote:

> Hi Salvatore,
>
> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
> make IPAM much more powerful. Some other projects already do things like
> this.
>
> e.g. In Ironic, node has driver_info, which is JSON. it also has an
> 'extras' arbitrary JSON field. This allows us to put any information in
> there that we think is important for us.
>

I personally feel that relying on json blobs not only dangerously
affects portability, but it also causes us to bloat the business logic and
forces us to be less efficient when querying/filtering data.

Most importantly though, I feel it's like abdicating our responsibility to
do a good design job. Ultimately, we should be able to identify how to
model these extensions you're thinking of both conceptually and logically.

I couldn't care less if other projects use it, but we ended up using it in
Neutron too, and since I lost this battle time and time again, all I am
left with is this rant :)


>
>
> Hoping to get some positive feedback from API and DB lieutenants too.
>
>
> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando 
> wrote:
>
>> Arbitrary blobs are a powerful tools to circumvent limitations of an API,
>> as well as other constraints which might be imposed for versioning or
>> portability purposes.
>> The parameters that should end up in such blob are typically specific for
>> the target IPAM driver (to an extent they might even identify a specific
>> driver to use), and therefore an API consumer who knows what backend is
>> performing IPAM can surely leverage it.
>>
>> Therefore this would make a lot of sense, assuming API portability and
>> not leaking backend details are not a concern.
>> The Neutron team API & DB lieutenants will be able to provide more input
>> on this regard.
>>
>> In this case other approaches such as a vendor specific extension are not
>> a solution - assuming your granularity level is the allocation pool; indeed
>> allocation pools are not first-class neutron resources, and it is not
>> therefore possible to have APIs which associate vendor specific properties
>> to allocation pools.
>>
>> Salvatore
>>
>> On 4 November 2015 at 21:46, Shraddha Pandhe > > wrote:
>>
>>> Hi folks,
>>>
>>> I have a small question/suggestion about IPAM.
>>>
>>> With IPAM, we are allowing users to have their own IPAM drivers so that
>>> they can manage IP allocation. The problem is, the new ipam tables in the
>>> database have the same columns as the old tables. So, as a user, if I want
>>> to have my own logic for ip allocation, I can't actually get any help from
>>> the database. Whereas, if we had an arbitrary json blob in the ipam tables,
>>> I could put any useful information/tags there, that can help me for
>>> allocation.
>>>
>>> Does this make sense?
>>>
>>> e.g. If I want to create multiple allocation pools in a subnet and use
>>> them for different purposes, I would need some sort of tag for each
>>> allocation pool for identification. Right now, there is no scope for doing
>>> something like that.
>>>
>>> Any thoughts? If there are any other way to solve the problem, please
>>> let me know
>>>
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Gregory Haynes
Excerpts from Jeremy Stanley's message of 2015-11-04 21:31:58 +:
> On 2015-11-04 15:34:26 -0500 (-0500), Sean Dague wrote:
> > This seems like incorrect logic. We should test devstack can do all the
> > things on a devstack change, not on every neutron / trove / nova change.
> > I'm fine if we want to have a slow version of this for devstack testing
> > which starts from a massively stripped down state, but for the 99% of
> > patches that aren't devstack changes, this seems like overkill.
> 
> We are, however, trying to get away from preinstalling additional
> distro packages on our job workers (in favor of providing a warm
> local cache) and leaving it up to the individual projects/jobs to
> define the packages they'll need to be able to run. I'll save the
> lengthy list of whys, it's been in progress for a while and we're
> finally close to making it a reality.

++

One way this could be done in DIB is to either:
bind mount the wheelhouse in from the build host, build an additional
image we don't use which fills up the wheelhouse, then bind mount that
into the image we upload.

OR

make a chroot inside of our image build which creates the wheelhouse,
then either bind mount it out or copy it out into the image we upload.

Either way, it's pretty nasty and non-trivial. I think the path of least
resistance for now is probably making a wheel mirror.

Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Mark Voelker
On Nov 4, 2015, at 4:41 PM, Gregory Haynes  wrote:
> 
> Excerpts from Clint Byrum's message of 2015-11-04 21:17:15 +:
>> Excerpts from Joshua Harlow's message of 2015-11-04 12:57:53 -0800:
>>> Ed Leafe wrote:
 On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
> Here's a Devstack review for zookeeper in support of this initiative:
> 
> https://review.openstack.org/241040
> 
> Thanks,
> Dims
 
 I thought that the operators at that session made it very clear that they 
 would *not* run any Java applications, and that if OpenStack required a 
 Java app to run, they would no longer use it.
 
 I like the idea of using Zookeeper as the DLM, but I don't think it should 
 be set up as a default, even for devstack, given the vehement opposition 
 expressed.
 
>>> 
>>> What should be the default then?
>>> 
>>> As for 'vehement opposition' I didn't see that as being there, I saw a 
>>> small set of people say 'I don't want to run java or I can't run java', 
>>> some comments about requiring using oracles JVM (which isn't correct, 
>>> OpenJDK works for folks that I have asked in the zookeeper community and 
>>> else where) and the rest of the folks were ok with it...
>>> 
>>> If people want a alternate driver, propose it IMHO...
>>> 
>> 
>> The few operators who stated this position are very much appreciated
>> for standing up and making it clear. It has helped us not step into a
>> minefield with a native ZK driver!
>> 
>> Consul is the most popular second choice, and should work fine for the
>> use cases we identified. It will not be sufficient if we ever have
>> a use case where many agents must lock many resources, since Consul
>> does not offer a way to grant lock access in a fair manner (ZK does,
>> and we're not aware of any others that do actually). Using Consul or
>> etcd for this case would result in situations where lock waiters may
>> wait _forever_, and will likely wait longer than they should at times.
>> Hopefully we can simply avoid the need for this in OpenStack all together.
>> 
>> I do _not_ think we should wait for constrained operators to scream
>> at us about ZK to write a Consul driver. It's important enough that we
>> should start documenting all of the issues we expect to see with Consul
>> (it's not widely packaged, for instance) and writing a driver with its
>> own devstack plugin.
>> 
>> If there are Consul experts who did not make it to those sessions,
>> it would be greatly appreciated if you can spend some time on this.
>> 
>> What I don't want to see happen is we get into a deadlock where there's
>> a large portion of users who can't upgrade and no driver to support them.
>> So lets stay ahead of the problem, and get a set of drivers that works
>> for everybody!
>> 
> 
> One additional note - out of the three possible options I see for tooz
> drivers in production (zk, consul, etcd) we currently only have drivers
> for ZK. This means that unless new drivers are created, when we depend
> on tooz we will be requiring folks deploy zk.
> 
> It would be *awesome* if some folks stepped up to create and support at
> least one of the aternate backends.
> 
> Although I am a fan of the ZK solution, I have an old WIP patch for
> creating an etcd driver. I would like to revive and maintain it, but I
> would also need one more maintainer per the new rules for in tree
> drivers…

For those following along at home, said WIP etcd driver patch is here:

https://review.openstack.org/#/c/151463/

And said rules are at:

https://review.openstack.org/#/c/240645/

And FWIW, I too am personally fine with ZK as a default for devstack.

At Your Service,

Mark T. Voelker

> 
> Cheers,
> Greg
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] should we open gate for per sub-project stable-maint teams?

2015-11-04 Thread Armando M.
On 3 November 2015 at 08:49, Ihar Hrachyshka  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi all,
>
> currently we have a single neutron-wide stable-maint gerrit group that
> maintains all stable branches for all stadium subprojects. I believe
> that in lots of cases it would be better to have subproject members to
> run their own stable maintenance programs, leaving
> neutron-stable-maint folks to help them in non-obvious cases, and to
> periodically validate that project wide stable policies are still honored.
>
> I suggest we open gate to creating subproject stable-maint teams where
> current neutron-stable-maint members feel those subprojects are ready
> for that and can be trusted to apply stable branch policies in
> consistent way.
>
> Note that I don't suggest we grant those new permissions completely
> automatically. If neutron-stable-maint team does not feel safe to give
> out those permissions to some stable branches, their feeling should be
> respected.
>
> I believe it will be beneficial both for subprojects that would be
> able to iterate on backports in more efficient way; as well as for
> neutron-stable-maint members who are often busy with other stuff, and
> often times are not the best candidates to validate technical validity
> of backports in random stadium projects anyway. It would also be in
> line with general 'open by default' attitude we seem to embrace in
> Neutron.
>
> If we decide it's the way to go, there are alternatives on how we
> implement it. For example, we can grant those subproject teams all
> permissions to merge patches; or we can leave +W votes to
> neutron-stable-maint group.
>
> I vote for opening the gates, *and* for granting +W votes where
> projects showed reasonable quality of proposed backports before; and
> leaving +W to neutron-stable-maint in those rare cases where history
> showed backports could get more attention and safety considerations
> [with expectation that those subprojects will eventually own +W votes
> as well, once quality concerns are cleared].
>
> If we indeed decide to bootstrap subproject stable-maint teams, I
> volunteer to reach the candidate teams for them to decide on initial
> lists of stable-maint members, and walk them thru stable policies.
>
> Comments?
>

It was like this in the past, then it got changed, and now we're proposing
changing it back? Will we change it back again in 6 months time? Just
wondering :)

I suppose this has to do with the larger question of what belonging to the
stadium really means. I guess this is a concept that is still shaping up,
but if the concept is here to stay, I personally believe that being part of
the stadium means adhering to a common set of practices and principles
(like those largely implemented in OpenStack) where all projects feel and
behave equally. We have evidence where a few feel that 'stable' is not a
concept worth honoring, and for that reason I am wary of relaxing this.

I suppose it could be fine to have a probation period only to grant full
rights later on, but who is going to police that? That's a job in itself.
Once the permission is granted are we ever really gonna revoke it? And what
does this mean once the damage is done?

Perhaps an alternative could be to add a selected member of each subproject
to the neutron-stable-maint, with the proviso that they are only supposed
to +2 their backports (the same way Lieutenant is supposed to +2 their
area, and *only their area* of expertise), leaving the +2/+A to more
seasoned folks who have been doing this for a lot longer.

Would that strike a better middle ground?

Kyle, Russell and I have talked during the summit about clarifying the
meaning of the stadium. Stable backports falls into this category, and I am
glad you brought this up.

Cheers,
Armando


>
> Ihar
> -BEGIN PGP SIGNATURE-
>
> iQEcBAEBAgAGBQJWOOWkAAoJEC5aWaUY1u57sVIIALrnqvuj3t7c25DBHvywxBZV
> tCMlRY4cRCmFuVy0VXokM5DxGQ3VRwbJ4uWzuXbeaJxuVWYT2Kn8JJ+yRjdg7Kc4
> 5KXy3Xv0MdJnQgMMMgyjJxlTK4MgBKEsCzIRX/HLButxcXh3tqWAh0oc8WW3FKtm
> wWFZ/2Gmf4K9OjuGc5F3dvbhVeT23IvN+3VkobEpWxNUHHoALy31kz7ro2WMiGs7
> GHzatA2INWVbKfYo2QBnszGTp4XXaS5KFAO8+4H+HvPLxOODclevfKchOIe6jthH
> F1z4JcJNMmQrQDg1WSqAjspAlne1sqdVLX0efbvagJXb3Ju63eSLrvUjyCsZG4Q=
> =HE+y
> -END PGP SIGNATURE-
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack-barbican-authenticate-keystone-barbican-command

2015-11-04 Thread Dave McCowan (dmccowan)
Hi Arif--
Maybe using the OpenStack client would be easier for you.  It will take 
care of authenticating with Keystone, setting the HTTP headers, and providing 
reasonable defaults.
It looks like you have installed OpenStack with DevStack.  If this is the 
case:

$ cd ~/devstack
$ source openrc admin admin
$ openstack secret store -p "super secret data"
# An HREF is returned in the response when the secret has been stored
$ openstack secret get   -p
# Your secret is returned

   Drop by our IRC channel at #openstack-barbican on freenode if you have more 
questions, or if this suggestion doesn't work with your deployment.
--Dave
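
    If you do want to hit the raw API instead of the client, here is a rough
sketch (the credentials, domains and endpoints below are devstack-style
assumptions, not taken from your deployment) of getting a real Keystone token
with keystoneauth1 and then calling Barbican directly:

    # Sketch only: obtain a real Keystone token, then call Barbican.
    # The credentials, domains and endpoints below are assumptions.
    import requests
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://localhost:5000/v3',
                       username='admin', password='secretadmin',
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)
    token = sess.get_token()

    resp = requests.post(
        'http://localhost:9311/v1/secrets',
        headers={'X-Auth-Token': token,
                 'Content-Type': 'application/json'},
        json={'payload': 'my-secret-here',
              'payload_content_type': 'text/plain'})
    print(resp.status_code, resp.text)

Note that the admin_token value in keystone.conf is a bootstrap shared secret
for Keystone itself, not a token other services can validate, which is likely
why the 401 keeps coming back.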

From: OpenStack Mailing List Archive
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, October 26, 2015 at 1:46 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] openstack-barbican-authenticate-keystone-barbican-command

Link: https://openstack.nimeyo.com/62868/?show=63238#c63238
From: marif82


Hi Dave,

Thanks for your response.
I am a beginner in OpenStack, so I don't know how to get a Keystone token. I 
searched and found "admin_token = a682f596-76f3-11e3-b3b2-e716f9080d50" in the 
keystone.conf file. As you suggested, I have removed the projectid from the curl 
command and the X-Auth-Token in the curl command, and I am still 
getting the same error, please see below:

bash-4.2$ curl -X POST -H 'content-type:application/json' -H 
'X-Auth-Token:a682f59676f311e3b3b2e716f9080d50' -H 'X-Project-Id:12345' -d 
'{"payload": "my-secret-here", "payload_content_type": "text/plain"}' 
http://localhost:9311/v1/secrets -v
* About to connect() to localhost port 9311 (#0)
* Trying ::1...
* Connection refused
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9311 (#0)

POST /v1/secrets HTTP/1.1
User-Agent: curl/7.29.0
Host: localhost:9311
Accept: */*
content-type:application/json
X-Auth-Token:a682f59676f311e3b3b2e716f9080d50
X-Project-Id:12345
Content-Length: 67

* upload completely sent off: 67 out of 67 bytes
< HTTP/1.1 401 Unauthorized
< Content-Type: text/html; charset=UTF-8
< Content-Length: 23
< WWW-Authenticate: Keystone uri='http://localhost:35357'
< Connection: close
<
* Closing connection 0
Authentication required
bash-4.2$

Please help me.

Regards,
Arif
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Stepping Down from Neutron Core Responsibilities

2015-11-04 Thread Anita Kuno
On 11/04/2015 07:28 PM, Edgar Magana wrote:
> Dear Colleagues,
> 
> I have been part of this community from the very beginning, when in Santa 
> Clara, CA back in 2011 a bunch of us crazy people decided to work on this 
> networking project.
> Neutron has become a very unique piece of code and it requires an approval 
> team that will always be on top of everything. This is why I would like 
> to communicate to you that I decided to step down as Neutron Core.
> 
> These are not breaking news for many of you because I shared this thought 
> during the summit in Tokyo and now it is a commitment. I want to let you know 
> that I learnt a lot from you and I hope my comments and reviews never 
> offended you.
> 
> I will be around of course. I will continue my work on code reviews and 
> coordination on the Networking Guide.
> 
> Thank you all for your support and good feedback,
> 
> Edgar
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Thanks so much for all your hard work, Edgar. I really appreciate
working with you and your dedication to task. You showed up and did what
you said what you would show up and do and I really admire that quality.

I'm glad you are taking a break, you have worked hard and need to take a
bit of time for yourself. Core reviewer responsibilities are time consuming.

I look forward to continuing to work with you on the Networking Guide.
See you at the next thing.

Thanks Edgar,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing a new library: requestsexceptions

2015-11-04 Thread Monty Taylor

On 11/04/2015 06:30 PM, James E. Blair wrote:

Hi,

I'm pleased to announce the availability of a new micro-library named
requestsexceptions.  Now the task of convincing the requests library
not to fill up your filesystem with warnings about SSL requests has
never been easier!

Over in infra-land, we use the requests library a lot, whether it's
gertty talking to Gerrit or shade talking to OpenStack, and we love
using it.  It's a pleasure.  Except for two little things.

Requests is in the middle of a number of unfortunate standoffs.  It is
attempting to push the bar on SSL security by letting us all know when
a request is substandard in some way -- whether that is because a
certificate is missing a subject alternate name field, or the version
of Python in use is missing the latest SSL features.

This is great, but in many cases a user of requests is unable to
address any of the underlying causes of these warnings.  For example,
try as we might, public cloud providers are still using non-SAN
certificates.  And upgrading python on a system (or even the
underlying ssl module) is often out of the question.

Requests has a solution to this -- a simple recipe to disable specific
warnings when users know they are not necessary.

This is when we run into another standoff.

Requests is helpfully packaged in many GNU/Linux distributions.
However, the standard version of requests bundles the urllib3 library.
Some packagers have *unbundled* the urllib3 library from requests and
cause it to use the packaged version of urllib3.  This would be a
simple matter for the packagers and requests authors to argue about
over beer at PyCon, except if you want to disable a specific warning
rather than all warnings you need to import the specific urllib3
exceptions that requests uses.  The import path for those exceptions
will be different depending on whether urllib3 is bundled or not.

This means that in order to find a specific exception in order to
handle a requests warning, code like this must be used:

   try:
   from requests.packages.urllib3.exceptions import InsecurePlatformWarning
   except ImportError:
   try:
   from urllib3.exceptions import InsecurePlatformWarning
   except ImportError:
   InsecurePlatformWarning = None

The requestsexceptions library handles that for you so that you can
simply type:

   from requestsexceptions import InsecurePlatformWarning

We have just released requestsexceptions to pypi at version 1.1.1, and
proposed it to global requirements.  You can find it here:

   https://pypi.python.org/pypi/requestsexceptions
   https://git.openstack.org/cgit/openstack-infra/requestsexceptions


All hail the removal of non-actionable warnings from my logs!
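
A minimal usage sketch (the filterwarnings call is just the stdlib warnings
machinery; the None guard mirrors the fallback recipe quoted above):

    # Silence only the non-actionable warning, wherever urllib3 lives.
    import warnings

    from requestsexceptions import InsecurePlatformWarning
    import requests

    if InsecurePlatformWarning is not None:
        warnings.filterwarnings('ignore', category=InsecurePlatformWarning)

    requests.get('https://example.com')
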


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Boris Pavlovic
John,

> Our resources are not. We've also had specific requests to prevent
> > header bloat because it impacts the HTTP caching systems. Also, it's
> > pretty clear that headers are really not where you want to put volatile
> > information, which this is.
> Hmm, you do make a good point about caching.



Caching is useful only in cases where you would like to return the same
data many times.
In our case we are interested in the latest state of a resource, and that
kind of thing can't be cached.


> I think we should step back here and figure out what the actual problem
> > is, and what ways we might go about solving it. This has jumped directly
> > to a point in time optimized fast poll loop. It will shave a few cycles
> > off right now on our current implementation, but will still be orders of
> > magnitude more costly that consuming the Nova notifications if the only
> > thing that is cared about is task state transitions. And it's an API
> > change we have to live with largely *forever* so short term optimization
> > is not what we want to go for.
> I do agree with that.


The thing here is that we have to have an async API, because we have
long-running operations.
And basically there are 3 approaches to find out that an operation is done:
1) pub/sub
2) polling resource status
3) long polling requests

All approaches have pros and cons; however, the "actual" problem will stay
the same and you can't fix that.
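
As a strawman for approach 2), a rough sketch of what a client-side wait
loop could look like against the proposed HEAD endpoint -- the URL layout
and the OpenStack-Compute-API-Server-* headers are the ones suggested
earlier in this thread, not an existing Nova API:

    # Hypothetical polling sketch; the endpoint and headers are the
    # ones proposed in this thread, not something Nova exposes today.
    import time

    import requests

    def wait_for_vm_state(endpoint, token, server_id,
                          expected='ACTIVE', interval=2, timeout=300):
        url = '%s/servers/%s' % (endpoint, server_id)
        deadline = time.time() + timeout
        while time.time() < deadline:
            resp = requests.head(url, headers={'X-Auth-Token': token})
            state = resp.headers.get(
                'OpenStack-Compute-API-Server-VM-State')
            if state == expected:
                return state
            time.sleep(interval)
        raise RuntimeError('timed out waiting for %s' % server_id)
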


Best regards,
Boris Pavlovic

On Thu, Nov 5, 2015 at 12:18 AM, John Garbutt  wrote:

> On 4 November 2015 at 15:00, Sean Dague  wrote:
> > On 11/04/2015 09:49 AM, Jay Pipes wrote:
> >> On 11/04/2015 09:32 AM, Sean Dague wrote:
> >>> On 11/04/2015 09:00 AM, Jay Pipes wrote:
>  On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
> > Hi stackers,
> >
> > Usually such projects like Heat, Tempest, Rally, Scalar, and other
> tool
> > that works with OpenStack are working with resources (e.g. VM,
> Volumes,
> > Images, ..) in the next way:
> >
> >   >>> resource = api.resouce_do_some_stuff()
> >   >>> while api.resource_get(resource["uuid"]) != expected_status
> >   >>>sleep(a_bit)
> >
> > For each async operation they are polling and call many times
> > resource_get() which creates significant load on API and DB layers
> due
> > the nature of this request. (Usually getting full information about
> > resources produces SQL requests that contains multiple JOINs, e,g for
> > nova vm it's 6 joins).
> >
> > What if we add new API method that will just resturn resource status
> by
> > UUID? Or even just extend get request with the new argument that
> > returns
> > only status?
> 
>  +1
> 
>  All APIs should have an HTTP HEAD call on important resources for
>  retrieving quick status information for the resource.
> 
>  In fact, I proposed exactly this in my Compute "vNext" API proposal:
> 
>  http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head
> 
>  Swift's API supports HEAD for accounts:
> 
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta
> 
> 
> 
>  containers:
> 
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta
> 
> 
> 
>  and objects:
> 
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta
> 
> 
>  So, yeah, I agree.
>  -jay
> >>>
> >>> How would you expect this to work on "servers"? HEAD specifically
> >>> forbids returning a body, and, unlike swift, we don't return very much
> >>> information in our headers.
> >>
> >> I didn't propose doing it on a collection resource like "servers". Only
> >> on an entity resource like a single "server".
> >>
> >> HEAD /v2/{tenant}/servers/{uuid}
> >> HTTP/1.1 200 OK
> >> Content-Length: 1022
> >> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> >> Content-Type: application/json
> >> Date: Thu, 16 Jan 2014 21:13:19 GMT
> >> OpenStack-Compute-API-Server-VM-State: ACTIVE
> >> OpenStack-Compute-API-Server-Power-State: RUNNING
> >> OpenStack-Compute-API-Server-Task-State: NONE
> >
> > Right, but these headers aren't in the normal resource. They are
> > returned in the body only.
> >
> > The point of HEAD is give me the same thing as GET without the body,
> > because I only care about the headers. Swift resources are structured in
> > a way where this information is useful.
>
> I guess we would have to add this to GET requests, for consistency,
> which feels like duplication.
>
> > Our resources are not. We've also had specific requests to prevent
> > header bloat because it impacts the HTTP caching systems. Also, it's
> > pretty clear that headers are really not where you want to put volatile
> > information, which this is.
>
> Hmm, you do make a good point about caching.
>
> > I think we should step back here and figure out what the actual 

Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Boris Pavlovic
Sean,

This seems like a fundamental abuse of HTTP honestly. If you find
> yourself creating a ton of new headers, you are probably doing it wrong.


I totally agree on this. We shouldn't add a lot of HTTP headers. IMHO, why
not just return the status as a string in the body (in my case)?


> I think longer term we probably need a dedicated event service in
> OpenStack.


Unfortunately, this will work slower than the current solution with JOINs,
require more resources, and be very hard to use... (you'll need
to add one more service to OpenStack, and use one more client..)


Best regards,
Boris Pavlovic


On Thu, Nov 5, 2015 at 12:42 AM, Sean Dague  wrote:

> On 11/04/2015 10:13 AM, John Garbutt wrote:
> > On 4 November 2015 at 14:49, Jay Pipes  wrote:
> >> On 11/04/2015 09:32 AM, Sean Dague wrote:
> >>>
> >>> On 11/04/2015 09:00 AM, Jay Pipes wrote:
> 
>  On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
> >
> > Hi stackers,
> >
> > Usually such projects like Heat, Tempest, Rally, Scalar, and other
> tool
> > that works with OpenStack are working with resources (e.g. VM,
> Volumes,
> > Images, ..) in the next way:
> >
> >   >>> resource = api.resouce_do_some_stuff()
> >   >>> while api.resource_get(resource["uuid"]) != expected_status
> >   >>>sleep(a_bit)
> >
> > For each async operation they are polling and call many times
> > resource_get() which creates significant load on API and DB layers
> due
> > the nature of this request. (Usually getting full information about
> > resources produces SQL requests that contains multiple JOINs, e,g for
> > nova vm it's 6 joins).
> >
> > What if we add new API method that will just resturn resource status
> by
> > UUID? Or even just extend get request with the new argument that
> returns
> > only status?
> 
> 
>  +1
> 
>  All APIs should have an HTTP HEAD call on important resources for
>  retrieving quick status information for the resource.
> 
>  In fact, I proposed exactly this in my Compute "vNext" API proposal:
> 
>  http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head
> 
>  Swift's API supports HEAD for accounts:
> 
> 
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta
> 
> 
>  containers:
> 
> 
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta
> 
> 
>  and objects:
> 
> 
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta
> 
>  So, yeah, I agree.
>  -jay
> >>>
> >>>
> >>> How would you expect this to work on "servers"? HEAD specifically
> >>> forbids returning a body, and, unlike swift, we don't return very much
> >>> information in our headers.
> >>
> >>
> >> I didn't propose doing it on a collection resource like "servers". Only
> on
> >> an entity resource like a single "server".
> >>
> >> HEAD /v2/{tenant}/servers/{uuid}
> >> HTTP/1.1 200 OK
> >> Content-Length: 1022
> >> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> >> Content-Type: application/json
> >> Date: Thu, 16 Jan 2014 21:13:19 GMT
> >> OpenStack-Compute-API-Server-VM-State: ACTIVE
> >> OpenStack-Compute-API-Server-Power-State: RUNNING
> >> OpenStack-Compute-API-Server-Task-State: NONE
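
An editorial sketch of how a client could consume the HEAD variant proposed
above; the OpenStack-Compute-API-Server-* headers are taken from the example
and are a proposal, not something Nova returns today:

  import requests

  resp = requests.head(
      "https://compute.example.com/v2/tenant-id/servers/server-uuid",
      headers={"X-Auth-Token": "..."})
  # Header names follow the proposal above; they do not exist in Nova today.
  vm_state = resp.headers.get("OpenStack-Compute-API-Server-VM-State")
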
> >
> > For polling, that sounds quite efficient and handy.
> >
> > For "servers" we could do this (I think there was a spec up that wanted
> this):
> >
> > HEAD /v2/{tenant}/servers
> > HTTP/1.1 200 OK
> > Content-Length: 1022
> > Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> > Content-Type: application/json
> > Date: Thu, 16 Jan 2014 21:13:19 GMT
> > OpenStack-Compute-API-Server-Count: 13
>
> This seems like a fundamental abuse of HTTP honestly. If you find
> yourself creating a ton of new headers, you are probably doing it wrong.
>
> I do think the near term work around is to actually use Searchlight.
> They're monitoring the notifications bus for nova, and refreshing
> resources when they see a notification which might have changed it. It
> still means that Searchlight is hitting our API more than ideal, but at
> least only one service is doing so, and if the rest hit that instead
> they'll get the resource without any db hits (it's all through an
> elastic search cluster).
>
> I think longer term we probably need a dedicated event service in
> OpenStack. A few of us actually had an informal conversation about this
> during the Nova notifications session to figure out if there was a way
> to optimize the Searchlight path. Nearly everyone wants websockets,
> which is good. The problem is, that means you've got to anticipate
> 10,000+ open websockets as soon as we expose this. Which means the stack
> to deliver that sanely isn't just a bit of python code, it's also the
> highly optimized server underneath.
>
> So, I feel like with Searchlight we've got a work 

[openstack-dev] [neutron] Stepping Down from Neutron Core Responsibilities

2015-11-04 Thread Edgar Magana
Dear Colleagues,

I have been part of this community from the very beginning, when back in 2011, in
Santa Clara, CA, a bunch of us crazy people decided to work on this networking
project.
Neutron has become a very unique piece of code, and it requires an approval team
that will always be on top of everything. This is why I would like to let you know
that I have decided to step down as a Neutron core.

This is not breaking news for many of you, because I shared this thought during
the summit in Tokyo; now it is a commitment. I want to let you know that I learnt
a lot from you, and I hope my comments and reviews never offended you.

I will be around of course. I will continue my work on code reviews and 
coordination on the Networking Guide.

Thank you all for your support and good feedback,

Edgar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Joshua Harlow

Shraddha Pandhe wrote:



On Wed, Nov 4, 2015 at 3:12 PM, Kevin Benton > wrote:

>If we add our own database for internal stuff, we go back to the
same problem of allowing bad design.

I'm not sure I understand what you are saying here. A JSON blob that
only one driver knows how to interpret is worse than a vendor table.

I am only talking about the IPAM tables here. The reference
implementation doesn't need to play with JSON blob at all. Rather I
would say, it shouldn't. It can be left up to the vendors/users to
manage that blob responsibly. I can create my own database and point my
IPAM module to that, but then IPAM tables are practically useless for
me. The only reason for suggesting the blob is flexibility, which is the
main reason for pluggability of IPAM.

They both are specific to one driver but at least with a vendor
table you can have DB migrations, integrity, column queries, etc.
Additionally, the vendor table with extra features exposed via an
API extension makes it more clear to the API caller what is vendor
specific.


I agree that thats a huge advantage of having a db. But sometimes, it
may not be absolutely necessary to have an extra DB.

e.g. For multiple gateways support, a separate database would probably
add more overhead than required. All I want is to be able to fetch those
IPs.

The user can take a responsible decision whether to use the blob or the
database depending on the requirement, if they have the flexibility.

Can you elaborate what you mean by bad design?

When we are working on internal features, we have to follow different
timelines. Having an arbitrary blob can sometimes make us use that by
default, especially under pressing deadlines, instead of consulting with
broader audience and finding the right solution.


Just my 2 cents, and I know this since I'm on the team Shraddha is on,
but the above isn't really that great of an excuse for having/adding
arbitrary blob(s); thinking long-term and figuring out what is really
required (and perhaps that ends up being a structured format vs a JSON
blob) is usually the better way of dealing with these types of issues
(knowing full well that it is not always possible).


Everyone in every company has different timelines and that (IMHO) 
shouldn't be a 'way out' of consulting with a broader audience and 
finding the right solution...




On Nov 4, 2015 3:58 PM, "Shraddha Pandhe"
>
wrote:



On Wed, Nov 4, 2015 at 1:38 PM, Armando M. > wrote:



On 4 November 2015 at 13:21, Shraddha Pandhe
> wrote:

Hi Salvatore,

Thanks for the feedback. I agree with you that arbitrary
JSON blobs will make IPAM much more powerful. Some other
projects already do things like this.

e.g. In Ironic, node has driver_info, which is JSON. it
also has an 'extras' arbitrary JSON field. This allows
us to put any information in there that we think is
important for us.


I personally feel that relying on json blobs is not only
dangerously affecting portability, but it causes us to bloat
the business logic, and forcing us to be doing less
efficient when querying/filtering data


Most importantly though, I feel it's like abdicating our
responsibility to do a good design job.

How does it affect portability?

I don't think it forces us to do anything. 'Allows'? Maybe. But
that can be solved. Before making any design decisions for
internal feature-requests, we should first check with the
community if its a wider use-case. If it is a wider use-case, we
should collaborate and fix it upstream the right way.

I feel that, its impossible for the community to know all the
use-cases. Even if they knew, it would be impossible to
incorporate all of them. I filed a bug few months ago about
multiple gateway support for subnets.

https://bugs.launchpad.net/neutron/+bug/1464361
It was marked as 'Wont fix' because nobody else had this
use-case. Adding and maintaining a patch to support this is
super risky as it breaks the APIs. A JSON blob would have helped
me here.

I have another use-case. For multi-ip support for Ironic, we
want to divide the IP allocation ranges into two: Static IPs and
extra IPs. The static IPs are pre-configured IPs for Ironic
inventory whereas extra IPs are the multi-ips. Nobody else in
the community has this use-case.

If we add our own database for internal stuff, we 

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Kevin Benton
>If we add our own database for internal stuff, we go back to the same
problem of allowing bad design.

I'm not sure I understand what you are saying here. A JSON blob that only
one driver knows how to interpret is worse than a vendor table. They both
are specific to one driver but at least with a vendor table you can have DB
migrations, integrity, column queries, etc. Additionally, the vendor table
with extra features exposed via an API extension makes it more clear to the
API caller what is vendor specific.
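
To make the contrast concrete, a sketch of the two options; the table and
column names are made up for illustration, and this is not Neutron code:

  import sqlalchemy as sa
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  class VendorSubnetInfo(Base):
      # Vendor table: typed columns can be indexed, queried and migrated.
      __tablename__ = 'vendor_subnet_info'
      subnet_id = sa.Column(sa.String(36), primary_key=True)
      extra_gateway_ip = sa.Column(sa.String(64), nullable=True)

  class IpamSubnetBlob(Base):
      # JSON blob: flexible, but opaque to migrations and column queries.
      __tablename__ = 'ipam_subnet_blob'
      subnet_id = sa.Column(sa.String(36), primary_key=True)
      driver_data = sa.Column(sa.Text, nullable=True)  # arbitrary JSON string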

Can you elaborate what you mean by bad design?
On Nov 4, 2015 3:58 PM, "Shraddha Pandhe" 
wrote:

>
>
> On Wed, Nov 4, 2015 at 1:38 PM, Armando M.  wrote:
>
>>
>>
>> On 4 November 2015 at 13:21, Shraddha Pandhe > > wrote:
>>
>>> Hi Salvatore,
>>>
>>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>>> make IPAM much more powerful. Some other projects already do things like
>>> this.
>>>
>>> e.g. In Ironic, node has driver_info, which is JSON. it also has an
>>> 'extras' arbitrary JSON field. This allows us to put any information in
>>> there that we think is important for us.
>>>
>>
>> I personally feel that relying on json blobs is not only dangerously
>> affecting portability, but it causes us to bloat the business logic, and
>> forcing us to be doing less efficient when querying/filtering data
>>
>
>> Most importantly though, I feel it's like abdicating our responsibility
>> to do a good design job.
>>
>
>
> How does it affect portability?
>
> I don't think it forces us to do anything. 'Allows'? Maybe. But that can
> be solved. Before making any design decisions for internal
> feature-requests, we should first check with the community if its a wider
> use-case. If it is a wider use-case, we should collaborate and fix it
> upstream the right way.
>
> I feel that, its impossible for the community to know all the use-cases.
> Even if they knew, it would be impossible to incorporate all of them. I
> filed a bug few months ago about multiple gateway support for subnets.
>
> https://bugs.launchpad.net/neutron/+bug/1464361
>
> It was marked as 'Wont fix' because nobody else had this use-case. Adding
> and maintaining a patch to support this is super risky as it breaks the
> APIs. A JSON blob would have helped me here.
>
> I have another use-case. For multi-ip support for Ironic, we want to
> divide the IP allocation ranges into two: Static IPs and extra IPs. The
> static IPs are pre-configured IPs for Ironic inventory whereas extra IPs
> are the multi-ips. Nobody else in the community has this use-case.
>
> If we add our own database for internal stuff, we go back to the same
> problem of allowing bad design.
>
>
>
>> Ultimately, we should be able to identify how to model these extensions
>> you're thinking of both conceptually and logically.
>>
>
> I would agree with that. If theres an effort going on in this direction,
> ill be happy to join. Without this, people like us with unique use-cases
> are stuck with having patches.
>
>
>
>>
>> I couldn't care less if other projects use it, but we ended up using in
>> Neutron too, and since I lost this battle time and time again, all I am
>> left with is this rant :)
>>
>>
>>>
>>>
>>> Hoping to get some positive feedback from API and DB lieutenants too.
>>>
>>>
>>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando <
>>> salv.orla...@gmail.com> wrote:
>>>
 Arbitrary blobs are a powerful tools to circumvent limitations of an
 API, as well as other constraints which might be imposed for versioning or
 portability purposes.
 The parameters that should end up in such blob are typically specific
 for the target IPAM driver (to an extent they might even identify a
 specific driver to use), and therefore an API consumer who knows what
 backend is performing IPAM can surely leverage it.

 Therefore this would make a lot of sense, assuming API portability and
 not leaking backend details are not a concern.
 The Neutron team API & DB lieutenants will be able to provide more
 input on this regard.

 In this case other approaches such as a vendor specific extension are
 not a solution - assuming your granularity level is the allocation pool;
 indeed allocation pools are not first-class neutron resources, and it is
 not therefore possible to have APIs which associate vendor specific
 properties to allocation pools.

 Salvatore

 On 4 November 2015 at 21:46, Shraddha Pandhe <
 spandhe.openst...@gmail.com> wrote:

> Hi folks,
>
> I have a small question/suggestion about IPAM.
>
> With IPAM, we are allowing users to have their own IPAM drivers so
> that they can manage IP allocation. The problem is, the new ipam tables in
> the database have the same columns as the old tables. So, as a user, if I
> want to have my own logic for ip allocation, I 

[openstack-dev] [Designate] Mitaka Summit Notes

2015-11-04 Thread Hayes, Graham
Here are my rough notes I wrote up about the Mitaka design summit, and
what designate covered during the week.

Design Summit
=============

This was a much more relaxed summit for Designate. We had done a huge
amount of
work in Vancouver, and we were nailing down details and doing cross
project work.

We got a few major features discussed, and laid out our priorities for
the next cycle.

We decided on the following:

1. Nova / Neutron Integration
2. Pool Scheduler
3. Pool Configuration migration to database
4. IXFR (Incremental Zone Transfer)
5. ALIAS Record type (Allows for CNAME like records at the root of a DNS
Zone)
6. DNSSEC (this may drag on for a cycle or two)

Nova & Neutron Integration
--------------------------

This is progressing pretty well, and Miguel Lavalle has patches up for
this. He,
Kiall Mac Innes and Carl Baldwin demoed this in a session on the
Thursday. If
you are interested in the idea, it is definitely worth a watch `here`_

Pool Scheduler
--------------

A vital piece of the pools rearchitecture that needs to be finished out.
There is no great debate on what we need, and I have taken on the task of
finishing this out.

Pool Configuration migration to database
----------------------------------------

Our current configuration file format is quite complex, and moving it to an API
allows us to iterate on it much more quickly, while reducing the complexity of
the config file. I recently had to write an Ansible play to write out this file,
and it was not fun.

Kiall had a patch up, so we should be able to continue based on that.

IXFR
----

There was quite a lot of discussion on how this will be implemented,
both in the
work session, and the 1/2 day session on the Friday. Tim Simmons has
stepped up,
to continue the work on this.

ALIAS
-----

This is quite a sought-after feature - but it is quite complex to implement.
The DNS RFCs explicitly ban this behavior, so we have to work the solution
around them. Eric Larson has been doing quite a lot of work on this in Liberty,
and is going to continue in Mitaka.
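
To illustrate the problem: the first record below is what users ask for and
what the RFCs forbid at a zone apex; an ALIAS-style record is instead resolved
server-side and answered as synthesized address records. Names and addresses
are examples only.

  example.org.   IN  CNAME  lb.cloud.example.net.   ; forbidden at the apex
  example.org.   IN  A      203.0.113.10            ; an ALIAS answer, synthesized
                                                    ; from the target server-side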

DNSSEC
------

This is a feature that we have been looking at for a while, but we
started to plan
out our roadmap for it recently.

We (I) are allergic to storing private encryption keys in Designate, so we had a
good conversation with Barbican about implementing a signing endpoint that we
would post a hash to. This work is now on me to drive for Mitaka, so we can
consume it in N.

There are some raw notes in the `etherpad`_ and I expect we will soon be seeing
specs built out of them.

.. _etherpad:
https://etherpad.openstack.org/p/mitaka-designate-summit-roadmap
.. _here: https://www.youtube.com/watch?v=AZbiARM9FPM

Thanks for reading!

Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-04 Thread Shraddha Pandhe
On Wed, Nov 4, 2015 at 3:12 PM, Kevin Benton  wrote:

> >If we add our own database for internal stuff, we go back to the same
> problem of allowing bad design.
>
> I'm not sure I understand what you are saying here. A JSON blob that only
> one driver knows how to interpret is worse than a vendor table.
>
I am only talking about the IPAM tables here. The reference implementation
doesn't need to play with JSON blob at all. Rather I would say, it
shouldn't. It can be left up to the vendors/users to manage that blob
responsibly. I can create my own database and point my IPAM module to that,
but then IPAM tables are practically useless for me. The only reason for
suggesting the blob is flexibility, which is the main reason for
pluggability of IPAM.


> They both are specific to one driver but at least with a vendor table you
> can have DB migrations, integrity, column queries, etc. Additionally, the
> vendor table with extra features exposed via an API extension makes it more
> clear to the API caller what is vendor specific.
>

I agree that that's a huge advantage of having a DB. But sometimes it may
not be absolutely necessary to have an extra DB.

e.g. for multiple-gateway support, a separate database would probably add
more overhead than required. All I want is to be able to fetch those IPs.

The user can take a responsible decision whether to use the blob or the
database depending on the requirement, if they have the flexibility.

Can you elaborate what you mean by bad design?
>
When we are working on internal features, we have to follow different
timelines. Having an arbitrary blob can sometimes make us use that by
default, especially under pressing deadlines, instead of consulting with a
broader audience and finding the right solution.



> On Nov 4, 2015 3:58 PM, "Shraddha Pandhe" 
> wrote:
>
>>
>>
>> On Wed, Nov 4, 2015 at 1:38 PM, Armando M.  wrote:
>>
>>>
>>>
>>> On 4 November 2015 at 13:21, Shraddha Pandhe <
>>> spandhe.openst...@gmail.com> wrote:
>>>
 Hi Salvatore,

 Thanks for the feedback. I agree with you that arbitrary JSON blobs
 will make IPAM much more powerful. Some other projects already do things
 like this.

 e.g. In Ironic, node has driver_info, which is JSON. it also has an
 'extras' arbitrary JSON field. This allows us to put any information in
 there that we think is important for us.

>>>
>>> I personally feel that relying on json blobs is not only dangerously
>>> affecting portability, but it causes us to bloat the business logic, and
>>> forcing us to be doing less efficient when querying/filtering data
>>>
>>
>>> Most importantly though, I feel it's like abdicating our responsibility
>>> to do a good design job.
>>>
>>
>>
>> How does it affect portability?
>>
>> I don't think it forces us to do anything. 'Allows'? Maybe. But that can
>> be solved. Before making any design decisions for internal
>> feature-requests, we should first check with the community if its a wider
>> use-case. If it is a wider use-case, we should collaborate and fix it
>> upstream the right way.
>>
>> I feel that, its impossible for the community to know all the use-cases.
>> Even if they knew, it would be impossible to incorporate all of them. I
>> filed a bug few months ago about multiple gateway support for subnets.
>>
>> https://bugs.launchpad.net/neutron/+bug/1464361
>>
>> It was marked as 'Wont fix' because nobody else had this use-case. Adding
>> and maintaining a patch to support this is super risky as it breaks the
>> APIs. A JSON blob would have helped me here.
>>
>> I have another use-case. For multi-ip support for Ironic, we want to
>> divide the IP allocation ranges into two: Static IPs and extra IPs. The
>> static IPs are pre-configured IPs for Ironic inventory whereas extra IPs
>> are the multi-ips. Nobody else in the community has this use-case.
>>
>> If we add our own database for internal stuff, we go back to the same
>> problem of allowing bad design.
>>
>>
>>
>>> Ultimately, we should be able to identify how to model these extensions
>>> you're thinking of both conceptually and logically.
>>>
>>
>> I would agree with that. If theres an effort going on in this direction,
>> ill be happy to join. Without this, people like us with unique use-cases
>> are stuck with having patches.
>>
>>
>>
>>>
>>> I couldn't care less if other projects use it, but we ended up using in
>>> Neutron too, and since I lost this battle time and time again, all I am
>>> left with is this rant :)
>>>
>>>


 Hoping to get some positive feedback from API and DB lieutenants too.


 On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando <
 salv.orla...@gmail.com> wrote:

> Arbitrary blobs are a powerful tools to circumvent limitations of an
> API, as well as other constraints which might be imposed for versioning or
> portability purposes.
> The parameters that should 

Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Boris Pavlovic
Robert,

I don't have the exact numbers, but during real testing of real deployments
I saw the impact of polling resources; this is one of the reasons why we had
to add a quite big sleep() during polling in Rally, to reduce the amount of
GET requests and avoid DDoSing OpenStack.

In any case, it doesn't seem like a hard task to collect the numbers.
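
For reference, a minimal sketch of the polling pattern in question, with a
generic get_resource callable standing in for the client call; the sleep
interval is exactly the trade-off between API/DB load and reaction time:

  import time

  def wait_for_status(get_resource, uuid, expected, sleep=5.0, timeout=600):
      # Every iteration is a full GET against the API (and its JOIN-heavy
      # DB query), so a longer sleep trades reaction time for less load.
      deadline = time.time() + timeout
      while time.time() < deadline:
          if get_resource(uuid)["status"] == expected:
              return True
          time.sleep(sleep)
      return False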

Best regards,
Boris Pavlovic

On Thu, Nov 5, 2015 at 3:56 AM, Robert Collins 
wrote:

> On 5 November 2015 at 04:42, Sean Dague  wrote:
> > On 11/04/2015 10:13 AM, John Garbutt wrote:
>
> > I think longer term we probably need a dedicated event service in
> > OpenStack. A few of us actually had an informal conversation about this
> > during the Nova notifications session to figure out if there was a way
> > to optimize the Searchlight path. Nearly everyone wants websockets,
> > which is good. The problem is, that means you've got to anticipate
> > 10,000+ open websockets as soon as we expose this. Which means the stack
> > to deliver that sanely isn't just a bit of python code, it's also the
> > highly optimized server underneath.
>
> So any decent epoll implementation should let us hit that without a
> super optimised server - eventlet being in that category. I totally
> get that we're going to expect thundering herds, but websockets isn't
> new and the stacks we have - apache, eventlet - have been around long
> enough to adjust to the rather different scaling pattern.
>
> So - lets not panic, get a proof of concept up somewhere and then run
> an actual baseline test. If thats shockingly bad *then* lets panic.
>
> -Rob
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Gabriel Bezerra

On 04.11.2015 17:19, Jim Rollenhagen wrote:

On Wed, Nov 04, 2015 at 02:55:49PM -0500, Sean Dague wrote:
On 11/04/2015 02:42 PM, Jim Rollenhagen wrote:
> On Wed, Nov 04, 2015 at 04:08:18PM -0300, Gabriel Bezerra wrote:
>> On 04.11.2015 11:32, Jim Rollenhagen wrote:
>>> On Wed, Nov 04, 2015 at 08:44:36AM -0500, Jay Pipes wrote:
>>> On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:
 Hi,

 The change in https://review.openstack.org/237122 touches a feature from
 ironic that has not been released in any tag yet.

 At first, we from the team who has written the patch thought that, as it
 has not been part of any release, we could do backwards incompatible
 changes on that part of the code. As it turned out from discussing with
 the community, ironic commits to keeping the master branch backwards
 compatible and a deprecation process is needed in that case.

 That stated, the question at hand is: How long should this deprecation
 process last?

 This spec specifies the deprecation policy we should follow:
 
https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst


 As from its excerpt below, the minimum obsolescence period must be
 max(next_release, 3 months).

 """
 Based on that data, an obsolescence date will be set. At the very
 minimum the feature (or API, or configuration option) should be marked
 deprecated (and still be supported) in the next stable release branch,
 and for at least three months linear time. For example, a feature
 deprecated in November 2015 should still appear in the Mitaka release
 and stable/mitaka stable branch and cannot be removed before the
 beginning of the N development cycle in April 2016. A feature deprecated
 in March 2016 should still appear in the Mitaka release and
 stable/mitaka stable branch, and cannot be removed before June 2016.
 """

 This spec, however, only covers released and/or tagged code.

 tl;dr:

 How should we proceed regarding code/features/configs/APIs that have not
 even been tagged yet?

 Isn't waiting for the next OpenStack release in this case too long?
 Otherwise, we are going to have features/configs/APIs/etc. that are
 deprecated from their very first tag/release.

 How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
 months? max(next_tag, 3 months)?
>>>
>>> -1
>>>
>>> The reason the wording is that way is because lots of people deploy
>>> OpenStack services in a continuous deployment model, from the master
>>> source
>>> branches (sometimes minus X number of commits as these deployers run the
>>> code through their test platforms).
>>>
>>> Not everyone uses tagged releases, and OpenStack as a community has
>>> committed (pun intended) to serving these continuous deployment scenarios.
>>>
>>> Right, so I asked Gabriel to send this because it's an odd case, and I'd
>>> like to clear up the governance doc on this, since it doesn't seem to
>>> say much about code that was never released.
>>>
>>> The rule is a cycle boundary *and* at least 3 months. However, in this
>>> case, the code was never in a release at all, much less a stable
>>> release. So looking at the two types of deployers:
>>>
>>> 1) CD from trunk: 3 months is fine, we do that, done.
>>>
>>> 2) Deploying stable releases: if we only wait three months and not a
>>> cycle boundary, they'll never see it. If we do wait for a cycle
>>> boundary, we're pushing deprecated code to them for (seemingly to me) no
>>> benefit.
>>>
>>> So, it makes sense to me to not introduce the cycle boundary thing in
>>> this case. But there is value in keeping the rule simple, and if we want
>>> this one to pass a cycle boundary to optimize for that, I'm okay with
>>> that too. :)
>>>
>>> (Side note: there's actually a third type of deployer for Ironic; one
>>> that deploys intermediate releases. I think if we give them at least one
>>> release and three months, they're okay, so the general standard
>>> deprecation rule covers them.)
>>>
>>> // jim
>>
>> So, summarizing that:
>>
>> * untagged/master: 3 months
>>
>> * tagged/intermediate release: max(next tag/intermediate release, 3 months)
>>
>> * stable release: max(next release, 3 months)
>>
>> Is it correct?
>
> No, my proposal is that, but s/max/AND/.
>
> This also needs buyoff from other folks in the community, and an update
> to the document in the governance repo which requires TC approval.
>
> For now we must assume a cycle boundary and three months, and/or hold off on
> the patch until this is decided.

The AND version of this seems to respect the spirit of the original
intent. The 3 month window was designed to push back a little on last
minute deprecations for release, that we deleted the second master
landed. Which looked very different for stable release vs. CD consuming
folks.

The intermediate 

[openstack-dev] [nova][api]

2015-11-04 Thread Tony Breeds
Hi All,
Around the middle of October a spec [1] was uploaded to add pagination
support to the os-hypervisors API.  While I recognize the use case, it seemed
like adding another pagination implementation wasn't an awesome idea.

Today I see 3 more requests to add pagination to APIs [2].

Perhaps I'm overthinking it, but should we do something more strategic rather
than scattering "add pagination here"?
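
For reference, a sketch of the limit/marker convention that Nova's
already-paginated collections (e.g. servers) use and that these specs would
presumably extend; whether os-hypervisors adopts exactly these parameter
names is up to the specs:

  GET /v2.1/os-hypervisors?limit=100
  GET /v2.1/os-hypervisors?limit=100&marker=<last-id-from-previous-page>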

It looks to me like we have at least 3 parties interested in this.

Yours Tony.

[1] https://review.openstack.org/#/c/234038
[2] 
https://review.openstack.org/#/q/message:pagination+project:openstack/nova-specs+status:open,n,z


pgpQ4Af2zQVVu.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing a new library: requestsexceptions

2015-11-04 Thread James E. Blair
Hi,

I'm pleased to announce the availability of a new micro-library named
requestsexceptions.  Now the task of convincing the requests library
not to fill up your filesystem with warnings about SSL requests has
never been easier!

Over in infra-land, we use the requests library a lot, whether it's
gertty talking to Gerrit or shade talking to OpenStack, and we love
using it.  It's a pleasure.  Except for two little things.

Requests is in the middle of a number of unfortunate standoffs.  It is
attempting to push the bar on SSL security by letting us all know when
a request is substandard in some way -- whether that is because a
certificate is missing a subject alternate name field, or the version
of Python in use is missing the latest SSL features.

This is great, but in many cases a user of requests is unable to
address any of the underlying causes of these warnings.  For example,
try as we might, public cloud providers are still using non-SAN
certificates.  And upgrading python on a system (or even the
underlying ssl module) is often out of the question.

Requests has a solution to this -- a simple recipe to disable specific
warnings when users know they are not necessary.

This is when we run into another standoff.

Requests is helpfully packaged in many GNU/Linux distributions.
However, the standard version of requests bundles the urllib3 library.
Some packagers have *unbundled* the urllib3 library from requests and
cause it to use the packaged version of urllib3.  This would be a
simple matter for the packagers and requests authors to argue about
over beer at PyCon, except if you want to disable a specific warning
rather than all warnings you need to import the specific urllib3
exceptions that requests uses.  The import path for those exceptions
will be different depending on whether urllib3 is bundled or not.

This means that in order to find a specific exception with which to
handle a requests warning, code like this must be used:

  try:
      from requests.packages.urllib3.exceptions import InsecurePlatformWarning
  except ImportError:
      try:
          from urllib3.exceptions import InsecurePlatformWarning
      except ImportError:
          InsecurePlatformWarning = None

The requestsexceptions library handles that for you so that you can
simply type:

  from requestsexceptions import InsecurePlatformWarning
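
A minimal usage sketch: warnings.filterwarnings is the plain standard-library
filter the requests recipe relies on, and the None check covers platforms
where the warning class does not exist at all:

  import warnings
  from requestsexceptions import InsecurePlatformWarning

  if InsecurePlatformWarning is not None:
      warnings.filterwarnings('ignore', category=InsecurePlatformWarning)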
  
We have just released requestsexceptions to pypi at version 1.1.1, and
proposed it to global requirements.  You can find it here:

  https://pypi.python.org/pypi/requestsexceptions
  https://git.openstack.org/cgit/openstack-infra/requestsexceptions

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Robert Collins
On 5 November 2015 at 13:06, Boris Pavlovic  wrote:
> Robert,
>
> I don't have the exactly numbers, but during the real testing of real
> deployments I saw the impact of polling resource, this is one of the reason
> why we have to add quite big sleep() during polling in Rally to reduce
> amount of GET requests and avoid DDoS of OpenStack..
>
> In any case it doesn't seem like hard task to collect the numbers.

Please do!

But for clarity - in case the sub-thread wasn't clear - I was talking
about the numbers for a websocket based push thing, not polling.
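
To make that concrete, a rough proof-of-concept sketch of a push endpoint
using eventlet's websocket support; a real event service would subscribe to
the notification bus and forward matching events rather than sending a
canned message:

  import eventlet
  from eventlet import websocket, wsgi

  @websocket.WebSocketWSGI
  def handle(ws):
      # A real push service would forward notification-bus events here.
      ws.send('{"server": "server-uuid", "status": "ACTIVE"}')
      ws.wait()

  wsgi.server(eventlet.listen(('', 8080)), handle)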

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] Pagination in thre API

2015-11-04 Thread Tony Breeds
On Thu, Nov 05, 2015 at 01:09:36PM +1100, Tony Breeds wrote:
> Hi All,
> Around the middle of October a spec [1] was uploaded to add pagination
> support to the os-hypervisors API.  While I recognize the use case it seemed
> like adding another pagination implementation wasn't an awesome idea.
> 
> Today I see 3 more requests to add pagination to APIs [2]
> 
> Perhaps I'm over thinking it but should we do something more strategic rather
> than scattering "add pagination here".
> 
> It looks to me like we have at least 3 parties interested in this.
> 
> Yours Tony.
> 
> [1] https://review.openstack.org/#/c/234038
> [2] 
> https://review.openstack.org/#/q/message:pagination+project:openstack/nova-specs+status:open,n,z

Sorry about the send without complete subject.

Yours Tony.


pgplqNDkgelpr.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo_messaging] Regarding " WARNING [oslo_messaging.server] wait() should have been called after stop() as wait() ...

2015-11-04 Thread Nader Lahouti
Hi,

I'm seeing the below warning message continuously:

2015-11-04 21:09:38  WARNING [oslo_messaging.server] wait() should have
been called after stop() as wait() waits for existing messages to finish
processing, it has been 692.98 seconds and stop() still has not been called

How can I avoid this warning message? Does anything need to be changed when
using the notification API with the latest oslo_messaging?

Thanks,
Nader.
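
For reference, the warning means wait() is running without stop() having been
called first; a rough sketch of the shutdown order the notification listener
API expects, with MyEndpoint as a placeholder for your endpoint class:

  import oslo_messaging
  from oslo_config import cfg

  class MyEndpoint(object):
      # Placeholder endpoint; method name and signature follow the
      # notification listener convention.
      def info(self, ctxt, publisher_id, event_type, payload, metadata):
          pass

  transport = oslo_messaging.get_transport(cfg.CONF)
  targets = [oslo_messaging.Target(topic='notifications')]
  listener = oslo_messaging.get_notification_listener(
      transport, targets, [MyEndpoint()])
  listener.start()
  # ... run until shutdown is requested ...
  listener.stop()   # stop consuming new messages first
  listener.wait()   # then wait for in-flight messages to finish
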
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [senlin] Mitaka summit meetup - a summary

2015-11-04 Thread Qiming Teng
Hi,

Thanks for joining the senlin meetup last week at Tokyo summit. We know
some of you were not able to make it for various reasons. I'm trying to
summarize things we discussed during the meetup and some preliminary
conclusions we got. Please feel free to reply to this email or find the
team on #senlin channel if you have questions/suggestions.


Short Version
-

- Senlin will focus more on two things during Mitaka cycle: 1)
  stability regarding API and engine; 2) Heat resource type support.

- Senlin engine won't do "convergence" as suggested by some people,
  however the engine should be responsible to manage the lifecycles of
  the objects it creates on behalf of users.

- Team will revise the APIs according to the recent guidelines from
  api-wg and make the first version released as stable as possible.
  Before having a versioning scheme in place, we won't bump the API
  versions in ad-hoc ways.

- Senlin will NOT introduce complicated monitoring mechanisms into the
  engine, although we'd strive to support cluster/node status checking.
  We opt to use whatever external monitoring services and leave that
  an option for users.

- We will continue working with TOSCA team to polish policy definitions.

- We will document guidelines on how policy decisions are passed from
  one policy to another.

- We are interested in building baremetal clusters, but we will keep it
  in pipeline unless there are: 1) real requests, and 2) resources to
  get it done.

- As part of the API stabilization effort, we will generalize the
  concept of 'webhook' into 'receiver'.


Long Version (TL;DR)


* Stability v.s. Features

We had some feature requests like managing container clusters, doing
smart scheduling, running scripts on a cluster of servers, supporting
clusters of non-compute resources... etc. These are all good ideas.
However, Senlin is not aiming to become a service of everything. We have
to refrain from the temptation of too wide a scope. There are millions
of things we can do, but the first priority at this stage is about
stability. Making it usable and stable before adding fancy features,
this was the consensus we achieved during the meetup. We will stick to
that during Mitaka cycle.

* Heat Resource Type Support

Team had a discussion with heat team during a design summit slot. The
basic vision remained the same: let senlin do autoscaling and deprecate
heat autoscaling when senlin is stable. There are quite some details
to be figured out. The first thing we would do is to land senlin
cluster, node and profile resource types in Heat and build an
auto-scaling end-to-end solution comparable to the existing one. Then the
two teams will make decision on how to make the transition smooth for
both developers and users.

* Convergence or Not

There were suggestions to define 'desired' state and 'observed' state
for clusters and have senlin engine do the convergence. After some
closer examination of the use case, we decided not to do it. The
'desired' state of a node is obvious (i.e. ACTIVE). The 'desired' state
of a cluster is a little bit vague. It boils down to whether we would
allow 'partial success' when creating a cluster of 1,000 nodes. Failures
are unavoidable, thus something we have to live with. However, we are
very cautious about making decisions for users. Say we have 90% nodes
ACTIVE in a cluster, should we label the cluster an 'ERROR' state, or a
'WARNING' state, or just 'ACTIVE'? We tend to leave this decision to
users, who are smart people too. To avoid too much burden on users, we
will add some defaults that can be set by operators.

There are cases where senlin engine creates objects when enforcing a
policy, e.g. the load-balancing policy. The engine should do a good job
managing the status of those objects.

* API Design

Senlin already has an API design which is documented. Before doing a
version 1.0 release, we need to hammer on it further. Most of these
revisions would be related to guidelines from api-wg. For example, the
following changes are expected to land during Mitaka:

 - return 202 instead of 200 for asynchronous operations
 - better align with the proposed change to 'action' APIs
 - sorting keys and directions
 - returning 400 or 404 for resources not found
 - add location headers where appropriate

Another change to the current API will be about webhooks. We got
suggestions related to receiving notifications from channels other
than webhooks, e.g. message queues, external monitoring services. To avoid
disruptive changes to the APIs in future, we decided to generalize webhook
APIs to 'receivers'. This is an important work even if we only support
webhook as the only type of receivers. We don't want to see webhook APIs
provided and soon replaced/deprecated.

* Relying on External Monitoring

There used to be some interest in doing status polling on cluster
nodes so that the engine would know whether nodes are healthy or not.
This idea was rejected during 

Re: [openstack-dev] [neutron] Stepping Down from Neutron Core Responsibilities

2015-11-04 Thread Fawad Khaliq
On Thu, Nov 5, 2015 at 1:28 AM, Edgar Magana 
wrote:

> Dear Colleagues,
>
> I have been part of this community from the very beginning when in Santa
> Clara, CA back in 2011 a bunch of we crazy people decided to work on this
> networking project.
> Neutron has become is a very unique piece of code and it requires an
> approval team that will always be on the top of everything, this is why I
> would like to communicate you that I decided to step down as Neutron Core.
>
> These are not breaking news for many of you because I shared this thought
> during the summit in Tokyo and now it is a commitment. I want to let you
> know that I learnt a lot from you and I hope my comments and reviews never
> offended you.
>
> I will be around of course. I will continue my work on code reviews and
> coordination on the Networking Guide.
>
> Thank you all for your support and good feedback,
>

It has been great working with you since the early days. Have a nice,
well-deserved break. Enjoy!


>
> Edgar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Ian Wienand

On 11/05/2015 01:43 AM, Matthew Thode wrote:

python wheel repo could help maybe?


So I think we've (i.e. greghaynes) got that mostly in place, we just
got a bit side-tracked.

[1] adds mirror slaves, that build the wheels using pypi-mirror [2],
and then [3] adds the jobs.

This should give us wheels of everything in requirements

I think this could be enhanced by using bindep to install
build-requirements on the mirrors; in chat we tossed around some ideas
of making this a puppet provider, etc.

-i

[1] https://review.openstack.org/165240
[2] https://git.openstack.org/cgit/openstack-infra/pypi-mirror
[3] https://review.openstack.org/164927
[4] https://git.openstack.org/cgit/openstack-infra/bindep

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] network control plane (libkv role)

2015-11-04 Thread Vikas Choudhary
Hi all,

By network control plane, I specifically mean sharing network state
across Docker daemons sitting on different hosts/nova VMs in multi-host
networking.

libnetwork provides flexibility: vendors have a choice between the network
control plane being handled by libnetwork (libkv) or by the remote driver itself
out of band. A vendor can choose to "mute" libnetwork/libkv by advertising the
remote driver capability as "local".

"local" is our current default "capability" configuration in kuryr.

I have the following queries:
1. Does it mean Kuryr is taking responsibility for sharing network state
across Docker daemons? If yes, a network created on one Docker host should be
visible in "docker network ls" on other hosts. To achieve this, I guess the
Kuryr driver will need the help of some distributed data store like Consul,
so that the Kuryr driver on other hosts can create the network in Docker
there. Is this correct?

2. Why can't we set the default scope to "global" and let libkv do the
network state sync work?

Thoughts?

Regards
-Vikas Choudhary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] competing implementations

2015-11-04 Thread Vikas Choudhary
@Toni ,

In scenarios where two developers, with different implementation
approaches, are not able to reach any consensus over Gerrit or the ML, IMO
the other core members can hold a vote or a discussion, and then the PTL should
take a call on which one to accept and allow for implementation. The community
has to make a call even after the implementations land anyway, so why waste
effort unnecessarily on implementation?
WDYT?
On 4 Nov 2015 19:35, "Baohua Yang"  wrote:

> Sure, thanks!
> And suggest add the time and channel information at the kuryr wiki page.
>
>
> On Wed, Nov 4, 2015 at 9:45 PM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
>
>>
>>
>> On Wed, Nov 4, 2015 at 2:38 PM, Baohua Yang  wrote:
>>
>>> +1, Antoni!
>>> btw, is our weekly meeting still on meeting-4 channel?
>>> Not found it there yesterday.
>>>
>>
>> Yes, it is still on openstack-meeting-4, but this week we skipped it,
>> since some of us were
>> traveling and we already held the meeting on Friday. Next Monday it will
>> be held as usual
>> and the following week we start alternating (we have yet to get a room
>> for that one).
>>
>>>
>>> On Wed, Nov 4, 2015 at 9:27 PM, Antoni Segura Puimedon <
>>> toni+openstac...@midokura.com> wrote:
>>>
 Hi Kuryrs,

 Last Friday, as part of the contributors meetup, we discussed also code
 contribution etiquette. Like other OpenStack projects (Magnum comes to
 mind), the etiquette for what to do when there is disagreement in the way
 to code a blueprint or fix a bug is as follows:

 1.- Try to reach out so that the original implementation gets closer to
 a compromise by having the discussion in gerrit (and Mailing list if it
 requires a wider range of arguments).
 2.- If a compromise can't be reached, feel free to make a separate
 implementation arguing well its difference, virtues and comparative
 disadvantages. We trust the whole community of reviewers to be able to
 judge which is the best implementation and I expect that often the
 reviewers will steer both submissions closer than they originally were.
 3.- If both competing implementations get the necessary support, the
 core reviewers will take a specific decision on which to take based on
 technical merit. Important factors are:
 * conciseness,
 * simplicity,
 * loose coupling,
 * logging and error reporting,
 * test coverage,
 * extensibility (when an immediate pending and blueprinted feature
 can better be built on top of it).
 * documentation,
 * performance.

 It is important to remember that technical disagreement is a healthy
 thing and should be tackled with civility. If we follow the rules above, it
 will lead to a healthier project and a more friendly community in which
 everybody can propose their vision with equal standing. Of course,
 sometimes there may be a feeling of duplicity, but even in the case where
 one's solution it is not selected (and I can assure you I've been there and
 know how it can feel awkward) it usually still enriches the discussion and
 constitutes a contribution that improves the project.

 Regards,

 Toni


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Best wishes!
>>> Baohua
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best wishes!
> Baohua
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] competing implementations

2015-11-04 Thread Vikas Choudhary
If we look at it from the angle of the contributor whose approach would not be
better than the other competing one, it will be far easier for them to accept
the logic at the discussion stage rather than after weeks of tracking a review
request and addressing review comments.
On 5 Nov 2015 08:24, "Vikas Choudhary"  wrote:

> @Toni ,
>
> In scenarios where two developers, with different implementation
> approaches, are not able to reach any consensus over gerrit or ml, IMO,
> other core members can do a voting or discussion and then PTL should take a
> call which one to accept and allow for implementation. Anyways community
> has to make a call even after implementations, so why to unnecessary waste
> effort in implementation.
> WDYT?
> On 4 Nov 2015 19:35, "Baohua Yang"  wrote:
>
>> Sure, thanks!
>> And suggest add the time and channel information at the kuryr wiki page.
>>
>>
>> On Wed, Nov 4, 2015 at 9:45 PM, Antoni Segura Puimedon <
>> toni+openstac...@midokura.com> wrote:
>>
>>>
>>>
>>> On Wed, Nov 4, 2015 at 2:38 PM, Baohua Yang 
>>> wrote:
>>>
 +1, Antoni!
 btw, is our weekly meeting still on meeting-4 channel?
 Not found it there yesterday.

>>>
>>> Yes, it is still on openstack-meeting-4, but this week we skipped it,
>>> since some of us were
>>> traveling and we already held the meeting on Friday. Next Monday it will
>>> be held as usual
>>> and the following week we start alternating (we have yet to get a room
>>> for that one).
>>>

 On Wed, Nov 4, 2015 at 9:27 PM, Antoni Segura Puimedon <
 toni+openstac...@midokura.com> wrote:

> Hi Kuryrs,
>
> Last Friday, as part of the contributors meetup, we discussed also
> code contribution etiquette. Like other OpenStack projects (Magnum comes to
> mind), the etiquette for what to do when there is disagreement in the way
> to code a blueprint or fix a bug is as follows:
>
> 1.- Try to reach out so that the original implementation gets closer
> to a compromise by having the discussion in gerrit (and Mailing list if it
> requires a wider range of arguments).
> 2.- If a compromise can't be reached, feel free to make a separate
> implementation arguing well its difference, virtues and comparative
> disadvantages. We trust the whole community of reviewers to be able to
> judge which is the best implementation and I expect that often the
> reviewers will steer both submissions closer than they originally were.
> 3.- If both competing implementations get the necessary support, the
> core reviewers will take a specific decision on which to take based on
> technical merit. Important factors are:
> * conciseness,
> * simplicity,
> * loose coupling,
> * logging and error reporting,
> * test coverage,
> * extensibility (when an immediate pending and blueprinted feature
> can better be built on top of it).
> * documentation,
> * performance.
>
> It is important to remember that technical disagreement is a healthy
> thing and should be tackled with civility. If we follow the rules above, 
> it
> will lead to a healthier project and a more friendly community in which
> everybody can propose their vision with equal standing. Of course,
> sometimes there may be a feeling of duplicity, but even in the case where
> one's solution it is not selected (and I can assure you I've been there 
> and
> know how it can feel awkward) it usually still enriches the discussion and
> constitutes a contribution that improves the project.
>
> Regards,
>
> Toni
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 --
 Best wishes!
 Baohua


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best wishes!
>> Baohua
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 

[openstack-dev] [kuryr] external-network-connectivity

2015-11-04 Thread Vikas Choudhary
Hi All,

Would appreciate your views on
https://blueprints.launchpad.net/kuryr/+spec/external-network-connectivity .



-Vikas Choudhary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] mutihost networking with nova vm as docker host

2015-11-04 Thread Vikas Choudhary
Hi All,

I would appreciate inputs on following queries:
1. Are we assuming nova baremetal nodes to be the Docker hosts for now?

If Not:
 - Assuming a nova VM as the Docker host and OVS as the networking plugin:
This line is from the etherpad [1]: "Each driver would have an
executable that receives the name of the veth pair that has to be bound to
the overlay".
Query 1: As per the current OVS binding proposals by Feisky [2] and
Diga [3], the vif seems to be bound to br-int on the VM. I am unable to
understand how the overlay will work. AFAICT, neutron will configure br-tun
on the compute machine's OVS only. How will the overlay (br-tun)
configuration happen inside the VM?

 Query 2: Will we have double encapsulation (both at the VM and the
compute host)? Isn't it possible to bind the vif to the compute host's br-int?

 Query 3: I did not see subnet tags for the network plugin being
passed in any of the binding patches [2][3][4]. Don't we need that?


[1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
[2]  https://review.openstack.org/#/c/241558/
[3]  https://review.openstack.org/#/c/232948/1
[4]  https://review.openstack.org/#/c/227972/


-Vikas Choudhary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] [Bug#1497073]The return sample body of sample-list is different when use -m and not

2015-11-04 Thread Lin Juan IX Xia
Hi,

Here is an open bug: https://bugs.launchpad.net/ceilometer/+bug/1497073

Is it a bug or not?

For the command "ceilometer sample-list --meter cpu", it calls the "/v2/meter"
API and returns OldSample objects, whose return body is different from that of
"ceilometer sample-list --query 'meter=cpu'".

To fix this inconsistency, we can either deprecate the -m form of the command,
or fix it to return the same body as the query form of sample-list.

Best Regards,
Xia Linjuan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Planning and prioritizing session for Mitaka

2015-11-04 Thread Renat Akhmerov
Team,

We’ve done a great job at the summit discussing our hottest topics within the
project, and a lot of important decisions were made. I would like, though, to have
one more session on IRC to wrap this up by going over all the BPs/bugs we
created, in order to scope and prioritize them.

I’m proposing next Monday 9 Nov at 7.00 UTC. If you have other time options 
let’s communicate.

Thanks

Renat Akhmerov
@ Mirantis Inc.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Sean Dague
On 11/04/2015 02:42 PM, Jim Rollenhagen wrote:
> On Wed, Nov 04, 2015 at 04:08:18PM -0300, Gabriel Bezerra wrote:
>> On 04.11.2015 11:32, Jim Rollenhagen wrote:
>>> On Wed, Nov 04, 2015 at 08:44:36AM -0500, Jay Pipes wrote:
>>> On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:
 Hi,

 The change in https://review.openstack.org/237122 touches a feature from
 ironic that has not been released in any tag yet.

 At first, we from the team who has written the patch thought that, as it
 has not been part of any release, we could do backwards incompatible
 changes on that part of the code. As it turned out from discussing with
 the community, ironic commits to keeping the master branch backwards
 compatible and a deprecation process is needed in that case.

 That stated, the question at hand is: How long should this deprecation
 process last?

 This spec specifies the deprecation policy we should follow:
 https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst


 As from its excerpt below, the minimum obsolescence period must be
 max(next_release, 3 months).

 """
 Based on that data, an obsolescence date will be set. At the very
 minimum the feature (or API, or configuration option) should be marked
 deprecated (and still be supported) in the next stable release branch,
 and for at least three months linear time. For example, a feature
 deprecated in November 2015 should still appear in the Mitaka release
 and stable/mitaka stable branch and cannot be removed before the
 beginning of the N development cycle in April 2016. A feature deprecated
 in March 2016 should still appear in the Mitaka release and
 stable/mitaka stable branch, and cannot be removed before June 2016.
 """

 This spec, however, only covers released and/or tagged code.

 tl;dr:

 How should we proceed regarding code/features/configs/APIs that have not
 even been tagged yet?

 Isn't waiting for the next OpenStack release in this case too long?
 Otherwise, we are going to have features/configs/APIs/etc. that are
 deprecated from their very first tag/release.

 How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
 months? max(next_tag, 3 months)?
>>>
>>> -1
>>>
>>> The reason the wording is that way is because lots of people deploy
>>> OpenStack services in a continuous deployment model, from the master
>>> source
>>> branches (sometimes minus X number of commits as these deployers run the
>>> code through their test platforms).
>>>
>>> Not everyone uses tagged releases, and OpenStack as a community has
>>> committed (pun intended) to serving these continuous deployment scenarios.
>>>
>>> Right, so I asked Gabriel to send this because it's an odd case, and I'd
>>> like to clear up the governance doc on this, since it doesn't seem to
>>> say much about code that was never released.
>>>
>>> The rule is a cycle boundary *and* at least 3 months. However, in this
>>> case, the code was never in a release at all, much less a stable
>>> release. So looking at the two types of deployers:
>>>
>>> 1) CD from trunk: 3 months is fine, we do that, done.
>>>
>>> 2) Deploying stable releases: if we only wait three months and not a
>>> cycle boundary, they'll never see it. If we do wait for a cycle
>>> boundary, we're pushing deprecated code to them for (seemingly to me) no
>>> benefit.
>>>
>>> So, it makes sense to me to not introduce the cycle boundary thing in
>>> this case. But there is value in keeping the rule simple, and if we want
>>> this one to pass a cycle boundary to optimize for that, I'm okay with
>>> that too. :)
>>>
>>> (Side note: there's actually a third type of deployer for Ironic; one
>>> that deploys intermediate releases. I think if we give them at least one
>>> release and three months, they're okay, so the general standard
>>> deprecation rule covers them.)
>>>
>>> // jim
>>
>> So, summarizing that:
>>
>> * untagged/master: 3 months
>>
>> * tagged/intermediate release: max(next tag/intermediate release, 3 months)
>>
>> * stable release: max(next release, 3 months)
>>
>> Is it correct?
> 
> No, my proposal is that, but s/max/AND/.
> 
> This also needs buyoff from other folks in the community, and an update
> to the document in the governance repo which requires TC approval.
> 
> For now we must assume a cycle boundary and three months, and/or hold off on
> the patch until this is decided.

The AND version of this seems to respect the spirit of the original
intent. The 3-month window was designed to push back a little on
last-minute deprecations for a release that were then deleted the second
master landed, which looked very different for stable-release vs.
CD-consuming folks.

The intermediate release or no-release model just wasn't considered
initially.

-Sean

-- 
Sean Dague
http://dague.net

Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Ed Leafe
On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
> 
> Here's a Devstack review for zookeeper in support of this initiative:
> 
> https://review.openstack.org/241040
> 
> Thanks,
> Dims

I thought that the operators at that session made it very clear that they would 
*not* run any Java applications, and that if OpenStack required a Java app to 
run, they would no longer use it.

I like the idea of using Zookeeper as the DLM, but I don't think it should be 
set up as a default, even for devstack, given the vehement opposition expressed.


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All][Glance] Feedback on the proposed refactor to the image import process required

2015-11-04 Thread Brian Rosmaita
Thanks to everyone who has commented on the spec and/or participated in
the discussions at the summit last week.

I've uploaded a new patch set that describes my interpretation of the
image import workflow and API calls that were discussed.

Please take a look and leave comments.

--
cheers,
brian


On 10/20/15, 1:06 PM, "Brian Rosmaita" 
wrote:

>Hello,
>
>I've updated the image import spec [4] to incorporate the discussion thus
>far.
>
>The fishbowl session [5] is scheduled for Thursday, October 29,
>2:40pm-3:20pm.
>
>If you read through the spec and the current discussion on the review,
>you'll be in a good position to help us get this worked out during the
>summit.
>
>--
>cheers,
>brian 
>
>On 10/9/15, 3:39 AM, "Flavio Percoco"  wrote:
>
>>Greetings,
>>
>>There was recently a discussion[0] on the mailing list, started by Doug
>>Hellman, to discuss some issues related to Glance's API, the conflicts
>>between v1 and v2 and how this is making some pandas sad.
>>
>>The above served as a starting point for a discussion around the
>>current API, how it can be improved, etc. This discussions happened on
>>IRC[1], on  a call (sorry, I forgot to record this call, this is entirely
>>my fault) and on an etherpad[2]. Later on, Brian Rosmaita summarized
>>all this in a document[3], which became a spec[4]. :D
>>
>>The spec is the central point of discussion now and it contains a more
>>structured, more organized and more concrete proposal that needs to be
>>discussed. Nevertheless, I believe there's still lot to do there and I
>>also believe - I'm sure others do as well - this spec could use
>>opinions from a broader audience. Therefore, I'd really appreciate
>>your opinion on this thread.
>>
>>This will also be discussed at the summit[5] in a fishbowl session and
>>I hope to see you all there as well.
>>
>>I'd like to thank everyone that has participated in this discussion so
>>far and I hope to see others chime in as well.
>>
>>Flavio
>>
>>[0] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074360.html
>>[1] http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-22.log.html#t2015-09-22T14:31:00
>>[2] https://etherpad.openstack.org/p/glance-upload-mechanism-reloaded
>>[3] https://docs.google.com/document/d/1_mQZlUN_AtqhH6qh3ANz-m1zCOYkp1GyxndLtYMFRb0
>>[4] https://review.openstack.org/#/c/232371/
>>[5] http://mitakadesignsummit.sched.org/event/398b1f44af7a4ae3dde9cb47d4d52d9a
>>
>>-- 
>>@flaper87
>>Flavio Percoco
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Jim Rollenhagen
On Wed, Nov 04, 2015 at 02:55:49PM -0500, Sean Dague wrote:
> On 11/04/2015 02:42 PM, Jim Rollenhagen wrote:
> > On Wed, Nov 04, 2015 at 04:08:18PM -0300, Gabriel Bezerra wrote:
> >> Em 04.11.2015 11:32, Jim Rollenhagen escreveu:
> >>> On Wed, Nov 04, 2015 at 08:44:36AM -0500, Jay Pipes wrote:
> >>> On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:
>  Hi,
> 
>  The change in https://review.openstack.org/237122 touches a feature from
>  ironic that has not been released in any tag yet.
> 
>  At first, we from the team who has written the patch thought that, as it
>  has not been part of any release, we could do backwards incompatible
>  changes on that part of the code. As it turned out from discussing with
>  the community, ironic commits to keeping the master branch backwards
>  compatible and a deprecation process is needed in that case.
> 
>  That stated, the question at hand is: How long should this deprecation
>  process last?
> 
>  This spec specifies the deprecation policy we should follow:
>  https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst
> 
> 
>  As from its excerpt below, the minimum obsolescence period must be
>  max(next_release, 3 months).
> 
>  """
>  Based on that data, an obsolescence date will be set. At the very
>  minimum the feature (or API, or configuration option) should be marked
>  deprecated (and still be supported) in the next stable release branch,
>  and for at least three months linear time. For example, a feature
>  deprecated in November 2015 should still appear in the Mitaka release
>  and stable/mitaka stable branch and cannot be removed before the
>  beginning of the N development cycle in April 2016. A feature deprecated
>  in March 2016 should still appear in the Mitaka release and
>  stable/mitaka stable branch, and cannot be removed before June 2016.
>  """
> 
>  This spec, however, only covers released and/or tagged code.
> 
>  tl;dr:
> 
>  How should we proceed regarding code/features/configs/APIs that have not
>  even been tagged yet?
> 
>  Isn't waiting for the next OpenStack release in this case too long?
>  Otherwise, we are going to have features/configs/APIs/etc. that are
>  deprecated from their very first tag/release.
> 
>  How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
>  months? max(next_tag, 3 months)?
> >>>
> >>> -1
> >>>
> >>> The reason the wording is that way is because lots of people deploy
> >>> OpenStack services in a continuous deployment model, from the master
> >>> source
> >>> branches (sometimes minus X number of commits as these deployers run the
> >>> code through their test platforms).
> >>>
> >>> Not everyone uses tagged releases, and OpenStack as a community has
> >>> committed (pun intended) to serving these continuous deployment scenarios.
> >>>
> >>> Right, so I asked Gabriel to send this because it's an odd case, and I'd
> >>> like to clear up the governance doc on this, since it doesn't seem to
> >>> say much about code that was never released.
> >>>
> >>> The rule is a cycle boundary *and* at least 3 months. However, in this
> >>> case, the code was never in a release at all, much less a stable
> >>> release. So looking at the two types of deployers:
> >>>
> >>> 1) CD from trunk: 3 months is fine, we do that, done.
> >>>
> >>> 2) Deploying stable releases: if we only wait three months and not a
> >>> cycle boundary, they'll never see it. If we do wait for a cycle
> >>> boundary, we're pushing deprecated code to them for (seemingly to me) no
> >>> benefit.
> >>>
> >>> So, it makes sense to me to not introduce the cycle boundary thing in
> >>> this case. But there is value in keeping the rule simple, and if we want
> >>> this one to pass a cycle boundary to optimize for that, I'm okay with
> >>> that too. :)
> >>>
> >>> (Side note: there's actually a third type of deployer for Ironic; one
> >>> that deploys intermediate releases. I think if we give them at least one
> >>> release and three months, they're okay, so the general standard
> >>> deprecation rule covers them.)
> >>>
> >>> // jim
> >>
> >> So, summarizing that:
> >>
> >> * untagged/master: 3 months
> >>
> >> * tagged/intermediate release: max(next tag/intermediate release, 3 months)
> >>
> >> * stable release: max(next release, 3 months)
> >>
> >> Is it correct?
> > 
> > No, my proposal is that, but s/max/AND/.
> > 
> > This also needs buyoff from other folks in the community, and an update
> > to the document in the governance repo which requires TC approval.
> > 
> > For now we must assume a cycle boundary and three months, and/or hold off on
> > the patch until this is decided.
> 
> The AND version of this seems to respect the spirit of the original
> intent. The 3 month window was designed to push back a little 

Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Robert Collins
On 5 November 2015 at 07:37, Sean Dague  wrote:

> It only really will screw when upper-constraints.txt gets updated on a
> branch.

Bah, yes.

> I honestly think it's ok to not be perfect here. In the base case we'll
> speed up a good chunk, and we'll be slower (though not as slow as today)
> for a day after we bump upper-constraints for something expensive (like
> numpy). It seems like a reasonable trade off for not much complexity.

Oh, I clearly wasn't clear. I think your patch is a good thing. I'm
highlighting the corner case and proposing a down-the-track way to
address it.

And the reason I'm doing that is that Clark has said that we have lots
and lots of trouble updating images, so I'm expecting the corner case
to be fairly common :/.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Joshua Harlow
Along this line, things like the following are likely more changeable 
(and my guess is operators would want to change them when things start 
going badly), for example from a nova.conf that I have lying around...


[DEFAULT]

rabbit_hosts=...
rpc_response_timeout=...
default_notification_level=...
default_log_levels=...

[glance]

api_servers=...

(and more)

Some of those, I think, should have higher priority for being 
reconfigurable, but operators should be asked what they would find 
useful, and we should prioritize those.


Some of those really are service discovery 'types' (rabbit_hosts, 
glance/api_servers, keystone/api_servers) but fixing this is likely a 
longer term goal (see conversations in keystone).
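
For the log-level case in particular, the runtime change itself is tiny; here
is a stdlib-only sketch of what a service would do once it decides to honour a
new value for default_log_levels (purely illustrative, nothing oslo-specific):

import logging


def apply_log_levels(default_log_levels):
    # default_log_levels: iterable of 'logger=LEVEL' strings, as in nova.conf.
    for entry in default_log_levels:
        name, _, level = entry.partition('=')
        logging.getLogger(name.strip()).setLevel(level.strip().upper())


# Example: quiet amqp chatter, turn on debug for glanceclient.
apply_log_levels(['amqp=WARN', 'glanceclient=DEBUG'])

The hard part is the plumbing that triggers such a call safely at runtime, not
the call itself.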


Joshua Harlow wrote:

gord chung wrote:

we actually had a solution implemented in Ceilometer to handle this[1].

that said, based on the results of our survey[2], we found that most
operators *never* update configuration files after the initial setup and
if they did it was very rarely (monthly updates). the question related
to Ceilometer and its pipeline configuration file so the results might
be specific to Ceilometer. I think you should definitely query operators
before undertaking any work. the last thing you want to do is implement
a feature no one really needs/wants.

[1]
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html

[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html



So my general thought on the above is yes, definitely consult operators
to see if they would use this, although if a feature doesn't exist and
has never existed (say outside of ceilometer) then it's sort of hard to
get an accurate survey result from a group of people that have never had
the feature in the first place... Either way it should be done, just to
get more knowledge...

I know operators (at yahoo!) want to be able to dynamically change the
logging level, and that's not a monthly task, but more of an 'as-needed'
one that would be very helpful when things start going badly... So
perhaps the set of reloadable configuration should start out small and
not encompass all the things...



On 04/11/2015 10:00 AM, Marian Horban wrote:

Hi guys,

Unfortunately I wasn't at the Tokyo summit, but I know there was a
discussion about dynamic reloading of configuration.
Etherpad refs:
https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services,
https://etherpad.openstack.org/p/mitaka-oslo-security-logging

In this thread I want to discuss the agreements reached at the summit and
the implementation details.

Some notes taken from etherpad and my remarks:

1. "Adding "mutable" parameter for each option."
"Do we have an option mutable=True on CfgOpt? Yes"
-
As I understand it, the 'mutable' parameter must indicate whether the
service contains code responsible for reloading that option or not.
And this parameter should be one of the arguments of the cfg.Opt constructor.
Problems:
1. Library options.
The SSL options ca_file, cert_file and key_file taken from the oslo.service
library could be reloaded in nova-api, so these options should be mutable...
But for projects that don't need SSL support, reloading the SSL options
doesn't make sense, and for them the options should be non-mutable.
The problem is that oslo.service is a single library used by many different
projects in different ways. The same option could be mutable in one context
and non-mutable in another.
2. Support of config options on some platforms.
The "mutable" parameter could be different on different platforms. Some
options make sense only on specific platforms. If we mark such options as
mutable, it could be misleading on some platforms.
3. Dependency of options.
There are many 'workers' options (osapi_compute_workers, ec2_workers,
metadata_workers, workers). These options specify the number of workers for
the OpenStack API services.
If the value of the 'workers' option is greater than 1, a ProcessLauncher
instance is created; otherwise a ServiceLauncher instance is created.
When ProcessLauncher receives SIGHUP it reloads its own configuration,
gracefully terminates its children and respawns new children.
This mechanism allows many config options to be reloaded implicitly.
But if the value of the 'workers' option equals 1, a ServiceLauncher
instance is created. ServiceLauncher runs everything in a single process,
and in that case we don't have such implicit reloading.

I think that mutability of options is a complicated feature, and adding a
'mutable' parameter to the cfg.Opt constructor could just add mess.

2. "oslo.service catches SIGHUP and calls oslo.config"
-
From my point of view, every service should register a list of hooks to
reload config options. oslo.service should catch SIGHUP and call the
registered hooks one by one in a specified order.
Discussion of 

[openstack-dev] Cinder mid-cycle planning survey

2015-11-04 Thread Duncan Thomas
Hi Folks

The Cinder team is trying to plan our mid-cycle meetup again.

Could anybody interested in attending please fill out this quick survey
to help with planning?

https://www.surveymonkey.com/r/Q5FZX68

Closing date is 11th November.

Thanks
-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Jim Rollenhagen
On Wed, Nov 04, 2015 at 04:08:18PM -0300, Gabriel Bezerra wrote:
> Em 04.11.2015 11:32, Jim Rollenhagen escreveu:
> >On Wed, Nov 04, 2015 at 08:44:36AM -0500, Jay Pipes wrote:
> >On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:
> >>Hi,
> >>
> >>The change in https://review.openstack.org/237122 touches a feature from
> >>ironic that has not been released in any tag yet.
> >>
> >>At first, we from the team who has written the patch thought that, as it
> >>has not been part of any release, we could do backwards incompatible
> >>changes on that part of the code. As it turned out from discussing with
> >>the community, ironic commits to keeping the master branch backwards
> >>compatible and a deprecation process is needed in that case.
> >>
> >>That stated, the question at hand is: How long should this deprecation
> >>process last?
> >>
> >>This spec specifies the deprecation policy we should follow:
> >>https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst
> >>
> >>
> >>As from its excerpt below, the minimum obsolescence period must be
> >>max(next_release, 3 months).
> >>
> >>"""
> >>Based on that data, an obsolescence date will be set. At the very
> >>minimum the feature (or API, or configuration option) should be marked
> >>deprecated (and still be supported) in the next stable release branch,
> >>and for at least three months linear time. For example, a feature
> >>deprecated in November 2015 should still appear in the Mitaka release
> >>and stable/mitaka stable branch and cannot be removed before the
> >>beginning of the N development cycle in April 2016. A feature deprecated
> >>in March 2016 should still appear in the Mitaka release and
> >>stable/mitaka stable branch, and cannot be removed before June 2016.
> >>"""
> >>
> >>This spec, however, only covers released and/or tagged code.
> >>
> >>tl;dr:
> >>
> >>How should we proceed regarding code/features/configs/APIs that have not
> >>even been tagged yet?
> >>
> >>Isn't waiting for the next OpenStack release in this case too long?
> >>Otherwise, we are going to have features/configs/APIs/etc. that are
> >>deprecated from their very first tag/release.
> >>
> >>How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
> >>months? max(next_tag, 3 months)?
> >
> >-1
> >
> >The reason the wording is that way is because lots of people deploy
> >OpenStack services in a continuous deployment model, from the master
> >source
> >branches (sometimes minus X number of commits as these deployers run the
> >code through their test platforms).
> >
> >Not everyone uses tagged releases, and OpenStack as a community has
> >committed (pun intended) to serving these continuous deployment scenarios.
> >
> >Right, so I asked Gabriel to send this because it's an odd case, and I'd
> >like to clear up the governance doc on this, since it doesn't seem to
> >say much about code that was never released.
> >
> >The rule is a cycle boundary *and* at least 3 months. However, in this
> >case, the code was never in a release at all, much less a stable
> >release. So looking at the two types of deployers:
> >
> >1) CD from trunk: 3 months is fine, we do that, done.
> >
> >2) Deploying stable releases: if we only wait three months and not a
> >cycle boundary, they'll never see it. If we do wait for a cycle
> >boundary, we're pushing deprecated code to them for (seemingly to me) no
> >benefit.
> >
> >So, it makes sense to me to not introduce the cycle boundary thing in
> >this case. But there is value in keeping the rule simple, and if we want
> >this one to pass a cycle boundary to optimize for that, I'm okay with
> >that too. :)
> >
> >(Side note: there's actually a third type of deployer for Ironic; one
> >that deploys intermediate releases. I think if we give them at least one
> >release and three months, they're okay, so the general standard
> >deprecation rule covers them.)
> >
> >// jim
> 
> So, summarizing that:
> 
> * untagged/master: 3 months
> 
> * tagged/intermediate release: max(next tag/intermediate release, 3 months)
> 
> * stable release: max(next release, 3 months)
> 
> Is it correct?

No, my proposal is that, but s/max/AND/.

This also needs buyoff from other folks in the community, and an update
to the document in the governance repo which requires TC approval.

For now we must assume a cycle boundary and three months, and/or hold off on
the patch until this is decided.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Robert Collins
On 5 November 2015 at 04:42, Sean Dague  wrote:
> On 11/04/2015 10:13 AM, John Garbutt wrote:

> I think longer term we probably need a dedicated event service in
> OpenStack. A few of us actually had an informal conversation about this
> during the Nova notifications session to figure out if there was a way
> to optimize the Searchlight path. Nearly everyone wants websockets,
> which is good. The problem is, that means you've got to anticipate
> 10,000+ open websockets as soon as we expose this. Which means the stack
> to deliver that sanely isn't just a bit of python code, it's also the
> highly optimized server underneath.

So any decent epoll implementation should let us hit that without a
super optimised server - eventlet being in that category. I totally
get that we're going to expect thundering herds, but websockets isn't
new and the stacks we have - apache, eventlet - have been around long
enough to adjust to the rather different scaling pattern.

So - let's not panic, get a proof of concept up somewhere and then run
an actual baseline test. If that's shockingly bad *then* let's panic.
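
For anyone who wants that baseline number, the smallest eventlet websocket
echo server is only a few lines; the sketch below follows the stock eventlet
websocket example, and the port and echo behaviour are arbitrary choices:

import eventlet
from eventlet import websocket, wsgi


@websocket.WebSocketWSGI
def handle(ws):
    # Echo frames back until the client disconnects (wait() returns None).
    while True:
        message = ws.wait()
        if message is None:
            break
        ws.send(message)


if __name__ == '__main__':
    wsgi.server(eventlet.listen(('0.0.0.0', 7000)), handle)

Point a websocket load generator at it, open a few tens of thousands of
connections, and we have a baseline to argue about.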

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Gabriel Bezerra

Em 04.11.2015 11:32, Jim Rollenhagen escreveu:

On Wed, Nov 04, 2015 at 08:44:36AM -0500, Jay Pipes wrote:
On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:
>Hi,
>
>The change in https://review.openstack.org/237122 touches a feature from
>ironic that has not been released in any tag yet.
>
>At first, we from the team who has written the patch thought that, as it
>has not been part of any release, we could do backwards incompatible
>changes on that part of the code. As it turned out from discussing with
>the community, ironic commits to keeping the master branch backwards
>compatible and a deprecation process is needed in that case.
>
>That stated, the question at hand is: How long should this deprecation
>process last?
>
>This spec specifies the deprecation policy we should follow:
>https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst
>
>
>As from its excerpt below, the minimum obsolescence period must be
>max(next_release, 3 months).
>
>"""
>Based on that data, an obsolescence date will be set. At the very
>minimum the feature (or API, or configuration option) should be marked
>deprecated (and still be supported) in the next stable release branch,
>and for at least three months linear time. For example, a feature
>deprecated in November 2015 should still appear in the Mitaka release
>and stable/mitaka stable branch and cannot be removed before the
>beginning of the N development cycle in April 2016. A feature deprecated
>in March 2016 should still appear in the Mitaka release and
>stable/mitaka stable branch, and cannot be removed before June 2016.
>"""
>
>This spec, however, only covers released and/or tagged code.
>
>tl;dr:
>
>How should we proceed regarding code/features/configs/APIs that have not
>even been tagged yet?
>
>Isn't waiting for the next OpenStack release in this case too long?
>Otherwise, we are going to have features/configs/APIs/etc. that are
>deprecated from their very first tag/release.
>
>How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
>months? max(next_tag, 3 months)?

-1

The reason the wording is that way is because lots of people deploy
OpenStack services in a continuous deployment model, from the master
source branches (sometimes minus X number of commits as these deployers
run the code through their test platforms).

Not everyone uses tagged releases, and OpenStack as a community has
committed (pun intended) to serving these continuous deployment scenarios.

Right, so I asked Gabriel to send this because it's an odd case, and I'd
like to clear up the governance doc on this, since it doesn't seem to
say much about code that was never released.

The rule is a cycle boundary *and* at least 3 months. However, in this
case, the code was never in a release at all, much less a stable
release. So looking at the two types of deployers:

1) CD from trunk: 3 months is fine, we do that, done.

2) Deploying stable releases: if we only wait three months and not a
cycle boundary, they'll never see it. If we do wait for a cycle
boundary, we're pushing deprecated code to them for (seemingly to me) no
benefit.

So, it makes sense to me to not introduce the cycle boundary thing in
this case. But there is value in keeping the rule simple, and if we want
this one to pass a cycle boundary to optimize for that, I'm okay with
that too. :)

(Side note: there's actually a third type of deployer for Ironic; one
that deploys intermediate releases. I think if we give them at least one
release and three months, they're okay, so the general standard
deprecation rule covers them.)

// jim


So, summarizing that:

* untagged/master: 3 months

* tagged/intermediate release: max(next tag/intermediate release, 3 months)

* stable release: max(next release, 3 months)

Is it correct?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Joshua Harlow

gord chung wrote:

we actually had a solution implemented in Ceilometer to handle this[1].

that said, based on the results of our survey[2], we found that most
operators *never* update configuration files after the initial setup and
if they did it was very rarely (monthly updates). the question related
to Ceilometer and its pipeline configuration file so the results might
be specific to Ceilometer. I think you should definitely query operators
before undertaking any work. the last thing you want to do is implement
a feature no one really needs/wants.

[1]
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html


So my general thought on the above is yes, definitely consult operators 
to see if they would use this, although if a feature doesn't exist and 
has never existed (say outside of ceilometer) then it's sort of hard to 
get an accurate survey result from a group of people that have never had 
the feature in the first place... Either way it should be done, just to 
get more knowledge...


I know operators (at yahoo!) want to be able to dynamically change the 
logging level, and that's not a monthly task, but more of an 'as-needed' 
one that would be very helpful when things start going badly... So 
perhaps the set of reloadable configuration should start out small and 
not encompass all the things...




On 04/11/2015 10:00 AM, Marian Horban wrote:

Hi guys,

Unfortunately I wasn't at the Tokyo summit, but I know there was a
discussion about dynamic reloading of configuration.
Etherpad refs:
https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services,
https://etherpad.openstack.org/p/mitaka-oslo-security-logging

In this thread I want to discuss the agreements reached at the summit and
the implementation details.

Some notes taken from etherpad and my remarks:

1. "Adding "mutable" parameter for each option."
"Do we have an option mutable=True on CfgOpt? Yes"
-
As I understand it, the 'mutable' parameter must indicate whether the
service contains code responsible for reloading that option or not.
And this parameter should be one of the arguments of the cfg.Opt constructor.
Problems:
1. Library options.
The SSL options ca_file, cert_file and key_file taken from the oslo.service
library could be reloaded in nova-api, so these options should be mutable...
But for projects that don't need SSL support, reloading the SSL options
doesn't make sense, and for them the options should be non-mutable.
The problem is that oslo.service is a single library used by many different
projects in different ways. The same option could be mutable in one context
and non-mutable in another.
2. Support of config options on some platforms.
The "mutable" parameter could be different on different platforms. Some
options make sense only on specific platforms. If we mark such options as
mutable, it could be misleading on some platforms.
3. Dependency of options.
There are many 'workers' options (osapi_compute_workers, ec2_workers,
metadata_workers, workers). These options specify the number of workers for
the OpenStack API services.
If the value of the 'workers' option is greater than 1, a ProcessLauncher
instance is created; otherwise a ServiceLauncher instance is created.
When ProcessLauncher receives SIGHUP it reloads its own configuration,
gracefully terminates its children and respawns new children.
This mechanism allows many config options to be reloaded implicitly.
But if the value of the 'workers' option equals 1, a ServiceLauncher
instance is created. ServiceLauncher runs everything in a single process,
and in that case we don't have such implicit reloading.

I think that mutability of options is a complicated feature, and adding a
'mutable' parameter to the cfg.Opt constructor could just add mess.
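
For illustration, this is roughly what the flag would look like from a
service's point of view, assuming the 'mutable' kwarg lands on cfg.Opt more or
less as discussed in the etherpad (the exact name and semantics are
assumptions here):

from oslo_config import cfg

OPTS = [
    # Marked mutable: the service promises to re-read this on reload.
    cfg.StrOpt('default_log_level',
               default='INFO',
               mutable=True,   # assumed kwarg, per the proposal above
               help='Log level that may be changed at runtime.'),
    # Not mutable: changing this requires a full restart.
    cfg.IntOpt('workers',
               default=1,
               help='Number of API workers (restart required).'),
]

CONF = cfg.CONF
CONF.register_opts(OPTS)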

2. "oslo.service catches SIGHUP and calls oslo.config"
-
From my point of view, every service should register a list of hooks to
reload config options. oslo.service should catch SIGHUP and call the
registered hooks one by one in a specified order.
Discussion of such implementation was started in ML:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074558.html.
Raw reviews:
https://review.openstack.org/#/c/228892/,
https://review.openstack.org/#/c/223668/.
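
To make that concrete, a minimal stdlib-only sketch of ordered reload hooks
driven by SIGHUP (illustrative only, not actual oslo.service code):

import logging
import signal

LOG = logging.getLogger(__name__)
_reload_hooks = []  # list of (order, callable)


def register_reload_hook(hook, order=0):
    # Register a callable to run when the process receives SIGHUP.
    _reload_hooks.append((order, hook))


def _handle_sighup(signum, frame):
    # Call hooks in ascending order; keep going even if one of them fails.
    for _order, hook in sorted(_reload_hooks, key=lambda item: item[0]):
        try:
            hook()
        except Exception:
            LOG.exception("reload hook %r failed", hook)


signal.signal(signal.SIGHUP, _handle_sighup)

# Example hooks (commented, for illustration): re-read config files, then
# re-apply the logging level from the re-read configuration.
# register_reload_hook(lambda: CONF.reload_config_files(), order=0)
# register_reload_hook(lambda: logging.getLogger().setLevel('DEBUG'), order=1)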

3. "oslo.config is responsible to log changes which were ignored on
SIGHUP"
-
Some config options can be changed via the API (for example quotas);
that's why oslo.config doesn't know the actual configuration of the
service and cannot log configuration changes.

Regards, Marian Horban


__
OpenStack Development Mailing List (not for usage questions)

[openstack-dev] [Fuel] Default PostgreSQL server encoding is 'ascii'

2015-11-04 Thread Artem Roma
Hi, folks!

Recently I've been working on this bug [1] and have found that the default
encoding of the database server used by the Fuel infrastructure components
(Nailgun, OSTF, etc.) is ASCII. At least this is true for environments set up
via the VirtualBox scripts. This situation may cause (and, per the bug, already
does cause) hard-to-diagnose problems when dealing with non-ASCII string data
supplied by users, such as names for nodes, clusters, etc. Nailgun encodes
such data in UTF-8 before sending it to the database, so misinterpretation
while saving it is practically guaranteed.

I wonder whether we have this situation in all Fuel environments or only in
those set up by the VirtualBox scripts, because to me it looks like a pretty
serious flaw in our infrastructure. It would be great to have some comments
from people more competent in the relevant areas.

[1] https://bugs.launchpad.net/fuel/+bug/1472275

-- 
Regards!)
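
For anyone who wants to check their own environment, here is a quick sketch
with psycopg2; connection parameters and the database name are placeholders:

import psycopg2

# Placeholders: adjust host/credentials for your environment.
conn = psycopg2.connect(host='localhost', dbname='nailgun',
                        user='nailgun', password='secret')
cur = conn.cursor()
cur.execute('SHOW server_encoding')
print('server encoding:', cur.fetchone()[0])    # want UTF8, not SQL_ASCII
cur.execute("SELECT pg_encoding_to_char(encoding) "
            "FROM pg_database WHERE datname = current_database()")
print('database encoding:', cur.fetchone()[0])
conn.close()
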
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Infra] "gate-neutron-lbaasv1-dsvm-api" blocking patch merge

2015-11-04 Thread Mohan Kumar
Hi,

Jenkins is blocking the patch merge due to "gate-neutron-lbaasv1-dsvm-api"
failures which are unrelated to the patch-set changes.

Patch: https://review.openstack.org/#/c/237896/

Please help to resolve this issue.

Regards.,
Mohankumar.N
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] Visibility consistency for packages and images

2015-11-04 Thread Olivier Lemasle
Hi all,

Ekaterina Chernova suggested last week to discuss the matter of
visibility consistency for murano packages and glance images,
following my bug report on that subject [1].

The general idea is to make sure that if a murano package is public,
it should be really available to all projects, which means that:
- if it depends on other murano packages, these packages must be public,
- if it depends on glance images, these images must be public.

In fact, I created this bug report after Alexander Tivelkov's
suggestion on a review request [2] I did to fix a related bug [3]. In
this other bug report, I focused on image visibility during the
initial import of a package, because dependent murano packages are
already imported with the same visibility. It seemed quite confusing to
me that packages are made public while the images stay private. So
I did a fix in murano-dashboard, which is already merged [4], and
another one for python-muranoclient, still in review ([2]).

What are your thoughts on this subject? Do we need to address the
general dependency issue first? Is this a murano, glance or glare
matter?

Do we still need to do something specific for the initial import
(currently, dependency resolution for packages and images is done both
in murano-dashboard and in python-muranoclient)?

Thank you for your inputs,

[1] https://bugs.launchpad.net/murano/+bug/1509208
[2] https://review.openstack.org/#/c/236834/
[3] https://bugs.launchpad.net/murano/+bug/1507139
[4] https://review.openstack.org/#/c/236830/

-- 
Olivier Lemasle
Software Engineer
Apalia™
Mobile: +33-611-69-12-11
http://www.apalia.net
olivier.lema...@apalia.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Logging - filling up my tiny SSDs

2015-11-04 Thread Sean Dague
On 11/02/2015 10:36 AM, Sean M. Collins wrote:
> On Sun, Nov 01, 2015 at 10:12:10PM EST, Davanum Srinivas wrote:
>> Sean,
>>
>> I typically switch off screen and am able to redirect logs to a specified
>> directory. Does this help?
>>
>> USE_SCREEN=False
>> LOGDIR=/opt/stack/logs/
> 
> It's not that I want to disable screen. I want screen to run, and not
> log the output to files, since I have a tiny 16GB ssd card on these NUCs
> and it fills it up if I leave it running for a week or so. 

If you write a patch, I think it's fine to include, however it's a
pretty narrow edge case: super small disks (I didn't even realize they made
SSDs that small, I thought 120 GB was about the floor), and running devstack
for long periods without a rebuild.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [networking-powervm] Please createnetworking-powervm on PyPI

2015-11-04 Thread Andrew Thorstensen
Hi Kyle,

My team owns the networking-powervm project.  When we moved from the 
StackForge to OpenStack namespace we changed the name from neutron-powervm 
to networking-powervm.  There is no reason for the PyPI project to have a 
different name and we were planning to publish an update shortly with the 
networking-powervm name.

We were planning to do this sometime next week.  Do we need it done 
sooner?


Thanks!

Drew Thorstensen
Power Systems / Cloud Software



From:   Kyle Mestery 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   11/03/2015 10:09 PM
Subject:[openstack-dev] [neutron] [networking-powervm] Please 
create  networking-powervm on PyPI



I'm reaching out to whoever owns the networking-powervm project [1]. I 
have a review out [2] which updates the PyPI publishing jobs so we can 
push releases for networking-powervm. However, in looking at PyPI, I don't 
see a networking-powervm project, but instead a neutron-powervm project. 
Is there a reason for the PyPI project to have a different name? I believe 
this will not allow us to push releases, as the name of the projects need 
to match. Further, the project creation guide recommends naming them the 
same [4].

Can someone from the PowerVM team look at registering networking-powervm 
on PyPI and correcting this please?

Thanks!
Kyle

[1] https://launchpad.net/neutron-powervm
[2] https://review.openstack.org/#/c/233466/
[3] https://pypi.python.org/pypi/neutron-powervm/0.1.0
[4] http://docs.openstack.org/infra/manual/creators.html#pypi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Handling lots of GET query string parameters?

2015-11-04 Thread Salvatore Orlando
Regarding Jay's proposal, this would be tantamount to defining an API
action for retrieving instances, something currently being discussed here
[1].
The only comment I have is that I am not entirely sure whether using the
POST verb for operations which do not alter the server-side representation
of any object at all is in accordance with RFC 7231.
A search API like the one pointed out by Julien is interesting; at first
glance I'm not able to comment on its RESTfulness - it definitely has
plenty of use cases and enables users to run complex queries; one possible
downside is that it increases the complexity of simple queries.

For the purpose of the Nova spec I think it might be ok to limit the
functionality to a "small number of instance ids" as expressed in the spec.
On the other hand, how crazy would it be to limit the number of bytes in
the URL by allowing a contracted form of instance UUIDs to be specified, in
a way similar to abbreviated git commit hashes?

[1] https://review.openstack.org/#/c/234994/
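
For what it's worth, the shape of the proposed call would be something like
the following; the /servers/search path and the JSON filter body are purely
hypothetical, nothing like this exists in the Nova API today:

import requests

NOVA = 'http://nova-api:8774/v2.1/PROJECT_ID'   # placeholder endpoint
HEADERS = {'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'}

# A large filter set that would be unwieldy as a GET query string.
filters = {
    'uuid': ['4b5c1e8a-0000-0000-0000-000000000001',
             '9d2f7c31-0000-0000-0000-000000000002'],
    'status': 'ACTIVE',
    'limit': 100,
}

resp = requests.post(NOVA + '/servers/search', json=filters, headers=HEADERS)
resp.raise_for_status()
for server in resp.json().get('servers', []):
    print(server['id'], server.get('status'))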

On 4 November 2015 at 13:17, Sean Dague  wrote:

> On 11/03/2015 05:45 AM, Julien Danjou wrote:
> > On Tue, Nov 03 2015, Jay Pipes wrote:
> >
> >> My suggestion was to add a new POST /servers/search URI resource that
> can take
> >> a request body containing large numbers of filter arguments, encoded in
> a JSON
> >> object.
> >>
> >> API working group, what thoughts do you have about this? Please add your
> >> comments to the Gerrit spec patch if you have time.
> >
> > FWIW, we already have an extensive support for that in both Ceilometer
> > and Gnocchi. It looks like a small JSON query DSL that we're able to
> > "compile" down to SQL Alchemy filters.
> >
> > A few examples are:
> >
> http://docs.openstack.org/developer/gnocchi/rest.html#searching-for-resources
> >
> > I've planed for a long time to move this code to a library, so if Nova's
> > interested, I can try to move that forward eagerly.
>
> I guess I wonder what the expected interaction with things like
> Searchlight is? Searchlight was largely created for providing this kind
> of fast access to subsets of resources based on arbitrary attribute search.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Sean Dague
On 11/04/2015 09:00 AM, Jay Pipes wrote:
> On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
>> Hi stackers,
>>
>> Usually such projects like Heat, Tempest, Rally, Scalar, and other tool
>> that works with OpenStack are working with resources (e.g. VM, Volumes,
>> Images, ..) in the next way:
>>
>>  >>> resource = api.resouce_do_some_stuff()
>>  >>> while api.resource_get(resource["uuid"]) != expected_status
>>  >>>sleep(a_bit)
>>
>> For each async operation they are polling and call many times
>> resource_get() which creates significant load on API and DB layers due
>> the nature of this request. (Usually getting full information about
>> resources produces SQL requests that contains multiple JOINs, e,g for
>> nova vm it's 6 joins).
>>
>> What if we add new API method that will just resturn resource status by
>> UUID? Or even just extend get request with the new argument that returns
>> only status?
> 
> +1
> 
> All APIs should have an HTTP HEAD call on important resources for
> retrieving quick status information for the resource.
> 
> In fact, I proposed exactly this in my Compute "vNext" API proposal:
> 
> http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head
> 
> Swift's API supports HEAD for accounts:
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta
> 
> 
> containers:
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta
> 
> 
> and objects:
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta
> 
> So, yeah, I agree.
> -jay

How would you expect this to work on "servers"? HEAD specifically
forbids returning a body, and, unlike swift, we don't return very much
information in our headers.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread John Garbutt
On 4 November 2015 at 14:49, Jay Pipes  wrote:
> On 11/04/2015 09:32 AM, Sean Dague wrote:
>>
>> On 11/04/2015 09:00 AM, Jay Pipes wrote:
>>>
>>> On 11/03/2015 05:20 PM, Boris Pavlovic wrote:

 Hi stackers,

 Usually such projects like Heat, Tempest, Rally, Scalar, and other tool
 that works with OpenStack are working with resources (e.g. VM, Volumes,
 Images, ..) in the next way:

   >>> resource = api.resouce_do_some_stuff()
   >>> while api.resource_get(resource["uuid"]) != expected_status
   >>>sleep(a_bit)

 For each async operation they are polling and call many times
 resource_get() which creates significant load on API and DB layers due
 the nature of this request. (Usually getting full information about
 resources produces SQL requests that contains multiple JOINs, e,g for
 nova vm it's 6 joins).

 What if we add new API method that will just resturn resource status by
 UUID? Or even just extend get request with the new argument that returns
 only status?
>>>
>>>
>>> +1
>>>
>>> All APIs should have an HTTP HEAD call on important resources for
>>> retrieving quick status information for the resource.
>>>
>>> In fact, I proposed exactly this in my Compute "vNext" API proposal:
>>>
>>> http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head
>>>
>>> Swift's API supports HEAD for accounts:
>>>
>>>
>>> http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta
>>>
>>>
>>> containers:
>>>
>>>
>>> http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta
>>>
>>>
>>> and objects:
>>>
>>>
>>> http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta
>>>
>>> So, yeah, I agree.
>>> -jay
>>
>>
>> How would you expect this to work on "servers"? HEAD specifically
>> forbids returning a body, and, unlike swift, we don't return very much
>> information in our headers.
>
>
> I didn't propose doing it on a collection resource like "servers". Only on
> an entity resource like a single "server".
>
> HEAD /v2/{tenant}/servers/{uuid}
> HTTP/1.1 200 OK
> Content-Length: 1022
> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> Content-Type: application/json
> Date: Thu, 16 Jan 2014 21:13:19 GMT
> OpenStack-Compute-API-Server-VM-State: ACTIVE
> OpenStack-Compute-API-Server-Power-State: RUNNING
> OpenStack-Compute-API-Server-Task-State: NONE

For polling, that sounds quite efficient and handy.

For "servers" we could do this (I think there was a spec up that wanted this):

HEAD /v2/{tenant}/servers
HTTP/1.1 200 OK
Content-Length: 1022
Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
Content-Type: application/json
Date: Thu, 16 Jan 2014 21:13:19 GMT
OpenStack-Compute-API-Server-Count: 13

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Infra] "gate-neutron-lbaasv1-dsvm-api" blocking patch merge

2015-11-04 Thread Smigiel, Dariusz
> 
> On 04/11/15 12:48, Mohan Kumar wrote:
> > Hi,
> >
> > Jenkins is blocking the patch merge due to "gate-neutron-lbaasv1-dsvm-api"
> > failures which are unrelated to the patch-set changes.
> >
> > Patch: https://review.openstack.org/#/c/237896/
> >
> > Please help to resolve this issue .
> >
> 
> It's a known issue: https://launchpad.net/bugs/1512937
> 
> There is already a lbaas patch for that:
> https://review.openstack.org/#/c/241481/
> 
> Ihar

It's merged now, so it should be fixed in the next ~30 minutes.

-- 
 Dariusz Smigiel
 Intel Technology Poland


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Jay Pipes

On 11/04/2015 09:32 AM, Sean Dague wrote:

On 11/04/2015 09:00 AM, Jay Pipes wrote:

On 11/03/2015 05:20 PM, Boris Pavlovic wrote:

Hi stackers,

Usually such projects like Heat, Tempest, Rally, Scalar, and other tool
that works with OpenStack are working with resources (e.g. VM, Volumes,
Images, ..) in the next way:

  >>> resource = api.resouce_do_some_stuff()
  >>> while api.resource_get(resource["uuid"]) != expected_status
  >>>sleep(a_bit)

For each async operation they are polling and call many times
resource_get() which creates significant load on API and DB layers due
the nature of this request. (Usually getting full information about
resources produces SQL requests that contains multiple JOINs, e,g for
nova vm it's 6 joins).

What if we add new API method that will just resturn resource status by
UUID? Or even just extend get request with the new argument that returns
only status?


+1

All APIs should have an HTTP HEAD call on important resources for
retrieving quick status information for the resource.

In fact, I proposed exactly this in my Compute "vNext" API proposal:

http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head

Swift's API supports HEAD for accounts:

http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta


containers:

http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta


and objects:

http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta

So, yeah, I agree.
-jay


How would you expect this to work on "servers"? HEAD specifically
forbids returning a body, and, unlike swift, we don't return very much
information in our headers.


I didn't propose doing it on a collection resource like "servers". Only 
on an entity resource like a single "server".


HEAD /v2/{tenant}/servers/{uuid}
HTTP/1.1 200 OK
Content-Length: 1022
Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
Content-Type: application/json
Date: Thu, 16 Jan 2014 21:13:19 GMT
OpenStack-Compute-API-Server-VM-State: ACTIVE
OpenStack-Compute-API-Server-Power-State: RUNNING
OpenStack-Compute-API-Server-Task-State: NONE
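
With headers like that, the polling loop from the start of the thread
collapses to a header read. A small sketch with requests; the header names
follow the hypothetical example above and are not part of any current API:

import time

import requests

SERVER_URL = 'http://nova-api:8774/v2/PROJECT_ID/servers/SERVER_UUID'  # placeholder
HEADERS = {'X-Auth-Token': 'TOKEN'}


def wait_for_vm_state(expected_state, timeout=300, interval=2):
    # Poll only the status headers instead of the full server representation.
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.head(SERVER_URL, headers=HEADERS)
        state = resp.headers.get('OpenStack-Compute-API-Server-VM-State')
        if state == expected_state:
            return True
        time.sleep(interval)
    return False


print(wait_for_vm_state('ACTIVE'))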

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Question about Microsoft Hyper-V CI tests

2015-11-04 Thread Johnston, Nate
I noticed the same failure in the neutron-dsvm-tempest test for the Neutron DVR 
HA change, https://review.openstack.org/#/c/143169

I have not yet been able to determine the cause.

Thanks,

—N.

On Nov 3, 2015, at 3:57 PM, sla...@kaplonski.pl 
wrote:

Hello,

I'm now working on a patch to neutron to add QoS support in linuxbridge:
https://review.openstack.org/#/c/236210/
The patch is not finished yet, but I have a "problem" with some tests. For
example, the Microsoft Hyper-V CI checks are failing. When I checked the logs
of these tests in the http://64.119.130.115/neutron/236210/7/results.html.gz
file, I found errors like:

ft1.1: setUpClass
(tempest.api.network.test_networks.NetworksIpV6TestAttrs)_StringException:
Traceback (most recent call last):
 File "tempest/test.py", line 274, in setUpClass
   six.reraise(etype, value, trace)
 File "tempest/test.py", line 267, in setUpClass
   cls.resource_setup()
 File "tempest/api/network/test_networks.py", line 65, in resource_setup
   cls.network = cls.create_network()
 File "tempest/api/network/base.py", line 152, in create_network
   body = cls.networks_client.create_network(name=network_name)
 File "tempest/services/network/json/networks_client.py", line 21, in
create_network
   return self.create_resource(uri, post_data)
 File "tempest/services/network/json/base.py", line 59, in create_resource
   resp, body = self.post(req_uri, req_post_data)
 File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
rest_client.py", line 259, in post
   return self.request('POST', url, extra_headers, headers, body)
 File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
rest_client.py", line 639, in request
   resp, resp_body)
 File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
rest_client.py", line 757, in _error_checker
   resp=resp)
tempest_lib.exceptions.UnexpectedResponseCode: Unexpected response code
received
Details: 503


It is strange to me, because it looks like the error is somewhere in
create_network, and I didn't change anything in the code which creates
networks. The other tests are fine IMHO.
So my question is: should I look into the cause of these errors and try to
fix them in my patch as well? Or how should I proceed with this kind of error?

--
Pozdrawiam / Best regards
Sławek Kapłoński
slawek@kaplonski.pl
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Handling lots of GET query string parameters?

2015-11-04 Thread Cory Benfield

> On 4 Nov 2015, at 13:13, Salvatore Orlando  wrote:
> 
> Regarding Jay's proposal, this would be tantamount to defining an API action 
> for retrieving instances, something currently being discussed here [1].
> The only comment I have is that I am not entirely surely whether using the 
> POST verb for operations which do no alter at all the server representation 
> of any object is in accordance with RFC 7231.

It’s totally fine, so long as you define things appropriately. Jay’s suggestion 
does exactly that, and is entirely in line with RFC 7231.

The analogy here is to things like complex search forms. Many search engines 
allow you to construct very complex search queries (consider something like 
Amazon or eBay, where you can filter on all kinds of interesting criteria). 
These forms are often submitted to POST endpoints rather than GET.

This is totally fine. In fact, the first example from RFC 7231 Section 4.3.3 
(POST) applies here: “POST is used for the following functions (among others): 
Providing a block of data […] to a data-handling process”. In this case, the 
data-handling function is the search function on the server.

The *only* downside of Jay’s approach is that the response cannot really be 
cached. It’s not clear to me whether anyone actually deploys a cache in this 
kind of role though, so it may not hurt too much.
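
To make that concrete, here is a minimal sketch of the search-via-POST pattern
being discussed; the /servers/search endpoint and the filter field names are
purely illustrative assumptions, not an existing OpenStack API:

    # Sketch only: hypothetical search endpoint, illustrative filter names.
    import requests

    query = {
        "filters": {"status": "ACTIVE", "flavor": "m1.small"},
        "sort": ["-created_at"],
        "limit": 50,
    }

    # The complex query travels in the request body, like a search form submit.
    resp = requests.post(
        "https://compute.example.com/v2.1/servers/search",
        json=query,
        headers={"X-Auth-Token": "..."},
        timeout=30,
    )
    resp.raise_for_status()
    for server in resp.json().get("servers", []):
        print(server["id"], server["status"])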

Cory




signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry][ceilometer][aodh][gnocchi] no (in person) mid-cycle for Mitaka

2015-11-04 Thread gord chung

hi all,

after discussions on the usefulness of a Telemetry mid-cycle, we've 
decided to forego having an in-person mid-cycle for Mitaka. to avoid 
rehashing already discussed items, see: same reasons as Glance[1]. much 
thanks to jasonamyers for offering a venue for a Telemetry mid-cycle.


that said we will try to have a virtual one similar to Liberty[2] should 
any items pop up over the development cycle. we would target some time 
during January.


looking ahead to N*, an in-person mid-cycle might be beneficial if all 
data/telemetry-related projects were to meet up. we had great 
participation from projects such as CloudKitty, Vitrage, etc...[3] which 
leverage parts of Ceilometer/Aodh/Gnocchi. if we all came together, i 
think that would make a worthwhile mid-cycle. this is something we can 
discuss over the coming cycle.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078239.html

[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/068911.html
[3] https://wiki.openstack.org/wiki/Ceilometer#Ceilometer_Extensions

cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Sean Dague
On 11/04/2015 09:49 AM, Jay Pipes wrote:
> On 11/04/2015 09:32 AM, Sean Dague wrote:
>> On 11/04/2015 09:00 AM, Jay Pipes wrote:
>>> On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
 Hi stackers,

 Usually such projects like Heat, Tempest, Rally, Scalar, and other tool
 that works with OpenStack are working with resources (e.g. VM, Volumes,
 Images, ..) in the next way:

   >>> resource = api.resouce_do_some_stuff()
   >>> while api.resource_get(resource["uuid"]) != expected_status
   >>>sleep(a_bit)

 For each async operation they are polling and call many times
 resource_get() which creates significant load on API and DB layers due
 the nature of this request. (Usually getting full information about
 resources produces SQL requests that contains multiple JOINs, e,g for
 nova vm it's 6 joins).

 What if we add new API method that will just resturn resource status by
 UUID? Or even just extend get request with the new argument that
 returns
 only status?
>>>
>>> +1
>>>
>>> All APIs should have an HTTP HEAD call on important resources for
>>> retrieving quick status information for the resource.
>>>
>>> In fact, I proposed exactly this in my Compute "vNext" API proposal:
>>>
>>> http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head
>>>
>>> Swift's API supports HEAD for accounts:
>>>
>>> http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta
>>>
>>>
>>>
>>> containers:
>>>
>>> http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta
>>>
>>>
>>>
>>> and objects:
>>>
>>> http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta
>>>
>>>
>>> So, yeah, I agree.
>>> -jay
>>
>> How would you expect this to work on "servers"? HEAD specifically
>> forbids returning a body, and, unlike swift, we don't return very much
>> information in our headers.
> 
> I didn't propose doing it on a collection resource like "servers". Only
> on an entity resource like a single "server".
> 
> HEAD /v2/{tenant}/servers/{uuid}
> HTTP/1.1 200 OK
> Content-Length: 1022
> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> Content-Type: application/json
> Date: Thu, 16 Jan 2014 21:13:19 GMT
> OpenStack-Compute-API-Server-VM-State: ACTIVE
> OpenStack-Compute-API-Server-Power-State: RUNNING
> OpenStack-Compute-API-Server-Task-State: NONE

Right, but these headers aren't in the normal resource. They are
returned in the body only.

The point of HEAD is to give me the same thing as GET without the body,
because I only care about the headers. Swift resources are structured in
a way where this information is useful.

Our resources are not. We've also had specific requests to prevent
header bloat because it impacts the HTTP caching systems. Also, it's
pretty clear that headers are really not where you want to put volatile
information, which this is.

I think we should step back here and figure out what the actual problem
is, and what ways we might go about solving it. This has jumped directly
to a point in time optimized fast poll loop. It will shave a few cycles
off right now on our current implementation, but will still be orders of
magnitude more costly than consuming the Nova notifications if the only
thing that is cared about is task state transitions. And it's an API
change we have to live with largely *forever* so short term optimization
is not what we want to go for. We should focus on the long term game here.
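
(For reference, consuming those notifications instead of polling looks roughly
like the sketch below; the transport URL, topic, and payload keys are
assumptions for illustration and may differ per deployment.)

    # Rough sketch: react to instance-update notifications instead of polling.
    # Transport URL, topic and payload keys are assumptions, not verified.
    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    targets = [oslo_messaging.Target(topic='notifications')]

    class StateEndpoint(object):
        # Only care about instance updates; everything else is ignored.
        filter_rule = oslo_messaging.NotificationFilter(
            event_type='compute.instance.update')

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            print(payload.get('instance_id'), payload.get('state'),
                  payload.get('new_task_state'))

    listener = oslo_messaging.get_notification_listener(
        transport, targets, [StateEndpoint()])
    listener.start()
    listener.wait()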

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Marian Horban
Hi guys,

Unfortunately I wasn't at the Tokyo summit, but I know that there was a
discussion about dynamic reloading of configuration.
Etherpad refs:
https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services
,
https://etherpad.openstack.org/p/mitaka-oslo-security-logging

In this thread I want to discuss the agreements reached at the summit and
the implementation details.

Some notes taken from etherpad and my remarks:

1. "Adding "mutable" parameter for each option."
"Do we have an option mutable=True on CfgOpt? Yes"
-
As I understood it, the 'mutable' parameter must indicate whether the service
contains code responsible for reloading that option or not,
and this parameter should be one of the arguments of the cfg.Opt constructor.
Problems:
1. Library options.
The SSL options ca_file, cert_file and key_file taken from the oslo.service
library could be reloaded in nova-api, so these options should be mutable...
But for projects that don't need SSL support, reloading the SSL options
doesn't make sense, so for those projects the options should be non-mutable.
The problem is that there is a single oslo.service, and many different projects
use it in different ways.
The same options could be mutable and non-mutable in different contexts.
2. Support of config options on some platforms.
The "mutable" parameter could differ between platforms. Some options
make sense only on specific platforms, so marking such options as mutable
could be misleading on the others.
3. Dependency between options.
There are many 'workers' options (osapi_compute_workers, ec2_workers,
metadata_workers, workers). These options specify the number of workers for
the OpenStack API services.
If the value of the 'workers' option is greater than '1', an instance of
ProcessLauncher is created; otherwise an instance of ServiceLauncher is created.
When ProcessLauncher receives SIGHUP it reloads its own configuration,
gracefully terminates its children and respawns new children.
This mechanism allows many config options to be reloaded implicitly.
But if the value of the 'workers' option equals '1', an instance of
ServiceLauncher is created. ServiceLauncher starts everything in a single
process, and in that case we don't have such implicit reloading.

I think that mutability of options is a complicated feature, and that
adding a 'mutable' parameter to the cfg.Opt constructor could just add confusion.

2. "oslo.service catches SIGHUP and calls oslo.config"
-
From my point of view, every service should register a list of hooks for
reloading config options; oslo.service should catch SIGHUP and call the
registered hooks one by one in a specified order (see the rough sketch at the
end of this message).
Discussion of such an implementation was started on the ML:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074558.html
Raw reviews:
https://review.openstack.org/#/c/228892/,
https://review.openstack.org/#/c/223668/.

3. "oslo.config is responsible to log changes which were ignored on SIGHUP"
-
Some config options can be changed through the API (for example quotas); that's
why oslo.config doesn't know the actual configuration of the service and can't
log configuration changes.
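
Rough sketch of the hook idea from item 2 above; register_reload_hook() is not
an existing oslo.service API, only an illustration of the proposed behaviour:

    # Hypothetical illustration of the proposal in item 2; not a real
    # oslo.service API.
    import signal

    _reload_hooks = []

    def register_reload_hook(hook, priority=0):
        """Services register callables to be run on SIGHUP, in priority order."""
        _reload_hooks.append((priority, hook))

    def _handle_sighup(signum, frame):
        for _prio, hook in sorted(_reload_hooks, key=lambda item: item[0]):
            hook()  # e.g. re-read config files, reopen logs, rebuild SSL context

    signal.signal(signal.SIGHUP, _handle_sighup)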

Regards, Marian Horban
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Question about Microsoft Hyper-V CI tests

2015-11-04 Thread Andrei Bacos
Hi,

Since the response code was 503 (Service Unavailable), it might have been
a timeout/problem in our CI; we usually debug failed builds and recheck
if necessary.

I rechecked https://review.openstack.org/#/c/236210/ and it was
successful on the same patchset.

Going to take a look at https://review.openstack.org/#/c/143169 too.

Thanks,
Andrei


On 11/04/2015 04:48 PM, Johnston, Nate wrote:
> I noticed the same failure in the neutron-dsvm-tempest test for the
> Neutron DVR HA change, https://review.openstack.org/#/c/143169
> 
> I have not yet been able to determine the cause.
> 
> Thanks,
> 
> —N.
> 
>> On Nov 3, 2015, at 3:57 PM, sla...@kaplonski.pl
>>  wrote:
>>
>> Hello,
>>
>> I'm now working on patch to neutron to add QoS in linuxbridge: https://
>> review.openstack.org/#/c/236210/ 
>> Patch is not finished yet but I have some "problem" with some tests. For
>> example Microsoft Hyper-V CI check are failing. When I checked logs of
>> this
>> tests in http://64.119.130.115/neutron/236210/7/results.html.gz file I
>> found
>> error like:
>>
>> ft1.1: setUpClass
>> (tempest.api.network.test_networks.NetworksIpV6TestAttrs)_StringException:
>>
>> Traceback (most recent call last):
>>  File "tempest/test.py", line 274, in setUpClass
>>six.reraise(etype, value, trace)
>>  File "tempest/test.py", line 267, in setUpClass
>>cls.resource_setup()
>>  File "tempest/api/network/test_networks.py", line 65, in resource_setup
>>cls.network = cls.create_network()
>>  File "tempest/api/network/base.py", line 152, in create_network
>>body = cls.networks_client.create_network(name=network_name)
>>  File "tempest/services/network/json/networks_client.py", line 21, in
>> create_network
>>return self.create_resource(uri, post_data)
>>  File "tempest/services/network/json/base.py", line 59, in create_resource
>>resp, body = self.post(req_uri, req_post_data)
>>  File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
>> rest_client.py", line 259, in post
>>return self.request('POST', url, extra_headers, headers, body)
>>  File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
>> rest_client.py", line 639, in request
>>resp, resp_body)
>>  File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
>> rest_client.py", line 757, in _error_checker
>>resp=resp)
>> tempest_lib.exceptions.UnexpectedResponseCode: Unexpected response code
>> received
>> Details: 503
>>
>>
>> It is strange for me because it looks that error is somewhere in
>> create_network. I didn't change anything in code which is creating
>> networks.
>> Other tests are fine IMHO.
>> So my question is: should I check reason of this errors and try to fix
>> it also
>> in my patch? Or how should I proceed with such kind of errors?
>>
>> --
>> Pozdrawiam / Best regards
>> Sławek Kapłoński
>> slawek@kaplonski.pl__
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][keystone][release][docs] cross-project liaison update

2015-11-04 Thread Steve Martinelli

In Tokyo the Keystone team decided to make a few changes to its
cross-project liaisons

  - Lance Bragstad will be the new Docs liaison
  - I'll be taking over Morgan's duties as the Release liaison

The following folks will continue to act as liaisons:

  - Brant Knudson for Oslo
  - David Stanek for QA
  - Dolph Mathews for Stable and VMT

I've updated https://wiki.openstack.org/wiki/CrossProjectLiaisons
accordingly

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [networking-powervm] Please create networking-powervm on PyPI

2015-11-04 Thread Kyle Mestery
On Wed, Nov 4, 2015 at 6:29 AM, Andrew Thorstensen 
wrote:

> Hi Kyle,
>
> My team owns the networking-powervm project.  When we moved from the
> StackForge to OpenStack namespace we changed the name from neutron-powervm
> to networking-powervm.  There is no reason for the PyPI project to have a
> different name and we were planning to publish an update shortly with the
> networking-powervm name.
>
> We were planning to do this sometime next week.  Do we need it done sooner?
>
That should be perfect, let me know when it's done so I can remove the WIP
on the patch below. Thanks!


>
> Thanks!
>
> Drew Thorstensen
> Power Systems / Cloud Software
>
>
>
> From:Kyle Mestery 
> To:"OpenStack Development Mailing List (not for usage questions)"
> 
> Date:11/03/2015 10:09 PM
> Subject:[openstack-dev] [neutron] [networking-powervm] Please
> create networking-powervm on PyPI
> --
>
>
>
> I'm reaching out to whoever owns the networking-powervm project [1]. I
> have a review out [2] which updates the PyPI publishing jobs so we can push
> releases for networking-powervm. However, in looking at PyPI, I don't see a
> networking-powervm project, but instead a neutron-powervm project. Is there
> a reason for the PyPI project to have a different name? I believe this will
> not allow us to push releases, as the names of the projects need to match.
> Further, the project creation guide recommends naming them the same [4].
>
> Can someone from the PowerVM team look at registering networking-powervm
> on PyPI and correcting this please?
>
> Thanks!
> Kyle
>
> [1] https://launchpad.net/neutron-powervm
> [2] https://review.openstack.org/#/c/233466/
> [3] https://pypi.python.org/pypi/neutron-powervm/0.1.0
> [4] http://docs.openstack.org/infra/manual/creators.html#pypi
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Mitaka priorities

2015-11-04 Thread Jim Rollenhagen
Hi folks,

I posted a review to add our Mitaka priorities to our specs repo:
https://review.openstack.org/#/c/241223/

Ruby made a good point that not everyone was at the summit and she'd
like buyoff on the patch from all cores before we land it. I tend to
agree, so I ask that cores that were not in the planning session please
review this ASAP.

There are some cores who were there and are still in Tokyo, so in the
interest of landing this quickly, I'm okay with moving forward without
them.

Thanks!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Matthew Thode
On 11/04/2015 06:47 AM, Sean Dague wrote:
> I was spot checking the grenade multinode job to make sure it looks like
> it was doing the correct thing. In doing so I found that ~15 minutes of
> its hour-long build time is compiling lxml and numpy 3 times each.
> 
> Due to our exact calculations by upper-constraints.txt we ensure exactly
> the right version of each of those in old & new & subnode (old).
> 
> Is there a nodepool cache strategy where we could pre build these? A 25%
> performance win comes out the other side if there is a strategy here.
> 
>   -Sean
> 
python wheel repo could help maybe?

-- 
-- Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread John Garbutt
On 4 November 2015 at 15:00, Sean Dague  wrote:
> On 11/04/2015 09:49 AM, Jay Pipes wrote:
>> On 11/04/2015 09:32 AM, Sean Dague wrote:
>>> On 11/04/2015 09:00 AM, Jay Pipes wrote:
 On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
> Hi stackers,
>
> Usually such projects like Heat, Tempest, Rally, Scalar, and other tool
> that works with OpenStack are working with resources (e.g. VM, Volumes,
> Images, ..) in the next way:
>
>   >>> resource = api.resouce_do_some_stuff()
>   >>> while api.resource_get(resource["uuid"]) != expected_status
>   >>>sleep(a_bit)
>
> For each async operation they are polling and call many times
> resource_get() which creates significant load on API and DB layers due
> the nature of this request. (Usually getting full information about
> resources produces SQL requests that contains multiple JOINs, e,g for
> nova vm it's 6 joins).
>
> What if we add new API method that will just resturn resource status by
> UUID? Or even just extend get request with the new argument that
> returns
> only status?

 +1

 All APIs should have an HTTP HEAD call on important resources for
 retrieving quick status information for the resource.

 In fact, I proposed exactly this in my Compute "vNext" API proposal:

 http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head

 Swift's API supports HEAD for accounts:

 http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta



 containers:

 http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta



 and objects:

 http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta


 So, yeah, I agree.
 -jay
>>>
>>> How would you expect this to work on "servers"? HEAD specifically
>>> forbids returning a body, and, unlike swift, we don't return very much
>>> information in our headers.
>>
>> I didn't propose doing it on a collection resource like "servers". Only
>> on an entity resource like a single "server".
>>
>> HEAD /v2/{tenant}/servers/{uuid}
>> HTTP/1.1 200 OK
>> Content-Length: 1022
>> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
>> Content-Type: application/json
>> Date: Thu, 16 Jan 2014 21:13:19 GMT
>> OpenStack-Compute-API-Server-VM-State: ACTIVE
>> OpenStack-Compute-API-Server-Power-State: RUNNING
>> OpenStack-Compute-API-Server-Task-State: NONE
>
> Right, but these headers aren't in the normal resource. They are
> returned in the body only.
>
> The point of HEAD is give me the same thing as GET without the body,
> because I only care about the headers. Swift resources are structured in
> a way where this information is useful.

I guess we would have to add this to GET requests, for consistency,
which feels like duplication.

> Our resources are not. We've also had specific requests to prevent
> header bloat because it impacts the HTTP caching systems. Also, it's
> pretty clear that headers are really not where you want to put volatile
> information, which this is.

Hmm, you do make a good point about caching.

> I think we should step back here and figure out what the actual problem
> is, and what ways we might go about solving it. This has jumped directly
> to a point in time optimized fast poll loop. It will shave a few cycles
> off right now on our current implementation, but will still be orders of
> magnitude more costly that consuming the Nova notifications if the only
> thing that is cared about is task state transitions. And it's an API
> change we have to live with largely *forever* so short term optimization
> is not what we want to go for.

I do agree with that.

> We should focus on the long term game here.

The long term plan being the end user async API? Maybe using
websockets, or similar?
https://etherpad.openstack.org/p/liberty-cross-project-user-notifications

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Sean Dague
Thanks Dims,

+2

On 11/03/2015 07:45 AM, Davanum Srinivas wrote:
> Here's a Devstack review for zookeeper in support of this initiative:
> 
> https://review.openstack.org/241040
> 
> Thanks,
> Dims
> 
> 
> On Mon, Nov 2, 2015 at 11:05 PM, Joshua Harlow  wrote:
>> Thanks robert,
>>
>> I've started to tweak https://review.openstack.org/#/c/209661/ with regard
>> to the outcome of that (at least to cover the basics)... Should be finished
>> up soon (I hope).
>>
>>
>> Robert Collins wrote:
>>>
>>> Hi, at the summit we had a big session on distributed lock managers
>>> (DLMs).
>>>
>>> I'd just like to highlight the conclusions we came to in the session (
>>>  https://etherpad.openstack.org/p/mitaka-cross-project-dlm
>>>  )
>>>
>>> Firstly OpenStack projects that want to use a DLM can make it a hard
>>> dependency. Previously we've had an unwritten policy that DLMs should
>>> be optional, which has led to us writing poor DLM-like things backed
>>> by databases :(. So this is a huge and important step forward in our
>>> architecture.
>>>
>>> As in our existing pattern of usage for database and message-queues,
>>> we'll use an oslo abstraction layer: tooz. This doesn't preclude a
>>> different answer in special cases - but they should be considered
>>> special and exception, not the general case.
>>>
>>> Based on the project requirements surfaced in the discussion, it seems
>>> likely that all of consul, etcd and zookeeper will be able to have
>>> suitable production-ready drivers written for tooz. Specifically no
>>> project required a fair locking implementation in the DLM.
>>>
>>> After our experience with oslo.messaging however, we wanted to avoid
>>> the situation of having unmaintained drivers and no signalling to
>>> users about them.
>>>
>>> So, we resolved to adopt roughly the oslo.messaging requirements for
>>> drivers, with a couple of tweaks...
>>>
>>> Production drivers in-tree will need:
>>>   - two nominated developers responsible for it
>>>   - gating functional tests that use dsvm
>>> Test drivers in-tree will need:
>>>   - clear identification that the driver is a test driver - in the
>>> module name at minimum
>>>
>>> All hail our new abstraction overlords.
>>>
>>> -Rob
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
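
(For readers who haven't touched tooz yet, the abstraction referenced above
boils down to something like the sketch below; the ZooKeeper URL, member id
and lock name are assumptions, and any supported backend can sit behind the
same calls.)

    # Minimal tooz sketch: the calling code stays the same whichever DLM
    # backend (ZooKeeper, consul, etcd, ...) is configured behind the URL.
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'my-service-worker-1')
    coordinator.start()

    lock = coordinator.get_lock(b'resize-instance-abc123')
    if lock.acquire(blocking=True):
        try:
            pass  # critical section protected by the DLM
        finally:
            lock.release()

    coordinator.stop()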


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting cross-project comms

2015-11-04 Thread Thierry Carrez
Davanum Srinivas wrote:
>> Has anyone considered using #openstack-dev, instead of a new meeting
>> room? #openstack-dev is mostly a ghost town at this point, and deciding
>> that instead it would be the dedicated cross project space, including
>> meetings support, might be interesting.

+1

Originally #openstack-dev was the only dev channel, the one we ask every
dev to join by default. Then it was the channel that teams would use if
they didn't have their own. Now that most/all teams have their own,
nobody discusses in it anymore, but it still is our most crowded channel
(by virtue of being the old default dev channel).

So officially repurposing it for cross-project discussions /
announcements sounds like a good idea. Think of it as a permanent
cross-project meeting / announcement space.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] competing implementations

2015-11-04 Thread Antoni Segura Puimedon
On Wed, Nov 4, 2015 at 2:38 PM, Baohua Yang  wrote:

> +1, Antoni!
> btw, is our weekly meeting still on meeting-4 channel?
> Not found it there yesterday.
>

Yes, it is still on openstack-meeting-4, but this week we skipped it, since
some of us were
traveling and we already held the meeting on Friday. Next Monday it will be
held as usual
and the following week we start alternating (we have yet to get a room for
that one).

>
> On Wed, Nov 4, 2015 at 9:27 PM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
>
>> Hi Kuryrs,
>>
>> Last Friday, as part of the contributors meetup, we discussed also code
>> contribution etiquette. Like other OpenStack project (Magnum comes to
>> mind), the etiquette for what to do when there is disagreement in the way
>> to code a blueprint of fix a bug is as follows:
>>
>> 1.- Try to reach out so that the original implementation gets closer to a
>> compromise by having the discussion in gerrit (and Mailing list if it
>> requires a wider range of arguments).
>> 2.- If a compromise can't be reached, feel free to make a separate
>> implementation, clearly arguing its differences, virtues and comparative
>> disadvantages. We trust the whole community of reviewers to be able to
>> judge which is the best implementation and I expect that often the
>> reviewers will steer both submissions closer than they originally were.
>> 3.- If both competing implementations get the necessary support, the core
>> reviewers will make a decision on which one to take, based on technical
>> merit. Important factors are:
>> * conciseness,
>> * simplicity,
>> * loose coupling,
>> * logging and error reporting,
>> * test coverage,
>> * extensibility (when an immediate pending and blueprinted feature
>> can better be built on top of it).
>> * documentation,
>> * performance.
>>
>> It is important to remember that technical disagreement is a healthy
>> thing and should be tackled with civility. If we follow the rules above, it
>> will lead to a healthier project and a more friendly community in which
>> everybody can propose their vision with equal standing. Of course,
>> sometimes there may be a feeling of duplication, but even in the case where
>> one's solution is not selected (and I can assure you I've been there and
>> know how it can feel awkward) it usually still enriches the discussion and
>> constitutes a contribution that improves the project.
>>
>> Regards,
>>
>> Toni
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best wishes!
> Baohua
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Adding a new feature in Kilo. Is it possible?

2015-11-04 Thread John Garbutt
In terms of adding this into master, we can go for a spec-less
blueprint in Nova.

Reach out to me on IRC if I can help you through the process.

Thanks,
johnthetubaguy

PS
We are working on making this easier in the future, by using OS VIF Lib.

On 4 November 2015 at 08:56, Michał Dubiel  wrote:
> Ok, I see. Thanks for all the answers.
>
> Regards,
> Michal
>
> On 3 November 2015 at 22:50, Matt Riedemann 
> wrote:
>>
>>
>>
>> On 11/3/2015 11:57 AM, Michał Dubiel wrote:
>>>
>>> Hi all,
>>>
>>> We have a simple patch that allows using OpenContrail's vrouter with
>>> vhostuser vif types (currently only OVS has support for that). We would
>>> like to contribute it.
>>>
>>> However, we would like this change to land in the next maintenance
>>> release of Kilo. Is it possible? What should be the process for this?
>>> Should we prepare a blueprint and review request for the 'master' branch
>>> first? It is small self contained change so I believe it does not need a
>>> nova-spec.
>>>
>>> Regards,
>>> Michal
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> The short answer is 'no' to backporting features to stable branches.
>>
>> As the other reply said, feature changes are targeted to master.
>>
>> The full stable branch policy is here:
>>
>> https://wiki.openstack.org/wiki/StableBranch
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting cross-project comms

2015-11-04 Thread Sean Dague
On 10/28/2015 07:15 PM, Anne Gentle wrote:
> Hi all, 
> 
> I wanted to write up some of the discussion points from a cross-project
> session here at the Tokyo Summit about cross-project communications. The
> etherpad is here: 
> https://etherpad.openstack.org/p/mitaka-crossproject-comms
> 
> One item that came up that I wanted to ensure we gather feedback on is
> evolving the cross-project meeting to an "as needed" rotation, held at
> any time on Tuesdays or Wednesdays. We can set up meetbot in a new
> meeting room, #cross-project-meeting, and then bring in the necessary
> participants while also inviting everyone to attend. 
> 
> I sense this helps with the timezone issues we all face, as well as
> brings together the relevant projects in a moment, while allowing other
> projects to filter out unnecessary discussions to help everyone focus
> further on solving cross-project issues.
> 
> The rest of the action items are in the etherpad, but since I originally
> suggested changing the meeting time, I wanted to circle back on this new
> idea specifically. Let us know your thoughts.

Has anyone considered using #openstack-dev, instead of a new meeting
room? #openstack-dev is mostly a ghost town at this point, and deciding
that instead it would be the dedicated cross project space, including
meetings support, might be interesting.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Jay Pipes

On 11/03/2015 05:20 PM, Boris Pavlovic wrote:

Hi stackers,

Usually projects like Heat, Tempest, Rally, Scalar, and other tools
that work with OpenStack handle resources (e.g. VMs, Volumes,
Images, ..) in the following way:

 >>> resource = api.resource_do_some_stuff()
 >>> while api.resource_get(resource["uuid"]) != expected_status
 >>>    sleep(a_bit)

For each async operation they poll, calling resource_get() many times,
which creates significant load on the API and DB layers due to
the nature of this request. (Usually getting full information about
resources produces SQL requests that contain multiple JOINs, e.g. for
a nova vm it's 6 joins.)

What if we add a new API method that will just return the resource status by
UUID? Or even just extend the get request with a new argument that returns
only the status?


+1

All APIs should have an HTTP HEAD call on important resources for 
retrieving quick status information for the resource.


In fact, I proposed exactly this in my Compute "vNext" API proposal:

http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head
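
As a sketch, the poll loop above would turn into something like the following;
note that the OpenStack-Compute-API-Server-Task-State header is part of that
proposal, not something the current Nova API returns:

    # Sketch only: assumes the *proposed* status headers existed on HEAD
    # responses; endpoint URL and token are placeholders.
    import time
    import requests

    def wait_for_task_state(server_url, token, expected='NONE', interval=2.0):
        while True:
            resp = requests.head(server_url,
                                 headers={'X-Auth-Token': token}, timeout=10)
            resp.raise_for_status()
            state = resp.headers.get('OpenStack-Compute-API-Server-Task-State')
            if state == expected:
                return
            time.sleep(interval)

    # wait_for_task_state('https://compute.example.com/v2/TENANT/servers/UUID',
    #                     token='...')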

Swift's API supports HEAD for accounts:

http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta

containers:

http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta

and objects:

http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta

So, yeah, I agree.
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] competing implementations

2015-11-04 Thread Baohua Yang
Sure, thanks!
And I suggest adding the time and channel information to the kuryr wiki page.


On Wed, Nov 4, 2015 at 9:45 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

>
>
> On Wed, Nov 4, 2015 at 2:38 PM, Baohua Yang  wrote:
>
>> +1, Antoni!
>> btw, is our weekly meeting still on meeting-4 channel?
>> Not found it there yesterday.
>>
>
> Yes, it is still on openstack-meeting-4, but this week we skipped it,
> since some of us were
> traveling and we already held the meeting on Friday. Next Monday it will
> be held as usual
> and the following week we start alternating (we have yet to get a room for
> that one).
>
>>
>> On Wed, Nov 4, 2015 at 9:27 PM, Antoni Segura Puimedon <
>> toni+openstac...@midokura.com> wrote:
>>
>>> Hi Kuryrs,
>>>
>>> Last Friday, as part of the contributors meetup, we discussed also code
>>> contribution etiquette. Like other OpenStack project (Magnum comes to
>>> mind), the etiquette for what to do when there is disagreement in the way
>>> to code a blueprint of fix a bug is as follows:
>>>
>>> 1.- Try to reach out so that the original implementation gets closer to
>>> a compromise by having the discussion in gerrit (and Mailing list if it
>>> requires a wider range of arguments).
>>> 2.- If a compromise can't be reached, feel free to make a separate
>>> implementation arguing well its difference, virtues and comparative
>>> disadvantages. We trust the whole community of reviewers to be able to
>>> judge which is the best implementation and I expect that often the
>>> reviewers will steer both submissions closer than they originally were.
>>> 3.- If both competing implementations get the necessary support, the
>>> core reviewers will take a specific decision on which to take based on
>>> technical merit. Important factor are:
>>> * conciseness,
>>> * simplicity,
>>> * loose coupling,
>>> * logging and error reporting,
>>> * test coverage,
>>> * extensibility (when an immediate pending and blueprinted feature
>>> can better be built on top of it).
>>> * documentation,
>>> * performance.
>>>
>>> It is important to remember that technical disagreement is a healthy
>>> thing and should be tackled with civility. If we follow the rules above, it
>>> will lead to a healthier project and a more friendly community in which
>>> everybody can propose their vision with equal standing. Of course,
>>> sometimes there may be a feeling of duplicity, but even in the case where
>>> one's solution it is not selected (and I can assure you I've been there and
>>> know how it can feel awkward) it usually still enriches the discussion and
>>> constitutes a contribution that improves the project.
>>>
>>> Regards,
>>>
>>> Toni
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best wishes!
>> Baohua
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best wishes!
Baohua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >