Re: [openstack-dev] [fuel] Fuel Community ISO 8.0

2016-02-04 Thread Ivan Kolodyazhny
Thanks, Igor.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Thu, Feb 4, 2016 at 1:21 PM, Igor Belikov  wrote:

> Hi Ivan,
>
> I think this counts as a bug in our community page, thanks for noticing.
> You can get the 8.0 Community ISO using the links in the status dashboard on
> https://ci.fuel-infra.org
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com
>
> On 04 Feb 2016, at 13:53, Ivan Kolodyazhny  wrote:
>
> Hi team,
>
> I've tried to download Fuel Community ISO 8.0 from [1] and failed. We've
> got 2 options there: the latest stable (7.0) and nightly build (9.0). Where
> can I download the 8.0 build?
>
> [1] https://www.fuel-infra.org/#fuelget
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Disaster recovery solutions for OpenStack clouds?

2016-02-04 Thread Shahbaz Nazir
Hi,

Disaster recovery is an important capability of a cloud system. I have looked
around for available disaster recovery solutions for OpenStack clouds. This is
what I have found:

   - Project Smaug (DRaaS), still in very early stages
   - Vendor-specific solutions, e.g. Rackspace has a VMware SRM-based solution

What other disaster recovery solutions are available out there for
OpenStack-based clouds?
What options are available to leverage the existing volume, VM, and database
replication features in Cinder/Nova/Swift to automate disaster recovery, e.g.
through Heat?
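
For the Cinder piece, a minimal sketch of driving volume backups from a script
(credentials, auth URL and the volume id below are placeholders, and this is
only one building block of a DR workflow, not a complete solution):

    # Minimal sketch: back up a Cinder volume as one building block of a
    # disaster recovery workflow. Credentials, auth URL and volume id are
    # placeholders.
    from cinderclient import client

    cinder = client.Client('2', 'admin', 'secret', 'admin',
                           'http://controller:5000/v2.0')
    backup = cinder.backups.create('VOLUME_ID', name='dr-backup')
    print(backup.id, backup.status)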

-- 
*Regards, *
*Shahbaz Nazir*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Hayes, Graham
On 04/02/2016 11:40, Sean Dague wrote:
> A few issues have crept up recently with the service catalog, API
> headers, API end points, and even similarly named resources in
> different resources (e.g. backup), that are all circling around a key
> problem. Distributed teams and naming collision.
>
> Every OpenStack project has a unique name by virtue of having a git
> tree. Once they claim 'openstack/foo', foo is theirs in the
> OpenStack universe for all time (or until trademarks say otherwise).
> Nova in OpenStack will always mean one project.
>
> There has also been a desire to replace project names with
> common/generic names, in the service catalog, API headers, and a few
> other places. Nova owns 'compute'. Except... that's only because we
> all know that it does. We don't *actually* have a registry for those
> values.
>
> So the code names are well regulated, the common names, that we
> encourage use of, are not. Devstack in tree code defines some
> conventions. However with the big tent, things get kind of squirrelly
> pretty quickly. Congress registering 'policy' as their endpoint type
> is a good example of that -
> https://github.com/openstack/congress/blob/master/devstack/plugin.sh#L147
>
>  Naming is hard. And trying to boil down complicated state machines
> to one or two word shibboleths means that inevitably we're going to
> find some words just keep cropping up: policy, flavor, backup, meter.
> We do however need to figure out a way forward.
>
> Let's start with the top level names (resource overlap cascades from
> there).
>
> What options do we have?
>
> 1) Use the names we already have: nova, glance, swift, etc.
>
> Upside, collision problem is solved. Downside, you need a secret
> decoder ring to figure out what project does what.
>
> 2) Have a registry of "common" names.
>
> Upside, we can safely use common names everywhere and not fear
> collision down the road.
>
> Downside, yet another contention point.
>
> A registry would clearly be under TC administration, though all the
> heavy lifting might be handed over to the API working group. I still
> imagine collision around some areas might be contentious.

++ to a central registry. It could easily be added to the projects.yaml
file, and is a single source of truth.

I imagine collisions are going to be contentious - but having a central
source makes finding potential collisions much easier.
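
To illustrate how cheap collision checking becomes once the names live in one
file, here is a rough sketch (the 'service-type' key and file layout below are
hypothetical, not the current projects.yaml schema):

    # Rough sketch: flag duplicate "service-type" entries in a registry file.
    # The key name and layout are hypothetical; governance would define them.
    import collections
    import yaml

    with open('reference/projects.yaml') as f:
        projects = yaml.safe_load(f)

    types = collections.defaultdict(list)
    for name, data in projects.items():
        for deliverable in data.get('deliverables', {}).values():
            service_type = deliverable.get('service-type')
            if service_type:
                types[service_type].append(name)

    for service_type, owners in types.items():
        if len(owners) > 1:
            print('collision on %r: %s' % (service_type, ', '.join(owners)))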

>
> 3) Use either, inconsistently, hope for the best. (aka - status quo)
>
> Upside, no long mailing list thread to figure out the answer.
> Downside, it sucks.
>
>
> Are there other options missing? Where are people leaning at this
> point?
>
> Personally I'm way less partial to any particular answer as long as
> it's not #3.
>
>
> -Sean
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-02-04 06:38:26 -0500:
> A few issues have crept up recently with the service catalog, API
> headers, API end points, and even similarly named resources in different
> resources (e.g. backup), that are all circling around a key problem.
> Distributed teams and naming collision.
> 
> Every OpenStack project has a unique name by virtue of having a git
> tree. Once they claim 'openstack/foo', foo is theirs in the OpenStack
> universe for all time (or until trademarks say otherwise). Nova in
> OpenStack will always mean one project.
> 
> There has also been a desire to replace project names with
> common/generic names, in the service catalog, API headers, and a few
> other places. Nova owns 'compute'. Except... that's only because we all
> know that it does. We don't *actually* have a registry for those values.
> 
> So the code names are well regulated, the common names, that we
> encourage use of, are not. Devstack in tree code defines some
> conventions. However with the big tent, things get kind of squirrelly
> pretty quickly. Congress registering 'policy' as their endpoint type is
> a good example of that -
> https://github.com/openstack/congress/blob/master/devstack/plugin.sh#L147
> 
> Naming is hard. And trying to boil down complicated state machines to
> one or two word shibboleths means that inevitably we're going to find
> some words just keep cropping up: policy, flavor, backup, meter. We do
> however need to figure out a way forward.
> 
> Let's start with the top level names (resource overlap cascades from there).
> 
> What options do we have?
> 
> 1) Use the names we already have: nova, glance, swift, etc.
> 
> Upside, collision problem is solved. Downside, you need a secret decoder
> ring to figure out what project does what.
> 
> 2) Have a registry of "common" names.
> 
> Upside, we can safely use common names everywhere and not fear collision
> down the road.
> 
> Downside, yet another contention point.
> 
> A registry would clearly be under TC administration, though all the
> heavy lifting might be handed over to the API working group. I still
> imagine collision around some areas might be contentious.
> 
> 3) Use either, inconsistently, hope for the best. (aka - status quo)
> 
> Upside, no long mailing list thread to figure out the answer. Downside,
> it sucks.
> 
> 
> Are there other options missing? Where are people leaning at this point?
> 
> Personally I'm way less partial to any particular answer as long as it's
> not #3.
> 
> 
> -Sean
> 

This feels like something that should be designed with end-users
in mind, and that means making choices about descriptive words
rather than quirky in-jokes.  As much as I hate to think about the
long threads some of the contention is likely to introduce, not to
mention the bikeshedding over the terms themselves, I have become
convinced that our best long-term solution is a term/name registry
(option 2). We already have that pattern in the governance repository
where official projects describe their service type.

To reduce contention, we could agree in advance to support multi-word
names ("block storage" and "object storage", "block backup" and
"file backup", etc.). Decisions about noun-verb vs. verb-noun,
punctuation, etc. can be dealt with by the group that takes on the
task of setting standards.

As I said in the TC meeting, this seems like something the API working
group could do, if they wanted to take on the responsibility. If not,
then we should establish a new group with a mandate from the TC. Since
we have something like a product catalog already in the governance repo,
we can keep the new data there.
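
For context on why this matters to end users: client tooling resolves services
from the catalog by exactly these strings. A minimal keystoneauth sketch
(credentials and URL are placeholders):

    # Minimal sketch: the catalog lookup that makes colliding service types
    # painful. Credentials and auth URL are placeholders.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)
    # If two projects both registered 'policy', this lookup would be ambiguous.
    endpoint = sess.get_endpoint(service_type='compute', interface='public')
    print(endpoint)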

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Location of Heka Lua plugins

2016-02-04 Thread Eric LEMOINE
Hi

As discussed yesterday in our IRC meeting, we'll need specific Lua
plugins for parsing OpenStack, MariaDB and RabbitMQ logs.  We already
have these Lua plugins in one of our Fuel plugins [*].  So our plan is
to move these Lua plugins into their own Git repo, and distribute them
as deb and rpm packages in the future.  This will make it easy to
share the Lua plugins between projects, and having a separate Git repo
will facilitate testing, documentation, etc.

But we may not have time to do that by the 4th of March (for Mitaka
3), so my suggestion is to copy the Lua plugins that we need into Kolla.
This would be a temporary measure.  When our Lua plugins are
available/distributed as deb and rpm packages, we will remove the Lua
plugins from the kolla repository and change the Heka Dockerfile to
install the Lua plugins from the package.

Please tell me if you agree with the approach.  Thank you!

[*] 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-02-04 Thread Foley, Emma L

> nice, do you have resource to look at this? or maybe something to add to
> Gnocchi's potential backlog. existing plugin still seems useful to those who
> want to use custom/proprietary storage.

I should have resources for this. 

The question is where it should live.
Since Gnocchi is designed to be standalone, it seems like that's a potential
home for it. 
If not, it also fits in with the existing plugin.

Regards,
Emma

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Fuel Community ISO 8.0

2016-02-04 Thread Igor Belikov
Hi Ivan,

I think this counts as a bug in our community page, thanks for noticing.
You can get the 8.0 Community ISO using the links in the status dashboard on
https://ci.fuel-infra.org
--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com

> On 04 Feb 2016, at 13:53, Ivan Kolodyazhny  wrote:
> 
> Hi team,
> 
> I've tried to download Fuel Community ISO 8.0 from [1] and failed. We've got 
> 2 options there: the latest stable (7.0) and nightly build (9.0). Where can I 
> download the 8.0 build?
> 
> [1] https://www.fuel-infra.org/#fuelget 
> 
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/ 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] save this URL for CI watching

2016-02-04 Thread Emilien Macchi
Now that we have our tempest jobs running in the gate, we can use the
openstack-health service:

http://status.openstack.org/openstack-health/#/?groupKey=project=puppet
(kudos to mtreinish)

Example of job overview:
http://status.openstack.org/openstack-health/#/job/gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7?groupKey=project=hour

It will be very useful for us in tracking down our CI failures; feel free to
use it and report anything weird you see.

Feedback is highly appreciated, thanks
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] compatibility of puppet upstream modules

2016-02-04 Thread Ptacek, MichalX

-----Original Message-----
From: Emilien Macchi [mailto:emil...@redhat.com]
Sent: Thursday, February 04, 2016 10:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [puppet] compatibility of puppet upstream modules

On 02/03/2016 04:03 PM, Ptacek, MichalX wrote:
> Hi all,
>
> I have one general question,
> currently I am deploying liberty openstack as described in
> https://wiki.openstack.org/wiki/Puppet/Deploy
> Unfortunately puppet modules specified in
> puppet-openstack-integration/Puppetfile are not compatible

Did you take the file from stable/liberty branch?
https://github.com/openstack/puppet-openstack-integration/tree/stable/liberty

[Michal Ptacek]  I am deploying scenario003 with stable/liberty

> and some are also missing, as visible from the following output of “puppet
> module list”:
>
> Warning: Setting templatedir is deprecated. See
> http://links.puppetlabs.com/env-settings-deprecations
>    (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in
> `issue_deprecation_warning')
> Warning: Module 'openstack-openstacklib' (v7.0.0) fails to meet some dependencies:
>   'openstack-barbican' (v0.0.1) requires 'openstack-openstacklib' (>=6.0.0 <7.0.0)
>   'openstack-zaqar' (v0.0.1) requires 'openstack-openstacklib' (>=6.0.0 <7.0.0)
> Warning: Module 'puppetlabs-postgresql' (v4.4.2) fails to meet some dependencies:
>   'openstack-openstacklib' (v7.0.0) requires 'puppetlabs-postgresql' (>=3.3.0 <4.0.0)
> Warning: Missing dependency 'deric-storm':
>   'openstack-monasca' (v1.0.0) requires 'deric-storm' (>=0.0.1 <1.0.0)
> Warning: Missing dependency 'deric-zookeeper':
>   'openstack-monasca' (v1.0.0) requires 'deric-zookeeper' (>=0.0.1 <1.0.0)
> Warning: Missing dependency 'dprince-qpid':
>   'openstack-cinder' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
>   'openstack-manila' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
>   'openstack-nova' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
> Warning: Missing dependency 'jdowning-influxdb':
>   'openstack-monasca' (v1.0.0) requires 'jdowning-influxdb' (>=0.3.0 <1.0.0)
> Warning: Missing dependency 'opentable-kafka':
>   'openstack-monasca' (v1.0.0) requires 'opentable-kafka' (>=1.0.0 <2.0.0)
> Warning: Missing dependency 'puppetlabs-stdlib':
>   'antonlindstrom-powerdns' (v0.0.5) requires 'puppetlabs-stdlib' (>= 0.0.0)
> Warning: Missing dependency 'puppetlabs-corosync':
>   'openstack-openstack_extras' (v7.0.0) requires 'puppetlabs-corosync' (>=0.1.0 <1.0.0)
>
> /etc/puppet/modules
> ├── antonlindstrom-powerdns (v0.0.5)
> ├── duritong-sysctl (v0.0.11)
> ├── nanliu-staging (v1.0.4)
> ├── openstack-barbican (v0.0.1)
> ├── openstack-ceilometer (v7.0.0)
> ├── openstack-cinder (v7.0.0)
> ├── openstack-designate (v7.0.0)
> ├── openstack-glance (v7.0.0)
> ├── openstack-gnocchi (v7.0.0)
> ├── openstack-heat (v7.0.0)
> ├── openstack-horizon (v7.0.0)
> ├── openstack-ironic (v7.0.0)
> ├── openstack-keystone (v7.0.0)
> ├── openstack-manila (v7.0.0)
> ├── openstack-mistral (v0.0.1)
> ├── openstack-monasca (v1.0.0)
> ├── openstack-murano (v7.0.0)
> ├── openstack-neutron (v7.0.0)
> ├── openstack-nova (v7.0.0)
> ├── openstack-openstack_extras (v7.0.0)
> ├── openstack-openstacklib (v7.0.0)  invalid
> ├── openstack-sahara (v7.0.0)
> ├── openstack-swift (v7.0.0)
> ├── openstack-tempest (v7.0.0)
> ├── openstack-trove (v7.0.0)
> ├── openstack-tuskar (v7.0.0)
> ├── openstack-vswitch (v3.0.0)
> ├── openstack-zaqar (v0.0.1)
> ├── openstack_integration (???)
> ├── puppet-aodh (v7.0.0)
> ├── puppet-corosync (v0.8.0)
> ├── puppetlabs-apache (v1.4.1)
> ├── puppetlabs-apt (v2.1.1)
> ├── puppetlabs-concat (v1.2.5)
> ├── puppetlabs-firewall (v1.6.0)
> ├── puppetlabs-inifile (v1.4.3)
> ├── puppetlabs-mongodb (v0.11.0)
> ├── puppetlabs-mysql (v3.6.2)
> ├── puppetlabs-postgresql (v4.4.2)  invalid
> ├── puppetlabs-rabbitmq (v5.2.3)
> ├── puppetlabs-rsync (v0.4.0)
> ├── puppetlabs-stdlib (v4.6.0)
> ├── puppetlabs-vcsrepo (v1.3.2)
> ├── puppetlabs-xinetd (v1.5.0)
> ├── qpid (???)
> ├── saz-memcached (v2.8.1)
> ├── stankevich-python (v1.8.0)
> └── theforeman-dns (v3.0.0)
>
> Most of the warnings can probably be ignored, e.g. I assume that the latest
> barbican & zaqar are compatible with the liberty (7.0) version of
> openstack-openstacklib:
>
>   'openstack-barbican' (v0.0.1) requires 'openstack-openstacklib' (>=6.0.0 <7.0.0)
>   'openstack-zaqar' (v0.0.1) requires 'openstack-openstacklib' (>=6.0.0 <7.0.0)
>
> Am I right, or do I need to get rid of all of these compatibility warnings
> before proceeding further?

If you look at our CI jobs, we also have 

[openstack-dev] [gate] build wedged very oddly

2016-02-04 Thread Sean Dague
I was just looking at the gate to try to sort out why things are backed
up, and noticed the top of the gate was waiting on one test -
https://jenkins02.openstack.org/job/gate-tempest-dsvm-full/32506/console

Jenkins said it had been running for 6 hours. It failed on an
internal timeout about 4 hours ago, but has not been reaped by Jenkins.
I'm kicking the patch out now, so there will be no log on the log
servers, but the console log above remains for post mortem.

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-02-04 Thread Gal Sagie
Hi Assaf,

I think that if we define certain criteria we need to make sure that they
apply to everyone equally and are well understood.
I have contributed, and still do, to both OVN and Dragonflow and hope to
continue to do so in the future;
I want to see both of these solutions become great production-grade open
source alternatives.

I have less experience in open source and in this community than most of
you, but from what I have seen users
do take these things into consideration. It is hard for a new user, and even
a not-so-new one, to understand the possibilities correctly,
especially if we can't even define them ourselves.

Instead of spending time on technology and on solving the problems for our
users, we are concentrating
on this conversation. We haven't even talked about production maturity,
feature richness and stability, as you say,
and by making this move we are signaling something else to our users
without actually discussing all of the
former ourselves.

I will be OK with whatever the Neutron team decides on this, as they can
define the criteria as they please.
I just shared my opinion on this process and my disappointment with it as
someone who values open source
a lot.

Gal.


On Thu, Feb 4, 2016 at 11:31 AM, Assaf Muller  wrote:

> On Thu, Feb 4, 2016 at 10:20 AM, Assaf Muller  wrote:
> > On Thu, Feb 4, 2016 at 8:33 AM, Gal Sagie  wrote:
> >> As i have commented on the patch i will also send this to the mailing
> list:
> >>
> >> I really dont see why Dragonflow is not part of this list, given the
> >> criteria you listed.
> >>
> >> Dragonflow is fully developed under Neutron/OpenStack, no other
> >> repositories. It is fully Open source and already have a community of
> people
> >> contributing and interest from various different companies and OpenStack
> >> deployers. (I can prepare the list of active contributions and of
> interested
> >> parties) It also puts OpenStack Neutron APIs and use cases as first
> class
> >> citizens and working on being an integral part of OpenStack.
> >>
> >> I agree that OVN needs to be part of the list, but you brought up this
> >> criteria in regards to ODL, so: OVN like ODL is not only Neutron and
> >> OpenStack and is even running/being implemented on a whole different
> >> governance model and requirements to it.
> >>
> >> I think you also forgot to mention some other projects as well that are
> >> fully open source with a vibrant and diverse community, i will let them
> >> comment here by themselves.
> >>
> >> Frankly this approach disappoints me, I have honestly worked hard to
> make
> >> Dragonflow fully visible and add and support open discussion and follow
> the
> >> correct guidelines to work in a project. I think that Dragonflow
> community
> >> has already few members from various companies and this is only going to
> >> grow in the near future. (in addition to deployers that are considering
> it
> >> as a solution)  we also welcome anyone that wants to join and be part
> of the
> >> process to step in, we are very welcoming
> >>
> >> I also think that the correct way to do this is to actually add as
> reviewers
> >> all lieutenants of the projects you are now removing from Neutron big
> >> stadium and letting them comment.
> >>
> >> Gal.
> >
> > I understand you see 'Dragonflow being part of the Neutron stadium'
> > and 'Dragonflow having high visibility' as tied together. I'm curious,
> > from a practical perspective, how does being a part of the stadium
> > give Dragonflow visibility? If it were not a part of the stadium and
> > you had your own PTL etc, what specifically would change so that
> > Dragonflow would be less visible. Currently I don't understand why
> > being a part of the stadium is good or bad for a networking project,
> > or why does it matter. Looking at Russell's patch, it's concerned with
> > placing projects (e.g. ODL, OVN, Dragonflow) either in or out of the
> > stadium and the criteria for doing so, I'm just asking how do you
> > (Gal) perceive the practical effect of that decision.
>
> Allow me to expand:
> It seems to me like there is no significance to who is 'in or out'.
> However, people, including potential customers, look at the list of
> the Neutron stadium and deduce that project X is better than Y because
> X is in but Y is out, and *that* in itself is the value of being in or
> out, even though it has no meaning. Maybe we should explain what
> exactly does it mean being in or out. It's just a governance decision,
> it doesn't reflect in any way of the quality or appeal of a project
> (For example some of the open source Neutron drivers out of the
> stadium are much more mature, stable and feature full than other
> drivers in the stadium).
>
> >
> >>
> >>
> >> On Wed, Feb 3, 2016 at 11:48 PM, Russell Bryant 
> wrote:
> >>>
> >>> On 11/30/2015 07:56 PM, Armando M. wrote:
> >>> > I would like to suggest that we evolve the structure of the Neutron
> 

[openstack-dev] [all]Whom should I contact to ask for ATC code

2016-02-04 Thread Qiao, Liyong
Hello all
I am sorry for broadcasting, but I really don't know whom I should contact.
The thing is:
my friend is a new developer who has already contributed to an OpenStack project,
and would like to know in what way (or via which email contact) he can get the ATC
code for the Austin summit.

BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] the trouble with names

2016-02-04 Thread Sean Dague
A few issues have crept up recently with the service catalog, API
headers, API end points, and even similarly named resources in different
resources (e.g. backup), that are all circling around a key problem.
Distributed teams and naming collision.

Every OpenStack project has a unique name by virtue of having a git
tree. Once they claim 'openstack/foo', foo is theirs in the OpenStack
universe for all time (or until trademarks say otherwise). Nova in
OpenStack will always mean one project.

There has also been a desire to replace project names with
common/generic names, in the service catalog, API headers, and a few
other places. Nova owns 'compute'. Except... that's only because we all
know that it does. We don't *actually* have a registry for those values.

So the code names are well regulated, the common names, that we
encourage use of, are not. Devstack in tree code defines some
conventions. However with the big tent, things get kind of squirrelly
pretty quickly. Congress registering 'policy' as their endpoint type is
a good example of that -
https://github.com/openstack/congress/blob/master/devstack/plugin.sh#L147

Naming is hard. And trying to boil down complicated state machines to
one or two word shibboleths means that inevitably we're going to find
some words just keep cropping up: policy, flavor, backup, meter. We do
however need to figure out a way forward.

Let's start with the top level names (resource overlap cascades from there).

What options do we have?

1) Use the names we already have: nova, glance, swift, etc.

Upside, collision problem is solved. Downside, you need a secret decoder
ring to figure out what project does what.

2) Have a registry of "common" names.

Upside, we can safely use common names everywhere and not fear collision
down the road.

Downside, yet another contention point.

A registry would clearly be under TC administration, though all the
heavy lifting might be handed over to the API working group. I still
imagine collision around some areas might be contentious.

3) Use either, inconsistently, hope for the best. (aka - status quo)

Upside, no long mailing list thread to figure out the answer. Downside,
it sucks.


Are there other options missing? Where are people leaning at this point?

Personally I'm way less partial to any particular answer as long as it's
not #3.


-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Fuel Community ISO 8.0

2016-02-04 Thread Ivan Kolodyazhny
Hi team,

I've tried to download Fuel Community ISO 8.0 from [1] and failed. We've
got 2 options there: the latest stable (7.0) and nightly build (9.0). Where
can I download the 8.0 build?

[1] https://www.fuel-infra.org/#fuelget

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker][tacker] TOSCA-Parser 0.4.0 PyPI release

2016-02-04 Thread Sahdev P Zala
:) Hi Sridhar,

You are welcome, and absolutely. Looking forward to continued
collaboration and enhanced NFV support in the parser as we go forward.

Thanks!

Regards, 
Sahdev Zala




From:   Sridhar Ramaswamy 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   02/04/2016 03:04 PM
Subject: Re: [openstack-dev] [tacker][tacker] TOSCA-Parser 0.4.0 PyPI release



Thanks for the note, Sahdev. Appreciate your support in getting the TOSCA
NFV Profile in quickly, in time for our Mitaka plans.

This is awesome collaboration between the tacker and heat-translator projects.
Let's keep this going!

- Sridhar

PS: fixed the subject :)

On Wed, Feb 3, 2016 at 6:08 PM, Sahdev P Zala  wrote:
Hello Tacker team,

Sorry I forgot to include the project name in the subject of the following 
original email, so FYI. 

Thanks! 

Regards, 
Sahdev Zala

- Forwarded by Sahdev P Zala/Durham/IBM on 02/03/2016 09:03 PM -

From: Sahdev P Zala/Durham/IBM@IBMUS
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 02/03/2016 08:16 PM
Subject: [openstack-dev] [tosca-parser] [heat-translator] [heat]
TOSCA-Parser 0.4.0 PyPI release




Hello Everyone, 

On behalf of the TOSCA-Parser team, I am pleased to announce the 0.4.0
PyPI release of tosca-parser, which can be downloaded from
https://pypi.python.org/pypi/tosca-parser.
This release includes the following enhancements:
1) Initial support for TOSCA Simple Profile for Network Functions
   Virtualization (NFV) v1.0
2) Support for TOSCA Groups and Group Type
3) Initial support for TOSCA Policy and Policy Types
4) Support for TOSCA Namespaces
5) Many bug fixes and minor enhancements including:
   - Fix for proper inheritance among types and custom relationships based on it
   - Updated min and max length with map
   - New get_property function for HOST properties similar to get_attribute function
   - Updated datatype_definition
   - Support for nested properties
   - Fix for incorrect inheritance in properties of capabilities
   - High level validation of imported template types
   - Six compatibility for urllib
   - Test updates
   - Documentation updates
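
For anyone who wants to try the release, a minimal usage sketch (the template
path below is a placeholder):

    # Minimal sketch: parse a TOSCA template with the new release.
    # 'my_template.yaml' is a placeholder path.
    from toscaparser.tosca_template import ToscaTemplate

    tosca = ToscaTemplate('my_template.yaml')
    for node in tosca.nodetemplates:
        print(node.name, node.type)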

Please let me know if you have any questions or comments.

Thanks!

Regards,
Sahdev Zala
PTL, Tosca-Parser




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-02-04 Thread Assaf Muller
On Thu, Feb 4, 2016 at 5:55 PM, Sean M. Collins  wrote:
> On Thu, Feb 04, 2016 at 04:20:50AM EST, Assaf Muller wrote:
>> I understand you see 'Dragonflow being part of the Neutron stadium'
>> and 'Dragonflow having high visibility' as tied together. I'm curious,
>> from a practical perspective, how does being a part of the stadium
>> give Dragonflow visibility? If it were not a part of the stadium and
>> you had your own PTL etc, what specifically would change so that
>> Dragonflow would be less visible.
>
>> Currently I don't understand why
>> being a part of the stadium is good or bad for a networking project,
>> or why does it matter.
>
>
> I think the issue is of public perception.

That's what I was trying to point out. But it must be something other
than perception, otherwise we could remove the inclusion list
altogether. A project would not be in or out.

> As others have stated, the
> issue is the "in" vs. "out" problem. We had a similar situation
> with 3rd party CI, where we had a list of drivers that were "nice" and
> had CI running vs drivers that were "naughty" and didn't. Prior to the
> vendor decomposition effort, We had a multitude of drivers that were
> in-tree, with the public perception that drivers that were in Neutron's
> tree were "sanctioned" by the Neutron project.
>
> That may not have been the intention, but that's what I think happened.
>
> --
> Sean M. Collins
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][Magnum] ways to get CA certificate in make-cert.sh from Magnum

2016-02-04 Thread Guz Egor
Wanghua,

Could you elaborate why using a token is a problem? Provisioning a cluster takes
a deterministic time, and the expiration time shouldn't be a problem (e.g. we can
always assume that provisioning shouldn't take more than an hour). Also, we can
generate a new token every time we update the stack, can't we?

---
Egor

From: Corey O'Brien
To: OpenStack Development Mailing List (not for usage questions)
Sent: Thursday, February 4, 2016 8:24 PM
Subject: Re: [openstack-dev] [openstack][Magnum] ways to get CA certificate in make-cert.sh from Magnum

There currently isn't a way to distinguish between the user who creates the bay
and the nodes in the bay, because the user is root on those nodes. Any credential
that the node uses to communicate with Magnum is going to be accessible to the
user.

Since we already have the trust, that seems like the best way to proceed for
now, just to get something working.

Corey

On Thu, Feb 4, 2016 at 10:53 PM 王华  wrote:

Hi all,

Magnum now uses a token to get the CA certificate in make-cert.sh. A token has
an expiration time, so we should change this method. Here are two proposals.

1. Use a trust, which I have introduced in [1]. This approach has a
disadvantage: we can't limit access to some APIs. For example, if we want to
add a limitation that some APIs can only be accessed from the Bay and can't be
accessed by users outside, we need a way to distinguish these users, from Bay
or from outside.

2. Create a user with a role to access Magnum. This approach is used in Heat:
Heat creates a user for each stack to communicate with Heat. We can add a role
to the user, which is already introduced in [1]. The user can directly access
Magnum for some limited APIs. With the trust id, the user can access other
services.

[1] https://review.openstack.org/#/c/268852/
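
For reference, a rough sketch of what creating such a trust looks like with
python-keystoneclient (all ids, credentials and role names below are
placeholders, not Magnum's actual implementation):

    # Rough sketch: create a trust that lets a bay-side trustee call Magnum
    # on behalf of the bay owner. All ids, credentials and roles are placeholders.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='bay-owner', password='secret',
                       project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    keystone = client.Client(session=session.Session(auth=auth))

    trust = keystone.trusts.create(trustor_user='BAY_OWNER_ID',
                                   trustee_user='MAGNUM_TRUSTEE_ID',
                                   project='PROJECT_ID',
                                   role_names=['magnum_bay'],
                                   impersonation=True)
    print(trust.id)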

Regards,
Wanghua

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Authorization by user_id does not work in V2.1 API

2016-02-04 Thread Takashi Natsume
Hi Nova developers,

I have already submitted a bug report [1]:
authorization by user_id when deleting a VM instance does not work in the Nova
v2.1 API, although it works in the Nova v2.0 API.

Has this change been done intentionally?

[1] Authorization by user_id does not work in V2.1 API
https://bugs.launchpad.net/nova/+bug/1539351

Regards,
Takashi Natsume
NTT Software Innovation Center
Tel: +81-422-59-4399
E-mail: natsume.taka...@lab.ntt.co.jp





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] gate issues

2016-02-04 Thread Guz Egor
Corey,

I think we should do more investigation before applying any "hot" patches.
E.g. I looked at several failures today and honestly there is no way to find
out the reasons. I believe we are not copying logs
(https://github.com/openstack/magnum/blob/master/magnum/tests/functional/python_client_base.py#L163)
during test failure: we register the handler in setUp
(https://github.com/openstack/magnum/blob/master/magnum/tests/functional/python_client_base.py#L244),
but the Swarm tests create the bay in setUpClass
(https://github.com/openstack/magnum/blob/master/magnum/tests/functional/swarm/test_swarm_python_client.py#L48),
which is called before setUp. So there is no way to see any logs from the VM.

Sorry, I cannot submit a patch/debug by myself because I will get my laptop
back only on Tue ):

---
Egor
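
One possible direction, sketched below with illustrative class and helper
names (not the actual Magnum test code), is to wrap the bay creation in
setUpClass so logs are collected even when setUp never runs:

    # Illustrative sketch only: collect node logs if bay creation fails in
    # setUpClass, since per-test setUp/exception handlers never run then.
    import unittest

    class SwarmBayTestBase(unittest.TestCase):

        @classmethod
        def _create_bay(cls):
            raise NotImplementedError('provision the bay here')

        @classmethod
        def _copy_logs_from_bay_nodes(cls):
            print('copy /var/log from the bay nodes to the test log dir here')

        @classmethod
        def setUpClass(cls):
            super(SwarmBayTestBase, cls).setUpClass()
            try:
                cls.bay = cls._create_bay()
            except Exception:
                # Collect logs before re-raising; setUp never gets a chance.
                cls._copy_logs_from_bay_nodes()
                raise
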
From: Corey O'Brien
To: OpenStack Development Mailing List (not for usage questions)
Sent: Thursday, February 4, 2016 9:03 PM
Subject: [openstack-dev] [Magnum] gate issues

So as we're all aware, the gate is a mess right now. I wanted to sum up some of
the issues so we can figure out solutions.

1. The functional-api job sometimes fails because bays time out building after
1 hour. The logs look something like this:
magnum.tests.functional.api.v1.test_bay.BayTest.test_create_list_and_delete_bays [3733.626171s] ... FAILED
I can reproduce this hang on my devstack with etcdctl 2.0.10 as described in
this bug (https://bugs.launchpad.net/magnum/+bug/1541105), but apparently
either my fix using 2.2.5 (https://review.openstack.org/#/c/275994/) is
incomplete or there is another intermittent problem, because it happened again
even with that fix:
http://logs.openstack.org/94/275994/1/check/gate-functional-dsvm-magnum-api/32aacb1/console.html

2. The k8s job has some sort of intermittent hang as well that causes a similar
symptom as with swarm. https://bugs.launchpad.net/magnum/+bug/1541964

3. When the functional-api job runs, it frequently destroys the VM, causing the
jenkins slave agent to die. Example:
http://logs.openstack.org/03/275003/6/check/gate-functional-dsvm-magnum-api/a9a0eb9//console.html
When this happens, zuul re-queues a new build from the start on a new VM. This
can happen many times in a row before the job completes. I chatted with
openstack-infra about this and after taking a look at one of the VMs, it looks
like memory over-consumption leading to thrashing was a possible culprit. The
sshd daemon was also dead but the console showed things like "INFO: task
kswapd0:77 blocked for more than 120 seconds". A cursory glance and following
some of the jobs seems to indicate that this doesn't happen on RAX VMs, which
have swap devices, unlike the OVH VMs.

4. In general, even when things work, the gate is really slow. The sequential
master-then-node build process in combination with underpowered VMs makes bay
builds take 25-30 minutes when they do succeed. Since we're already close to
tipping over a VM, we run functional tests with concurrency=1, so 2 bay builds
means almost the entire allotted devstack testing time (generally 75 minutes of
actual test time available, it seems).

Corey
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][Magnum] ways to get CA certificate in make-cert.sh from Magnum

2016-02-04 Thread 王华
Hi Corey,

The user is root on those nodes and can get any credentials on those nodes.
We cannot avoid that, but in this way we can prevent users who cannot log in
to the nodes from accessing some limited APIs.

Regards,
Wanghua

On Fri, Feb 5, 2016 at 12:24 PM, Corey O'Brien 
wrote:

> There currently isn't a way to distinguish between user who creates the
> bay and the nodes in the bay because the user is root on those nodes. Any
> credential that the node uses to communicate with Magnum is going to be
> accessible to the user.
>
> Since we already have the trust, that seems like the best way to proceed
> for now just to get something working.
>
> Corey
>
> On Thu, Feb 4, 2016 at 10:53 PM 王华  wrote:
>
>> Hi all,
>>
>> Magnum now use a token to get CA certificate in make-cert.sh. Token has a
>> expiration time. So we should change this method. Here are two proposals.
>>
>> 1. Use trust which I have introduced in [1]. The way has a disadvantage.
>> We can't limit the access to some APIs. For example, if we want to add a
>> limitation that some APIs can only be accessed from Bay and can't be
>> accessed by users outside. We need a way to distinguish these users, from
>> Bay or from outside.
>>
>> 2. We create a user with the role to access Magnum. The way is used in
>> Heat. Heat creates a user for each stack to communicate with Heat. We can
>> add a role to the user which is already introduced in [1]. The user can
>> directly access Magnum for some limited APIs. With trust id, the user can
>> access other services.
>>
>> [1] https://review.openstack.org/#/c/268852/
>>
>> Regards,
>> Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] Kilo gate doesn't like testtools 2.0.0

2016-02-04 Thread Tony Breeds
Hi All,
Just a quick heads up that the kilo gate (and therefore anything that
relies on kilo)[1] is a little busted.

This was originally noticed in 1541879[2] and a quick cap for g-r was proposed,
however if my analysis is correct this can't land because of 1542164[3].

testtools 2.0.0 was released 2016-02-04 and has a hard requirement on
fixtures>=1.3.0, which isn't compatible with stable/kilo's global-requirements.

We can't land an update to requirements to cap testtools as we install
testtools (2.0.0) when we install os-testr[4].  Nothing we install from git in a
typical devstack run requires testtools (it's only listed in
test-requirements.txt) so we end up with 2.0.0.

Then when we run services, the requirements kick in and balk because, as an
example, keystone requires fixtures>=0.3.14,<1.3.0 and testtools requires
fixtures>=1.3.0
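
A quick way to see the conflict on such a node (just a sketch; 'keystone' is
only an example of an installed service):

    # Sketch: ask setuptools to re-resolve an installed service's declared
    # requirements; a fixtures/testtools clash shows up as VersionConflict.
    import pkg_resources

    try:
        pkg_resources.require('keystone')
    except pkg_resources.VersionConflict as exc:
        print('Conflict: %s' % exc)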

The way forward is to land https://review.openstack.org/276580 and
https://review.openstack.org/276275/. This will unblock the gate and buy us time
to work out the right way to make kilo, os-testr and testtools play nice.
There are a few options; none are very nice, and they generate work at a time
when the key players are travelling.

Of course I could be way off base and there is a much easier option.

Yours Tony.

[1] liberty grenade and neutron master seems to run a bunch of *-kilo jobs
[2] https://bugs.launchpad.net/neutron/+bug/1541879
[3] https://bugs.launchpad.net/devstack/+bug/1542164
[4] As we're pip_install'ing it we don't massage the requirements to match g-r.
We could update the gate to install os-testr from git as a work around but
that's not what I chose to do


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all]Whom should I contact to ask for ATC code

2016-02-04 Thread Eli Qiao

hi Thierry,
That's great, thanks for your reply :)

On 2016年02月04日 21:28, Thierry Carrez wrote:


The Summit events staff regularly emits new discount codes to account 
for recent new contributors (or new project teams being recognized as 
official). Your friend should automatically receive a code in a future 
batch. 


--
Best Regards, Eli(Li Yong)Qiao
Intel OTC China

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas] [octavia] Proposing Stephen Balukoff as Octavia Core

2016-02-04 Thread Brandon Logan
+1

On Fri, 2016-02-05 at 01:07 +, Adam Harwell wrote:
> +1 from me!
> 
> From: Michael Johnson 
> Sent: Thursday, February 4, 2016 7:03 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [lbaas] [octavia] Proposing Stephen Balukoff as 
> Octavia Core
> 
> Octavia Team,
> 
> I would like to nominate Stephen Balukoff as a core reviewer for the
> OpenStack Octavia project.  His contributions[1] are in line with
> other cores and he has been an active member of our community.
> 
> Octavia cores please vote by replying to this e-mail.
> 
> Michael
> 
> [1] http://stackalytics.com/report/contribution/octavia/90
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas] [octavia] Proposing Stephen Balukoff as Octavia Core

2016-02-04 Thread Doug Wiegley
+1

Doug


> On Feb 4, 2016, at 7:06 PM, Brandon Logan  wrote:
> 
> +1
> 
>> On Fri, 2016-02-05 at 01:07 +, Adam Harwell wrote:
>> +1 from me!
>> 
>> From: Michael Johnson 
>> Sent: Thursday, February 4, 2016 7:03 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [lbaas] [octavia] Proposing Stephen Balukoff as 
>> Octavia Core
>> 
>> Octavia Team,
>> 
>> I would like to nominate Stephen Balukoff as a core reviewer for the
>> OpenStack Octavia project.  His contributions[1] are in line with
>> other cores and he has been an active member of our community.
>> 
>> Octavia cores please vote by replying to this e-mail.
>> 
>> Michael
>> 
>> [1] http://stackalytics.com/report/contribution/octavia/90
>> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Migration progress

2016-02-04 Thread Zhenyu Zheng
I think we can add a config option for this and set a reasonable default
value; we can also add a help message to inform the user about how an
inappropriate value for this config option will affect performance.
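
A rough sketch of what such an option could look like (the option name, default
and help text below are made up for illustration, not an agreed interface):

    # Illustrative only: a possible knob for the migration progress update
    # interval. Name, default and help text are placeholders.
    from oslo_config import cfg

    live_migration_opts = [
        cfg.IntOpt('live_migration_progress_update_interval',
                   default=5,
                   help='Interval in seconds between writes of live migration '
                        'progress to the database. Very small values increase '
                        'RPC and DB load; very large values make stuck '
                        'migrations harder to detect.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(live_migration_opts, group='libvirt')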



On Wed, Feb 3, 2016 at 7:45 PM, Daniel P. Berrange 
wrote:

> On Wed, Feb 03, 2016 at 11:27:16AM +, Paul Carlton wrote:
> > On 03/02/16 10:49, Daniel P. Berrange wrote:
> > >On Wed, Feb 03, 2016 at 10:44:36AM +, Daniel P. Berrange wrote:
> > >>On Wed, Feb 03, 2016 at 10:37:24AM +, Koniszewski, Pawel wrote:
> > >>>Hello everyone,
> > >>>
> > >>>On the yesterday's live migration meeting we had concerns that
> interval of
> > >>>writing migration progress to the database is too short.
> > >>>
> > >>>Information about migration progress will be stored in the database
> and
> > >>>exposed through the API (/servers//migrations/). In current
> > >>>proposition [1] migration progress will be updated every 2 seconds. It
> > >>>basically means that every 2 seconds a call through RPC will go from
> compute
> > >>>to conductor to write migration data to the database. In case of
> parallel
> > >>>live migrations each migration will report progress by itself.
> > >>>
> > >>>Isn't 2 seconds interval too short for updates if the information is
> exposed
> > >>>through the API and it requires RPC and DB call to actually save it
> in the
> > >>>DB?
> > >>>
> > >>>Our default configuration allows only for 1 concurrent live migration
> [2],
> > >>>but it might vary between different deployments and use cases as it is
> > >>>configurable. Someone might want to trigger 10 (or even more)
> parallel live
> > >>>migrations and each might take even a day to finish in case of block
> > >>>migration. Also if deployment is big enough rabbitmq might be
> fully-loaded.
> > >>>I'm not sure whether updating each migration every 2 seconds makes
> sense in
> > >>>this case. On the other hand it might be hard to observe fast enough
> that
> > >>>migration is stuck if we increase this interval...
> > >>Do we have any actual data that this is a real problem. I have a
> pretty hard
> > >>time believing that a database update of a single field every 2
> seconds is
> > >>going to be what pushes Nova over the edge into a performance
> collapse, even
> > >>if there are 20 migrations running in parallel, when you compare it to
> the
> > >>amount of DB queries & updates done across other areas of the code for
> pretty
> > >>much every singke API call and background job.
> > >Also note that progress is rounded to the nearest integer. So even if
> the
> > >migration runs all day, there is a maximum of 100 possible changes in
> value
> > >for the progress field, so most of the updates should turn in to no-ops
> at
> > >the database level.
> > >
> > >Regards,
> > >Daniel
> > I agree with Daniel, these rpc and db access ops are a tiny percentage
> > of the overall load on rabbit and mysql and properly configured these
> > subsystems should have no issues with this workload.
> >
> > One correction, unless I'm misreading it, the existing
> > _live_migration_monitor code updates the progress field of the instance
> > record every 5 seconds.  However this value can go up and down so
> > an infinate number of updates are possible?
>
> Oh yes, you are in fact correct. Technically you could have an unbounded
> number of updates if migration goes backwards. Some mitigation against
> this is if we see progress going backwards we'll actually abort the
> migration if it gets stuck for too long. We'll also be progressively
> increasing the permitted downtime. So except in pathelogical scenarios
> I think the number of updates should still be relatively small.
>
> > However, the issue raised here is not with the existing implementation
> > but with the proposed change
> > https://review.openstack.org/#/c/258813/5/nova/virt/libvirt/driver.py
> > This add a save() operation on the migration object every 2 seconds
>
> Ok, that is more heavy weight since it is recording the raw byte values
> and so it is guaranteed to do a database update pretty much every time.
> It still shouldn't be too unreasonable a loading though. FWIW I think
> it is worth being consistent in the update frequency betweeen the
> progress value & the migration object save, so switching to be every
> 5 seconds probably makes more sense, so we know both objects are
> reflecting the same point in time.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>

[openstack-dev] [lbaas] [octavia] Proposing Stephen Balukoff as Octavia Core

2016-02-04 Thread Michael Johnson
Octavia Team,

I would like to nominate Stephen Balukoff as a core reviewer for the
OpenStack Octavia project.  His contributions[1] are in line with
other cores and he has been an active member of our community.

Octavia cores please vote by replying to this e-mail.

Michael

[1] http://stackalytics.com/report/contribution/octavia/90

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-02-04 Thread Russell Bryant
On 02/04/2016 05:36 PM, Assaf Muller wrote:
> On Thu, Feb 4, 2016 at 5:55 PM, Sean M. Collins  wrote:
>> On Thu, Feb 04, 2016 at 04:20:50AM EST, Assaf Muller wrote:
>>> I understand you see 'Dragonflow being part of the Neutron stadium'
>>> and 'Dragonflow having high visibility' as tied together. I'm curious,
>>> from a practical perspective, how does being a part of the stadium
>>> give Dragonflow visibility? If it were not a part of the stadium and
>>> you had your own PTL etc, what specifically would change so that
>>> Dragonflow would be less visible.
>>
>>> Currently I don't understand why
>>> being a part of the stadium is good or bad for a networking project,
>>> or why does it matter.
>>
>>
>> I think the issue is of public perception.
> 
> That's what I was trying to point out. But it must be something other
> than perception, otherwise we could remove the inclusion list
> altogether. A project would not be in or out.

There has to be a list somewhere.  That's how OpenStack governance
works.  We have project teams that work together to produce a set of
deliverables, where each deliverable is made up of one or more git
repositories.

The ongoing issue is trying to find the right structure that matches how
our teams are working and what they're willing to own.  The current
approach hasn't worked, so it's time for another iteration.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][aodh] announcing Liusheng as new Aodh liaison

2016-02-04 Thread Zhenyu Zheng
Congratulations Liusheng

On Fri, Feb 5, 2016 at 12:07 AM, gordon chung  wrote:

> hi,
>
> we've been searching for a lead/liaison/lieutenant for Aodh for some
> time. thankfully, we've had a volunteer.
>
> i'd like to announce Liusheng as the new lead of Aodh, the alarming
> service under Telemetry. he will help me with monitor bugs and specs and
> will be another resource for alarming related items. he will also help
> track some of the features we hope to implement[1].
>
> i'll let him mention some of the target goals but for now, i'd like to
> thank him for volunteering to help improve the community.
>
> [1] https://wiki.openstack.org/wiki/Telemetry/RoadMap#Aodh_.28alarming.29
>
> cheers,
> --
> gord
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-04 Thread Armando M.
On 4 February 2016 at 08:22, John Belamaric  wrote:

>
> > On Feb 4, 2016, at 11:09 AM, Carl Baldwin  wrote:
> >
> > On Thu, Feb 4, 2016 at 7:23 AM, Pavel Bondar 
> wrote:
> >> I am trying to bring more attention to [1] to make a final decision on
> >> the approach to use.
> >> There are a few points that are not 100% clear to me at this point.
> >>
> >> 1) Do we plan to switch all current clouds to pluggable ipam
> >> implementation in Mitaka?
> >
> > I think our plan originally was only to deprecate the non-pluggable
> > implementation in Mitaka and remove it in Newton.  However, this is
> > worth some more consideration.  The pluggable version of the reference
> > implementation should, in theory, be at parity with the current
> > non-pluggable implementation.  We've tested it before and shown
> > parity.  What we're missing is regular testing in the gate to ensure
> > it continues this way.
> >
>
> Yes, it certainly should be at parity, and gate testing to ensure it would
> be best.
>
> >> yes -->
> >> Then data migration can be done as alembic_migration and it is what
> >> currently implemented in [2] PS54.
> >> In this case during upgrade from Liberty to Mitaka all users are
> >> unconditionally switched to reference ipam driver
> >> from built-in ipam implementation.
> >> If operator wants to continue using build-in ipam implementation it can
> >> manually turn off ipam_driver in neutron.conf
> >> immediately after upgrade (data is not deleted from old tables).
> >
> > This has a certain appeal to it.  I think the migration will be
> > straight-forward since the table structure doesn't really change much.
> > Doing this as an alembic migration would be the easiest from an
> > upgrade point of view because it fits seamlessly in to our current
> > upgrade strategy.
> >
> > If we go this way, we should get this in soon so that we can get the
> > gate and others running with this code for the remainder of the cycle.
> >
>
> If we do this, and the operator reverts back to the non-pluggable version,
> then we will leave stale records in the new IPAM tables. At the very least,
> we would need a way to clean those up and to migrate at a later time.
>
> >> no -->
> >> Operator is free to choose whether it will switch to pluggable ipam
> >> implementation
> >> and when. And it leads to no automatic data migration.
> >> In this case operator is supplied with script for migration to pluggable
> >> ipam (and probably from pluggable ipam),
> >> which can be executed by operator during upgrade or at any point after
> >> upgrade is done.
> >> I was testing this approach in [2] PS53 (have unresolved issues in it
> >> for now).
> >
> > If there is some risk in changing over then this should still be
> > considered.  But, the more I think about it, the more I think that we
> > should just make the switch seamlessly for the operator and be done
> > with it.  This approach puts a certain burden on the operator to
> > choose when to do the migration and go through the steps manually to
> > do it.  And, since our intention is to deprecate and remove the
> > non-pluggable implementation, it is inevitable that they will have to
> > eventually switch anyway.
> >
> > This also makes testing much more difficult.  If we go this route, we
> > really should be testing both equally.  Does this mean that we need to
> > set up a whole new job to run the pluggable implementation along side
> > the old implementation?  This kind of feels like a nightmare to me.
> > What do you think?
> >
>
> Originally (as I mentioned in the meeting), I was thinking that we should
> not automatically migrate. However, I see the appeal of your arguments.
> Seamless is best, of course. But if we offer going back to non-pluggable,
> (which I think we need to at this point in the Mitaka cycle), we probably
> need to provide a script as mentioned above. Seems feasible, though.
>
>
>
>
We're tackling more than one issue in this thread and I am having a hard
time wrapping my head around it. Let me try to sum it all up.

a) Switching from non-pluggable to pluggable is a matter of running a
data migration + a config change
b) We can either switch automatically on restart (option b1) or manually on
operator command (b2)
c) Do we make pluggable ipam the default, and when?
d) Testing the migration
e) Deprecating the non-pluggable one.

I hope we are all in agreement on bullet point a), because knowing the
complexity of your problem is halfway to our solution.

As for b), I think that manual migration is best for two reasons: 1) in HA
scenarios, a seamless upgrade (i.e. on server restart) can be a challenge; 2)
the operator must 'manually' change the driver, so he/she is very conscious
of what he/she is doing and can take enough precautions should something go
astray. Technically we can make this as sophisticated and seamless as we
want, but this is a one-off, once it's done the pain goes away, and we
won't be 

Re: [openstack-dev] [lbaas] [octavia] Proposing Stephen Balukoff as Octavia Core

2016-02-04 Thread Adam Harwell
+1 from me!

From: Michael Johnson 
Sent: Thursday, February 4, 2016 7:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [lbaas] [octavia] Proposing Stephen Balukoff as 
Octavia Core

Octavia Team,

I would like to nominate Stephen Balukoff as a core reviewer for the
OpenStack Octavia project.  His contributions[1] are in line with
other cores and he has been an active member of our community.

Octavia cores please vote by replying to this e-mail.

Michael

[1] http://stackalytics.com/report/contribution/octavia/90

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Doug Hellmann
Excerpts from Hayes, Graham's message of 2016-02-04 12:54:56 +:
> On 04/02/2016 11:40, Sean Dague wrote:
> > A few issues have crept up recently with the service catalog, API
> > headers, API end points, and even similarly named resources in
> > different resources (e.g. backup), that are all circling around a key
> > problem. Distributed teams and naming collision.
> >
> > Every OpenStack project has a unique name by virtue of having a git
> > tree. Once they claim 'openstack/foo', foo is theirs in the
> > OpenStack universe for all time (or until trademarks say otherwise).
> > Nova in OpenStack will always mean one project.
> >
> > There has also been a desire to replace project names with
> > common/generic names, in the service catalog, API headers, and a few
> > other places. Nova owns 'compute'. Except... that's only because we
> > all know that it does. We don't *actually* have a registry for those
> > values.
> >
> > So the code names are well regulated, the common names, that we
> > encourage use of, are not. Devstack in tree code defines some
> > conventions. However with the big tent, things get kind of squirrelly
> > pretty quickly. Congress registering 'policy' as their endpoint type
> > is a good example of that -
> > https://github.com/openstack/congress/blob/master/devstack/plugin.sh#L147
> >
> >  Naming is hard. And trying to boil down complicated state machines
> > to one or two word shibboleths means that inevitably we're going to
> > find some words just keep cropping up: policy, flavor, backup, meter.
> > We do however need to figure out a way forward.
> >
> > Lets start with the top level names (resource overlap cascades from
> > there).
> >
> > What options do we have?
> >
> > 1) Use the names we already have: nova, glance, swift, etc.
> >
> > Upside, collision problem is solved. Downside, you need a secret
> > decoder ring to figure out what project does what.
> >
> > 2) Have a registry of "common" names.
> >
> > Upside, we can safely use common names everywhere and not fear
> > collision down the road.
> >
> > Downside, yet another contention point.
> >
> > A registry would clearly be under TC administration, though all the
> > heavy lifting might be handed over to the API working group. I still
> > imagine collision around some areas might be contentious.
> 
> ++ to a central registry. It could easily be added to the projects.yaml
> file, and is a single source of truth.

Although I realized that the projects.yaml file only includes official
projects right now, which would mean new projects wouldn't have a place
to register terms. Maybe that's a feature?

> 
> I imagine collisions are going to be contentious - but having a central
> source makes finding potential collisions much easier.
> 
> >
> > 3) Use either, inconsistently, hope for the best. (aka - status quo)
> >
> > Upside, no long mailing list thread to figure out the answer.
> > Downside, it sucks.
> >
> >
> > Are there other options missing? Where are people leaning at this
> > point?
> >
> > Personally I'm way less partial to any particular answer as long as
> > it's not #3.
> >
> >
> > -Sean
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Hayes, Graham
On 04/02/2016 13:24, Doug Hellmann wrote:
> Excerpts from Hayes, Graham's message of 2016-02-04 12:54:56 +:
>> On 04/02/2016 11:40, Sean Dague wrote:
>>> A few issues have crept up recently with the service catalog, API
>>> headers, API end points, and even similarly named resources in
>>> different resources (e.g. backup), that are all circling around a key
>>> problem. Distributed teams and naming collision.
>>>
>>> Every OpenStack project has a unique name by virtue of having a git
>>> tree. Once they claim 'openstack/foo', foo is theirs in the
>>> OpenStack universe for all time (or until trademarks say otherwise).
>>> Nova in OpenStack will always mean one project.
>>>
>>> There has also been a desire to replace project names with
>>> common/generic names, in the service catalog, API headers, and a few
>>> other places. Nova owns 'compute'. Except... that's only because we
>>> all know that it does. We don't *actually* have a registry for those
>>> values.
>>>
>>> So the code names are well regulated, the common names, that we
>>> encourage use of, are not. Devstack in tree code defines some
>>> conventions. However with the big tent, things get kind of squirrelly
>>> pretty quickly. Congress registering 'policy' as their endpoint type
>>> is a good example of that -
>>> https://github.com/openstack/congress/blob/master/devstack/plugin.sh#L147
>>>
>>>   Naming is hard. And trying to boil down complicated state machines
>>> to one or two word shibboleths means that inevitably we're going to
>>> find some words just keep cropping up: policy, flavor, backup, meter.
>>> We do however need to figure out a way forward.
>>>
>>> Lets start with the top level names (resource overlap cascades from
>>> there).
>>>
>>> What options do we have?
>>>
>>> 1) Use the names we already have: nova, glance, swift, etc.
>>>
>>> Upside, collision problem is solved. Downside, you need a secret
>>> decoder ring to figure out what project does what.
>>>
>>> 2) Have a registry of "common" names.
>>>
>>> Upside, we can safely use common names everywhere and not fear
>>> collision down the road.
>>>
>>> Downside, yet another contention point.
>>>
>>> A registry would clearly be under TC administration, though all the
>>> heavy lifting might be handed over to the API working group. I still
>>> imagine collision around some areas might be contentious.
>>
>> ++ to a central registry. It could easily be added to the projects.yaml
>> file, and is a single source of truth.
>
> Although I realized that the projects.yaml file only includes official
> projects right now, which would mean new projects wouldn't have a place
> to register terms. Maybe that's a feature?

That is a good point - should we be registering terms for non-tent
projects? Or do projects get terms when they get accepted into the tent?

>
>>
>> I imagine collisions are going to be contentious - but having a central
>> source makes finding potential collisions much easier.
>>
>>>
>>> 3) Use either, inconsistently, hope for the best. (aka - status quo)
>>>
>>> Upside, no long mailing list thread to figure out the answer.
>>> Downside, it sucks.
>>>
>>>
>>> Are there other options missing? Where are people leaning at this
>>> point?
>>>
>>> Personally I'm way less partial to any particular answer as long as
>>> it's not #3.
>>>
>>>
>>> -Sean
>>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-02-04 Thread Julien Danjou
On Thu, Feb 04 2016, Foley, Emma L wrote:

> The question is where it should live.
> Since Gnocchi is designed to be standalone, it seems like that's a potential
> home for it.
> If not, it also fits in with the existing plugin.

If it's a collectd plugin that talks the statsd protocol, I'd say it should
live near collectd, no?
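
For reference, the statsd protocol mentioned here is just plain-text
"name:value|type" datagrams sent over UDP. A minimal, self-contained Python
sketch (host, port and metric name are arbitrary examples, not anything from
the plugin itself):

    # Minimal illustration of the statsd wire protocol: plain-text
    # "name:value|type" datagrams sent over UDP.
    import socket

    def send_statsd(metric, value, metric_type="c", host="127.0.0.1", port=8125):
        payload = "{0}:{1}|{2}".format(metric, value, metric_type).encode("ascii")
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            sock.sendto(payload, (host, port))
        finally:
            sock.close()

    send_statsd("collectd.cpu.user", 42, "g")  # emit a gauge sample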

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info




Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Sean Dague
On 02/04/2016 09:35 AM, Anne Gentle wrote:
> 
> 
> On Thu, Feb 4, 2016 at 7:33 AM, Sean Dague  > wrote:
> 
> On 02/04/2016 08:18 AM, Doug Hellmann wrote:
> > Excerpts from Hayes, Graham's message of 2016-02-04 12:54:56 +:
> >> On 04/02/2016 11:40, Sean Dague wrote:
> >>> A few issues have crept up recently with the service catalog, API
> >>> headers, API end points, and even similarly named resources in
> >>> different resources (e.g. backup), that are all circling around
> a key
> >>> problem. Distributed teams and naming collision.
> >>>
> >>> Every OpenStack project has a unique name by virtue of having a git
> >>> tree. Once they claim 'openstack/foo', foo is theirs in the
> >>> OpenStack universe for all time (or until trademarks say otherwise).
> >>> Nova in OpenStack will always mean one project.
> >>>
> >>> There has also been a desire to replace project names with
> >>> common/generic names, in the service catalog, API headers, and a few
> >>> other places. Nova owns 'compute'. Except... that's only because we
> >>> all know that it does. We don't *actually* have a registry for those
> >>> values.
> >>>
> >>> So the code names are well regulated, the common names, that we
> >>> encourage use of, are not. Devstack in tree code defines some
> >>> conventions. However with the big tent, things get kind of squirrelly
> >>> pretty quickly. Congress registering 'policy' as their endpoint type
> >>> is a good example of that -
> >>>
> https://github.com/openstack/congress/blob/master/devstack/plugin.sh#L147
> >>>
> >>>  Naming is hard. And trying to boil down complicated state machines
> >>> to one or two word shibboleths means that inevitably we're going to
> >>> find some words just keep cropping up: policy, flavor, backup,
> meter.
> >>> We do however need to figure out a way forward.
> >>>
> >>> Lets start with the top level names (resource overlap cascades from
> >>> there).
> >>>
> >>> What options do we have?
> >>>
> >>> 1) Use the names we already have: nova, glance, swift, etc.
> >>>
> >>> Upside, collision problem is solved. Downside, you need a secret
> >>> decoder ring to figure out what project does what.
> >>>
> >>> 2) Have a registry of "common" names.
> >>>
> >>> Upside, we can safely use common names everywhere and not fear
> >>> collision down the road.
> >>>
> >>> Downside, yet another contention point.
> >>>
> >>> A registry would clearly be under TC administration, though all the
> >>> heavy lifting might be handed over to the API working group. I still
> >>> imagine collision around some areas might be contentious.
> >>
> >> ++ to a central registry. It could easily be added to the
> projects.yaml
> >> file, and is a single source of truth.
> >
> > Although I realized that the projects.yaml file only includes official
> > projects right now, which would mean new projects wouldn't have a
> place
> > to register terms. Maybe that's a feature?
> 
> It seems like it's a feature.
> 
> That being said, projects.yaml is pretty full right now. And, it's not
> clear that common name <-> project name is a 1 to 1 mapping.
> 
> For instance, Nova is getting very close to exposing a set of scheduler
> resources. For future proofing we would like to do that on a dedicated
> new endpoint from day one so that potential future split out of the code
> would not impact how APIs look in the service catalog or to consumers.
> There would be no need to have proxy apis in Nova for compatibility in
> the future.
> 
> So this feels like a separate registry for service common name, which
> maps N -> 1 to a project.
> 
> 
> Project names should not be exposed to end users.
> 
> Maybe the service names belong in an example, vetted service catalog as
> a place to look to see if your name is already taken. I sense we have to
> first start with endpoints, then move to the resources, and honestly I
> feel lately "let the best API design win." For example, with PayPal and
> Stripe, there are differentiators that would cause a dev to choose one
> over another. PayPal has a /payments resource and Stripe has a /charges
> resource. Those resources are where some of the conflict is starting to
> be seen for us in OpenStack with backups. If we expect end users to use
> the whole cloud then we need to outline the resources that are reserved
> already to avoid end-user confusion. Believe me, I document this stuff,
> and I know it's difficult to understand. We have to advocate for our end
> users now, today, here.
> 
> For the schedule example, is it the Compute endpoint that intakes the
> scheduling operations? Or is there a new endpoint?

The intent is a new dedicated endpoint, to ensure an 

Re: [openstack-dev] [all]Whom should I contact to ask for ATC code

2016-02-04 Thread Thierry Carrez

Qiao, Liyong wrote:

I am sorry for the broadcast, but I really don't know whom I should
contact.

The thing is:

My friend is a new developer who has already contributed to an OpenStack
project, and would like to know how (or whom to email) he can
get an ATC code for the Austin summit.


Hi!

The Summit events staff regularly issues new discount codes to account 
for recent new contributors (or new project teams being recognized as 
official). Your friend should automatically receive a code in a future 
batch.


Regards,

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Patrick East to Cinder Core

2016-02-04 Thread Sean McGinnis
On Sat, Jan 30, 2016 at 01:04:58AM +0100, Sean McGinnis wrote:
> Patrick has been a strong contributor to Cinder over the last few releases, 
> both with great code submissions and useful reviews. He also participates 
> regularly on IRC helping answer questions and providing valuable feedback.
> 
> I would like to add Patrick to the core reviewers for Cinder. Per our 
> governance process [1], existing core reviewers please respond with any 
> feedback within the next five days. If there are no objections, I will 
> add Patrick to the group by February 3rd.

The five day feedback period has passed and all respondents have been in
the positive.

Welcome to Cinder core Patrick! Glad to have you on board!

> 
> Thanks!
> 
> Sean (smcginnis)
> 
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] ERROR: Could not find user: demo - 401 error

2016-02-04 Thread M Ranga Swami Reddy
Hello,
When I use devstack, I see the error below:

$ cinder list
ERROR: Could not find user: demo (Disable debug mode to suppress these
details.) (HTTP 401)

I tried with the admin user too; same error.

Thanks
Swami

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Anne Gentle
On Thu, Feb 4, 2016 at 7:33 AM, Sean Dague  wrote:

> On 02/04/2016 08:18 AM, Doug Hellmann wrote:
> > Excerpts from Hayes, Graham's message of 2016-02-04 12:54:56 +:
> >> On 04/02/2016 11:40, Sean Dague wrote:
> >>> A few issues have crept up recently with the service catalog, API
> >>> headers, API end points, and even similarly named resources in
> >>> different resources (e.g. backup), that are all circling around a key
> >>> problem. Distributed teams and naming collision.
> >>>
> >>> Every OpenStack project has a unique name by virtue of having a git
> >>> tree. Once they claim 'openstack/foo', foo is theirs in the
> >>> OpenStack universe for all time (or until trademarks say otherwise).
> >>> Nova in OpenStack will always mean one project.
> >>>
> >>> There has also been a desire to replace project names with
> >>> common/generic names, in the service catalog, API headers, and a few
> >>> other places. Nova owns 'compute'. Except... that's only because we
> >>> all know that it does. We don't *actually* have a registry for those
> >>> values.
> >>>
> >>> So the code names are well regulated, the common names, that we
> >>> encourage use of, are not. Devstack in tree code defines some
> >>> conventions. However with the big tent, things get kind of squirrelly
> >>> pretty quickly. Congress registering 'policy' as their endpoint type
> >>> is a good example of that -
> >>>
> https://github.com/openstack/congress/blob/master/devstack/plugin.sh#L147
> >>>
> >>>  Naming is hard. And trying to boil down complicated state machines
> >>> to one or two word shibboleths means that inevitably we're going to
> >>> find some words just keep cropping up: policy, flavor, backup, meter.
> >>> We do however need to figure out a way forward.
> >>>
> >>> Lets start with the top level names (resource overlap cascades from
> >>> there).
> >>>
> >>> What options do we have?
> >>>
> >>> 1) Use the names we already have: nova, glance, swift, etc.
> >>>
> >>> Upside, collision problem is solved. Downside, you need a secret
> >>> decoder ring to figure out what project does what.
> >>>
> >>> 2) Have a registry of "common" names.
> >>>
> >>> Upside, we can safely use common names everywhere and not fear
> >>> collision down the road.
> >>>
> >>> Downside, yet another contention point.
> >>>
> >>> A registry would clearly be under TC administration, though all the
> >>> heavy lifting might be handed over to the API working group. I still
> >>> imagine collision around some areas might be contentious.
> >>
> >> ++ to a central registry. It could easily be added to the projects.yaml
> >> file, and is a single source of truth.
> >
> > Although I realized that the projects.yaml file only includes official
> > projects right now, which would mean new projects wouldn't have a place
> > to register terms. Maybe that's a feature?
>
> It seems like it's a feature.
>
> That being said, projects.yaml is pretty full right now. And, it's not
> clear that common name <-> project name is a 1 to 1 mapping.
>
> For instance, Nova is getting very close to exposing a set of scheduler
> resources. For future proofing we would like to do that on a dedicated
> new endpoint from day one so that potential future split out of the code
> would not impact how APIs look in the service catalog or to consumers.
> There would be no need to have proxy apis in Nova for compatibility in
> the future.
>
> So this feels like a separate registry for service common name, which
> maps N -> 1 to a project.
>

Project names should not be exposed to end users.

Maybe the service names belong in an example, vetted service catalog as a
place to look to see if your name is already taken. I sense we have to
first start with endpoints, then move to the resources, and honestly I feel
lately "let the best API design win." For example, with PayPal and Stripe,
there are differentiators that would cause a dev to choose one over
another. PayPal has a /payments resource and Stripe has a /charges
resource. Those resources are where some of the conflict is starting to be
seen for us in OpenStack with backups. If we expect end users to use the
whole cloud then we need to outline the resources that are reserved already
to avoid end-user confusion. Believe me, I document this stuff, and I know
it's difficult to understand. We have to advocate for our end users now,
today, here.

For the schedule example, is it the Compute endpoint that intakes the
scheduling operations? Or is there a new endpoint?

API design and developer experience must become our first thought.

Anne


>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
Rackspace
Principal Engineer

Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-02-04 Thread Kyle Mestery
On Thu, Feb 4, 2016 at 1:33 AM, Gal Sagie  wrote:
> As i have commented on the patch i will also send this to the mailing list:
>
> I really dont see why Dragonflow is not part of this list, given the
> criteria you listed.
>
> Dragonflow is fully developed under Neutron/OpenStack, no other
> repositories. It is fully Open source and already have a community of people
> contributing and interest from various different companies and OpenStack
> deployers. (I can prepare the list of active contributions and of interested
> parties) It also puts OpenStack Neutron APIs and use cases as first class
> citizens and working on being an integral part of OpenStack.
>
> I agree that OVN needs to be part of the list, but you brought up this
> criteria in regards to ODL, so: OVN like ODL is not only Neutron and
> OpenStack and is even running/being implemented on a whole different
> governance model and requirements to it.
>
> I think you also forgot to mention some other projects as well that are
> fully open source with a vibrant and diverse community, i will let them
> comment here by themselves.
>
> Frankly this approach disappoints me, I have honestly worked hard to make
> Dragonflow fully visible and add and support open discussion and follow the
> correct guidelines to work in a project. I think that Dragonflow community
> has already few members from various companies and this is only going to
> grow in the near future. (in addition to deployers that are considering it
> as a solution)  we also welcome anyone that wants to join and be part of the
> process to step in, we are very welcoming
>
> I also think that the correct way to do this is to actually add as reviewers
> all lieutenants of the projects you are now removing from Neutron big
> stadium and letting them comment.
>
Hi Gal:

I don't think it's completely fair to characterize this as anything
other than an attempt to accurately reflect what the Neutron team can
stand behind. Most of these other open source projects (like
Dragonflow, networking-odl, even networking-ovn) can quite easily
apply for Big Tent admission, and would make the grade pretty easily.
This was not done to hurt anyones feelings or anything, and I know
Russell spent a lot of time on this. We knew this conversation would
be difficult, so I applaud him for sticking his neck out here and
moving things forward.

Thanks!
Kyle

> Gal.
>
>
> On Wed, Feb 3, 2016 at 11:48 PM, Russell Bryant  wrote:
>>
>> On 11/30/2015 07:56 PM, Armando M. wrote:
>> > I would like to suggest that we evolve the structure of the Neutron
>> > governance, so that most of the deliverables that are now part of the
>> > Neutron stadium become standalone projects that are entirely
>> > self-governed (they have their own core/release teams, etc).
>>
>> After thinking over the discussion in this thread for a while, I have
>> started the following proposal to implement the stadium renovation that
>> Armando originally proposed in this thread.
>>
>> https://review.openstack.org/#/c/275888
>>
>> --
>> Russell Bryant
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-04 Thread Pavel Bondar
Hi,

I am trying to bring more attention to [1] to make a final decision on
the approach to use.
There are a few points that are not 100% clear to me at this point.

1) Do we plan to switch all current clouds to pluggable ipam
implementation in Mitaka?

yes -->
Then the data migration can be done as an alembic migration, which is what
is currently implemented in [2] PS54.
In this case, during the upgrade from Liberty to Mitaka all users are
unconditionally switched from the built-in ipam implementation to the
reference ipam driver.
If the operator wants to continue using the built-in ipam implementation,
they can manually turn off ipam_driver in neutron.conf
immediately after the upgrade (data is not deleted from the old tables).

no -->
The operator is free to choose whether and when to switch to the pluggable
ipam implementation, which means no automatic data migration.
In this case the operator is supplied with a script for migrating to
pluggable ipam (and probably from pluggable ipam),
which can be executed during the upgrade or at any point after the upgrade
is done.
I was testing this approach in [2] PS53 (it has unresolved issues for now).

Or we could do both, i.e. migrate data during the upgrade from the built-in
to the pluggable ipam implementation,
and supply the operator with scripts to migrate from/to pluggable ipam at
any time after the upgrade.

According to the current feedback in [1] it is most likely we will go with
the script approach, so I would like to confirm whether that is the case.

2) Do we plan to make the pluggable ipam implementation the default in Mitaka
for greenfield deployments?

If the answer to this question is the same as for the previous one (yes/yes,
no/no), then it doesn't introduce additional issues.
But if the answer is different from the previous one, then it might complicate
things. For example, greyfield deployments might be migrated manually by the
operator to pluggable ipam, or continue to work using the built-in
implementation after the upgrade to Mitaka,
while greenfield deployments might be set to the pluggable ipam implementation
by default.

Is that what we are going to support?

3) How should the script approach be tested?

Currently, if the pluggable implementation is set as the default, the grenade
test fails.
Data has to be migrated automatically during the upgrade to make grenade pass.
In [2] PS53 I was using an alembic migration that internally just calls the
external migrate script.
Is that a valid approach? I expect a better way to test script
execution during upgrade might exist.
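
For illustration, a minimal sketch of the idea in (3): an alembic revision
whose upgrade step simply delegates to an external data-migration helper. All
names below (the revision ids, the helper module and its function) are
hypothetical, not the actual neutron code:

    # Sketch only: hypothetical revision ids and helper module.
    from alembic import op

    # revision identifiers, used by Alembic.
    revision = '1b2c3d4e5f6a'       # hypothetical
    down_revision = 'a9b8c7d6e5f4'  # hypothetical

    def upgrade():
        # Late import so the helper is only needed when this revision runs.
        from neutron.ipam import migrate_to_pluggable  # hypothetical helper

        bind = op.get_bind()
        # Copy the rows used by the built-in implementation into the tables
        # read by the reference IPAM driver; the old tables are left untouched.
        migrate_to_pluggable.migrate(bind)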

[1] https://bugs.launchpad.net/neutron/+bug/1516156
[2] https://review.openstack.org/#/c/181023

Thanks,
Pavel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Third Party CI Deadlines for Mitaka and N

2016-02-04 Thread Anita Kuno
On 02/03/2016 05:51 PM, Mike Perez wrote:
> On 17:00 Nov 30, Mike Perez wrote:
>> On October 28th 2015 at the Ironic Third Party CI summit session [1], there 
>> was
>> consensus by the Ironic core and participating vendors that the set of
>> deadlines will be:
>>
>> * Mitaka-2ː Driver teams will have registered their intent to run CI by 
>> creating
>> system accounts and identifying a point of contact for their CI team in the
>> Third party CI wiki [2].
>> * Mitaka Feature Freezeː All driver systems show the ability to receive 
>> events
>> and post comments in the sandbox.
>> * N release feature freezeː Per patch testing and posting comments.
>>
>> There are requirements set for OpenStack Third Party CI's [3]. In addition
>> Ironic third party CI's must:
>>
>> 1) Test all drivers your company has integrated in Ironic.
>>
>> For example, if your company has two drivers in Ironic, you would need to 
>> have
>> a CI that tests against the two and reports the results for each, for every
>> Ironic upstream patch. The tests come from a Devstack Gate job template [4], 
>> in
>> which you just need to switch the "deploy_driver" to your driver.
>>
>> To get started, read OpenStack's third party testing documentation [5]. There
>> are efforts by OpenStack Infra to allow others to run third party CI similar 
>> to
>> the OpenStack upstream CI using Puppet [6] and instruction are available [7].
>> Don't forget to register your CI in the wiki [2], there is no need to 
>> announce
>> about it on any mailing list.
>>
>> OpenStack Infra also provides third party CI help via meetings [8], and the
>> Ironic team has designated people to answer questions with setting up a third
>> party CI in the #openstack-ironic room [9].
>>
>> If a solution does not have a CI watching for events and posting comments to
>> the sandbox [10] by the Mitaka feature freeze, it'll be assumed the driver is
>> not active, and can be removed from the Ironic repository as of the Mitaka
>> release.
>>
>> If a solution is not being tested in a CI system and reporting to OpenStack
>> gerrit Ironic patches by the deadline of the N release feature freeze, an
>> Ironic driver could be removed from the Ironic repository. Without a CI 
>> system,
>> Ironic core is unable to verify your driver works in the N release of Ironic.
>>
>> If there is something not clear about this email, please email me *directly*
>> with your question. You can also reach me as thingee on Freenode IRC in the
>> #openstack-ironic channel. Again I want you all to be successful in this, and
>> take advantage of this testing you will have with your product. Please
>> communicate with me and reach out to the team for help.
>>
>> [1] - https://etherpad.openstack.org/p/summit-mitaka-ironic-third-party-ci
>> [2] - https://wiki.openstack.org/wiki/ThirdPartySystems
>> [3] - 
>> http://docs.openstack.org/infra/system-config/third_party.html#requirements
>> [4] - 
>> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/devstack-gate.yaml#L961
>> [5] - http://docs.openstack.org/infra/system-config/third_party.html
>> [6] - https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/
>> [7] - 
>> https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md
>> [8] - https://wiki.openstack.org/wiki/Meetings/ThirdParty
>> [9] - https://wiki.openstack.org/wiki/Ironic/Testing#Questions
>> [10] - https://review.openstack.org/#/q/project:+openstack-dev/sandbox,n,z
> 
> Hi all,
> 
> Just a reminder that M-2 has passed and all Ironic drivers at this point 
> should
> have a service account [1] registered in the third party CI wiki [2] per our
> agreed spec [3] for bringing third party CI support in Ironic.
> 
> If you are being cc'd directly on this email, it's because you're known as
> being a maintainer of a driver, and have been previously contacted on November
> 30th 2016 about this.
> 
> By not having a service account registered for the M-2 deadline, you are
> expressing the driver is inactive in the Ironic project and therefore the team
> will be unable to verify your driver works.
> 
> As expressed in the quoted email, if your driver has no CI reporting in the
> sandbox by Mitaka feature freeze, it can be removed in Mitaka.
> 
> Please use the resources provided by getting help in the third party CI help
> meeting [4] that meets twice a week and different time zones. Also see the
> Ironic third party CI information page [5].  Thanks!

Thanks for sharing the link for the third party meeting, Mike. I'm not
seeing as many new faces at the meetings as I would have thought. A few
new folks but I'm not certain these new people are working on CI systems
for Ironic drivers.

Thank you,
Anita.

> 
> [1] - 
> http://docs.openstack.org/infra/system-config/third_party.html#creating-a-service-account
> [2] - https://wiki.openstack.org/wiki/ThirdPartySystems
> [3] - 
> 

Re: [openstack-dev] [glance][ironic][cinder][nova] 'tar' as an image disk_format

2016-02-04 Thread Brian Rosmaita
Update:

The discussion went in two directions:

(1) Whether supporting OS tarballs in Ironic is a good idea ... continuing
discussion on the Ironic spec, https://review.openstack.org/#/c/248968/

(2) How an OS tarball should be identified in Glance ... continuing
discussion on the Glance spec-lite,
https://bugs.launchpad.net/glance/+bug/1535900

If you have an opinion, please leave a comment in the appropriate place.
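
As a concrete illustration of direction (2), here is a rough client-side
sketch of registering such an image, assuming the spec-lite is accepted and
'tar' is added to Glance's allowed disk_formats; the endpoint, token and file
name are placeholders:

    # Sketch only: assumes 'tar' is listed in Glance's accepted disk_formats.
    # Endpoint, token and file name are placeholders.
    from glanceclient import Client

    glance = Client('2', endpoint='http://glance.example.com:9292',
                    token='PLACEHOLDER_TOKEN')

    image = glance.images.create(name='ubuntu-rootfs',
                                 disk_format='tar',        # the proposed value
                                 container_format='bare')  # no wrapping metadata
    with open('rootfs.tar.gz', 'rb') as data:
        glance.images.upload(image.id, data)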

thanks,
brian

On 1/23/16, 9:54 AM, "Brian Rosmaita"  wrote:


>Please provide feedback about a proposal to add 'tar' as a new Glance
>disk_format.[0]
>
>The Ironic team is adding support for "OS tarball images" in Mitaka.
>This is a compressed tar archive of a / (root filesystem). These tarballs
>are created by first installing the OS packages in a chroot and then
>compressing the chroot as tar.*.  The proposal
> is to store such images as disk_format == tar and container_format ==
>bare.
>
>Intuitively, 'tar' seems more like a container_format.  The Glance
>developer documentation, however, says that "The container format refers
>to whether the virtual machine image is in a file format that also
>contains metadata about the actual virtual machine."[1]
>  Under this proposal, there is no such metadata included.
>
>The Glance docs say this about disk_format: "The disk format of a virtual
>machine image is the format of the underlying disk image. Virtual
>appliance vendors have different formats for laying out the information
>contained in a virtual machine disk image."[1]
>  Under this definition, 'tar' as used in this proposal [0] does in fact
>seem to be a disk_format.
>
>There is not currently a 'tar' container format defined for Glance.  The
>closest we have now is 'ova' (an OVA tar archive file) and 'docker' (a
>Docker tar archive of the container filesystem).  And, in fact, 'tar' as
>a container format wouldn't be very
> helpful, as it doesn't indicate where in the tarball the metadata should
>be found.
>
>The goal here is to come up with an identifier for an "OS tarball image"
>that's acceptable across projects and isn't confusing for people who are
>creating images.
>
>Thanks in advance for your feedback,
>brian
>
>[0] https://bugs.launchpad.net/glance/+bug/1535900
>[1] https://github.com/openstack/glance/blob/master/doc/source/formats.rst
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-02-04 08:33:59 -0500:
> On 02/04/2016 08:18 AM, Doug Hellmann wrote:
> > Excerpts from Hayes, Graham's message of 2016-02-04 12:54:56 +:
> >> On 04/02/2016 11:40, Sean Dague wrote:
> >>> A few issues have crept up recently with the service catalog, API
> >>> headers, API end points, and even similarly named resources in
> >>> different resources (e.g. backup), that are all circling around a key
> >>> problem. Distributed teams and naming collision.
> >>>
> >>> Every OpenStack project has a unique name by virtue of having a git
> >>> tree. Once they claim 'openstack/foo', foo is theirs in the
> >>> OpenStack universe for all time (or until trademarks say otherwise).
> >>> Nova in OpenStack will always mean one project.
> >>>
> >>> There has also been a desire to replace project names with
> >>> common/generic names, in the service catalog, API headers, and a few
> >>> other places. Nova owns 'compute'. Except... that's only because we
> >>> all know that it does. We don't *actually* have a registry for those
> >>> values.
> >>>
> >>> So the code names are well regulated, the common names, that we
> >>> encourage use of, are not. Devstack in tree code defines some
> >>> conventions. However with the big tent, things get kind of squirrelly
> >>> pretty quickly. Congress registering 'policy' as their endpoint type
> >>> is a good example of that -
> >>> https://github.com/openstack/congress/blob/master/devstack/plugin.sh#L147
> >>>
> >>>  Naming is hard. And trying to boil down complicated state machines
> >>> to one or two word shibboleths means that inevitably we're going to
> >>> find some words just keep cropping up: policy, flavor, backup, meter.
> >>> We do however need to figure out a way forward.
> >>>
> >>> Lets start with the top level names (resource overlap cascades from
> >>> there).
> >>>
> >>> What options do we have?
> >>>
> >>> 1) Use the names we already have: nova, glance, swift, etc.
> >>>
> >>> Upside, collision problem is solved. Downside, you need a secret
> >>> decoder ring to figure out what project does what.
> >>>
> >>> 2) Have a registry of "common" names.
> >>>
> >>> Upside, we can safely use common names everywhere and not fear
> >>> collision down the road.
> >>>
> >>> Downside, yet another contention point.
> >>>
> >>> A registry would clearly be under TC administration, though all the
> >>> heavy lifting might be handed over to the API working group. I still
> >>> imagine collision around some areas might be contentious.
> >>
> >> ++ to a central registry. It could easily be added to the projects.yaml
> >> file, and is a single source of truth.
> > 
> > Although I realized that the projects.yaml file only includes official
> > projects right now, which would mean new projects wouldn't have a place
> > to register terms. Maybe that's a feature?
> 
> It seems like it's a feature.
> 
> That being said, projects.yaml is pretty full right now. And, it's not
> clear that common name <-> project name is a 1 to 1 mapping.
> 
> For instance, Nova is getting very close to exposing a set of scheduler
> resources. For future proofing we would like to do that on a dedicated
> new endpoint from day one so that potential future split out of the code
> would not impact how APIs look in the service catalog or to consumers.
> There would be no need to have proxy apis in Nova for compatibility in
> the future.
> 
> So this feels like a separate registry for service common name, which
> maps N -> 1 to a project.

We already map multiple repos to multiple deliverables to multiple
projects. So I don't think the schema concerns are a reason not to have
it in projects.yaml. That said, it doesn't *have* to be there, it would
just make it convenient to manage publication.

Doug

> 
> -Sean
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Sean Dague
On 02/04/2016 08:18 AM, Doug Hellmann wrote:
> Excerpts from Hayes, Graham's message of 2016-02-04 12:54:56 +:
>> On 04/02/2016 11:40, Sean Dague wrote:
>>> A few issues have crept up recently with the service catalog, API
>>> headers, API end points, and even similarly named resources in
>>> different resources (e.g. backup), that are all circling around a key
>>> problem. Distributed teams and naming collision.
>>>
>>> Every OpenStack project has a unique name by virtue of having a git
>>> tree. Once they claim 'openstack/foo', foo is theirs in the
>>> OpenStack universe for all time (or until trademarks say otherwise).
>>> Nova in OpenStack will always mean one project.
>>>
>>> There has also been a desire to replace project names with
>>> common/generic names, in the service catalog, API headers, and a few
>>> other places. Nova owns 'compute'. Except... that's only because we
>>> all know that it does. We don't *actually* have a registry for those
>>> values.
>>>
>>> So the code names are well regulated, the common names, that we
>>> encourage use of, are not. Devstack in tree code defines some
>>> conventions. However with the big tent, things get kind of squirrelly
>>> pretty quickly. Congress registering 'policy' as their endpoint type
>>> is a good example of that -
>>> https://github.com/openstack/congress/blob/master/devstack/plugin.sh#L147
>>>
>>>  Naming is hard. And trying to boil down complicated state machines
>>> to one or two word shibboleths means that inevitably we're going to
>>> find some words just keep cropping up: policy, flavor, backup, meter.
>>> We do however need to figure out a way forward.
>>>
>>> Lets start with the top level names (resource overlap cascades from
>>> there).
>>>
>>> What options do we have?
>>>
>>> 1) Use the names we already have: nova, glance, swift, etc.
>>>
>>> Upside, collision problem is solved. Downside, you need a secret
>>> decoder ring to figure out what project does what.
>>>
>>> 2) Have a registry of "common" names.
>>>
>>> Upside, we can safely use common names everywhere and not fear
>>> collision down the road.
>>>
>>> Downside, yet another contention point.
>>>
>>> A registry would clearly be under TC administration, though all the
>>> heavy lifting might be handed over to the API working group. I still
>>> imagine collision around some areas might be contentious.
>>
>> ++ to a central registry. It could easily be added to the projects.yaml
>> file, and is a single source of truth.
> 
> Although I realized that the projects.yaml file only includes official
> projects right now, which would mean new projects wouldn't have a place
> to register terms. Maybe that's a feature?

It seems like it's a feature.

That being said, projects.yaml is pretty full right now. And, it's not
clear that common name <-> project name is a 1 to 1 mapping.

For instance, Nova is getting very close to exposing a set of scheduler
resources. For future proofing we would like to do that on a dedicated
new endpoint from day one so that potential future split out of the code
would not impact how APIs look in the service catalog or to consumers.
There would be no need to have proxy apis in Nova for compatibility in
the future.

So this feels like a separate registry for service common name, which
maps N -> 1 to a project.
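
To make the N -> 1 idea concrete, a toy sketch of what such a registry and its
collision check could look like (the entries and helper are purely
illustrative, not an actual governance artifact):

    # Purely illustrative: several common service names can map to one project
    # (N -> 1), and a claim is rejected if another project already owns the name.
    REGISTRY = {
        "compute": "nova",
        "placement": "nova",   # a second endpoint owned by the same project
        "image": "glance",
        "dns": "designate",
    }

    def claim(common_name, project):
        owner = REGISTRY.get(common_name)
        if owner is not None and owner != project:
            raise ValueError("'%s' is already claimed by %s" % (common_name, owner))
        REGISTRY[common_name] = project

    claim("policy", "congress")    # fine: the name is unclaimed
    # claim("policy", "keystone")  # would raise: already claimed by congress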

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Location of Heka Lua plugins

2016-02-04 Thread Jeffrey Zhang
+1 for this. +2 for putting the plugins in their own repo.

On Thu, Feb 4, 2016 at 9:45 PM, Michal Rostecki 
wrote:

> On 02/04/2016 10:55 AM, Eric LEMOINE wrote:
>
>> Hi
>>
>> As discussed yesterday in our IRC meeting we'll need specific Lua
>> plugins for parsing OpenStack, MariaDB and RabbitMQ logs.  We already
>> have these Lua plugins in one of our Fuel plugins [*].  So our plan is
>> to move these Lua plugins in their own Git repo, and distribute them
>> as deb and rpm packages in the future.  This will allow to easily
>> share the Lua plugins between projects, and having a separate Git repo
>> will facilitate testing, documentation, etc.
>>
>> But we may not have time to do that by the 4th of March (for Mitaka
>> 3), so my suggestion is to copy the Lua plugins that we need in Kolla.
>> This would be a temporary thing.  When our Lua plugins are
>> available/distributed as deb and rpm packages we will remove the Lua
>> plugins from the kolla repository and change the Heka Dockerfile to
>> install the Lua plugins from the package.
>>
>> Please tell me if you agree with the approach.  Thank you!
>>
>> [*] <
>> https://github.com/openstack/fuel-plugin-lma-collector/tree/master/deployment_scripts/puppet/modules/lma_collector/files/plugins/decoders
>> >
>>
>>
> +1
> But of course when git repos will be available (even without packaging),
> I'd switch to them immediately.
>
> Cheers,
> Michal
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Location of Heka Lua plugins

2016-02-04 Thread Michał Jastrzębski
TL;DR: +1 to having the Lua plugins in the Kolla tree; I'm not sure we want
to switch later.

I'm not so sure about switching. If these git repos are in the
/openstack/ namespace, then sure; otherwise I'd be -1 on this, as we
don't want to add a dependency here. Also, we're looking at a pretty simple
set of files that probably won't change anytime soon. And we might
introduce a new service that Fuel does not have, and while I'm sure we
could push a new file to that repo, it's a bigger issue than just coding it
in tree.

Cheers,
Michal

On 4 February 2016 at 07:45, Michal Rostecki  wrote:
> On 02/04/2016 10:55 AM, Eric LEMOINE wrote:
>>
>> Hi
>>
>> As discussed yesterday in our IRC meeting we'll need specific Lua
>> plugins for parsing OpenStack, MariaDB and RabbitMQ logs.  We already
>> have these Lua plugins in one of our Fuel plugins [*].  So our plan is
>> to move these Lua plugins in their own Git repo, and distribute them
>> as deb and rpm packages in the future.  This will allow to easily
>> share the Lua plugins between projects, and having a separate Git repo
>> will facilitate testing, documentation, etc.
>>
>> But we may not have time to do that by the 4th of March (for Mitaka
>> 3), so my suggestion is to copy the Lua plugins that we need in Kolla.
>> This would be a temporary thing.  When our Lua plugins are
>> available/distributed as deb and rpm packages we will remove the Lua
>> plugins from the kolla repository and change the Heka Dockerfile to
>> install the Lua plugins from the package.
>>
>> Please tell me if you agree with the approach.  Thank you!
>>
>> [*]
>> 
>>
>
> +1
> But of course when git repos will be available (even without packaging), I'd
> switch to them immediately.
>
> Cheers,
> Michal
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

2016-02-04 Thread Rodrigo Barbieri
Thanks everyone! I am very happy to be part of this community and
to contribute to the Manila project! :)

On Wed, Feb 3, 2016 at 5:33 AM, Thomas Bechtold  wrote:

> On Tue, Feb 02, 2016 at 12:30:44PM -0500, Ben Swartzlander wrote:
> > Rodrigo (ganso on IRC) joined the Manila project back in the Kilo release
> > and has been working on share migration (an important core feature) for
> the
> > last 2 releases. Since Tokyo he has dedicated himself to reviews and
> > community participation. I would like to nominate him to join the Manila
> > core reviewer team.
>
> +2 . Keep up the good work Rodrigo!
>
> --
> Tom
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rodrigo Barbieri
Computer Scientist
Federal University of São Carlos
(11) 96889 3412
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Thierry Carrez

Hayes, Graham wrote:

On 04/02/2016 13:24, Doug Hellmann wrote:

Excerpts from Hayes, Graham's message of 2016-02-04 12:54:56 +:

On 04/02/2016 11:40, Sean Dague wrote:

2) Have a registry of "common" names.

Upside, we can safely use common names everywhere and not fear
collision down the road.

Downside, yet another contention point.

A registry would clearly be under TC administration, though all the
heavy lifting might be handed over to the API working group. I still
imagine collision around some areas might be contentious.


++ to a central registry. It could easily be added to the projects.yaml
file, and is a single source of truth.


Although I realized that the projects.yaml file only includes official
projects right now, which would mean new projects wouldn't have a place
to register terms. Maybe that's a feature?


That is a good point - should we be registering terms for non tent
projects? Or do projects get terms when they get accepted into the tent?


I don't see why we would register terms for non-official projects. I 
don't see under what authority we would do that, or where it would end. 
So yes, that's a feature.


I think solution 2 is the best. To avoid too much contention, that can 
easily be delegated to the API WG, and escalated to the TC for 
resolution only in case of conflict between projects (or between a 
project and the API WG).


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] virtualenv fails with SSLError: [Errno 20] Not a directory

2016-02-04 Thread Roman Prykhodchenko
Folks,

as some of you may have noticed, there is a high rate of job failures on Fuel
CI in python-fuelclient. That happens because of some weird issues with the
virtualenv utility not being able to create new virtual environments. I've
tested this in my local environment and the problem appears to happen
consistently, with rare exceptions.

I've tried running $ virtualenv test and this is what I get:
http://paste.openstack.org/show/485962/
Let's find out what happened and resolve the issue.


- romcheg




[openstack-dev] [openstack][Magnum] ways to get CA certificate in make-cert.sh from Magnum

2016-02-04 Thread 王华
Hi all,

Magnum currently uses a token to get the CA certificate in make-cert.sh. A
token has an expiration time, so we should change this method. Here are two
proposals.

1. Use a trust, which I have introduced in [1]. This approach has a
disadvantage: we can't limit access to only some APIs. For example, we may
want to add a restriction that some APIs can only be accessed from the Bay
and not by outside users; for that we need a way to distinguish whether a
request comes from the Bay or from outside.

2. Create a user with a role that grants access to Magnum. This approach is
used in Heat: Heat creates a user for each stack to communicate with Heat. We
can add a role to that user, as already introduced in [1]. The user can
directly access Magnum for some limited APIs, and with a trust id the user
can access other services.

[1] https://review.openstack.org/#/c/268852/
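
As a rough illustration of proposal 1 (everything below is hypothetical, not
the actual Magnum code), the bay could carry a trust id and build a
trust-scoped keystoneauth session whenever it needs a fresh token, instead of
baking a short-lived token into make-cert.sh:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Hypothetical credentials stored on the bay instead of a raw token
    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='bay-user',
                       password='secret',
                       user_domain_name='default',
                       trust_id='TRUST_ID')
    sess = session.Session(auth=auth)
    # A new trust-scoped token can be requested on demand, so fetching the
    # CA certificate no longer breaks once the original token expires.
    token = sess.get_token()

The concern raised above still applies: a trust-scoped token does not by
itself let Magnum tell bay-originated calls apart from outside callers.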

Regards,
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Migration progress

2016-02-04 Thread Bhandaru, Malini K
I agree with Daniel: keep the periods consistent at 5 and 5 seconds.

Another thought: for such ephemeral/changing data, such as progress, why not 
save the information in a cache (and flush it to the database at a lower rate), 
and retrieve it from the cache for display to active listeners/the UI? Once the 
migration completes or is aborted, of course, flush the cache.

Also, should we provide a "verbose" flag, that is, only capture progress 
information when requested? That would cover the case when a human user is 
issuing the command from the CLI or a GUI tool.

Regards
Malini

-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: Wednesday, February 03, 2016 11:46 AM
To: Paul Carlton 
Cc: Feng, Shaohe ; OpenStack Development Mailing List 
(not for usage questions) 
Subject: Re: [openstack-dev] [nova] Migration progress

On Wed, Feb 03, 2016 at 11:27:16AM +, Paul Carlton wrote:
> On 03/02/16 10:49, Daniel P. Berrange wrote:
> >On Wed, Feb 03, 2016 at 10:44:36AM +, Daniel P. Berrange wrote:
> >>On Wed, Feb 03, 2016 at 10:37:24AM +, Koniszewski, Pawel wrote:
> >>>Hello everyone,
> >>>
> >>>On the yesterday's live migration meeting we had concerns that 
> >>>interval of writing migration progress to the database is too short.
> >>>
> >>>Information about migration progress will be stored in the database 
> >>>and exposed through the API (/servers//migrations/). In 
> >>>current proposition [1] migration progress will be updated every 2 
> >>>seconds. It basically means that every 2 seconds a call through RPC 
> >>>will go from compute to conductor to write migration data to the 
> >>>database. In case of parallel live migrations each migration will report 
> >>>progress by itself.
> >>>
> >>>Isn't 2 seconds interval too short for updates if the information 
> >>>is exposed through the API and it requires RPC and DB call to 
> >>>actually save it in the DB?
> >>>
> >>>Our default configuration allows only for 1 concurrent live 
> >>>migration [2], but it might vary between different deployments and 
> >>>use cases as it is configurable. Someone might want to trigger 10 
> >>>(or even more) parallel live migrations and each might take even a 
> >>>day to finish in case of block migration. Also if deployment is big enough 
> >>>rabbitmq might be fully-loaded.
> >>>I'm not sure whether updating each migration every 2 seconds makes 
> >>>sense in this case. On the other hand it might be hard to observe 
> >>>fast enough that migration is stuck if we increase this interval...
> >>Do we have any actual data that this is a real problem. I have a 
> >>pretty hard time believing that a database update of a single field 
> >>every 2 seconds is going to be what pushes Nova over the edge into a 
> >>performance collapse, even if there are 20 migrations running in 
> >>parallel, when you compare it to the amount of DB queries & updates 
> >>done across other areas of the code for pretty much every single API call 
> >>and background job.
> >Also note that progress is rounded to the nearest integer. So even if 
> >the migration runs all day, there is a maximum of 100 possible 
> >changes in value for the progress field, so most of the updates 
> >should turn in to no-ops at the database level.
> >
> >Regards,
> >Daniel
> I agree with Daniel, these rpc and db access ops are a tiny percentage 
> of the overall load on rabbit and mysql and properly configured these 
> subsystems should have no issues with this workload.
> 
> One correction, unless I'm misreading it, the existing 
> _live_migration_monitor code updates the progress field of the 
> instance record every 5 seconds.  However this value can go up and 
> > down so an infinite number of updates are possible?

Oh yes, you are in fact correct. Technically you could have an unbounded number 
of updates if migration goes backwards. Some mitigation against this is if we 
see progress going backwards we'll actually abort the migration if it gets 
stuck for too long. We'll also be progressively increasing the permitted 
downtime. So except in pathological scenarios I think the number of updates 
should still be relatively small.

> However, the issue raised here is not with the existing implementation 
> but with the proposed change 
> https://review.openstack.org/#/c/258813/5/nova/virt/libvirt/driver.py
> This add a save() operation on the migration object every 2 seconds

Ok, that is more heavy weight since it is recording the raw byte values and so 
it is guaranteed to do a database update pretty much every time.
It still shouldn't be too unreasonable a load though. FWIW I think it is 
worth being consistent in the update frequency between the progress value & 
the migration object save, so switching to be every
5 seconds probably makes more sense, so we know both objects are reflecting the 
same point in time.
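
To make the throttling concrete, here is a minimal sketch of the kind of
check being discussed (function and field names are illustrative, not the
actual libvirt driver code):

    import time

    SAVE_INTERVAL = 5  # seconds, matching the progress field update period

    def maybe_save(migration, state):
        # Only call migration.save() when the interval has elapsed and the
        # counters actually moved, so a long-running migration does not turn
        # into a stream of no-op conductor/DB calls.
        now = time.time()
        if now - state.get('last_save', 0) < SAVE_INTERVAL:
            return
        if migration.memory_remaining == state.get('last_remaining'):
            state['last_save'] = now
            return
        state['last_remaining'] = migration.memory_remaining
        state['last_save'] = now
        migration.save()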

Regards,
Daniel
-- 
|: http://berrange.com  -o-

Re: [openstack-dev] [bug-smash] Global OpenStack Bug Smash Mitaka

2016-02-04 Thread Wang, Shane
Hi all,

After discussing with TC members and other community members, we thought March 2-4 
might not be good timing for the bug smash, so we decided to change the dates to 
March 7-9 (Monday-Wednesday) in R4.
Please join our efforts to fix bugs for OpenStack.

Thanks.
--
Shane
From: Wang, Shane [mailto:shane.w...@intel.com]
Sent: Thursday, January 28, 2016 5:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [bug-smash] Global OpenStack Bug Smash Mitaka

Save the Date:
Global OpenStack Bug Smash
Monday-Wednesday, March 7-9, 2016
RSVP by Friday, February 24

How can you help make the OpenStack Mitaka release stable and bug-free while 
having fun with your peers? Join Intel, Rackspace, Mirantis, IBM, HP, Huawei, 
CESI and others in a global bug smash across four continents as we work 
together. Then, join us later in April in Austin, Texas, U.S.A. at the 
OpenStack Summit to get re-acquainted & celebrate our accomplishments!

OUR GOAL
Our key goal is to collaborate round-the-clock and around the world to fix as 
many bugs as possible across the wide range of OpenStack projects. In the 
process, we'll also help onboard and grow the number of OpenStack developers, 
and increase our collective knowledge of OpenStack tools and processes. To ease 
collaboration among all of the participants and ensure that core reviews can be 
conducted promptly, we will use the IRC channel, the mailing list, and Gerrit 
and enlist core reviewers in the event.

GET INVOLVED
Simply choose a place near you-and register by Friday, February 24. 
Registration is free, and we encourage you to invite others who may be 
interested.

* Australia
* China
* India

* Russia
* United Kingdom
* United States


Visit the link below for additional details:
https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka

Come make the Mitaka release a grand success through your contributions, and 
ease the journey for newcomers!

Regards.
--
OpenStack Bug Smash team

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][Magnum] ways to get CA certificate in make-cert.sh from Magnum

2016-02-04 Thread Corey O'Brien
There currently isn't a way to distinguish between the user who creates the bay
and the nodes in the bay, because the user is root on those nodes. Any
credential that the node uses to communicate with Magnum is going to be
accessible to the user.

Since we already have the trust, that seems like the best way to proceed
for now just to get something working.

Corey

On Thu, Feb 4, 2016 at 10:53 PM 王华  wrote:

> Hi all,
>
> Magnum now use a token to get CA certificate in make-cert.sh. Token has a
> expiration time. So we should change this method. Here are two proposals.
>
> 1. Use trust which I have introduced in [1]. The way has a disadvantage.
> We can't limit the access to some APIs. For example, if we want to add a
> limitation that some APIs can only be accessed from Bay and can't be
> accessed by users outside. We need a way to distinguish these users, from
> Bay or from outside.
>
> 2. We create a user with the role to access Magnum. The way is used in
> Heat. Heat creates a user for each stack to communicate with Heat. We can
> add a role to the user which is already introduced in [1]. The user can
> directly access Magnum for some limited APIs. With trust id, the user can
> access other services.
>
> [1] https://review.openstack.org/#/c/268852/
>
> Regards,
> Wanghua
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Migration progress

2016-02-04 Thread Eli Qiao



On 05 Feb 2016, at 12:02, Bhandaru, Malini K wrote:

I agree with Daniel,  keep the periods consistent 5 - 5 .

Another thought, for such ephemeral/changing data, such as progress, why not 
save the information in the cache (and flush to database at a lower rate), and 
retrieve for display to active listeners/UI from the cache. Once complete or 
aborted, of course flush the cache.

hi Malini,
It's a good idea to use a cache to save the information while the migration 
is running, but the problem is how we can access that cache when using the 
CLI (nova-api)?
This information is generated on the nova-compute node, so there needs to be 
a way to sync it to nova-conductor (which means the DB).

Also should we provide a "verbose flag", that is only capture progress 
information when requested? That is when a human user might be issuing the command from 
the cli or GUI tool.

I am +1 on this; yeah, some other service may help here.

--
Best Regards, Eli(Li Yong)Qiao
Intel OTC China

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] gate issues

2016-02-04 Thread Corey O'Brien
So as we're all aware, the gate is a mess right now. I wanted to sum up
some of the issues so we can figure out solutions.

1. The functional-api job sometimes fails because bays timeout building
after 1 hour. The logs look something like this:
magnum.tests.functional.api.v1.test_bay.BayTest.test_create_list_and_delete_bays
[3733.626171s] ... FAILED
I can reproduce this hang on my devstack with etcdctl 2.0.10 as described
in this bug (https://bugs.launchpad.net/magnum/+bug/1541105), but
apparently either my fix with using 2.2.5 (
https://review.openstack.org/#/c/275994/) is incomplete or there is another
intermittent problem because it happened again even with that fix: (
http://logs.openstack.org/94/275994/1/check/gate-functional-dsvm-magnum-api/32aacb1/console.html
)

2. The k8s job has some sort of intermittent hang as well that causes a
similar symptom as with swarm.
https://bugs.launchpad.net/magnum/+bug/1541964

3. When the functional-api job runs, it frequently destroys the VM causing
the jenkins slave agent to die. Example:
http://logs.openstack.org/03/275003/6/check/gate-functional-dsvm-magnum-api/a9a0eb9//console.html
When this happens, zuul re-queues a new build from the start on a new VM.
This can happen many times in a row before the job completes.
I chatted with openstack-infra about this and after taking a look at one of
the VMs, it looks like memory over consumption leading to thrashing was a
possible culprit. The sshd daemon was also dead but the console showed
things like "INFO: task kswapd0:77 blocked for more than 120 seconds". A
cursory glance and following some of the jobs seems to indicate that this
doesn't happen on RAX VMs which have swap devices unlike the OVH VMs as
well.

4. In general, even when things work, the gate is really slow. The
sequential master-then-node build process in combination with underpowered
VMs makes bay builds take 25-30 minutes when they do succeed. Since we're
already close to tipping over a VM, we run functional tests with
concurrency=1, so 2 bay builds means almost the entire allotted devstack
testing time (generally 75 minutes of actual test time available it seems).

Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] DHCP port problem

2016-02-04 Thread Zhipeng Huang
CC'ed Li Ma who might have insight on this problem

On Thu, Feb 4, 2016 at 3:51 PM, Vega Cai  wrote:

> Hi all,
>
> When implementing the L3 north-south networking functionality, I met the DHCP
> port problem again.
>
> First let me briefly explain the DHCP port problem. In Tricircle, we have
> a Neutron server using the Tricircle plugin in the top pod to control all
> the Neutron servers in the bottom pods. The strategy of Tricircle to avoid
> IP address conflicts is that IP address allocation is done on top and we
> create ports with the IP address specified in the bottom pod. However, the
> behavior of Neutron when creating the DHCP port has changed. Neutron no
> longer waits for the creation of the first VM to schedule a DHCP agent, but
> schedules the DHCP agent when the subnet is created, and the bound DHCP
> agent then automatically creates a DHCP port. So we have no chance to
> specify the IP address of the DHCP port. Since the IP address of the DHCP
> port is not reserved in the top pod, we risk an IP address conflict.
>
> How we solve this problem for VM creation is that we still create a DHCP
> port on top first, then use the IP address of that port to create the DHCP
> port in the bottom pod. If we get an IP address conflict exception, we check
> whether the bottom port is a DHCP port; if so, we directly use this bottom
> port and build an id mapping. If we successfully create the bottom DHCP
> port, we check if there are other DHCP ports in the same subnet in the
> bottom pod and remove them.
>
> Now let's go back to the L3 north-south networking functionality
> implementation. If a user creates a port and then associates it with a
> floating IP before booting a VM, the Tricircle plugin needs to create the
> bottom internal port first in order to set up the bottom floating IP. So
> again we risk that the IP address of the internal port conflicts with the
> IP address of a bottom DHCP port.
>
> Below I list some choices to solve this problem:
> (1) Always create the internal port in the Nova gateway so we can directly
> use the code handling the DHCP problem in the Nova gateway. This will also
> leave the floating IP stuff to the Nova gateway.
>
> (2) Transplant the code handling the DHCP problem from the Nova gateway to
> the Tricircle plugin. Considering there are already a lot of things to do
> when associating a floating IP, this will make floating IP association more
> complex.
>
> (3) Anytime we need to create a bottom subnet, we disable DHCP in this
> subnet first so the bottom DHCP port will not be created automatically. When
> we are going to boot a VM, we create the DHCP port in the top and bottom
> pods and then enable DHCP in the bottom subnet. When a DHCP agent is
> scheduled, it will check if there exists a port whose device_id is
> "reserved_dhcp_port" and use it as the DHCP port. By creating a bottom DHCP
> port with device_id set to "reserved_dhcp_port", we can guide the DHCP agent
> to use the port we create.
>
> I think this problem can be solved in a separate patch and I will add a
> TODO in the patch for L3 north-south networking functionality.
>
> Any comments or suggestions?
>
> BR
> Zhiyuan
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] compatibility of puppet upstream modules

2016-02-04 Thread Emilien Macchi


On 02/03/2016 04:03 PM, Ptacek, MichalX wrote:
> Hi all,
> 
>  
> 
> I have one general question,
> 
> currently I am deploying liberty openstack as described in
> https://wiki.openstack.org/wiki/Puppet/Deploy
> 
> Unfortunately puppet modules specified in
> puppet-openstack-integration/Puppetfile are not compatible

Did you take the file from stable/liberty branch?
https://github.com/openstack/puppet-openstack-integration/tree/stable/liberty

> 
> and some are also missing as visible from following output of “puppet
> module list”
> 
>  
> 
> Warning: Setting templatedir is deprecated. See
> http://links.puppetlabs.com/env-settings-deprecations
> 
>(at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in
> `issue_deprecation_warning')
> 
> Warning: Module 'openstack-openstacklib' (v7.0.0) fails to meet some
> dependencies:
> 
>   'openstack-barbican' (v0.0.1) requires 'openstack-openstacklib'
> (>=6.0.0 <7.0.0)
> 
>   'openstack-zaqar' (v0.0.1) requires 'openstack-openstacklib' (>=6.0.0
> <7.0.0)
> 
> Warning: Module 'puppetlabs-postgresql' (v4.4.2) fails to meet some
> dependencies:
> 
>   'openstack-openstacklib' (v7.0.0) requires 'puppetlabs-postgresql'
> (>=3.3.0 <4.0.0)
> 
> Warning: Missing dependency 'deric-storm':
> 
>   'openstack-monasca' (v1.0.0) requires 'deric-storm' (>=0.0.1 <1.0.0)
> 
> Warning: Missing dependency 'deric-zookeeper':
> 
>   'openstack-monasca' (v1.0.0) requires 'deric-zookeeper' (>=0.0.1 <1.0.0)
> 
> Warning: Missing dependency 'dprince-qpid':
> 
>   'openstack-cinder' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
> 
>   'openstack-manila' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
> 
>   'openstack-nova' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
> 
> Warning: Missing dependency 'jdowning-influxdb':
> 
>   'openstack-monasca' (v1.0.0) requires 'jdowning-influxdb' (>=0.3.0 <1.0.0)
> 
> Warning: Missing dependency 'opentable-kafka':
> 
>   'openstack-monasca' (v1.0.0) requires 'opentable-kafka' (>=1.0.0 <2.0.0)
> 
> Warning: Missing dependency 'puppetlabs-stdlib':
> 
>   'antonlindstrom-powerdns' (v0.0.5) requires 'puppetlabs-stdlib' (>= 0.0.0)
> 
> Warning: Missing dependency 'puppetlabs-corosync':
> 
>   'openstack-openstack_extras' (v7.0.0) requires 'puppetlabs-corosync'
> (>=0.1.0 <1.0.0)
> 
> /etc/puppet/modules
> 
> ├──antonlindstrom-powerdns (v0.0.5)
> 
> ├──duritong-sysctl (v0.0.11)
> 
> ├──nanliu-staging (v1.0.4)
> 
> ├──openstack-barbican (v0.0.1)
> 
> ├──openstack-ceilometer (v7.0.0)
> 
> ├──openstack-cinder (v7.0.0)
> 
> ├──openstack-designate (v7.0.0)
> 
> ├──openstack-glance (v7.0.0)
> 
> ├──openstack-gnocchi (v7.0.0)
> 
> ├──openstack-heat (v7.0.0)
> 
> ├──openstack-horizon (v7.0.0)
> 
> ├──openstack-ironic (v7.0.0)
> 
> ├──openstack-keystone (v7.0.0)
> 
> ├──openstack-manila (v7.0.0)
> 
> ├──openstack-mistral (v0.0.1)
> 
> ├──openstack-monasca (v1.0.0)
> 
> ├──openstack-murano (v7.0.0)
> 
> ├──openstack-neutron (v7.0.0)
> 
> ├──openstack-nova (v7.0.0)
> 
> ├──openstack-openstack_extras (v7.0.0)
> 
> ├──openstack-openstacklib (v7.0.0)  invalid
> 
> ├──openstack-sahara (v7.0.0)
> 
> ├──openstack-swift (v7.0.0)
> 
> ├──openstack-tempest (v7.0.0)
> 
> ├──openstack-trove (v7.0.0)
> 
> ├──openstack-tuskar (v7.0.0)
> 
> ├──openstack-vswitch (v3.0.0)
> 
> ├──openstack-zaqar (v0.0.1)
> 
> ├──openstack_integration (???)
> 
> ├──puppet-aodh (v7.0.0)
> 
> ├──puppet-corosync (v0.8.0)
> 
> ├──puppetlabs-apache (v1.4.1)
> 
> ├──puppetlabs-apt (v2.1.1)
> 
> ├──puppetlabs-concat (v1.2.5)
> 
> ├──puppetlabs-firewall (v1.6.0)
> 
> ├──puppetlabs-inifile (v1.4.3)
> 
> ├──puppetlabs-mongodb (v0.11.0)
> 
> ├──puppetlabs-mysql (v3.6.2)
> 
> ├──puppetlabs-postgresql (v4.4.2)  invalid
> 
> ├──puppetlabs-rabbitmq (v5.2.3)
> 
> ├──puppetlabs-rsync (v0.4.0)
> 
> ├──puppetlabs-stdlib (v4.6.0)
> 
> ├──puppetlabs-vcsrepo (v1.3.2)
> 
> ├──puppetlabs-xinetd (v1.5.0)
> 
> ├──qpid (???)
> 
> ├──saz-memcached (v2.8.1)
> 
> ├──stankevich-python (v1.8.0)
> 
> └── theforeman-dns (v3.0.0)
> 
>  
> 
>  
> 
> Most of the warnings can probably be ignored, e.g. I assume that the latest
> barbican & zaqar are compatible with the liberty (7.0) version of
> openstack-openstacklib
> 
>   'openstack-barbican' (v0.0.1) requires 'openstack-openstacklib'
> (>=6.0.0 <7.0.0)
> 
>   'openstack-zaqar' (v0.0.1) requires 'openstack-openstacklib' (>=6.0.0
> <7.0.0)
> 
>  
> 
> Am I right, or do I need to get rid of all of these compatibility warnings
> before proceeding further?
> 

If you look at our CI jobs, we also have some warnings:
http://logs.openstack.org/36/275836/1/gate/gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7/15a5ead/console.html#_2016-02-03_21_56_40_945

> 
> I tried both, but during subsequent deployments I reached some
> intermediate issue with the number of parallel mysql connections
> 
>  
> 
> 2016-02-03 00:01:03.326 90406 DEBUG oslo_db.api [-] Loading backend
> 'sqlalchemy' from 'nova.db.sqlalchemy.api' _load_backend
> 

Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-02-04 Thread Assaf Muller
On Thu, Feb 4, 2016 at 8:33 AM, Gal Sagie  wrote:
> As i have commented on the patch i will also send this to the mailing list:
>
> I really dont see why Dragonflow is not part of this list, given the
> criteria you listed.
>
> Dragonflow is fully developed under Neutron/OpenStack, no other
> repositories. It is fully Open source and already have a community of people
> contributing and interest from various different companies and OpenStack
> deployers. (I can prepare the list of active contributions and of interested
> parties) It also puts OpenStack Neutron APIs and use cases as first class
> citizens and working on being an integral part of OpenStack.
>
> I agree that OVN needs to be part of the list, but you brought up this
> criteria in regards to ODL, so: OVN like ODL is not only Neutron and
> OpenStack and is even running/being implemented on a whole different
> governance model and requirements to it.
>
> I think you also forgot to mention some other projects as well that are
> fully open source with a vibrant and diverse community, i will let them
> comment here by themselves.
>
> Frankly this approach disappoints me, I have honestly worked hard to make
> Dragonflow fully visible and add and support open discussion and follow the
> correct guidelines to work in a project. I think that Dragonflow community
> has already few members from various companies and this is only going to
> grow in the near future. (in addition to deployers that are considering it
> as a solution)  we also welcome anyone that wants to join and be part of the
> process to step in, we are very welcoming
>
> I also think that the correct way to do this is to actually add as reviewers
> all lieutenants of the projects you are now removing from Neutron big
> stadium and letting them comment.
>
> Gal.

I understand you see 'Dragonflow being part of the Neutron stadium'
and 'Dragonflow having high visibility' as tied together. I'm curious,
from a practical perspective, how does being a part of the stadium
give Dragonflow visibility? If it were not a part of the stadium and
you had your own PTL etc., what specifically would change so that
Dragonflow would be less visible?
being a part of the stadium is good or bad for a networking project,
or why does it matter. Looking at Russell's patch, it's concerned with
placing projects (e.g. ODL, OVN, Dragonflow) either in or out of the
stadium and the criteria for doing so; I'm just asking how you
(Gal) perceive the practical effect of that decision.

>
>
> On Wed, Feb 3, 2016 at 11:48 PM, Russell Bryant  wrote:
>>
>> On 11/30/2015 07:56 PM, Armando M. wrote:
>> > I would like to suggest that we evolve the structure of the Neutron
>> > governance, so that most of the deliverables that are now part of the
>> > Neutron stadium become standalone projects that are entirely
>> > self-governed (they have their own core/release teams, etc).
>>
>> After thinking over the discussion in this thread for a while, I have
>> started the following proposal to implement the stadium renovation that
>> Armando originally proposed in this thread.
>>
>> https://review.openstack.org/#/c/275888
>>
>> --
>> Russell Bryant
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-02-04 Thread Assaf Muller
On Thu, Feb 4, 2016 at 10:20 AM, Assaf Muller  wrote:
> On Thu, Feb 4, 2016 at 8:33 AM, Gal Sagie  wrote:
>> As i have commented on the patch i will also send this to the mailing list:
>>
>> I really dont see why Dragonflow is not part of this list, given the
>> criteria you listed.
>>
>> Dragonflow is fully developed under Neutron/OpenStack, no other
>> repositories. It is fully Open source and already have a community of people
>> contributing and interest from various different companies and OpenStack
>> deployers. (I can prepare the list of active contributions and of interested
>> parties) It also puts OpenStack Neutron APIs and use cases as first class
>> citizens and working on being an integral part of OpenStack.
>>
>> I agree that OVN needs to be part of the list, but you brought up this
>> criteria in regards to ODL, so: OVN like ODL is not only Neutron and
>> OpenStack and is even running/being implemented on a whole different
>> governance model and requirements to it.
>>
>> I think you also forgot to mention some other projects as well that are
>> fully open source with a vibrant and diverse community, i will let them
>> comment here by themselves.
>>
>> Frankly this approach disappoints me, I have honestly worked hard to make
>> Dragonflow fully visible and add and support open discussion and follow the
>> correct guidelines to work in a project. I think that Dragonflow community
>> has already few members from various companies and this is only going to
>> grow in the near future. (in addition to deployers that are considering it
>> as a solution)  we also welcome anyone that wants to join and be part of the
>> process to step in, we are very welcoming
>>
>> I also think that the correct way to do this is to actually add as reviewers
>> all lieutenants of the projects you are now removing from Neutron big
>> stadium and letting them comment.
>>
>> Gal.
>
> I understand you see 'Dragonflow being part of the Neutron stadium'
> and 'Dragonflow having high visibility' as tied together. I'm curious,
> from a practical perspective, how does being a part of the stadium
> give Dragonflow visibility? If it were not a part of the stadium and
> you had your own PTL etc, what specifically would change so that
> Dragonflow would be less visible. Currently I don't understand why
> being a part of the stadium is good or bad for a networking project,
> or why does it matter. Looking at Russell's patch, it's concerned with
> placing projects (e.g. ODL, OVN, Dragonflow) either in or out of the
> stadium and the criteria for doing so, I'm just asking how do you
> (Gal) perceive the practical effect of that decision.

Allow me to expand:
It seems to me like there is no significance to who is 'in or out'.
However, people, including potential customers, look at the list of
the Neutron stadium and deduce that project X is better than Y because
X is in but Y is out, and *that* in itself is the value of being in or
out, even though it has no meaning. Maybe we should explain what
exactly it means to be in or out. It's just a governance decision;
it doesn't reflect in any way on the quality or appeal of a project
(for example, some of the open source Neutron drivers outside the
stadium are much more mature, stable and feature-complete than other
drivers in the stadium).

>
>>
>>
>> On Wed, Feb 3, 2016 at 11:48 PM, Russell Bryant  wrote:
>>>
>>> On 11/30/2015 07:56 PM, Armando M. wrote:
>>> > I would like to suggest that we evolve the structure of the Neutron
>>> > governance, so that most of the deliverables that are now part of the
>>> > Neutron stadium become standalone projects that are entirely
>>> > self-governed (they have their own core/release teams, etc).
>>>
>>> After thinking over the discussion in this thread for a while, I have
>>> started the following proposal to implement the stadium renovation that
>>> Armando originally proposed in this thread.
>>>
>>> https://review.openstack.org/#/c/275888
>>>
>>> --
>>> Russell Bryant
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> --
>> Best Regards ,
>>
>> The G.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] ensuring accurate versions while testing clients and controllers

2016-02-04 Thread Amrith Kumar
Since this is a long email, I think this quick summary is in order. The
gate/check jobs in Trove are facing a problem [1] which was fixed [2].
The fix has issues and I am looking for input from other projects who
may have faced this problem.

How does a project with a controller and a client ensure that the
version of client and backend are in sync, not only in master but also
in stable branches?

The current approach in Trove has problems, and the generally
recommended approach leads to some issues. So I am looking for input
from other projects to learn how they address it, and to get opinions
on a proposal for how I intend to fix this issue in Trove.

Excruciating details follow.

--

The project openstack/trove has a client component
openstack/python-troveclient just like many other projects do. When a
change is made that straddles the controller and the client, they need
to be merged together. With the new cross project dependencies, this has
become much easier. The reviews for the change to the client and the
controller are linked (depends-on) and therefore they both go in.

Tests in the controller require the use of the client and exercise the
client as well.

Once the changes merge, tests got the right client because the
test-requirements.txt file had a line like this:

http://tarballs.openstack.org/python-troveclient/python-troveclient-master.tar.gz#egg=python-troveclient

A recent change to the client exposed a problem; the
test-requirements.txt file(s) in stable/kilo and stable/liberty also had
the same line! Therefore they were (had been) using the wrong
requirements.txt file.

So, fixes [3] and [4] were proposed to address the issue in the stable
branches. Basically with the proposed changes, stable branches would pin
to a tarball of the appropriate vintage.

In speaking with Doug Hellmann, I learned that this wasn't the right
approach and the correct thing would be for the client to be released
more often (to pypi) and then the test-requirements.txt would have to
just pin specific releases.

This approach requires that we release the client more often, and there
would be a period of time, between when the client change was merged and
submitted to PyPI and when the controller change merged, during which there
would be functionality in the client with no back-end support.

Question: How do other projects handle this?

If you have experience doing this for other projects, I would appreciate
your thoughts on this approach. If you have pointers about
LIBS_FROM_GIT, please do send them along, other than [5], that is.

I would also like input on the proposal below, for how Trove would
address this.

1.  Master would use LIBS_FROM_GIT and therefore would always get the
tip of master. I still need to figure out how exactly to use this but
I'm told (thanks again to Doug Hellmann) that this is something that
some other projects use.

2. When a stable branch is cut, test-requirements.txt will have a
specific version of the python-troveclient specified.

This will eliminate the hack of specifying a tarfile and the attendant
problems.
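
For illustration only (the version number is hypothetical, not an actual
release), the stable branch would then carry a normal pin such as
"python-troveclient>=1.4.0,<2.0.0" instead of the master tarball URL, while
master jobs would set LIBS_FROM_GIT=python-troveclient in devstack's
local.conf so they keep testing against the tip of the client.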

Thanks,

-amrith

[1] https://bugs.launchpad.net/trove/+bug/1539818
[2] https://review.openstack.org/#/c/274345/
[3] https://review.openstack.org/#/c/274797/
[4] https://review.openstack.org/#/c/252584/6/test-requirements.txt
[5] http://docs.openstack.org/developer/devstack/configuration.html

--
Amrith Kumar, CTO   | amr...@tesora.com
Tesora, Inc | @amrithkumar
125 CambridgePark Drive, Suite 400  | http://www.tesora.com
Cambridge, MA. 02140| GPG: 0x5e48849a9d21a29b 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all] Software design in openstack

2016-02-04 Thread Clint Byrum
Excerpts from Nick Yeates's message of 2016-02-03 21:18:08 -0800:
> Josh, thanks for pointing this out and in being hospitable to an outsider.
> 
> Oslo is definitely some of what I was looking for. As you stated, the fact 
> that there is an extensive review system with high participation, that this 
> alone organically leads to particular trends in sw design. I will have to 
> read more about ‘specs', as I don’t quite get what they are and how they are 
> different from blueprints.
> 
> When I said "What encourages or describes good design in OpenStack?”, I 
> meant, what mechanism's/qualities/artifact's/whatever create code that is 
> well-received, well-used, efficient. effective, secure… basically: successful 
> from a wider-ecosystem standpoint. It sounds to me like much is built into 1) 
> the detailed system of reviews, 2) an informal hierarchy of wise technicians, 
> and now 3) modularization efforts like this Oslo. Did I summarize this 
> adequately?
> 
> What artifacts were you going to send me at?
> I have still yet to find a good encompassing architecture diagram or white 
> paper.
> 

Hi Nick.

The specification process is pretty mature at this point, but it varies
a bit from project to project. You may want to browse around this:

http://specs.openstack.org/

And look at ongoing reviews for the various -specs repositories here:

https://review.openstack.org/

These contain high level specs for features and refactoring work going
on in OpenStack. They are as close to we have as a "good design" process
in OpenStack. Note that we get together in a physical meeting space
every 6 months to discuss these specs face to face.

https://www.openstack.org/summit/

Note that many of us have lamented the lack of an agreed upon
"architecture" in OpenStack. While Josh is right that Oslo often
facilitates many of our agreed upon technology choices, it doesn't really
shape the overall picture. We also have recently starting doing some
detailed cross-project sessions to discuss overall themes, but these
aren't necessarily architectural decisions, they're just optimizations
of similar concerns.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-02-04 Thread Walter A. Boring IV
My plan was to store the connector object at attach_volume time.   I was 
going to add an additional column to the cinder volume attachment table 
that stores the connector that came from nova.   The problem is live 
migration. After live migration the connector is out of date.  Cinder 
doesn't have an existing API to update attachment.  That will have to be 
added, so that the connector info can be updated.
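
For reference, the connector that would need to be persisted is the dict of
connection properties built on the compute host; roughly (all values below
are made up):

    connector = {
        'host': 'compute-01',
        'ip': '10.0.0.5',
        'initiator': 'iqn.1994-05.com.example:compute-01',
        'wwpns': ['50014380186af83c'],
        'wwnns': ['50014380186af83f'],
        'multipath': False,
        'os_type': 'linux2',
        'platform': 'x86_64',
    }

Storing something like that alongside the attachment is what would let Cinder
clean up the export even when the compute node is gone, and it is also the
piece that goes stale after live migration.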

We have needed this for force detach for some time now.

It's on my list, but most likely not until N, or at least not until the 
microversions land in Cinder.

Walt



Hi all,
I was wondering if there was any way to cleanly detach volumes from 
failed nodes.  In the case where the node is up nova-compute will call 
Cinder's terminate_connection API with a "connector" that includes 
information about the node - e.g., hostname, IP, iSCSI initiator name, 
FC WWPNs, etc.
If the node has died, this information is no longer available, and so 
the attachment cannot be cleaned up properly.  Is there any way to 
handle this today?  If not, does it make sense to save the connector 
elsewhere (e.g., DB) for cases like these?


Thanks,
Avishay

--
*Avishay Traeger, PhD*
/System Architect/

Mobile:+972 54 447 1475
E-mail: avis...@stratoscale.com 



Web | Blog | Twitter | Google+ | LinkedIn




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-02-04 Thread Sean M. Collins
On Thu, Feb 04, 2016 at 04:20:50AM EST, Assaf Muller wrote:
> I understand you see 'Dragonflow being part of the Neutron stadium'
> and 'Dragonflow having high visibility' as tied together. I'm curious,
> from a practical perspective, how does being a part of the stadium
> give Dragonflow visibility? If it were not a part of the stadium and
> you had your own PTL etc, what specifically would change so that
> Dragonflow would be less visible. 

> Currently I don't understand why
> being a part of the stadium is good or bad for a networking project,
> or why does it matter. 


I think the issue is one of public perception. As others have stated, the
issue is the "in" vs. "out" problem. We had a similar situation
with 3rd party CI, where we had a list of drivers that were "nice" and
had CI running vs drivers that were "naughty" and didn't. Prior to the
vendor decomposition effort, we had a multitude of drivers that were
in-tree, with the public perception that drivers that were in Neutron's
tree were "sanctioned" by the Neutron project. 

That may not have been the intention, but that's what I think happened.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Morgan Fainberg
On Thu, Feb 4, 2016 at 4:51 AM, Doug Hellmann  wrote:

> Excerpts from Sean Dague's message of 2016-02-04 06:38:26 -0500:
> > A few issues have crept up recently with the service catalog, API
> > headers, API end points, and even similarly named resources in different
> > resources (e.g. backup), that are all circling around a key problem.
> > Distributed teams and naming collision.
> >
> > Every OpenStack project has a unique name by virtue of having a git
> > tree. Once they claim 'openstack/foo', foo is theirs in the OpenStack
> > universe for all time (or until trademarks say otherwise). Nova in
> > OpenStack will always mean one project.
> >
> > There has also been a desire to replace project names with
> > common/generic names, in the service catalog, API headers, and a few
> > other places. Nova owns 'compute'. Except... that's only because we all
> > know that it does. We don't *actually* have a registry for those values.
> >
> > So the code names are well regulated, the common names, that we
> > encourage use of, are not. Devstack in tree code defines some
> > conventions. However with the big tent, things get kind of squirely
> > pretty quickly. Congress registering 'policy' as their endpoint type is
> > a good example of that -
> >
> https://github.com/openstack/congress/blob/master/devstack/plugin.sh#L147
> >
> > Naming is hard. And trying to boil down complicated state machines to
> > one or two word shiboliths means that inevitably we're going to find
> > some words just keep cropping up: policy, flavor, backup, meter. We do
> > however need to figure out a way forward.
> >
> > Lets start with the top level names (resource overlap cascades from
> there).
> >
> > What options do we have?
> >
> > 1) Use the names we already have: nova, glance, swift, etc.
> >
> > Upside, collision problem is solved. Downside, you need a secret decoder
> > ring to figure out what project does what.
> >
> > 2) Have a registry of "common" names.
> >
> > Upside, we can safely use common names everywhere and not fear collision
> > down the road.
> >
> > Downside, yet another contention point.
> >
> > A registry would clearly be under TC administration, though all the
> > heavy lifting might be handed over to the API working group. I still
> > imagine collision around some areas might be contentious.
> >
> > 3) Use either, inconsistently, hope for the best. (aka - status quo)
> >
> > Upside, no long mailing list thread to figure out the answer. Downside,
> > it sucks.
> >
> >
> > Are there other options missing? Where are people leaning at this point?
> >
> > Personally I'm way less partial to any particular answer as long as it's
> > not #3.
> >
> >
> > -Sean
> >
>
> This feels like something that should be designed with end-users
> in mind, and that means making choices about descriptive words
> rather than quirky in-jokes.  As much as I hate to think about the
> long threads some of the contention is likely to introduce, not to
> mention the bikeshedding over the terms themselves, I have become
> convinced that our best long-term solution is a term/name registry
> (option 2). We already have that pattern in the governance repository
> where official projects describe their service type.
>
> To reduce contention, we could agree in advance to support multi-word
> names ("block storage" and "object storage", "block backup" and
> "file backup", etc.). Decisions about noun-verb vs. verb-noun,
> punctuation, etc. can be dealt with by the group that takes on the
> task of setting standards.
>
> As I said in the TC meeting, this seems like something the API working
> group could do, if they wanted to take on the responsibility. If not,
> then we should establish a new group with a mandate from the TC. Since
> we have something like a product catalog already in the governance repo,
> we can keep the new data there.
>
> Doug
>

I am a fan of option #2. I also want to point out that os-client-config has
encoded some of these names as well[1], which is pushing us in the
direction of #2.  I 100% agree that the end user perspective also leans us
towards option #2.

I am very against "hope for the best options".

[1]
https://github.com/openstack/os-client-config/blob/master/os_client_config/constructors.json

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread michael mccune

On 02/04/2016 08:33 AM, Thierry Carrez wrote:

Hayes, Graham wrote:

On 04/02/2016 13:24, Doug Hellmann wrote:

Excerpts from Hayes, Graham's message of 2016-02-04 12:54:56 +:

On 04/02/2016 11:40, Sean Dague wrote:

2) Have a registry of "common" names.

Upside, we can safely use common names everywhere and not fear
collision down the road.

Downside, yet another contention point.

A registry would clearly be under TC administration, though all the
heavy lifting might be handed over to the API working group. I still
imagine collision around some areas might be contentious.


++ to a central registry. It could easily be added to the projects.yaml
file, and is a single source of truth.


Although I realized that the projects.yaml file only includes official
projects right now, which would mean new projects wouldn't have a place
to register terms. Maybe that's a feature?


That is a good point - should we be registering terms for non tent
projects? Or do projects get terms when they get accepted into the tent?


I don't see why we would register terms for non-official projects. I
don't see under what authority we would do that, or where it would end.
So yes, that's a feature.



i have a question about this: as new, non-official projects start to 
spin up, there will be questions about the naming conventions they will 
use within the project for headers and the like. given that the 
current guidance trend in the api-wg is towards using "service type" in 
these cases, how would these projects proceed?


(i'm not suggesting these projects should be registered, just curious)


I think solution 2 is the best. To avoid too much contention, that can
easily be delegated to the API WG, and escalated to the TC for
resolution only in case of conflict between projects (or between a
project and the API WG).



i'm +1 for solution 2 as well. as to the api-wg participation in the 
name registration side of things, i don't have an objection but i am 
very curious to hear Everett's and Chris' opinions.


regards,
mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-04 Thread gordon chung


On 03/02/2016 10:38 AM, Sam Yaple wrote:
On Wed, Feb 3, 2016 at 2:52 PM, Jeremy Stanley 
<fu...@yuggoth.org> wrote:
On 2016-02-03 14:32:36 + (+), Sam Yaple wrote:
[...]
> Luckily, digging into it it appears cinder already has all the
> infrastructure in place to handle what we had talked about in a
> separate email thread Duncan. It is very possible Ekko can
> leverage the existing features to do it's backup with no change
> from Cinder.
[...]

If Cinder's backup facilities already do most of
what you want from it and there's only a little bit of development
work required to add the missing feature, why jump to implementing
this feature in a completely separate project instead rather than
improving Cinder's existing solution so that people who have been
using that can benefit directly?

Backing up Cinder was never the initial goal, just a potential feature on the 
roadmap. Nova is the main goal.

i'll extend fungi's question: are the backup frameworks/mechanisms common 
whether it be Nova or Cinder or anything else? or are they unique but only 
grouped together as a service because they back up something? it seems the 
problem is we've imagined the service as tackling a horizontal issue when 
really it is just a vertical story that appears across many silos.

cheers,

--
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Recent integration tests failures

2016-02-04 Thread Timur Sufiev
This has been a hard week for integration tests: as soon as the API-breaking
change in xvfbwrapper had been worked around, we were hit by a new
Selenium release, see https://bugs.launchpad.net/horizon/+bug/1541876

Investigation of the root cause is still in progress.

On Mon, Feb 1, 2016 at 11:31 PM Richard Jones 
wrote:

> Ugh, dependencies with breaking API changes in minor point releases :/
>
> On 2 February 2016 at 04:53, Timur Sufiev  wrote:
>
>> Maintainers of outside dependencies continue to break our stuff :(. New
>> issue is https://bugs.launchpad.net/horizon/+bug/1540495 patch is
>> currently being checked by Jenkins
>>
>> On Sat, Jan 30, 2016 at 2:28 PM Timur Sufiev 
>> wrote:
>>
>>> Problematic Selenium versions have been successfully excluded from
>>> Horizon test-requirements; if you are still experiencing the error described
>>> above, rebase your patch onto the latest master.
>>> On Fri, 29 Jan 2016 at 12:36, Itxaka Serrano Garcia 
>>> wrote:
>>>
 Can confirm, had the same issue locally, was fixed after a downgrade to
 selenium 2.48.


 Good catch!

 Itxaka

 On 01/28/2016 10:08 PM, Timur Sufiev wrote:
 > According to the results at
 > https://review.openstack.org/#/c/273697/1 capping Selenium to be not
 > greater than 2.49 fixes broken tests. Patch to global-requirements is
 > here: https://review.openstack.org/#/c/273750/
 >
 > On Thu, Jan 28, 2016 at 9:22 PM Timur Sufiev  > wrote:
 >
 > Hello, Horizoneers
 >
 > You may have noticed recent integration tests failures seemingly
 > unrelated to you patches, with a stacktrace like:
 > http://paste2.org/2Hk9138U I've already filed a bug for that,
 > https://bugs.launchpad.net/horizon/+bug/1539197 Appears to be a
 > Selenium issue, currently investigating it.
 >
 >
 >
 >
 >
 __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] virtualenv fails with SSLError: [Errno 20] Not a directory

2016-02-04 Thread Roman Prykhodchenko
UPD: on Fuel CI it fails with SSLError: [Errno 185090050] _ssl.c:344: 
error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib, as 
can be seen in [1].

This is a very important issue to fix because it blocks development of 
python-fuelclient. It requires attention ASAP.

References:
1. https://ci.fuel-infra.org/job/verify-python-fuelclient/1324/console

> 4 лют. 2016 р. о 15:50 Roman Prykhodchenko  написав(ла):
> 
> Folks,
> 
> as some of you may have noticed, there is a high rate of job failures on Fuel 
> CI in python-fuelclient. That happens because there are some weird issues 
> with virtualenv utility not being able to create new virtual environments. 
> I’ve tested that on my local environment and the problem appears to happen 
> constantly with rare exceptions:
> 
> I’ve tried running $virtualenv test and that’s what I’m getting: 
> http://paste.openstack.org/show/485962/
> Let’s find out what happened and resolve the issue.
> 
> 
> - romcheg
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread gordon chung


On 04/02/2016 9:04 AM, Morgan Fainberg wrote:


On Thu, Feb 4, 2016 at 4:51 AM, Doug Hellmann 
> wrote:
Excerpts from Sean Dague's message of 2016-02-04 06:38:26 -0500:
> A few issues have crept up recently with the service catalog, API
> headers, API end points, and even similarly named resources in different
> resources (e.g. backup), that are all circling around a key problem.
> Distributed teams and naming collision.
>
> Every OpenStack project has a unique name by virtue of having a git
> tree. Once they claim 'openstack/foo', foo is theirs in the OpenStack
> universe for all time (or until trademarks say otherwise). Nova in
> OpenStack will always mean one project.
>
> There has also been a desire to replace project names with
> common/generic names, in the service catalog, API headers, and a few
> other places. Nova owns 'compute'. Except... that's only because we all
> know that it does. We don't *actually* have a registry for those values.
>
> So the code names are well regulated, the common names, that we
> encourage use of, are not. Devstack in tree code defines some
> conventions. However with the big tent, things get kind of squirely
> pretty quickly. Congress registering 'policy' as their endpoint type is
> a good example of that -
> https://github.com/openstack/congress/blob/master/devstack/plugin.sh#L147
>
> Naming is hard. And trying to boil down complicated state machines to
> one or two word shiboliths means that inevitably we're going to find
> some words just keep cropping up: policy, flavor, backup, meter. We do
> however need to figure out a way forward.
>
> Lets start with the top level names (resource overlap cascades from there).
>
> What options do we have?
>
> 1) Use the names we already have: nova, glance, swift, etc.
>
> Upside, collision problem is solved. Downside, you need a secret decoder
> ring to figure out what project does what.
>
> 2) Have a registry of "common" names.
>
> Upside, we can safely use common names everywhere and not fear collision
> down the road.
>
> Downside, yet another contention point.
>
> A registry would clearly be under TC administration, though all the
> heavy lifting might be handed over to the API working group. I still
> imagine collision around some areas might be contentious.
>
> 3) Use either, inconsistently, hope for the best. (aka - status quo)
>
> Upside, no long mailing list thread to figure out the answer. Downside,
> it sucks.
>
>
> Are there other options missing? Where are people leaning at this point?
>
> Personally I'm way less partial to any particular answer as long as it's
> not #3.
>
>
> -Sean
>

This feels like something that should be designed with end-users
in mind, and that means making choices about descriptive words
rather than quirky in-jokes.  As much as I hate to think about the
long threads some of the contention is likely to introduce, not to
mention the bikeshedding over the terms themselves, I have become
convinced that our best long-term solution is a term/name registry
(option 2). We already have that pattern in the governance repository
where official projects describe their service type.

To reduce contention, we could agree in advance to support multi-word
names ("block storage" and "object storage", "block backup" and
"file backup", etc.). Decisions about noun-verb vs. verb-noun,
punctuation, etc. can be dealt with by the group that takes on the
task of setting standards.

As I said in the TC meeting, this seems like something the API working
group could do, if they wanted to take on the responsibility. If not,
then we should establish a new group with a mandate from the TC. Since
we have something like a product catalog already in the governance repo,
we can keep the new data there.

Doug

I am a fan of option #2. I also want to point out that os-client-config has 
encoded some of these names as well[1], which is pushing us in the direction of 
#2.  I 100% agree that the end user perspective also leans us towards option #2.

I am very against "hope for the best options".
i'm inclined to say #2 as well since, in my experience, the code names lead to 
assumptions about what the project covers/does based on an elevator pitch 
description someone heard one time.

i definitely agree that we shouldn't concern ourselves with non-big-tent 
projects.

my concern with #2 is, we will just end up going to thesaurus.com and searching 
for alternate words that mean the same general thing and this will be equally 
confusing. with the big tent, we essentially agreed that duplication is 
possible, so no matter how granular we make the scope, i'm not sure there's a 
way for any project to own a domain anymore. it seems this question would be 
better answered by first deciding how the TC should handle the big tent?

cheers,

--
gord
__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [grenade][keystone] Keystone multinode grenade

2016-02-04 Thread Sean Dague
On 02/04/2016 10:25 AM, Grasza, Grzegorz wrote:
> Hi Sean,
> 
> we are looking into testing online schema migrations in keystone.
> 
>  
> 
> The idea is to run grenade with multinode support, but it would be
> something different than the current implementation.
> 
>  
> 
> Currently, the two nodes which are started run with different roles, one
> is a controller the other is a compute.
> 
>  
> 
> Keystone is just one service, but we want to run a test, in which it is
> setup in HA – two services running at different versions, using the same DB.

Let me understand the scenario correctly.

There would be Keystone Liberty and Keystone Mitaka, both talking to a
Liberty DB?

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry][aodh] announcing Liusheng as new Aodh liaison

2016-02-04 Thread gordon chung
hi,

we've been searching for a lead/liaison/lieutenant for Aodh for some 
time. thankfully, we've had a volunteer.

i'd like to announce Liusheng as the new lead of Aodh, the alarming 
service under Telemetry. he will help me monitor bugs and specs and 
will be another resource for alarming-related items. he will also help 
track some of the features we hope to implement [1].

i'll let him mention some of the target goals but for now, i'd like to 
thank him for volunteering to help improve the community.

[1] https://wiki.openstack.org/wiki/Telemetry/RoadMap#Aodh_.28alarming.29

cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] re-ask about Global OpenStack Bug Smash Mitaka

2016-02-04 Thread Sergey Kraynev
Hi Heaters.

I want to draw your attention to the mail [1] about Bug Smash Day and ask a
question related to it:
Does anybody plan to visit Moscow during this event?
If yes, we can meet there and work together :)

It will be really cool to create a group of people in different locations and
discuss some important bugs/changes.

P.S. I will try to stay in regular contact with each group in the other
locations too.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-January/085196.html

-- 
Regards,
Sergey.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Rodrigo Barbieri for core reviewer team

2016-02-04 Thread Ben Swartzlander

On 02/02/2016 12:30 PM, Ben Swartzlander wrote:

Rodrigo (ganso on IRC) joined the Manila project back in the Kilo
release and has been working on share migration (an important core
feature) for the last 2 releases. Since Tokyo he has dedicated himself
to reviews and community participation. I would like to nominate him to
join the Manila core reviewer team.


We announced at the weekly meeting today that Rodrigo has joined the 
core reviewer team. Welcome Rodrigo!


-Ben



-Ben Swartzlander
Manila PTL

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-02-04 Thread Foley, Emma L
> > So, metrics are grouped by the type of resource they use, and each metric
> has to be listed.
> > Grouping isn't a problem, but creating an exhaustive list might be, since
> there are 100+ plugins [1] in collectd which can provide statistics, although
> not all of these are useful, and some require extensive configuration. The
> plugins each provide multiple metrics, and each metric can be duplicated for
> a number of instances, examples: [2].
> >
> > Collectd data is minimal: timestamp and volume, so there's little room to
> find interesting meta data.
> > It would be nice to see this support integrated, but it might be very
> > tedious to list all the metric names and group by resource type without
> > some form of wildcard support. Do the resource definitions support
> > wildcards? Collectd can provide A LOT of metrics.
> >
> > Regards,
> > Emma
> >
> > [1] https://collectd.org/wiki/index.php/Table_of_Plugins
> > [2] https://collectd.org/wiki/index.php/Naming_schema
> 
> gnocchi is strongly typed when compared to classical ceilometer db where
> you can dump anything and everything. we don't support wildcards as is but i
> believe it's something we can aim to support?

> Mehdi is currently in the process of implementing dynamic resources which
> would give more flexibility on what type of data we can store in Gnocchi.
> i believe from ceilometer pov, we can add support to allow wildcard support
> in regards to adding new metrics.
> 

It makes sense to support wildcards if someone is introducing a huge source of
meters. I can help with that, if needed.
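
To give an idea of the shape (purely a sketch: the wildcard entries are
hypothetical since wildcards are not supported today, and the exact layout of
the resource definition file may differ):

resources:
  - resource_type: collectd_host
    metrics:
      - 'collectd.cpu.*'
      - 'collectd.interface.*'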

Regards,
Emma

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][barbican]TLS container could not be found

2016-02-04 Thread Adam Harwell
Could you provide your neutron-lbaas.conf? Depending on what version you're 
using, barbican may not be the default secret backend (I believe this has been 
fixed). Alternatively, it depends on what user accounts are involved -- this 
should definitely work if you are using only the single admin account, but we 
haven't done a lot of testing around the ACLs yet to make sure they are working 
(and I believe there is still an outstanding bug in Barbican that would cause 
the ACLs to not function properly in our use-case).


--Adam



From: Jiahao Liang 
Sent: Thursday, January 28, 2016 12:18 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][LBaaS][barbican]TLS container could not be 
found

Hi community,

I was going through 
https://wiki.openstack.org/wiki/Network/LBaaS/docs/how-to-create-tls-loadbalancer
 with devstack. I was stuck at a point when I tried to create a listener within 
a loadbalancer with this command:

neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 --protocol 
TERMINATED_HTTPS --name listener1 --default-tls-container=$(barbican secret 
container list | awk '/ tls_container / {print $2}')

But the command failed with output:

TLS container 
http://192.168.100.149:9311/v1/containers/d8b25d56-4fc5-406d-8b2d-5a85de2a1e34 
could not be found

When I run:

barbican secret container list

I was able to see the corresponding container in the list and the status is 
active.
(Sorry, the format is a little bit ugly.)
Container href: http://192.168.100.149:9311/v1/containers/d8b25d56-4fc5-406d-8b2d-5a85de2a1e34
  Name:      tls_container
  Created:   2016-01-28 04:58:42+00:00
  Status:    ACTIVE
  Type:      certificate
  Secrets:   private_key=http://192.168.100.149:9311/v1/secrets/1bbe33fc-ecd2-43e5-82ce-34007b9f6bfd
             certificate=http://192.168.100.149:9311/v1/secrets/6d0211c6-8515-4e55-b1cf-587324a79abe
  Consumers: None

Container href: http://192.168.100.149:9311/v1/containers/31045466-bf7b-426f-9ba8-135c260418ee
  Name:      tls_container2
  Created:   2016-01-28 04:59:05+00:00
  Status:    ACTIVE
  Type:      certificate
  Secrets:   private_key=http://192.168.100.149:9311/v1/secrets/dba18cbc-9bfe-499e-931e-90574843ca10
             certificate=http://192.168.100.149:9311/v1/secrets/23e11441-d119-4b24-a288-9ddc963cb698
  Consumers: None


Also, if I issued a GET request from a RESTful client with a correct X-Auth-Token to
the url:
http://192.168.100.149:9311/v1/containers/d8b25d56-4fc5-406d-8b2d-5a85de2a1e3,
I was able to receive the JSON information of the TLS container.


Anybody could give some advice on how to fix this problem?

Thank you in advance!

Best,
Jiahao Liang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Ronald Bradford
While we all tend to look at this problem from within the perspective
of OpenStack, the consumer of OpenStack is somebody who wants to run
a cloud, not develop a cloud (granted, there is overlap in said
implementation).  They are also comparing clouds and features (e.g.
compute, storage), reading about the new features of cloud providers, etc.
The service catalog and the UI (i.e. names inside Horizon like Compute) need
to present generic names.

It would be easier for consumers of the APIs not to need a translation
registry to know that nova means compute.  This means the translation
stays internal to us developers. We should be able to live with that.

#2 IMO best serves what the OpenStack software is used for, and who it
is designed for.



Ronald Bradford

Web Site: http://ronaldbradford.com
LinkedIn:  http://www.linkedin.com/in/ronaldbradford
Twitter:@RonaldBradford 
Skype: RonaldBradford
GTalk: Ronald.Bradford
IRC: rbradfor


On Thu, Feb 4, 2016 at 10:45 AM, Sean Dague  wrote:

> On 02/04/2016 10:31 AM, Nick Chase wrote:
> > What about using a combination of two word names, and generic names. For
> > example, you might have
> >
> > cinder-blockstorage
> >
> > and
> >
> > foo-blockstorage
> >
> > The advantage there is that we don't need to do the thesaurus.com
> >  thing, but also, it enables to specify just
> >
> > blockstorage
> >
> > via a registry.  The advantage of THAT is that if a user wants to change
> > out the "default" blockstorage engine (for example) we could provide
> > them with a way to do that.  The non-default would have to support the
> > same API, of course, but it definitely fits with the "pluggable" nature
> > of OpenStack.
>
> This feels a bit like all the downsides of #1 (people have to know about
> codenames, and make projects know about the codenames of other projects)
> + all the downsides of #2 (we still need a naming registry).
>
> I do agree it is a 4th option, but the downsides seem higher than either
> #1 or #2.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Hayes, Graham
On 04/02/2016 15:40, Ryan Brown wrote:
> On 02/04/2016 09:32 AM, michael mccune wrote:
>> On 02/04/2016 08:33 AM, Thierry Carrez wrote:
>>> Hayes, Graham wrote:
 On 04/02/2016 13:24, Doug Hellmann wrote:
> Excerpts from Hayes, Graham's message of 2016-02-04 12:54:56 +:
>> On 04/02/2016 11:40, Sean Dague wrote:
>>> 2) Have a registry of "common" names.
>>>
>>> Upside, we can safely use common names everywhere and not fear
>>> collision down the road.
>>>
>>> Downside, yet another contention point.
>>>
>>> A registry would clearly be under TC administration, though all the
>>> heavy lifting might be handed over to the API working group. I still
>>> imagine collision around some areas might be contentious.
>>
>> ++ to a central registry. It could easily be added to the
>> projects.yaml
>> file, and is a single source of truth.
>
> Although I realized that the projects.yaml file only includes official
> projects right now, which would mean new projects wouldn't have a place
> to register terms. Maybe that's a feature?

 That is a good point - should we be registering terms for non tent
 projects? Or do projects get terms when they get accepted into the tent?
>>>
>>> I don't see why we would register terms for non-official projects. I
>>> don't see under what authority we would do that, or where it would end.
>>> So yes, that's a feature.
>>>
>>
>> i have a question about this, as new, non-official, projects start to
>> spin up there will be questions about the naming conventions they will
>> use within the project as to headers and the like. given that the
>> current guidance trend in the api-wg is towards using "service type" in
>> these cases, how would these projects proceed?
>>
>> (i'm not suggesting these projects should be registered, just curious)
>
> This isn't a perfect solution, but maybe instead of projects.yml there
> could be a `registry.yml` project that would (of course) have all the
> project.yml "in-tent" projects, but also merge in external project
> requests for namespaces?

Wherever it is stored, could this be a solid place for the api-wg to
codify the string that should be shown by services in the catalog / headers /
other places?

> Say there's an LDAP aaS project, it could ask to reserve "directory" or
> whatever and have a reasonable hope that when they're tented they'll be
> able to use it. This would help avoid having multiple projects expecting
> to use the same name, while also not meaning we force anyone to use or
> not use some name.
>
> Effectively, it's a gerrit-backed version of "dibs".
>
>>> I think solution 2 is the best. To avoid too much contention, that can
>>> easily be delegated to the API WG, and escalated to the TC for
>>> resolution only in case of conflict between projects (or between a
>>> project and the API WG).
>>>
>>
>> i'm +1 for solution 2 as well. as to the api-wg participation in the
>> name registration side of things , i don't have an objection but i am
>> very curious to hear Everett's and Chris' opinions.
>>
>> regards,
>> mike
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Fencing configuration

2016-02-04 Thread Michele Baldessari
Hi all,

currently in order to enable automatic fencing on the controllers we
need to pass something like the following yaml to an overcloud deploy:
"""
EnableFencing: true 
FencingConfig: 
  {  
"devices": [   
  {  
"agent": "fence_xvm",  
"host_mac": "52:54:00:2d:bb:38",   
"params": {
  "multicast_address": "225.0.0.12", 
  "port": "osp8-node1"   
}  
  }, 
  {  
"agent": "fence_xvm",  
"host_mac": "52:54:00:e9:f4:a8",   
"params": {
  "multicast_address": "225.0.0.12", 
  "port": "osp8-node2"   
}  
  }, 
   
"""

the problem with this approach is two-fold:
1) The stonith resources will be named something like
"stonith-xvm-5254002dbb38".
This is rather suboptimal for a sysadmin: it is really important to be able
to tell which node is actually behind a stonith device without looking it up
in a db mapping MAC addresses to node names, both for troubleshooting
purposes and for monitoring the health of the cluster.

2) While trying to build a template to configure instance HA, which also
requires the compute nodes to be fenced, the current implementation is
not really workable: it assumes that each node has the pcs command
available, and it basically checks whether the node where puppet runs
matches a MAC address in the FencingConfig table, creating the stonith
class only in that case. Compute nodes cannot invoke the pcs command,
so the stonith devices for them need to be created on a controller, and
that controller needs the fencing information (node name + fencing info).

Jiri Stransky and I discussed this a bit and thought that it would be
best to bring it up on the ML first to see if other people have opinions on
how to tackle this problem. Ideally we would have the FencingConfig info
above amended with the hostname of each node, and then we could implement
the fencing for controllers + compute nodes in one of the steps in
overcloud_controller_pacemaker.pp.
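
As a sketch, the amended entry I have in mind would look something like the
following (the "host" key is hypothetical, it is exactly the field that is
missing today, and the value is just an example):

"""
FencingConfig:
  {
    "devices": [
      {
        "agent": "fence_xvm",
        "host": "overcloud-controller-0",
        "host_mac": "52:54:00:2d:bb:38",
        "params": {
          "multicast_address": "225.0.0.12",
          "port": "osp8-node1"
        }
      },
      ...
"""

With that in place, the manifests could create one stonith resource per named
node, regardless of which node the manifest happens to run on.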

Right now, one approach I am toying with is tweaking
extraconfig/all_nodes/mac_hostname.yaml and then, on a controller,
parsing the MAC addresses + hostnames and executing the pcs stonith
commands there. It's quite hacky though, so I am looking for other input
on this.

cheers,
Michele
-- 
Michele Baldessari
C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-04 Thread Carl Baldwin
On Thu, Feb 4, 2016 at 7:23 AM, Pavel Bondar  wrote:
> I am trying to bring more attention to [1] to make final decision on
> approach to use.
> There are a few point that are not 100% clear for me at this point.
>
> 1) Do we plan to switch all current clouds to pluggable ipam
> implementation in Mitaka?

I think our plan originally was only to deprecate the non-pluggable
implementation in Mitaka and remove it in Newton.  However, this is
worth some more consideration.  The pluggable version of the reference
implementation should, in theory, be at parity with the current
non-pluggable implementation.  We've tested it before and shown
parity.  What we're missing is regular testing in the gate to ensure
it continues this way.

> yes -->
> Then data migration can be done as alembic_migration and it is what
> currently implemented in [2] PS54.
> In this case during upgrade from Liberty to Mitaka all users are
> unconditionally switched to reference ipam driver
> from built-in ipam implementation.
> If operator wants to continue using build-in ipam implementation it can
> manually turn off ipam_driver in neutron.conf
> immediately after upgrade (data is not deleted from old tables).

This has a certain appeal to it.  I think the migration will be
straightforward since the table structure doesn't really change much.
Doing this as an alembic migration would be the easiest from an
upgrade point of view because it fits seamlessly into our current
upgrade strategy.

If we go this way, we should get this in soon so that we can get the
gate and others running with this code for the remainder of the cycle.
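
Just to illustrate the shape of it (table and column names below are
placeholders rather than the actual Neutron schema; the real change is in
[2]), the migration could be little more than an upgrade() that delegates to
a plain data-copy function:

# placeholder revision identifiers
revision = 'xxxxxxxxxxxx'
down_revision = 'yyyyyyyyyyyy'

from alembic import op
import sqlalchemy as sa

# placeholder handles for the built-in and pluggable IPAM tables
legacy = sa.table('legacy_ipallocations',
                  sa.column('subnet_id'), sa.column('ip_address'))
new = sa.table('pluggable_ipamallocations',
               sa.column('subnet_id'), sa.column('ip_address'))


def _copy_allocations(bind):
    # copy every legacy allocation row into the pluggable IPAM table
    rows = bind.execute(
        sa.select([legacy.c.subnet_id, legacy.c.ip_address])).fetchall()
    if rows:
        bind.execute(new.insert(),
                     [{'subnet_id': r.subnet_id, 'ip_address': r.ip_address}
                      for r in rows])


def upgrade():
    # delegate to ordinary code so the same copy could also be shipped as a
    # standalone operator script if we ever need that
    _copy_allocations(op.get_bind())


def downgrade():
    # the built-in tables are left untouched, so reverting to the old
    # implementation only requires unsetting ipam_driver in neutron.conf
    pass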

> no -->
> Operator is free to choose whether it will switch to pluggable ipam
> implementation
> and when. And it leads to no automatic data migration.
> In this case operator is supplied with script for migration to pluggable
> ipam (and probably from pluggable ipam),
> which can be executed by operator during upgrade or at any point after
> upgrade is done.
> I was testing this approach in [2] PS53 (have unresolved issues in it
> for now).

If there is some risk in changing over then this should still be
considered.  But, the more I think about it, the more I think that we
should just make the switch seamlessly for the operator and be done
with it.  This approach puts a certain burden on the operator to
choose when to do the migration and go through the steps manually to
do it.  And, since our intention is to deprecate and remove the
non-pluggable implementation, it is inevitable that they will have to
eventually switch anyway.

This also makes testing much more difficult.  If we go this route, we
really should be testing both equally.  Does this mean that we need to
set up a whole new job to run the pluggable implementation along side
the old implementation?  This kind of feels like a nightmare to me.
What do you think?

> Or we could do both, i.e. migrate data during upgrade from built-in to
> pluggable ipam implementation
> and supply operator with scripts to migrate from/to pluggable ipam at
> any time after upgrade.
>
> According to current feedback in [1] it most likely we go with script
> approach,
> so would like to confirm if that is the case.
>
> 2) Do we plan to make ipam implementation default in Mitaka for greenfields?

I'll wait to respond to the remainder of this email until after we get
more clarity on your first question.  I'd like to hear from anyone in
the community but especially would like to hear from Salvatore, as the
author of the new implementation, and Armax, as our fearless and
beloved PTL.

Carl

> If answer for this question is the same as for previous (yes/yes,
> no/no), then it doesn't introduce additional issues.
> But if answer is different from previous then it might complicate stuff.
> For example, greyfields might be migrated manually by operator to
> pluggable ipam, or continue to work using built-in implementation after
> upgrade in Mitaka.
> But greenfields might be set to pluggable ipam implementation by default.
>
> Is it what we are going to support?
>
> 3) How the script approach should be tested?
>
> Currently if pluggable implementation is set as default, then grenade
> test fails.
> Data has to be migrated during upgrade automatically to make grenade pass.
> In [1] PS53 I was using alembic migration that internally just call
> external migrate script.
> Is it a valid approach? I expect that better way to test script
> execution during upgrade might exist.
>
> [1] https://bugs.launchpad.net/neutron/+bug/1516156
> [2] https://review.openstack.org/#/c/181023

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Location of Heka Lua plugins

2016-02-04 Thread Jeff Peeler
On Thu, Feb 4, 2016 at 9:22 AM, Michał Jastrzębski  wrote:
> TLDR; +1 to have lua in tree of kolla, not sure if we want to switch later
>
> So I'm not so sure about switching. If these git repos are in
> /openstack/ namespace, then sure, otherwise I'd be -1 to this, we
> don't want to add dependency here. Also we're looking at pretty simple
> set of files that won't change anytime soon probably. Also we might
> introduce new service that fuel does not have, and while I'm sure we
> can push new file to this repo, it's bigger issue than just coding it
> in tree.

I wouldn't think there'd be any opposition to having additional
contributed Lua scripts for services that Fuel doesn't yet
have. It's always easier to commit to one tree, but separating out
distinct components like this encourages code sharing (when done
properly). I did assume that the new repo would be in the OpenStack
namespace, but even if it weren't I'd still think a separate repo is
best. Sure, the files are small, but given their purpose, small
changes could potentially have a huge impact on the logs.

In summary, +1 to temporarily having the Lua scripts in tree until
they can be properly migrated to a new repository.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Patrick East to Cinder Core

2016-02-04 Thread yang, xing
A late +1.  Congrats Patrick!  Welcome to the core team.

Xing

> On Feb 4, 2016, at 9:31 AM, Sean McGinnis  wrote:
> 
>> On Sat, Jan 30, 2016 at 01:04:58AM +0100, Sean McGinnis wrote:
>> Patrick has been a strong contributor to Cinder over the last few releases, 
>> both with great code submissions and useful reviews. He also participates 
>> regularly on IRC helping answer questions and providing valuable feedback.
>> 
>> I would like to add Patrick to the core reviewers for Cinder. Per our 
>> governance process [1], existing core reviewers please respond with any 
>> feedback within the next five days. Unless there are no objections, I will 
>> add Patrick to the group by February 3rd.
> 
> The five day feedback period has passed and all respondents have been in
> the positive.
> 
> Welcome to Cinder core Patrick! Glad to have you on board!
> 
>> 
>> Thanks!
>> 
>> Sean (smcginnis)
>> 
>> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-04 Thread John Belamaric

> On Feb 4, 2016, at 11:09 AM, Carl Baldwin  wrote:
> 
> On Thu, Feb 4, 2016 at 7:23 AM, Pavel Bondar  wrote:
>> I am trying to bring more attention to [1] to make final decision on
>> approach to use.
>> There are a few point that are not 100% clear for me at this point.
>> 
>> 1) Do we plan to switch all current clouds to pluggable ipam
>> implementation in Mitaka?
> 
> I think our plan originally was only to deprecate the non-pluggable
> implementation in Mitaka and remove it in Newton.  However, this is
> worth some more consideration.  The pluggable version of the reference
> implementation should, in theory, be at parity with the current
> non-pluggable implementation.  We've tested it before and shown
> parity.  What we're missing is regular testing in the gate to ensure
> it continues this way.
> 

Yes, it certainly should be at parity, and gate testing to ensure it would be 
best.

>> yes -->
>> Then data migration can be done as alembic_migration and it is what
>> currently implemented in [2] PS54.
>> In this case during upgrade from Liberty to Mitaka all users are
>> unconditionally switched to reference ipam driver
>> from built-in ipam implementation.
>> If operator wants to continue using build-in ipam implementation it can
>> manually turn off ipam_driver in neutron.conf
>> immediately after upgrade (data is not deleted from old tables).
> 
> This has a certain appeal to it.  I think the migration will be
> straight-forward since the table structure doesn't really change much.
> Doing this as an alembic migration would be the easiest from an
> upgrade point of view because it fits seamlessly in to our current
> upgrade strategy.
> 
> If we go this way, we should get this in soon so that we can get the
> gate and others running with this code for the remainder of the cycle.
> 

If we do this, and the operator reverts back to the non-pluggable version,
then we will leave stale records in the new IPAM tables. At the very least,
we would need a way to clean those up and to migrate at a later time.

>> no -->
>> Operator is free to choose whether it will switch to pluggable ipam
>> implementation
>> and when. And it leads to no automatic data migration.
>> In this case operator is supplied with script for migration to pluggable
>> ipam (and probably from pluggable ipam),
>> which can be executed by operator during upgrade or at any point after
>> upgrade is done.
>> I was testing this approach in [2] PS53 (have unresolved issues in it
>> for now).
> 
> If there is some risk in changing over then this should still be
> considered.  But, the more I think about it, the more I think that we
> should just make the switch seamlessly for the operator and be done
> with it.  This approach puts a certain burden on the operator to
> choose when to do the migration and go through the steps manually to
> do it.  And, since our intention is to deprecate and remove the
> non-pluggable implementation, it is inevitable that they will have to
> eventually switch anyway.
> 
> This also makes testing much more difficult.  If we go this route, we
> really should be testing both equally.  Does this mean that we need to
> set up a whole new job to run the pluggable implementation along side
> the old implementation?  This kind of feels like a nightmare to me.
> What do you think?
> 

Originally (as I mentioned in the meeting), I was thinking that we should not 
automatically migrate. However, I see the appeal of your arguments. Seamless is 
best, of course. But if we offer going back to non-pluggable, (which I think we 
need to at this point in the Mitaka cycle), we probably need to provide a 
script as mentioned above. Seems feasible, though.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Nick Chase
What about using a combination of two-word names and generic names? For
example, you might have

cinder-blockstorage

and

foo-blockstorage

The advantage there is that we don't need to do the thesaurus.com thing,
but also, it enables specifying just

blockstorage

via a registry.  The advantage of THAT is that if a user wants to change
out the "default" blockstorage engine (for example) we could provide them
with a way to do that.  The non-default would have to support the same API,
of course, but it definitely fits with the "pluggable" nature of OpenStack.

  Nick
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-02-04 Thread Foley, Emma L

> On Thu, Feb 04 2016, Foley, Emma L wrote:
> 
> > Question is where it should live.
> > Since gnocchi is designed to be standalone, it seem like that's a potential
> home for it.
> > If not, it also fits in with the existing plugin.
> 
> If it's a collectd plugin that talks the statsd protocol, I'd say it should
> live near collectd, no?
> 

If it's in Python, collectd doesn't take it. 
I'll house it with the existing plugin. 

Emma



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] gate-install-dsvm-kuryr is failed

2016-02-04 Thread Liping Mao (limao)
Hi Kuryr all,

I noticed that gate-install-dsvm-kuryr has been failing since this afternoon.
All patches fail in this Jenkins check. From the error log it looks like
running install_docker.sh failed -- any idea?

Log:
http://logs.openstack.org/55/276255/1/check/gate-install-dsvm-kuryr/d6ecbf2/logs/devstacklog.txt.gz

2016-02-04 15:23:16.410 | 2016-02-04 15:23:16 (263 MB/s) - 'install_docker.sh' saved [12865/12865]
2016-02-04 15:23:16.410 |
2016-02-04 15:23:16.610 | apparmor is enabled in the kernel and apparmor utils were already installed
2016-02-04 15:23:16.611 | install_docker.sh: 358: install_docker.sh: Bad substitution
2016-02-04 15:23:16.613 | exit_trap: cleaning up child processes
2016-02-04 15:23:16.613 | ./stack.sh: line 481: kill: (1439) - No such process
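
One guess, purely from the format of the message: the "script: line: script:
Bad substitution" style is how dash reports a bash-only parameter expansion,
so install_docker.sh line 358 may be using a bashism while the script is run
with /bin/sh rather than bash. For example:

bash -c 'v=Docker; echo ${v,,}'   # ${v,,} is a bashism, prints "docker"
sh   -c 'v=Docker; echo ${v,,}'   # dash rejects this with "Bad substitution"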


Regards,

Liping Mao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [grenade][keystone] Keystone multinode grenade

2016-02-04 Thread Grasza, Grzegorz
Hi Sean,
we are looking into testing online schema migrations in keystone.

The idea is to run grenade with multinode support, but it would be something 
different than the current implementation.

Currently, the two nodes which are started run with different roles: one is a
controller, the other is a compute.

Keystone is just one service, but we want to run a test in which it is set up
in HA - two services running at different versions, using the same DB.

They could be joined by running HAProxy in round-robin mode on one of the 
nodes. We could then run tempest against the HAProxy endpoint.
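
Something along these lines would do, I think (a minimal fragment only;
addresses, ports and backend names below are just examples):

frontend keystone_public
    bind *:5000
    default_backend keystone_ha

backend keystone_ha
    balance roundrobin
    server keystone-node1 192.0.2.10:5000 check
    server keystone-node2 192.0.2.11:5000 check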

Can you help me with the implementation or give some pointers on where to make 
the change?

Specifically, do you think a new DEVSTACK_GATE_GRENADE or a new 
DEVSTACK_GATE_TOPOLOGY would be needed?

/ Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Ryan Brown

On 02/04/2016 09:32 AM, michael mccune wrote:

On 02/04/2016 08:33 AM, Thierry Carrez wrote:

Hayes, Graham wrote:

On 04/02/2016 13:24, Doug Hellmann wrote:

Excerpts from Hayes, Graham's message of 2016-02-04 12:54:56 +:

On 04/02/2016 11:40, Sean Dague wrote:

2) Have a registry of "common" names.

Upside, we can safely use common names everywhere and not fear
collision down the road.

Downside, yet another contention point.

A registry would clearly be under TC administration, though all the
heavy lifting might be handed over to the API working group. I still
imagine collision around some areas might be contentious.


++ to a central registry. It could easily be added to the
projects.yaml
file, and is a single source of truth.


Although I realized that the projects.yaml file only includes official
projects right now, which would mean new projects wouldn't have a place
to register terms. Maybe that's a feature?


That is a good point - should we be registering terms for non tent
projects? Or do projects get terms when they get accepted into the tent?


I don't see why we would register terms for non-official projects. I
don't see under what authority we would do that, or where it would end.
So yes, that's a feature.



i have a question about this, as new, non-official, projects start to
spin up there will be questions about the naming conventions they will
use within the project as to headers and the like. given that the
current guidance trend in the api-wg is towards using "service type" in
these cases, how would these projects proceed?

(i'm not suggesting these projects should be registered, just curious)


This isn't a perfect solution, but maybe instead of projects.yaml there 
could be a `registry.yml` file that would (of course) have all the 
projects.yaml "in-tent" projects, but also merge in external project 
requests for namespaces?


Say there's an LDAP aaS project, it could ask to reserve "directory" or 
whatever and have a reasonable hope that when they're tented they'll be 
able to use it. This would help avoid having multiple projects expecting 
to use the same name, while also not meaning we force anyone to use or 
not use some name.


Effectively, it's a gerrit-backed version of "dibs".
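
An entry would not need to be more than a name -> project mapping, e.g.
(values below are made up, just to show the shape):

# registry.yml
compute: nova
block-storage: cinder
dns: designate
directory: ldap-aas   # reserved by a not-yet-official project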


I think solution 2 is the best. To avoid too much contention, that can
easily be delegated to the API WG, and escalated to the TC for
resolution only in case of conflict between projects (or between a
project and the API WG).



i'm +1 for solution 2 as well. as to the api-wg participation in the
name registration side of things , i don't have an objection but i am
very curious to hear Everett's and Chris' opinions.

regards,
mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] StoryBoard Midcycle Meetup

2016-02-04 Thread Zara Zaimeche

Hi, all,

The StoryBoard[1] team is excited[2] to announce that we're hosting a 
meetup on the 17th of February, 2016, in Manchester, UK[3]! It's free to 
attend, and there will be cake.


Wiki Page: https://wiki.openstack.org/wiki/StoryBoard/Midcycle_Meetup

Etherpad: https://etherpad.openstack.org/p/StoryBoard_Mitaka_Midcycle


This is just after the ops meetup [4], in the same city, so it should be 
convenient for anyone going there. Alternatively, why not come out on a 
spontaneous trip to Manchester? Actually, don't answer that. Anyway, 
please add yourself to the etherpad if interested, and/or specify a 
cake! :) And thanks to Codethink for sponsoring us.  We're happy to 
answer any questions in this thread or in #storyboard on freenode.


Best Wishes,

Zara

[1] https://storyboard.openstack.org/#!/page/about
[2] We're even more excited to have finally gotten round to writing this 
email
[3] Exact location: 
https://www.google.co.uk/maps/place/Ducie+House,+37+Ducie+St,+Manchester+M1+2JW/@53.4805451,-2.2308359,17z/data=!3m1!4b1!4m2!3m1!1s0x487bb1bc54a57815:0x5a08e799278b60b

[4] https://etherpad.openstack.org/p/MAN-ops-meetup

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Sean Dague
On 02/04/2016 10:31 AM, Nick Chase wrote:
> What about using a combination of two word names, and generic names. For
> example, you might have 
> 
> cinder-blockstorage
> 
> and
> 
> foo-blockstorage
> 
> The advantage there is that we don't need to do the thesaurus.com
>  thing, but also, it enables to specify just
> 
> blockstorage
> 
> via a registry.  The advantage of THAT is that if a user wants to change
> out the "default" blockstorage engine (for example) we could provide
> them with a way to do that.  The non-default would have to support the
> same API, of course, but it definitely fits with the "pluggable" nature
> of OpenStack.

This feels a bit like all the downsides of #1 (people have to know about
codenames, and make projects know about the codenames of other projects)
+ all the downsides of #2 (we still need a naming registry).

I do agree it is a 4th option, but the downsides seem higher than either
#1 or #2.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] compatibility of puppet upstream modules

2016-02-04 Thread Ptacek, MichalX
Hi Emilien,

It seems that the keystone database is not populated because of something
that happened on previous runs (e.g. some package installations).

The following rows are visible only in the log from the first attempt:
Debug: Executing '/usr/bin/mysql -e CREATE USER 'keystone'@'127.0.0.1' 
IDENTIFIED BY PASSWORD '*936E8F7AB2E21B47F6C9A7E5D9FE14DBA2255E5A''
Debug: Executing '/usr/bin/mysql -e GRANT USAGE ON *.* TO 
'keystone'@'127.0.0.1' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 
MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'
….
….
I tried to clean the databases & uninstall the packages installed during the
deployment, but maybe I missed something, as it simply doesn't work ☺
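
For reference, the database cleanup was roughly along these lines (plus
removing the installed packages again with the package manager):

mysql -e "DROP DATABASE keystone;"
mysql -e "DROP USER 'keystone'@'127.0.0.1';"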

Is there any procedure for restoring the system to the "vanilla state" from
before the puppet module installation?
It looks to me that once a deployment has failed, it's very difficult to "unstack" it.

Thanks in advance,
Michal

From: Ptacek, MichalX
Sent: Thursday, February 04, 2016 11:14 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: RE: [openstack-dev] [puppet] compatibility of puppet upstream modules






-Original Message-
From: Emilien Macchi [mailto:emil...@redhat.com]
Sent: Thursday, February 04, 2016 10:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [puppet] compatibility of puppet upstream modules







On 02/03/2016 04:03 PM, Ptacek, MichalX wrote:

> Hi all,

>

>

>

> I have one general question,

>

> currently I am deploying liberty openstack as described in

> https://wiki.openstack.org/wiki/Puppet/Deploy

>

> Unfortunately puppet modules specified in

> puppet-openstack-integration/Puppetfile are not compatible



Did you take the file from stable/liberty branch?

https://github.com/openstack/puppet-openstack-integration/tree/stable/liberty



[Michal Ptacek]  I am deploying scenario003 with stable/liberty

>

> and some are also missing as visible from following output of “puppet

> module list”

>

>

>

> Warning: Setting templatedir is deprecated. See

> http://links.puppetlabs.com/env-settings-deprecations

>

>(at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in

> `issue_deprecation_warning')

>

> Warning: Module 'openstack-openstacklib' (v7.0.0) fails to meet some

> dependencies:

>

>   'openstack-barbican' (v0.0.1) requires 'openstack-openstacklib'

> (>=6.0.0 <7.0.0)

>

>   'openstack-zaqar' (v0.0.1) requires 'openstack-openstacklib'

> (>=6.0.0

> <7.0.0)

>

> Warning: Module 'puppetlabs-postgresql' (v4.4.2) fails to meet some

> dependencies:

>

>   'openstack-openstacklib' (v7.0.0) requires 'puppetlabs-postgresql'

> (>=3.3.0 <4.0.0)

>

> Warning: Missing dependency 'deric-storm':

>

>   'openstack-monasca' (v1.0.0) requires 'deric-storm' (>=0.0.1 <1.0.0)

>

> Warning: Missing dependency 'deric-zookeeper':

>

>   'openstack-monasca' (v1.0.0) requires 'deric-zookeeper' (>=0.0.1

> <1.0.0)

>

> Warning: Missing dependency 'dprince-qpid':

>

>   'openstack-cinder' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)

>

>   'openstack-manila' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)

>

>   'openstack-nova' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)

>

> Warning: Missing dependency 'jdowning-influxdb':

>

>   'openstack-monasca' (v1.0.0) requires 'jdowning-influxdb' (>=0.3.0

> <1.0.0)

>

> Warning: Missing dependency 'opentable-kafka':

>

>   'openstack-monasca' (v1.0.0) requires 'opentable-kafka' (>=1.0.0

> <2.0.0)

>

> Warning: Missing dependency 'puppetlabs-stdlib':

>

>   'antonlindstrom-powerdns' (v0.0.5) requires 'puppetlabs-stdlib' (>=

> 0.0.0)

>

> Warning: Missing dependency 'puppetlabs-corosync':

>

>   'openstack-openstack_extras' (v7.0.0) requires 'puppetlabs-corosync'

> (>=0.1.0 <1.0.0)

>

> /etc/puppet/modules

>

> ├──antonlindstrom-powerdns (v0.0.5)

>

> ├──duritong-sysctl (v0.0.11)

>

> ├──nanliu-staging (v1.0.4)

>

> ├──openstack-barbican (v0.0.1)

>

> ├──openstack-ceilometer (v7.0.0)

>

> ├──openstack-cinder (v7.0.0)

>

> ├──openstack-designate (v7.0.0)

>

> ├──openstack-glance (v7.0.0)

>

> ├──openstack-gnocchi (v7.0.0)

>

> ├──openstack-heat (v7.0.0)

>

> ├──openstack-horizon (v7.0.0)

>

> ├──openstack-ironic (v7.0.0)

>

> ├──openstack-keystone (v7.0.0)

>

> ├──openstack-manila (v7.0.0)

>

> ├──openstack-mistral (v0.0.1)

>

> ├──openstack-monasca (v1.0.0)

>

> ├──openstack-murano (v7.0.0)

>

> ├──openstack-neutron (v7.0.0)

>

> ├──openstack-nova (v7.0.0)

>

> ├──openstack-openstack_extras (v7.0.0)

>

> ├──openstack-openstacklib (v7.0.0)  invalid

>

> ├──openstack-sahara (v7.0.0)

>

> ├──openstack-swift (v7.0.0)

>

> ├──openstack-tempest (v7.0.0)

>

> ├──openstack-trove (v7.0.0)

>

> ├──openstack-tuskar (v7.0.0)

>

> ├──openstack-vswitch (v3.0.0)

>

> ├──openstack-zaqar (v0.0.1)

>

> ├──openstack_integration (???)

>

> 

Re: [openstack-dev] [OpenStack-Ansible] Mid Cycle Sprint

2016-02-04 Thread Jesse Pretorius
Hi everyone,

As discussed in the community meeting today [1] we will be able to include
remote participants in the Mid Cycle via Video Conference. In order to
facilitate this I need to ensure that we have an attendance list for me to
send the Video Conference invitations to, so please get a Remote
Participation ticket in Eventbrite [2] if you intend to join us through
this facility.

The agenda is still open for discussion at this stage, so proposals on the
Etherpad [3] are welcome.

As always, please feel free to ping me with any questions/comments.

Thanks,

Jesse
IRC: odyssey4me

[1]
http://eavesdrop.openstack.org/meetings/openstack_ansible/2016/openstack_ansible.2016-02-04-16.04.html
[2]
https://www.eventbrite.com/e/openstack-ansible-mid-cycle-tickets-20966167371
[3] https://etherpad.openstack.org/p/openstack-ansible-mitaka-midcycle
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Ansible] Mid Cycle Sprint

2016-02-04 Thread Major Hayden
On 02/04/2016 12:41 PM, Jesse Pretorius wrote:
> As discussed in the community meeting today [1] we will be able to include 
> remote participants in the Mid Cycle via Video Conference. In order to 
> facilitate this I need to ensure that we have an attendance list for me to 
> send the Video Conference invitations to, so please get a Remote 
> Participation ticket in Eventbrite [2] if you intend to join us through this 
> facility.

Thanks for getting the remote participation put together for the event! :)

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] openstack/requirements repo has a stable/icehouse branch...

2016-02-04 Thread Jeremy Stanley
On 2016-02-04 18:17:07 + (+), Jeremy Stanley wrote:
> I was getting around to taking care of this just now, and it looks
> like someone has deleted the stable/icehouse branch without tagging
> it icehouse-eol first?
[...]

Nevermind--it's still there. Perhaps I need glasses (or afternoon
coffee).

Taking care of it now.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][doc][fuel] Removal of logging use_syslog_rfc_format configuration option

2016-02-04 Thread Ronald Bradford
In Mitaka we are planning on removing the deprecated logging configuration
option use_syslog_rfc_format [1]

Matches in openstack-manuals [2] and the fuel-library puppet modules [3] should
be removed to avoid outdated documentation or errors in operation,
respectively.

This option can also be found as a comment in sample configuration files of
multiple projects which should be removed automatically if these files are
generated via Sphinx.
In other projects using legacy Oslo Incubator code, this option is still in
place.  These projects should consider migrating to the oslo.log library.

Regards

Ronald

[1] https://review.openstack.org/#/c/263785/
[2]
http://codesearch.openstack.org/?q=use_syslog_rfc_format=nope==openstack-manuals
[3]
http://codesearch.openstack.org/?q=use_syslog_rfc_format=nope==fuel-library
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] compatibility of puppet upstream modules

2016-02-04 Thread Matt Fischer
If you can't isolate the exact thing you need to get cleaned up here it can
be difficult to unwind. You'll either need to read the code to see what's
triggering the db setup (which is probably the package installs) or start
on a clean box. I'd recommend the latter.

On Thu, Feb 4, 2016 at 10:35 AM, Ptacek, MichalX 
wrote:

> Hi Emilien,
>
>
>
> It seems that keystone database is not populated, because of something,
> which happened on previous runs (e.g. some packages installation),
>
>
>
> Following rows are visible just in log from first attempt
>
> Debug: Executing '/usr/bin/mysql -e CREATE USER 'keystone'@'127.0.0.1'
> IDENTIFIED BY PASSWORD '*936E8F7AB2E21B47F6C9A7E5D9FE14DBA2255E5A''
>
> Debug: Executing '/usr/bin/mysql -e GRANT USAGE ON *.* TO 
> 'keystone'@'127.0.0.1'
> WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR
> 0 MAX_UPDATES_PER_HOUR 0'
>
> ….
>
> ….
>
> I tried to clean databases & uninstall packages installed during
> deployment, but maybe I miss something as it simply doesn’t  work J
>
>
>
> Is there any procedure, how can I restore system to “vanilla state” before
> puppet modules installation ?
>
> It looks to me that when deployment failed, it’s very difficult to
> “unstack” it
>
>
>
> Thanks in advance,
>
> Michal
>
>
>
> *From:* Ptacek, MichalX
> *Sent:* Thursday, February 04, 2016 11:14 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* RE: [openstack-dev] [puppet] compatibility of puppet upstream
> modules
>
>
>
>
>
>
>
> -Original Message-
> From: Emilien Macchi [mailto:emil...@redhat.com ]
> Sent: Thursday, February 04, 2016 10:06 AM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [puppet] compatibility of puppet upstream
> modules
>
>
>
>
>
>
>
> On 02/03/2016 04:03 PM, Ptacek, MichalX wrote:
>
> > Hi all,
>
> >
>
> >
>
> >
>
> > I have one general question,
>
> >
>
> > currently I am deploying liberty openstack as described in
>
> > https://wiki.openstack.org/wiki/Puppet/Deploy
>
> >
>
> > Unfortunately puppet modules specified in
>
> > puppet-openstack-integration/Puppetfile are not compatible
>
>
>
> Did you take the file from stable/liberty branch?
>
>
> https://github.com/openstack/puppet-openstack-integration/tree/stable/liberty
>
>
>
> *[Michal Ptacek]*  I am deploying scenario003 with stable/liberty
>
> >
>
> > and some are also missing as visible from following output of “puppet
>
> > module list”
>
> >
>
> >
>
> >
>
> > Warning: Setting templatedir is deprecated. See
>
> > http://links.puppetlabs.com/env-settings-deprecations
>
> >
>
> >(at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in
>
> > `issue_deprecation_warning')
>
> >
>
> > Warning: Module 'openstack-openstacklib' (v7.0.0) fails to meet some
>
> > dependencies:
>
> >
>
> >   'openstack-barbican' (v0.0.1) requires 'openstack-openstacklib'
>
> > (>=6.0.0 <7.0.0)
>
> >
>
> >   'openstack-zaqar' (v0.0.1) requires 'openstack-openstacklib'
>
> > (>=6.0.0
>
> > <7.0.0)
>
> >
>
> > Warning: Module 'puppetlabs-postgresql' (v4.4.2) fails to meet some
>
> > dependencies:
>
> >
>
> >   'openstack-openstacklib' (v7.0.0) requires 'puppetlabs-postgresql'
>
> > (>=3.3.0 <4.0.0)
>
> >
>
> > Warning: Missing dependency 'deric-storm':
>
> >
>
> >   'openstack-monasca' (v1.0.0) requires 'deric-storm' (>=0.0.1 <1.0.0)
>
> >
>
> > Warning: Missing dependency 'deric-zookeeper':
>
> >
>
> >   'openstack-monasca' (v1.0.0) requires 'deric-zookeeper' (>=0.0.1
>
> > <1.0.0)
>
> >
>
> > Warning: Missing dependency 'dprince-qpid':
>
> >
>
> >   'openstack-cinder' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
>
> >
>
> >   'openstack-manila' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
>
> >
>
> >   'openstack-nova' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
>
> >
>
> > Warning: Missing dependency 'jdowning-influxdb':
>
> >
>
> >   'openstack-monasca' (v1.0.0) requires 'jdowning-influxdb' (>=0.3.0
>
> > <1.0.0)
>
> >
>
> > Warning: Missing dependency 'opentable-kafka':
>
> >
>
> >   'openstack-monasca' (v1.0.0) requires 'opentable-kafka' (>=1.0.0
>
> > <2.0.0)
>
> >
>
> > Warning: Missing dependency 'puppetlabs-stdlib':
>
> >
>
> >   'antonlindstrom-powerdns' (v0.0.5) requires 'puppetlabs-stdlib' (>=
>
> > 0.0.0)
>
> >
>
> > Warning: Missing dependency 'puppetlabs-corosync':
>
> >
>
> >   'openstack-openstack_extras' (v7.0.0) requires 'puppetlabs-corosync'
>
> > (>=0.1.0 <1.0.0)
>
> >
>
> > /etc/puppet/modules
>
> >
>
> > ├──antonlindstrom-powerdns (v0.0.5)
>
> >
>
> > ├──duritong-sysctl (v0.0.11)
>
> >
>
> > ├──nanliu-staging (v1.0.4)
>
> >
>
> > ├──openstack-barbican (v0.0.1)
>
> >
>
> > ├──openstack-ceilometer (v7.0.0)
>
> >
>
> > ├──openstack-cinder (v7.0.0)
>
> >
>
> > ├──openstack-designate (v7.0.0)
>
> >
>
> > ├──openstack-glance (v7.0.0)
>
> >
>
> > 

Re: [openstack-dev] [infra] openstack/requirements repo has a stable/icehouse branch...

2016-02-04 Thread Jeremy Stanley
On 2015-12-16 15:35:23 -0600 (-0600), Matt Riedemann wrote:
> That should be deleted, right? Or are there random projects that
> still have stable/icehouse branches in projects.txt and we care
> about them?

I was getting around to taking care of this just now, and it looks
like someone has deleted the stable/icehouse branch without tagging
it icehouse-eol first? Unfortunately I don't know who, nor what the
last commit on that branch was to be able to add the tag now
(assuming it's not been garbage-collected away in the undefined
period of time since that was done).

If anyone has any details about this, please give me a heads up so
we can try to correct the situation.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-02-04 Thread Armando M.
On 4 February 2016 at 04:05, Gal Sagie  wrote:

> Hi Assaf,
>
> I think that if we define a certain criteria we need to make sure that it
> applies to everyone equally.
> and it is well understood.
>

I must admit I am still waking up and going through the entire logs etc.
However, I cannot help but point out that one criterion that Russell and
other TC people are behind (me included) is significant 'team overlap'
(and I would add, sustained over a prolonged period of time). This doesn't
mean just dropping in the occasional bug fix or enhancement to enable the
subproject to work with Neutron, or addressing the odd regression that
sneaks in from time to time; it means driving Neutron forward in a way that
benefits the project as a whole.

If you look at yourself, can you candidly say that you are making an impact
on the core of Neutron? You seem to have dropped off the radar in the
Mitaka timeframe, and you didn't make a lasting impact in the Liberty
timeframe either. I applaud your Kuryr initiative and your specs proposals,
but neither is enough to warrant Dragonflow's inclusion.

If the team overlap changes, then great, we'll reassess.

That said, I'll continue my discussion on the patch...


> I have contributed, and still do, to both OVN and Dragonflow, and I hope
> to continue doing so in the future; I want to see both of these solutions
> become great production-grade open source alternatives.
>
> I have less experience in open source and in this community than most of
> you, but from what I have seen, users do take these things into
> consideration. It is hard for a new user, and even a not-so-new one, to
> understand the possibilities correctly, especially if we can't even
> define them ourselves.
>
> Instead of spending time on technology and on solving problems for our
> users, we are concentrating on this conversation. We haven't even talked
> about production maturity, feature richness and stability, as you say,
> and by making this move we are signaling something else to our users
> without actually discussing all of the former ourselves.
>
> I will be OK with whatever the Neutron team decides on this, as they can
> define the criteria as they please. I just shared my opinion on this
> process, and my disappointment with it, as someone who values open source
> a lot.
>
> Gal.
>
>
> On Thu, Feb 4, 2016 at 11:31 AM, Assaf Muller  wrote:
>
>> On Thu, Feb 4, 2016 at 10:20 AM, Assaf Muller  wrote:
>> > On Thu, Feb 4, 2016 at 8:33 AM, Gal Sagie  wrote:
>> >> As I have commented on the patch, I will also send this to the
>> >> mailing list:
>> >>
>> >> I really don't see why Dragonflow is not part of this list, given the
>> >> criteria you listed.
>> >>
>> >> Dragonflow is fully developed under Neutron/OpenStack, with no other
>> >> repositories. It is fully open source and already has a community of
>> >> people contributing, and interest from various different companies and
>> >> OpenStack deployers. (I can prepare the list of active contributions
>> >> and of interested parties.) It also treats OpenStack Neutron APIs and
>> >> use cases as first-class citizens and is working on being an integral
>> >> part of OpenStack.
>> >>
>> >> I agree that OVN needs to be part of the list, but you brought up this
>> >> criterion in regards to ODL, so: OVN, like ODL, is not only about
>> >> Neutron and OpenStack, and is even run under a whole different
>> >> governance model with its own requirements.
>> >>
>> >> I think you also forgot to mention some other projects that are fully
>> >> open source with a vibrant and diverse community; I will let them
>> >> comment here by themselves.
>> >>
>> >> Frankly, this approach disappoints me. I have honestly worked hard to
>> >> make Dragonflow fully visible, to add and support open discussion, and
>> >> to follow the correct guidelines for working in a project. I think
>> >> that the Dragonflow community already has a few members from various
>> >> companies, and this is only going to grow in the near future (in
>> >> addition to deployers that are considering it as a solution). We also
>> >> welcome anyone who wants to join and be part of the process to step
>> >> in; we are very welcoming.
>> >>
>> >> I also think that the correct way to do this is to actually add all
>> >> lieutenants of the projects you are now removing from the Neutron big
>> >> stadium as reviewers and let them comment.
>> >>
>> >> Gal.
>> >
>> > I understand you see 'Dragonflow being part of the Neutron stadium'
>> > and 'Dragonflow having high visibility' as tied together. I'm curious,
>> > from a practical perspective, how does being a part of the stadium
>> > give Dragonflow visibility? If it were not a part of the stadium and
>> > you had your own PTL etc., what specifically would change so that
>> > Dragonflow would be less visible? Currently I don't understand why
>> > being a part of the stadium is good or bad for a networking 

Re: [openstack-dev] [infra] openstack/requirements repo has a stable/icehouse branch...

2016-02-04 Thread Dean Troyer
On Thu, Feb 4, 2016 at 12:21 PM, Jeremy Stanley  wrote:

> Nevermind--it's still there. Perhaps I need glasses (or afternoon
> coffee).
>

Nah, the fastest way to find something like that is to send a note to a
public mail list.  :)

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Location of Heka Lua plugins

2016-02-04 Thread Steven Dake (stdake)
I agree with Michał.  If the Lua plugins are going to live in a separate
repository, it needs to be in the OpenStack git namespace.  I am +2 to the
idea, assuming that after extraction the Lua plugins land in the OpenStack
git namespace.

Regards
-steve


On 2/4/16, 7:22 AM, "Michał Jastrzębski"  wrote:

>TL;DR: +1 to having the Lua plugins in the Kolla tree; not sure if we want
>to switch later.
>
>So I'm not so sure about switching. If these git repos are in the
>/openstack/ namespace, then sure; otherwise I'd be -1 on this, as we
>don't want to add a dependency here. Also, we're looking at a pretty
>simple set of files that probably won't change anytime soon. We might
>also introduce a new service that Fuel does not have, and while I'm sure
>we can push a new file to that repo, it's a bigger issue than just coding
>it in tree.
>
>Cheers,
>Michal
>
>On 4 February 2016 at 07:45, Michal Rostecki 
>wrote:
>> On 02/04/2016 10:55 AM, Eric LEMOINE wrote:
>>>
>>> Hi
>>>
>>> As discussed yesterday in our IRC meeting, we'll need specific Lua
>>> plugins for parsing OpenStack, MariaDB and RabbitMQ logs.  We already
>>> have these Lua plugins in one of our Fuel plugins [*].  So our plan is
>>> to move these Lua plugins into their own Git repo, and distribute them
>>> as deb and rpm packages in the future.  This will make it easy to share
>>> the Lua plugins between projects, and having a separate Git repo will
>>> facilitate testing, documentation, etc.
>>>
>>> But we may not have time to do that by the 4th of March (for Mitaka 3),
>>> so my suggestion is to copy the Lua plugins that we need into Kolla.
>>> This would be a temporary thing.  When our Lua plugins are
>>> available/distributed as deb and rpm packages, we will remove the Lua
>>> plugins from the Kolla repository and change the Heka Dockerfile to
>>> install the Lua plugins from the package.
>>>
>>> Please tell me if you agree with the approach.  Thank you!
>>>
>>> [*]
>>> 
>>>>>oyment_scripts/puppet/modules/lma_collector/files/plugins/decoders>
>>>
>>
>> +1
>> But of course, when the git repos become available (even without
>> packaging), I'd switch to them immediately.
>>
>> Cheers,
>> Michal
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][midcycle] last notification regarding Kolla Midcycle February 9th and February 10th

2016-02-04 Thread Steven Dake (stdake)
Hey folks,

The Kolla midcycle is set for next week: Tuesday, February 9th, 9:00 am - 
5:00 pm, and Wednesday, February 10th, 9:00 am - 3:30 pm. Breakfast, lunch, 
coffee, soda, and water are provided both days, and dinner is provided 
Tuesday evening.

The sprint wiki page is available here:
https://wiki.openstack.org/wiki/Sprints/KollaMitakaSprint

Please RSVP via Eventbrite if you plan to attend.  If you don't RSVP, that's 
a-ok; we can accommodate that as well :)

Thanks!
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

