Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Carl Baldwin
On Tue, Sep 15, 2015 at 3:09 PM, Fox, Kevin M  wrote:
> DNS is preferable, since humans don't remember IPs very well. IPv6 is much 
> harder to remember than v4, too.
>
> DNS has its own issues, though; mostly that it's usually not very quick to 
> get a DNS entry updated. At our site (and I'm sure others), I'm afraid to say 
> it can take as long as 24 hours for updates to happen. Even if that were 
> fixed, caching can bite you too.

We also have work going on now to automate the addition and update of
DNS entries as VMs come and go [1].  Please have a look and provide
feedback.

[1] https://review.openstack.org/#/q/topic:bp/external-dns-resolution,n,z

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Remove Tuskar from tripleo-common and python-tripleoclient

2015-09-15 Thread Ben Nemec
On 09/15/2015 11:33 AM, Dougal Matthews wrote:
> Hi all,
> 
> This is partly a heads up for everyone, but also seeking feedback on the
> direction.
> 
> We are starting to move to a more general Heat workflow without the need for
> Tuskar. The CLI is already in a position to do this as we can successfully
> deploy without Tuskar.
> 
> Moving forward it will be much easier for us to progress if we don't need to
> take Tuskar into account in tripleo-common. This will be particularly useful
> when working on the overcloud deployment library and API spec [1].
> 
> Tuskar UI doesn't currently use tripleo-common (or tripleoclient) and
> thus it
> is safe to make this change from the UI's point of view.
> 
> I have started the process of doing this removal and posted three WIP
> reviews
> [2][3][4] to assess how much change was needed; I plan to tidy them up over
> the next day or two. There is one each for tripleo-common, python-tripleoclient
> and tripleo-docs. The documentation one only removes references to Tuskar on
> the CLI and doesn't remove Tuskar entirely - so Tuskar UI is still covered
> until it has a suitable replacement.
> 
> I don't anticipate any impact for CI as I understand that all the current CI
> has migrated from deploying with Tuskar to deploying the templates directly
> (Using `openstack overcloud deploy --templates` rather than --plan). I
> believe it is safe to remove from python-tripleoclient as that repo is so
> new. I am however unsure about the TripleO deprecation policy for tripleo-
> common?

I think I'd file this under "oops, didn't work" and go ahead with the
compatibility break.  This is all going to have to get ripped out to
make room for the new Tuskar, so I don't think there's any point in
jumping through hoops to notify all the people not using Tuskar that
it's going away. :-)

> 
> Thanks,
> Dougal
> 
> 
> [1]: https://review.openstack.org/219754
> [2]: https://review.openstack.org/223527
> [3]: https://review.openstack.org/223535
> [4]: https://review.openstack.org/223605
> 
> 




Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Fox, Kevin M
I am not a neutron developer, but an operator and a writer of cloud apps.

Yes, it is sort of a philosophical issue, and I have stated my side of why I 
think the extra complexity is worth it. Feel free to disagree.

But either way, I don't think we can ignore the complexity. There are three 
different ways to resolve it:

* Simple use case, no NaaS at all. The "simplest" solution, but the app 
developers and users who actually need NaaS suffer; they have to add it on 
top themselves.
* Always NaaS. Ops (and perhaps users) have to deal with the extra complexity 
even if they feel they don't need it, but it's simpler in that you can always 
rely on it being there.
* No NaaS and NaaS both supported. Ops get it easy: they pick which one they 
want. Users suffer a little if they work on multiple clouds that differ. App 
developers suffer a lot, since they have to either write two sets of software 
or pick the lowest common denominator.

It's an optimization problem: who do you shift the difficulty to?

My personal opinion, again, is that I'd rather suffer a little more as an op 
and always deploy NaaS than have to deal with the app-developer pain of not 
being able to rely on it. The users and ops benefit the most if a strong app 
ecosystem can be developed on top.

Again, my personal opinion. Feel free to differ.

Thanks,
Kevin



From: Mathieu Gagné [mga...@internap.com]
Sent: Tuesday, September 15, 2015 3:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' 
network model

On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
> We run several clouds where there are multiple external networks. The "just 
> run it on THE public network" approach doesn't work. :/
>
> I also strongly recommend that users put VMs on a private network and use 
> floating IPs/load balancers, for many reasons. For example, if you don't, the 
> IP that gets assigned to the VM helps it become a pet: you can't replace the 
> VM and get the same IP. Floating IPs and load balancers can help prevent 
> pets. It also prevents security issues with DNS and IPs. Also, for every 
> floating IP/LB I have, I usually have 3x or more that number of instances on 
> the private network. Sure, it's easy to put everything on the public network, 
> but you get much better security if you only put what you must on the public 
> network. Consider the internet: would you want to expose every device in your 
> house directly on the internet? No, you put them on a private network and 
> poke holes just for the stuff that needs them. We should be encouraging good 
> security practices. If we encourage bad ones, it will bite us later when 
> OpenStack gets a reputation for being associated with compromises.
>
>

Sorry, but I feel this kind of reply explains why people are still using
nova-network over Neutron. People want simplicity, and they are denied it
at every corner because (I feel) Neutron thinks it knows better.

The original statement by Monty Taylor is clear to me:

I wish to boot an instance that is on a public network and reachable
without madness.

As of today, you can't unless you implement a deployer/provider specific
solution (to scale said network). Just take a look at what actual public
cloud providers are doing:

- Rackspace has a "magic" public network
- GoDaddy has custom code in their nova-scheduler (AFAIK)
- iWeb (which I work for) has custom code in front of nova-api.

We are all writing our own custom code to implement what (we feel)
Neutron should be providing right off the bat.

By reading the openstack-dev [1], openstack-operators [2] lists, Neutron
specs [3] and the Large Deployment Team meeting notes [4], you will see
that what is suggested here (a scalable public shared network) is an
objective we wish but are struggling hard to achieve.

People keep asking for simplicity, and Neutron seems unable to offer it
due to philosophical conflicts between Neutron developers and actual
public users/operators. We can't force our users to adhere to ONE
networking philosophy (use NAT, floating IPs, firewalls, routers, etc.).
They just don't buy it. Period. (See Monty's list of public providers
attaching VMs to the public network.)

If we can accept and agree that not everyone wishes to adhere to the
"full stack of networking good practices" (TBH, I don't know what to call
this thing), it will be a good start. Otherwise I feel we won't be able
to achieve anything.

What Monty is explaining and suggesting is something we (my team) have
been struggling with for *years* and just didn't have bandwidth (we are
operators, not developers) or public charisma to change.

I'm glad Monty brought up this subject so we can officially address it.


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html
[2] http://lists.openstack.org/pipermail/openstack-operators/2015-August/007857.html
[3]

Re: [openstack-dev] [Policy][Group-based-policy]

2015-09-15 Thread Sagar Pradhan
Thanks, Sumit.
Actually I also did the same thing, using the --debug option with the CLI to
find the REST requests and responses.
Thanks for the help. Will get back to you if required.

Regards,
Sagar
On Sep 15, 2015 11:59 PM, "Sumit Naiksatam" 
wrote:

> Hi Sagar,
>
> GBP has a single REST API interface. The CLI, Horizon and Heat are
> merely clients of the same REST API.
>
> There was a similar question on this which I had responded to in a
> different mailer:
> http://lists.openstack.org/pipermail/openstack/2015-September/013952.html
>
> and I believe you are cc'ed on that thread. I have provided more
> information on how you can run the CLI in the verbose mode to explore
> the REST request and responses. Hope that will be helpful, and we are
> happy to guide you through this exercise (catch us on #openstack-gbp
> for real time help).
>
> Thanks,
> ~Sumit.
>
> On Tue, Sep 15, 2015 at 3:45 AM, Sagar Pradhan 
> wrote:
> >
> >  Hello ,
> >
> > We were exploring Group-Based Policy for a project. We could find CLI
> > and REST API documentation for GBP.
> > Does GBP have a separate REST API which can be called directly?
> > From the documentation it seems that we can only use the CLI, Horizon and
> > Heat. Please point us to CLI or REST API documentation for GBP.
> >
> >
> > Regards,
> > Sagar
> >
> >
> >


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Mathieu Gagné
Hi Kevin,

On 2015-09-15 8:33 PM, Fox, Kevin M wrote:
> I am not a neutron developer, but an operator and a writer of cloud apps.

So far, I'm only an operator and heavy cloud apps user (if you can call
Vagrant an app). =)


> Yes, it is sort of a philosophical issue, and I have stated my side
> of why I think the extra complexity is worth it. Feel free to
> disagree.

I don't disagree with the general idea of NaaS or SDN. We are looking to
offer this stuff in the future so customers wishing to have more control
over their networks can have it.

I would, however, like other solutions (which don't require mandatory
NATing, floating IPs, and routers) to be accepted and fully supported as
first-class citizens.


> But either way, I don't think we can ignore the complexity. There are
> three different ways to resolve it:
> 
> * No NaaS and NaaS both supported. Ops get it easy: they pick which one
> they want. Users suffer a little if they work on multiple clouds that
> differ. App developers suffer a lot, since they have to either write two
> sets of software or pick the lowest common denominator.
> 
> It's an optimization problem: who do you shift the difficulty to?
> 
> My personal opinion, again, is that I'd rather suffer a little more as
> an op and always deploy NaaS than have to deal with the app-developer
> pain of not being able to rely on it. The users and ops benefit the most
> if a strong app ecosystem can be developed on top.


So far, I'm aiming for "no NaaS and NaaS both supported":

- No NaaS: a public, shared, routable provider network.
- NaaS: all the goodness of SDN and private networks.

While NaaS is a very nice feature for cloud app writers, we found that
our type of users actually don't ask for it (yet) and are looking for
simplicity instead.

BTW, let me know if I got my understanding of "No naas" (very?) wrong.


As Monty Taylor said [3], we should be able to "nova boot" or "nova boot
--network public" just fine.

So let's assume I don't have NaaS yet. I only have one no-NaaS network
named "public" available to all tenants.

With this public shared network from no-NaaS, you should be able to boot
just fine. Your instance ends up on a public shared network with a
public IP address without NATing/Floating IPs and such. (Note that we
implemented anti-spoofing on those networks)


Now you wish to use NaaS. So you create a network named "private" or
whatever you feel like naming it.

You should be fine too with "nova boot --network private", provided the
network name doesn't conflict with the public shared network. Otherwise
you can provide the network UUID just like before. I agree that you lose
the ability to "nova boot" without "--network". See below.


The challenge I see here is with both "no-NaaS" and "NaaS": now you
could end up with 2 or more networks to choose from, and "nova boot"
alone will get confused.


My humble suggestions are:

- Create a new client-side config option to tell which network name to
choose (OS_NETWORK_NAME?) so you don't have to type it each time.
- Create a tenant-specific server-side config option (stored somewhere in
Nova?) to tell which network name to choose by default.

This will restore the coolness of "nova boot" without specifying
"--network".

If your application requires a lot of networks (and complexity), I'm sure
bare "nova boot" is nonsense to you anyway and you will provide the
actual list of networks to boot on.


Regarding users' need and wish to keep their public IPs, you can still
use floating IPs in both cases. It's a matter of educating users that
public IPs on the no-NaaS network aren't preserved on destruction. I'm
planning to use routes instead of NATing for the public shared network.


So far, what I'm missing to create a truly scalable public shared
network is what is described here [2] and here [3], as you just can't
scale your L2 network infinitely. (The same goes for private NaaS
networks, but that's another story.)


Note that I'm fully aware that this creates a lot of challenges on the
Nova side related to scheduling, resizing, live migrations, evacuation,
etc. But I'm confident those challenges aren't impossible to overcome.


Kevin, let me know if I missed a known use case you might be actively
using. I never fully used the NaaS part of Neutron so I can't tell for
sure. Or maybe I'm just stating obvious stuff and completely missing the
point of this thread. :D


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-September/074618.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html
[3] https://review.openstack.org/#/c/196812/


-- 
Mathieu



Re: [openstack-dev] [oslo] Help with stable/juno branches / releases

2015-09-15 Thread Tony Breeds
On Tue, Sep 15, 2015 at 12:52:40PM -0400, Doug Hellmann wrote:

> I've created the branches for oslo.utils and oslotest, as requested.
> There are patches up for each to update the .gitreview file, which will
> make it easier to land the patches to update whatever requirements
> settings need to be adjusted.
> 
> Since these are managed libraries, you can request releases by
> submitting patches to the openstack/releases repository (see the
> README and ping me in #openstack-relmgr-office if you need a hand
> the first time, I'll be happy to walk you through it).

Thanks Doug!

Yours Tony.




[openstack-dev] [horizon] PTL Candidacy

2015-09-15 Thread David Lyle
I would like to announce my candidacy for Horizon PTL for Mitaka.

I've been contributing to Horizon since the Grizzly cycle and I've had the
honor of serving as PTL for the past four cycles.

Over the past couple of releases, our main goal has been to position Horizon
for the future while maintaining a stable, extensible project for current
installations and providing a smooth path forward for those installations,
which is proving a delicate balancing act. In Kilo, we added a great deal
of toolkit for AngularJS-based content and took a first pass at some
AngularJS-driven content in Horizon. Much of the Liberty cycle was spent
applying the lessons we learned from the Kilo work and correcting
architectural issues. While the amount of AngularJS-based content is not
growing quickly in Horizon, we have created a framework that plugins are
building on.

We've had several successes in the Liberty cycle.
We have a more complete plugin framework to allow for an increasing number
of projects in the big tent to create Horizon content. The plugin framework
works for both Django based and AngularJS based plugins.

Theming support has continued to improve and is now far more powerful.

We made many improvements to the AngularJS tooling, including: sensible
localization support for AngularJS code; a more coherent foundation for
JavaScript code; better testing support; and an established JS coding
style.

Areas of focus for the Mitaka cycle:
Stability. Continue to balance progress and stability.

Finding a better way to allow forward progress on AngularJS content inside
of Horizon. I've been advocating the use of feature branches for some time
and will look to push work there to help establish the patterns for
Angular in Horizon.

Continue progress in moving separable content out of the Horizon source
tree. This will help service teams make faster progress, while reducing
the overall scope of the Horizon project.

Focus work on areas of high benefit. There are several reasons we chose
to adopt AngularJS; most were around scaling, usability and access to
data. Let's focus on the areas with the greatest upside first.

Provide better guidance for plugins in the form of testing and style
guidelines.

I'm still driven to continue the challenging work the Horizon community has
undertaken to improve and look forward. If you'll have me, I'd like to continue
enabling the talented folks doing the heavy lifting while balancing the needs
of existing users. I believe if we continue to work through some of these
transitional pains, we'll make significant progress in Mitaka.

Thanks for your consideration,
David Lyle



Re: [openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?

2015-09-15 Thread Philipp Marek
> > I'm currently trying to work around an issue where activating LVM
> > snapshots created through cinder takes potentially a long time. 
[[ thick LVM snapshot performance problem ]]
> > Given the above, is there any reason why we couldn't make thin
> > provisioning the default?
> 
> My intention is to move toward thin-provisioned LVM as the default -- it
> is definitely better suited to our use of LVM.
...
> The other issue preventing using thin by default is that we default the
> max oversubscription ratio to 20.  IMO that isn't a safe thing to do for
> the reference implementation, since it means that people who deploy
> Cinder LVM on smaller storage configurations can easily fill up their
> volume group and have things grind to a halt.  I think we want something
> closer to the semantics of thick LVM for the default case.
The DRBDmanage backend has to deal with the same problem.

We decided to provide 3 different storage strategies:

 * Use Thick LVs - with the known performance implications when using
   snapshots.
 * Use one Thin Pool for the volumes - this uses the available space
   "optimally", but gives the oversubscription problem mentioned above.
 * Use multiple Thin Pools, one for each volume.
   This provides efficient snapshots *and* space reservation for each
   volume.
   
The last strategy is no panacea, though - something still needs to check the 
free space in each pool, because the snapshots can fill it up...
Without impacting the other volumes, at least.
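To make the trade-off concrete, here is a rough sketch in Python of the two checks a thin-provisioning backend has to make before accepting a new volume: the apparent-capacity cap implied by the oversubscription ratio, and the physical free-space watermark mentioned above. This is purely illustrative, loosely modeled on the idea, not Cinder's or DRBDmanage's actual logic:

```python
def can_provision(requested_gb, provisioned_gb, total_gb, used_gb,
                  max_over_subscription_ratio=20.0, reserve_percent=5.0):
    """Decide whether a new thin volume of requested_gb can be created.

    provisioned_gb: apparent (thin) capacity already handed out
    total_gb/used_gb: physical size and physical usage of the pool
    Refuse once apparent capacity would exceed total * ratio, or once
    physically free space drops below the reserve watermark."""
    apparent_limit = total_gb * max_over_subscription_ratio
    if provisioned_gb + requested_gb > apparent_limit:
        return False
    free_gb = total_gb - used_gb
    return free_gb > total_gb * reserve_percent / 100.0
```

With the default ratio of 20, a 100 GB volume group will happily promise 2000 GB of thin volumes, which is exactly why a monitor on physical free space (the second check) is the part that cannot be skipped on small deployments.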





Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Doug Wiegley


> On Sep 15, 2015, at 4:11 PM, Mathieu Gagné  wrote:
> 
>> On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
>> We run several clouds where there are multiple external networks. The "just 
>> run it on THE public network" approach doesn't work. :/
>> 
>> I also strongly recommend that users put VMs on a private network and use 
>> floating IPs/load balancers, for many reasons. For example, if you don't, 
>> the IP that gets assigned to the VM helps it become a pet: you can't replace 
>> the VM and get the same IP. Floating IPs and load balancers can help prevent 
>> pets. It also prevents security issues with DNS and IPs. Also, for every 
>> floating IP/LB I have, I usually have 3x or more that number of instances 
>> on the private network. Sure, it's easy to put everything on the public 
>> network, but you get much better security if you only put what you must on 
>> the public network. Consider the internet: would you want to expose every 
>> device in your house directly on the internet? No, you put them on a private 
>> network and poke holes just for the stuff that needs them. We should be 
>> encouraging good security practices. If we encourage bad ones, it will bite 
>> us later when OpenStack gets a reputation for being associated with 
>> compromises.
> 
> Sorry but I feel this kind of reply explains why people are still using
> nova-network over Neutron. People want simplicity and they are denied it
> at every corner because (I feel) Neutron thinks it knows better.

Please stop painting with such broad generalizations.  Go to the third or 
fourth email in this thread and you will find a spec, worked on by neutron and 
nova, that addresses exactly this use case.

It is a valid use case, and neutron does care about it. It has wrinkles. That 
has not stopped work on it for the common cases.

Thanks,
Doug 


> 
> The original statement by Monty Taylor is clear to me:
> 
> I wish to boot an instance that is on a public network and reachable
> without madness.
> 
> As of today, you can't unless you implement a deployer/provider specific
> solution (to scale said network). Just take a look at what actual public
> cloud providers are doing:
> 
> - Rackspace has a "magic" public network
> - GoDaddy has custom code in their nova-scheduler (AFAIK)
> - iWeb (which I work for) has custom code in front of nova-api.
> 
> We are all writing our own custom code to implement what (we feel)
> Neutron should be providing right off the bat.
> 
> By reading the openstack-dev [1], openstack-operators [2] lists, Neutron
> specs [3] and the Large Deployment Team meeting notes [4], you will see
> that what is suggested here (a scalable public shared network) is an
> objective we wish but are struggling hard to achieve.
> 
> People keep asking for simplicity and Neutron looks to not be able to
> offer it due to philosophical conflicts between Neutron developers and
> actual public users/operators. We can't force our users to adhere to ONE
> networking philosophy: use NAT, floating IPs, firewall, routers, etc.
> They just don't buy it. Period. (see monty's list of public providers
> attaching VMs to public network)
> 
> If we can accept and agree that not everyone wishes to adhere to the
> "full stack of networking good practices" (TBH, I don't know how to call
> this thing), it will be a good start. Otherwise I feel we won't be able
> to achieve anything.
> 
> What Monty is explaining and suggesting is something we (my team) have
> been struggling with for *years* and just didn't have bandwidth (we are
> operators, not developers) or public charisma to change.
> 
> I'm glad Monty brought up this subject so we can officially address it.
> 
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html
> [2]
> http://lists.openstack.org/pipermail/openstack-operators/2015-August/007857.html
> [3]
> http://specs.openstack.org/openstack/neutron-specs/specs/liberty/get-me-a-network.html
> [4]
> http://lists.openstack.org/pipermail/openstack-operators/2015-June/007427.html
> 
> -- 
> Mathieu
> 


Re: [openstack-dev] [neutron][L3][QA] DVR job failure rate and maintainability

2015-09-15 Thread shihanzhang
Sean, 
Thank you very much for writing this. DVR indeed needs more attention; it's a 
very cool and useful feature, especially at large scale. It first landed in 
Neutron in Juno, and through the Kilo and Liberty development cycles it has 
gotten better and better. We use it in our production deployment, and in the 
process we found that the following bugs have not been fixed; we have filed 
them on Launchpad:
1. Every time we create a VM, it triggers router scheduling. At large scale, 
if there are many L3 agents bound to a DVR router, scheduling the router 
consumes significant time, yet the scheduling action is not necessary. [1]
2. Every time we bind a VM to a floating IP, it also triggers router 
scheduling and sends this floating IP to all bound L3 agents. [2]
3. After bulk-deleting VMs from a compute node, even when the node has no 
remaining VM on a given router, the router namespace usually remains. [3]
4. Updating router_gateway triggers reschedule_router, during which 
communication through this router is broken. For a DVR router, why does the 
router need to be rescheduled at all, and to which L3 agents? [4]
5. Stale FIP namespaces are not cleaned up on compute nodes. [5]


I strongly agree that we need a group of contributors who can help with the
DVR feature in the immediate term to fix the current bugs. I would be very
glad to join this group.


Neutron developers, let's start doing great things!


Thanks,
Hanzhang,Shi


[1] https://bugs.launchpad.net/neutron/+bug/1486795
[2] https://bugs.launchpad.net/neutron/+bug/1486828
[3] https://bugs.launchpad.net/neutron/+bug/1496201
[4] https://bugs.launchpad.net/neutron/+bug/1496204
[5] https://bugs.launchpad.net/neutron/+bug/1470909







At 2015-09-15 06:01:03, "Sean M. Collins"  wrote:
>[adding neutron tag to subject and resending]
>
>Hi,
>
>Carl Baldwin, Doug Wiegley, Matt Kassawara, Ryan Moats, and myself are
>at the QA sprint in Fort Collins. Earlier today there was a discussion
>about the failure rate about the DVR job, and the possible impact that
>it is having on the gate.
>
>Ryan has a good patch up that shows the failure rates over time:
>
>https://review.openstack.org/223201
>
>To view the graphs, you go over into your neutron git repo, and open the
>.html files that are present in doc/dashboards - which should open up
>your browser and display the Graphite query.
>
>Doug put up a patch to change the DVR job to be non-voting while we
>determine the cause of the recent spikes:
>
>https://review.openstack.org/223173
>
>There was a good discussion after pushing the patch, revolving around
>the need for Neutron to have DVR, to fit operational and reliability
>requirements, and help transition away from Nova-Network by providing
>one of many solutions similar to Nova's multihost feature.  I'm skipping
>over a huge amount of context about the Nova-Network and Neutron work,
>since that is a big and ongoing effort. 
>
>DVR is an important feature to have, and we need to ensure that the job
>that tests DVR has a high pass rate.
>
>One thing that I think we need, is to form a group of contributors that
>can help with the DVR feature in the immediate term to fix the current
>bugs, and longer term maintain the feature. It's a big task and I don't
>believe that a single person or company can or should do it by themselves.
>
>The L3 group is a good place to start, but I think that even within the
>L3 team we need dedicated and diverse group of people who are interested
>in maintaining the DVR feature. 
>
>Without this, I think the DVR feature will start to bit-rot and that
>will have a significant impact on our ability to recommend Neutron as a
>replacement for Nova-Network in the future.
>
>-- 
>Sean M. Collins
>


[openstack-dev] Barbican : Unable to run barbican CURL commands after starting/restarting barbican using the service file

2015-09-15 Thread Asha Seshagiri
Hi All,

I am unable to run barbican curl commands after starting/restarting
barbican using the service file.

I used the command below to restart the barbican service:
(wheel)[root@controller-01 service]# systemctl restart barbican-api.service

When I tried executing the command to create a secret, I did not get any
response from the server.

(wheel)[root@controller-01 service]# ps -ef | grep barbican
barbican  1104 1  0 22:56 ?00:00:00 /opt/barbican/bin/uwsgi
--master --emperor /etc/barbican/vassals
barbican  1105  1104  0 22:56 ?00:00:00 /opt/barbican/bin/uwsgi
--master --emperor /etc/barbican/vassals
barbican  1106  1105  0 22:56 ?00:00:00 /opt/barbican/bin/uwsgi
--ini barbican-api.ini
root  3195 28132  0 23:03 pts/000:00:00 grep --color=auto barbican

Checked the status of the barbican-api.service file and got the following
response :

(wheel)[root@controller-01 service]# systemctl status  barbican-api.service
-l
barbican-api.service - Barbican Key Management API server
   Loaded: loaded (/usr/lib/systemd/system/barbican-api.service; enabled)
   Active: active (running) since Tue 2015-09-15 22:56:12 UTC; 2min 17s ago
 Main PID: 1104 (uwsgi)
   Status: "The Emperor is governing 1 vassals"
   CGroup: /system.slice/barbican-api.service
   ├─1104 /opt/barbican/bin/uwsgi --master --emperor
/etc/barbican/vassals
   ├─1105 /opt/barbican/bin/uwsgi --master --emperor
/etc/barbican/vassals
   └─1106 /opt/barbican/bin/uwsgi --ini barbican-api.ini

Sep 15 22:58:30 controller-01 uwsgi[1104]: APP, pipeline[-1], global_conf)
Sep 15 22:58:30 controller-01 uwsgi[1104]: File "/opt/barbican/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 458, in get_context
Sep 15 22:58:30 controller-01 uwsgi[1104]: section)
Sep 15 22:58:30 controller-01 uwsgi[1104]: File "/opt/barbican/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 517, in _context_from_explicit
Sep 15 22:58:30 controller-01 uwsgi[1104]: value = import_string(found_expr)
Sep 15 22:58:30 controller-01 uwsgi[1104]: File "/opt/barbican/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 22, in import_string
Sep 15 22:58:30 controller-01 uwsgi[1104]: return pkg_resources.EntryPoint.parse("x=" + s).load(False)
Sep 15 22:58:30 controller-01 uwsgi[1104]: File "/opt/barbican/lib/python2.7/site-packages/pkg_resources.py", line 2265, in load
Sep 15 22:58:30 controller-01 uwsgi[1104]: raise ImportError("%r has no %r attribute" % (entry,attr))
Sep 15 22:58:30 controller-01 uwsgi[1104]: ImportError:  has no 'create_main_app_v1' attribute


Please find the contents of the barbican-api.service file:

[Unit]
Description=Barbican Key Management API server
After=syslog.target network.target

[Service]
Type=simple
NotifyAccess=all
User=barbican
KillSignal=SIGINT
ExecStart={{ barbican_virtualenv_path }}/bin/uwsgi --master --emperor
/etc/barbican/vassals
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Even though Barbican appears to be running, we are unable to run the
curl commands. I would like to know whether the "ImportError:  has no
'create_main_app_v1' attribute" is the cause of not being able to execute
the curl commands.

How do we debug the "ImportError:  has no 'create_main_app_v1' attribute"?
I also think that the Barbican restart is not successful.

Any help would be highly appreciated.

However, I am able to run the command "/bin/uwsgi --master --emperor
/etc/barbican/vassals" manually, and then the curl commands work.
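For context, the traceback above is PasteDeploy failing to resolve the app factory named in the paste ini. A minimal sketch of what its import_string ultimately does is below; `resolve_entry_point` is a made-up helper for illustration, not Barbican or PasteDeploy code (real PasteDeploy also handles `egg:` specs):

```python
# Sketch of resolving "module.path:attribute", the way PasteDeploy loads an
# app factory such as "barbican.api.app:create_main_app_v1".
import importlib

def resolve_entry_point(spec):
    """Import 'module.path:attribute' and return the attribute."""
    module_name, _, attr = spec.partition(":")
    module = importlib.import_module(module_name)
    try:
        return getattr(module, attr)
    except AttributeError:
        # This is the shape of the failure in the traceback above.
        raise ImportError("%r has no %r attribute" % (module, attr))

# A stdlib spec that resolves cleanly; on a Barbican box you would try the
# real spec from the paste ini instead.
print(resolve_entry_point("os.path:join").__name__)  # join
```

One way to narrow the problem down: run the equivalent import using the same interpreter the systemd unit uses (e.g. the virtualenv's python). If the attribute resolves when uwsgi is started manually but not under systemd, the unit is likely running against a different or stale install.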


-- 
*Thanks and Regards,*
*Asha Seshagiri*


Re: [openstack-dev] [cinder] PTL Non-Candidacy

2015-09-15 Thread Jay S. Bryant

Mike,

Under your leadership we were able to do some great things. Cinder 
has greatly benefited from your time as PTL.


Thanks for all you have done!

Jay

On 09/14/2015 11:15 AM, Mike Perez wrote:

Hello all,

I will not be running for Cinder PTL this next cycle. Each cycle I ran
was for a reason [1][2], and the Cinder team should feel proud of our
accomplishments:

* Spearheading the Oslo work to allow *all* OpenStack projects to have
their database being independent of services during upgrades.
* Providing quality to OpenStack operators and distributors with over
60 accepted block storage vendor drivers with reviews and enforced CI
[3].
* Helping other projects with third party CI for their needs.
* Being a welcoming group to new contributors. As a result we grew greatly [4]!
* Providing documentation for our work! We did it for Kilo [5], and I
was very proud to see the team has already started doing this on their
own to prepare for Liberty.

I would like to thank this community for making me feel accepted in
2010. I would like to thank John Griffith for starting the Cinder
project, and empowering me to lead the project through these couple of
cycles.

With the community's continued support I do plan on continuing my
efforts, but focusing cross project instead of just Cinder. The
accomplishments above are just some of the things I would like to help
others with to make OpenStack as a whole better.


[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046788.html
[2] - http://lists.openstack.org/pipermail/openstack-dev/2015-April/060530.html
[3] - 
http://superuser.openstack.org/articles/what-you-need-to-know-about-openstack-cinder
[4] - http://thing.ee/cinder/active_contribs.png
[5] - https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Key_New_Features_7

--
Mike Perez



Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Jeremy Stanley
On 2015-09-15 18:00:03 + (+), Fox, Kevin M wrote:
> We run several clouds where there are multiple external networks.
> the "just run it in on THE public network" doesn't work. :/

Is this for a public service provider? If so, do you expect your
users to have some premonition which tells them the particular
public network they should be choosing?

> I also strongly recommend to users to put vms on a private network
> and use floating ip's/load balancers. For many reasons. Such as,
> if you don't, the ip that gets assigned to the vm helps it become
> a pet.

I like pets just fine. Often I want pet servers not cattle servers.
Why should you (the service provider) make this choice for me?

> you can't replace the vm and get the same IP.

No? Well, you can `nova rebuild` in at least some environments, but
regardless it's not that hard to change a couple of DNS records when
replacing a server (virtual or physical).

> Floating IP's and load balancers can help prevent pets.

They can help prevent lots of things, some good, some bad. I'd
rather address translation were the exception, not the rule. NAT has
created a broken Internet.

> It also prevents security issues with DNS and IP's.

This is the first I've heard about DNS and IP addresses being
insecure. Please elaborate, and also explain your alternative
Internet which relies on neither of these.

> Also, for every floating ip/lb I have, I usually have 3x or more
> the number of instances that are on the private network.

Out of some misguided assumption that NAT is a security panacea from
the sound of it.

> Sure its easy to put everything on the public network, but it
> provides much better security if you only put what you must on the
> public network.

I highly recommend a revolutionary new technology called "packet
filtering."

> Consider the internet. would you want to expose every device in
> your house directly on the internet? No.

On the contrary, I actually would (depending on what you mean by
"expose", but I assume from context you mean assign individual
global addresses directly to the network interfaces of each). With
IPv6 I do and would with v4 as well if my local provider routed me
more than a /32 assignment.

> you put them in a private network and poke holes just for the
> stuff that does.

No, I put them in a globally-routed network (the terms "private" and
"public" are misleading in the context of these sorts of
discussions) and poke holes just for the stuff that people need to
reach from outside that network.

> we should be encouraging good security practices. If we encourage
> bad ones, then it will bite us later when OpenStack gets a
> reputation for being associated with compromises.

Here we agree, we just disagree on what those security practices
are. Address translation is no substitute for good packet filtering,
and allowing people to ignorantly assume so does them a great
disservice. We should be educating them on how to properly protect
their systems while at the same time showing them how much better
the Internet works without the distasteful workarounds brought about
by unnecessary layers of address-translating indirection.

And before this turns into a defense-in-depth debate, adding NAT to
your filtering doesn't really increase security it just increases
complexity.

> I do consider making things as simple as possible very important.
> but that is, make them as simple as possible, but no simpler.
> There's danger here of making things too simple.

Complexity is the enemy of security, so I find your reasoning
internally inconsistent. Proponents of NAT are suffering from some
manner of mass-induced Stockholm Syndrome. It's a hack to deal with
our oversubscription of the IPv4 address space and in some cases
solve address conflicts between multiple networks. Its unfortunate
ubiquity has confused lots of people into thinking it's there for
better reasons than it actually is.
-- 
Jeremy Stanley



Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Monty Taylor

On 09/15/2015 06:30 PM, Armando M. wrote:



On 15 September 2015 at 08:04, Monty Taylor wrote:

Hey all!

If any of you have ever gotten drunk with me, you'll know I hate
floating IPs more than I hate being stabbed in the face with a very
angry fish.

However, that doesn't really matter. What should matter is "what is
the most sane thing we can do for our users"

As you might have seen in the glance thread, I have a bunch of
OpenStack public cloud accounts. Since I wrote that email this
morning, I've added more - so we're up to 13.

auro
citycloud
datacentred
dreamhost
elastx
entercloudsuite
hp
ovh
rackspace
runabove
ultimum
unitedstack
vexxhost

Of those public clouds, 5 of them require you to use a floating IP
to get an outbound address, the others directly attach you to the
public network. Most of those 8 allow you to create a private
network, to boot vms on the private network, and ALSO to create a
router with a gateway and put floating IPs on your private ip'd
machines if you choose.

Which brings me to the suggestion I'd like to make.

Instead of having our default in devstack and our default when we
talk about things be "you boot a VM and you put a floating IP on it"
- which solves one of the two usage models - how about:

- Cloud has a shared: True, external:routable: True neutron network.
I don't care what it's called: ext-net, public, whatever. The
"shared" part is the key; that's the part that lets someone boot a
vm on it directly.

- Each person can then make a private network, router, gateway, etc.
and get floating-ips from the same public network if they prefer
that model.

Are there any good reasons to not push to get all of the public
networks marked as "shared"?


The reason is simple: not every cloud deployment is the same: private is
different from public and even within the same cloud model, the network
topology may vary greatly.


Yes. Many things may be different.


Perhaps Neutron fails in the sense that it provides you with too much
choice, and perhaps we have to standardize on the type of networking
profile expected by a user of OpenStack public clouds before making
changes that would fragment this landscape even further.

If you are advocating for more flexibility without limiting the existing
one, we're only making the problem worse.


I am not. I am arguing for a different arbitrary 'default' deployment. 
Right now the verbiage around things is "floating IPs is the 'right' way 
to get access to public networks"


I'm not arguing for code changes, or more options, or new features.

I'm saying that there a set of public clouds that provide a default 
experience out of the box that is pleasing with neutron today, and we 
should have the "I don't know what I want tell me what to do" option 
behave like those clouds.


Yes. You can do other things.
Yes. You can get fancy.
Yes. You can express all of the things.

Those are things I LOVE about neutron and one of the reasons I think 
that the arguments around neutron and nova-net are insane.


I'm just saying that "I want a computer on the externally facing network 
from this cloud" is almost never well served by floating-ips unless you 
know what you're doing, so rather than leading people down the road 
towards that as the default behavior, since it's the HARDER thing to 
deal with - let's lead them to the behavior which makes the simple thing 
simple and then clearly open the door to them to increasingly complex 
and powerful things over time.


OH - well, one thing - that's that once there are two networks in an
account you have to specify which one. This is really painful in
nova client. Say, for instance, you have a public network called
"public" and a private network called "private" ...

You can't just say "nova boot --network=public" - nope, you need to
say "nova boot --nics net-id=$uuid_of_my_public_network"

So I'd suggest 2 more things;

a) an update to python-novaclient to allow a named network to be
passed to satisfy the "you have more than one network" - the nics
argument is still useful for more complex things

b) ability to say "vms in my cloud should default to being booted on
the public network" or "vms in my cloud should default to being
booted on a network owned by the user"

Thoughts?


As I implied earlier, I am not sure how healthy this choice is. As a
user of multiple clouds I may end up having a different user experience
based on which cloud I am using...I thought you were partially
complaining about lack of consistency?


I am a user of multiple clouds. I am complaining about the current lack 
of consistency.


More than that though, I'm complaining that we lead people to select a 
floating-ip model when having them flip the boolean value 

Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Fox, Kevin M
Yup. And Designate works very well. :)

But DNSaaS is not always an option; to "the powers that be", floating IPs 
are a much easier sell.

Also, Designate does have a restriction of wanting to manage a whole domain 
itself. When you have existing infrastructure you want your VMs to merge into, 
that's a problem.

Thanks,
Kevin

From: Assaf Muller [amul...@redhat.com]
Sent: Tuesday, September 15, 2015 2:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' 
network model



On Tue, Sep 15, 2015 at 5:09 PM, Fox, Kevin M wrote:
Unfortunately, I haven't had enough chance to play with ipv6 yet.

I still think ipv6 with floating ip's probably makes sense though.

In ipv4, the floating ip's solve one particular problem:

End Users want to be able to consume a service provided by a VM. They have two 
options:
1. contact the ip directly
2. use DNS.

DNS is preferable, since humans don't remember IPs very well. IPv6 is much 
harder to remember than v4 too.

DNS has its own issues; mostly, it's usually not very quick to get a DNS entry 
updated. At our site (and I'm sure others), I'm afraid to say that in some cases 
it takes as long as 24 hours for updates to happen. Even if that were fixed, 
caching can bite you too.

I'm curious if you tried out Designate / DNSaaS.


So, when you register a DNS record, the IP it points at becomes a piece of 
state. If that state can't be separated from a VM, it's a bad thing. If you 
can move it from VM to VM, your VM is not a pet. But if your IP is 
allocated to the VM specifically, as non-floating IPs are, you run into 
problems when your VM dies and you have to replace it. If you're unlucky, it 
dies, someone else gets allocated the fixed IP, and now someone else's server is 
sitting on your DNS entry! So you are very unlikely to want to give up your VM, 
turning it into a pet.

I'd expect v6 usage to have the same issues.

The floating IP is great in that it's an abstraction of a contactable address, 
separate from any VM it may currently be bound to.

You allocate a floating IP. You can then register it with DNS, and another 
tenant cannot accidentally be assigned it. You can move it from VM to VM 
until you're done with it. You can then unregister it from DNS, and it is safe 
to return it for others to use.
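The lifecycle described here can be sketched as a tiny model, where the address is state that outlives any one VM (illustrative only; this is not Neutron or Designate code, and all names are made up):

```python
# Toy model of the floating-IP lifecycle: DNS points at the floating IP,
# so replacing the VM behind it never invalidates the DNS record.
class FloatingIP:
    def __init__(self, address):
        self.address = address      # stable, tenant-owned address
        self.vm = None              # VM it currently fronts, if any
        self.dns_name = None        # DNS record pointing at it, if any

    def register_dns(self, name):
        self.dns_name = name        # DNS targets the floating IP, not the VM

    def attach(self, vm):
        self.vm = vm                # move freely between VMs; DNS is untouched

    def unregister_dns(self):
        self.dns_name = None        # only now is it safe to release the address

fip = FloatingIP("203.0.113.10")    # documentation-range example address
fip.register_dns("app.example.com")
fip.attach("vm-1")
fip.attach("vm-2")                  # replace the VM; the DNS entry stays valid
print(fip.dns_name, fip.vm)         # app.example.com vm-2
```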

To me, the NAT aspect of it is a secondary thing. Its primary importance is in 
enabling things to be more cattleish and helping with dns security.

Thanks,
Kevin







From: Clark Boylan [cboy...@sapwetik.org]
Sent: Tuesday, September 15, 2015 1:06 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' 
network model

On Tue, Sep 15, 2015, at 11:00 AM, Fox, Kevin M wrote:
> We run several clouds where there are multiple external networks. the
> "just run it in on THE public network" doesn't work. :/
Maybe this would be better expressed as "just run it on an existing
public network" then?
>
> I also strongly recommend to users to put vms on a private network and
> use floating ip's/load balancers. For many reasons. Such as, if you
> don't, the ip that gets assigned to the vm helps it become a pet. you
> can't replace the vm and get the same IP. Floating IP's and load
> balancers can help prevent pets. It also prevents security issues with
> DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or
> more the number of instances that are on the private network. Sure its
> easy to put everything on the public network, but it provides much better
> security if you only put what you must on the public network. Consider
> the internet. would you want to expose every device in your house
> directly on the internet? No. you put them in a private network and poke
> holes just for the stuff that does. we should be encouraging good
> security practices. If we encourage bad ones, then it will bite us later
> when OpenStack gets a reputation for being associated with compromises.
There are a few issues with this. Neutron IPv6 does not support floating
IPs. So now you have to use two completely different concepts for
networking on a single dual stacked VM. IPv4 goes on a private network
and you attach a floating IP. IPv6 is publicly routable. If security and
DNS and not making pets were really the driving force behind floating
IPs we would see IPv6 support them too. These aren't the reasons
floating IPs exist, they exist because we are running out of IPv4
addresses and NAT is everyones preferred solution to that problem. But
that doesn't make it a good default for a cloud; use them if you are
affected by an IP shortage.

Nothing prevents you from load balancing against public IPs to address
the DNS and firewall rule 

[openstack-dev] [searchlight] PTL Candidacy

2015-09-15 Thread Tripp, Travis S
Hello friends,

We are now going into our first official PTL election for Searchlight and I
would be honored if you’ll allow me to continue serving in the PTL role.

Searchlight became a new project in Liberty after being split out from its
initial experimental days in Glance. Since then we've been moving
at a relatively fast pace towards fulfilling our mission: to provide advanced
and scalable indexing and search across multi-tenant cloud resources.

It would be a huge understatement to say that accomplishing this mission is
important to me. I believe that search is critical to enabling a better
experience for OpenStack users and operators alike.

In Liberty, we have made tremendous progress thanks to a small, but
passionate team who believes in the vision of Searchlight. At the Mitaka
summit we’ll be able to demonstrate a Horizon panel plugin able to search
across Glance, Nova, and Designate.

I believe that the PTL role is a commitment to the community to act as a
steward for the project. As PTL for an early stage project like searchlight,
I believe that some of these responsibilities are to:

* Evangelize the project across the OpenStack ecosystem
* Provide technical guidance and contribution
* Facilitate collaboration across the community
* Enable all developers to contribute effectively
* Grow a community of potential future project PTLs
* And perhaps most importantly, enable the project to release

I believe software must be developed with a clear demonstration of its
value. With Searchlight, I do believe that a UI is one of the most effective
ways to bring the value of Searchlight to OpenStack users. This is why
from day 1, I have been actively evangelizing Searchlight with Horizon. Once
users are able to actually take advantage of Searchlight, I believe
Searchlight will become a must have component of any OpenStack deployment.

From a feature standpoint, I believe all of the following are great
candidate goals for us to pursue in Mitaka and I look forward to working
with the community as we establish priorities.

* (Obviously) Extend search indexing to as many projects as possible
** With a priority on the original integrated release projects
* Provide reference deployment architectures and deployment tooling as needed
* Establish performance testing
* Work towards cross-region search support
* Enable pre-defined quick queries for the most common searches
* Release the horizon search panel either in Horizon master or on its own
* Enable horizon top nav search to become a reality [0]

Thank you for your consideration in allowing me to continue serving as PTL
for the Mitaka cycle.

Thank you,
Travis

[0] https://invis.io/6Z3T72NXW
[1] https://review.openstack.org/#/c/223805/


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Armando M.
On 15 September 2015 at 16:01, Monty Taylor  wrote:

> On 09/15/2015 06:16 PM, Armando M. wrote:
>
>>
>>
>> On 15 September 2015 at 08:27, Mike Spreitzer wrote:
>>
>> Monty Taylor wrote on 09/15/2015 11:04:07 AM:
>>
>> > a) an update to python-novaclient to allow a named network to be
>> passed
>> > to satisfy the "you have more than one network" - the nics
>> argument is
>> > still useful for more complex things
>>
>> I am not using the latest, but rather Juno.  I find that in many
>> places the Neutron CLI insists on a UUID when a name could be used.
>> Three cheers for any campaign to fix that.
>>
>>
>> The client is not particularly tied to a specific version of the server,
>> so we don't have a Juno version, or a Kilo version, etc. (even though
>> they are aligned, see [1] for more details).
>>
>> Having said that, you could use names in place of uuids pretty much
>> anywhere. If your experience says otherwise, please consider filing a
>> bug against the client [2] and we'll get it fixed.
>>
>
> May just be a help-text bug in novaclient then:
>
>   --nic
> 

[openstack-dev] [all][elections] Last hours for PTL candidate announcements

2015-09-15 Thread Tony Breeds
A quick reminder that we are in the last hours for PTL candidate announcements.

If you want to stand for PTL, don't delay, follow the instructions on the
wikipage and make sure we know your intentions:
  https://wiki.openstack.org/wiki/PTL_Elections_September_2015

Make sure your candidacy has been submitted to the openstack/election
repository and approved by election officials.

Some statistics:
Nominations started   @ 2015-09-11 05:59:00 UTC
Nominations end   @ 2015-09-17 05:59:00 UTC
Nominations duration  : 6 days, 0:00:00
Nominations remaining : 1 day, 5:59:00
Nominations progress  :  79.18%
---
Projects  :43
Projects with candidates  :22 ( 51.16%)
Projects with election: 5 ( 11.63%)
===
Stats gathered@ 2015-09-16 00:00:00 UTC

This means that with slightly more than 1 day left, nearly 50% of projects will
be deemed leaderless.

In this case the TC will be bound by [1].

Yours Tony.

[1] 
http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html





Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Monty Taylor

On 09/15/2015 06:16 PM, Armando M. wrote:



On 15 September 2015 at 08:27, Mike Spreitzer wrote:

Monty Taylor wrote on 09/15/2015 11:04:07 AM:

> a) an update to python-novaclient to allow a named network to be passed
> to satisfy the "you have more than one network" - the nics  argument is
> still useful for more complex things

I am not using the latest, but rather Juno.  I find that in many
places the Neutron CLI insists on a UUID when a name could be used.
Three cheers for any campaign to fix that.


The client is not particularly tied to a specific version of the server,
so we don't have a Juno version, or a Kilo version, etc. (even though
they are aligned, see [1] for more details).

Having said that, you could use names in place of uuids pretty much
anywhere. If your experience says otherwise, please consider filing a
bug against the client [2] and we'll get it fixed.


May just be a help-text bug in novaclient then:

  --nic 

Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Monty Taylor

On 09/15/2015 08:28 PM, Matt Riedemann wrote:



On 9/15/2015 10:27 AM, Mike Spreitzer wrote:

Monty Taylor  wrote on 09/15/2015 11:04:07 AM:

 > a) an update to python-novaclient to allow a named network to be
passed
 > to satisfy the "you have more than one network" - the nics argument is
 > still useful for more complex things

I am not using the latest, but rather Juno.  I find that in many places
the Neutron CLI insists on a UUID when a name could be used.  Three
cheers for any campaign to fix that.


It's my understanding that network names in neutron, like security
groups, are not unique; that's why you have to specify a UUID.


Yah.

EXCEPT - we already error when the user does not specify the network 
specifically enough, so there is nothing stopping us from trying the 
obvious thing and then moving on. Such as:


nova boot

ERROR: There is more than one network, please specify one

nova boot --network public

\o/

OR

nova boot

ERROR: There is more than one network, please specify one

nova boot --network public

ERROR: There is more than one network named 'public', please specify one

nova boot --network ecc967b6-5c01-11e5-b218-4c348816caa1

\o/

These are successive attempts at an operation that should be simple, and as 
the situation becomes increasingly complex, so does the specificity required 
of the user's response.
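This escalation can be sketched as a small resolution routine (illustrative only; `pick_network` and the error strings are made up, not python-novaclient code):

```python
# Resolve a network by name or UUID, erroring only when truly ambiguous:
# no argument works if there is exactly one network; a name works if it is
# unique; otherwise the user must fall back to a UUID.
def pick_network(networks, requested=None):
    """Return the matching network dict, or raise with a specific message."""
    if requested is None:
        if len(networks) == 1:
            return networks[0]
        raise ValueError("There is more than one network, please specify one")
    for net in networks:
        if net["id"] == requested:      # exact UUID match wins outright
            return net
    matches = [n for n in networks if n["name"] == requested]
    if len(matches) == 1:
        return matches[0]
    if not matches:
        raise ValueError("No network named %r" % requested)
    raise ValueError(
        "There is more than one network named %r, please specify a UUID"
        % requested)

nets = [{"id": "ecc967b6", "name": "public"},
        {"id": "11aa22bb", "name": "private"}]
print(pick_network(nets, "public")["id"])  # ecc967b6
```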




[openstack-dev] [Neutron] PTL Candidacy

2015-09-15 Thread Armando M.
I would like to propose my candidacy for the Neutron PTL.

If you are reading this and you know me, then you probably know what I have
been up to up until now, what I have done for the project, and what I may
continue to do. If you do not know me, and you are still interested in
reading, then I will try not bore you.

As member of this project, I have been involved with it since the early
days, and I have served as core developer since Havana. If you are
wondering whether I am partially to blame for the issues that affect
Neutron, well you may have a point, but keep reading...

I believe that Neutron itself is a unique project and as such has unique
challenges. We have grown tremendously mostly propelled by a highly
opinionated vendor perspective. This has caused us some problems, and we set
out a cycle or so ago to fix these, but at the same time to stay true to the
nature of our mission: define logical abstractions, and related
implementations to provide on-demand, cloud oriented networking services.

As any other project in OpenStack, we are software and we mostly implement
'stuff' in software, and because of that we are prone to all the issues
that a software project may have. To this aim, going forward I would like
us to improve the following:


   - Stability is the priority: new features are important, but complete
   and well tested existing features are more important; we gotta figure out a
   way to bring the number of bugs down to a manageable number, just like
   nations are asked to keep their sovereign debt below a certain healthy
   threshold.
   - Narrow the focus: now that the Neutron 'stadium' is here with us,
   external plugins and drivers can integrate with Neutron in a loosely
   manner, giving the core the opportunity to be more razor focus at getting
   better at what we do: logical abstractions and pluggability.
   - Consistency is paramount: having grown the review team drastically
   over the past cycle, it is easy to skew quality in one area over an other.
   We need to start defining common development and reviewer practices so
   that, even though we are made of many sub-projects and modules, we
   operate, feel and look like one...just like OpenStack :)
   - Define long-term strategy: we need to have an idea where Neutron starts
   and where Neutron ends. At some point, this project will reach enough
   maturity where we feel like we are 'done' and that's okay. Some of us will
   move on to the next big thing.
   - Keep developers and reviewers _aware_: we all have to work
   collectively towards a common set of goals, defined by the release cycle.
   We will have to learn to push back on _random_ forces that keep distracting
   us.
   - I would like to promote a 'you merge it, you own it' type of
   mentality: even though we are pretty good at it already, we need a better
   balance between reviews and contributions. If you bless a patch, you got to
   be prepared to dive into the issues that it may potentially cause. If you
   bless a patch, you have got to be prepared to improve the code around it, and so
   on. You will be a better reviewer if you learn to live with the pain of
   your mistakes. This is the only way to establish a virtuous cycle where
   quality improves over time.

And last but not least:


   - Improve the relationships with other projects: Nova and QA primarily.
   We should allocate enough bandwidth to address integration issues with Nova
   and the other emerging projects, so that we stay plugged with them. QA is
   also paramount so that no-one is gonna hate us because we send the gate
   belly up. As for nova-network, I must admit I am highly skeptical by now:
   if our community were a commercial enterprise trying to solve that problem,
   we would have run out of money a long time ago. We tried time and time again
   to crack this nut open, and even though we made progress in a number of
   areas, we haven't really budged where some people felt it mattered. We need
   to recognize that the problem is not just technical...it is social; no-one,
   starting from the developers and the employers behind them, seems to be
   genuinely concerned with the need of making nova-network a thing of the
   past. They have other priorities, they are chasing new customers, they want
   to disrupt Amazon. None of this nova-network deprecation drama fits with
   their agendas and furthermore, even if we found non-corporate sponsored
   developers willing to work on it, let's face it migration is a problem that
   is really not that interesting to solve. So where do we go from here? I do
   not have a clear answer yet. However, I think we all agree that the Neutron
   team wants to make Neutron a better product, more aligned with the needs of
   our users, but we must recognize that _better_ does not mean *like*
   nova-network, because the two products are not the same and they never will
   be.

Ok, now that you read this, you are ready to know whether you may want 

Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Fox, Kevin M
Let me rephrase that to be more explicit. Certain organizations, for various 
reasons, require people in the process chain to actually make dns changes... No 
amount of automation can easily address that issue. Hopefully that can change 
in time. But stuff as simple as letting users launch VMs can be a hard 
enough sell, without requiring automated access to change DNS.

That being said, thats a cool patch. I wish I could use it. :) Hopefully some 
day.

Thanks,
Kevin

From: Carl Baldwin [c...@ecbaldwin.net]
Sent: Tuesday, September 15, 2015 3:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' 
network model

On Tue, Sep 15, 2015 at 3:09 PM, Fox, Kevin M  wrote:
> DNS is preferable, since humans don't remember IPs very well. IPv6 is much 
> harder to remember than v4 too.
>
> DNS has its own issues; mostly, it's usually not very quick to get a DNS entry 
> updated.  At our site (and I'm sure, others), I'm afraid to say in some cases 
> it takes as long as 24 hours to get updates to happen. Even if that was 
> fixed, caching can bite you too.

We also have work going on now to automate the addition and update of
DNS entries as VMs come and go [1].  Please have a look and provide
feedback.

[1] https://review.openstack.org/#/q/topic:bp/external-dns-resolution,n,z
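For illustration only, the record lifecycle such automation has to manage can
be sketched like this (the event-handler names and the in-memory zone store
below are hypothetical, not the API of the patch under review):

```python
# Hypothetical sketch of DNS record upkeep tied to VM/port lifecycle
# events. A real implementation would push updates to an actual DNS
# backend; a dict stands in for the zone here.

zone = {}  # fqdn -> set of IP addresses

def on_port_created(fqdn, ip):
    """Add (or extend) the forward record when a port comes up."""
    zone.setdefault(fqdn, set()).add(ip)

def on_port_deleted(fqdn, ip):
    """Drop the address, and remove the record once it is empty."""
    addrs = zone.get(fqdn)
    if addrs:
        addrs.discard(ip)
        if not addrs:
            del zone[fqdn]

# A VM with both a v4 and a v6 address appears, then loses its v4 one.
on_port_created("web-1.example.org.", "2001:db8::10")
on_port_created("web-1.example.org.", "203.0.113.10")
on_port_deleted("web-1.example.org.", "203.0.113.10")
```

The real work, of course, is wiring handlers like these to Neutron port events
and a proper DNS backend, plus coping with the caching and propagation delays
Kevin mentions.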

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Barbican] Providing service user read access to all tenant's certificates

2015-09-15 Thread Vijay Venkatachalam
Hi,
   Is there a way to grant a certain user read access to all 
secrets/containers of all projects'/tenants' certificates?
   This user, with universal "read" privileges, will be used as a 
service user by the LBaaS plugin to read tenants' certificates while 
implementing the LB configuration.

   Today, LBaaS users follow the process below:

1.  The tenant's creator/admin user uploads the certificate info as secrets and 
a container.

2.  The user then has to create ACLs allowing the LBaaS service user to access 
the containers and secrets.

3.  The user creates the LB config with the container reference.

4.  The LBaaS plugin, using the service user, then accesses the container 
reference provided in the LB config and proceeds with the implementation.

Ideally we would want to avoid step 2 in the process. Instead, add a step 5 
where the LBaaS plugin's service user checks whether the user configuring the 
LB has read access to the container reference provided.

Thanks,
Vijay V.


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Armando M.
On 15 September 2015 at 16:08, Monty Taylor  wrote:

> On 09/15/2015 06:30 PM, Armando M. wrote:
>
>>
>>
>> On 15 September 2015 at 08:04, Monty Taylor wrote:
>>
>> Hey all!
>>
>> If any of you have ever gotten drunk with me, you'll know I hate
>> floating IPs more than I hate being stabbed in the face with a very
>> angry fish.
>>
>> However, that doesn't really matter. What should matter is "what is
>> the most sane thing we can do for our users"
>>
>> As you might have seen in the glance thread, I have a bunch of
>> OpenStack public cloud accounts. Since I wrote that email this
>> morning, I've added more - so we're up to 13.
>>
>> auro
>> citycloud
>> datacentred
>> dreamhost
>> elastx
>> entercloudsuite
>> hp
>> ovh
>> rackspace
>> runabove
>> ultimum
>> unitedstack
>> vexxhost
>>
>> Of those public clouds, 5 of them require you to use a floating IP
>> to get an outbound address, the others directly attach you to the
>> public network. Most of those 8 allow you to create a private
>> network, to boot vms on the private network, and ALSO to create a
>> router with a gateway and put floating IPs on your private ip'd
>> machines if you choose.
>>
>> Which brings me to the suggestion I'd like to make.
>>
>> Instead of having our default in devstack and our default when we
>> talk about things be "you boot a VM and you put a floating IP on it"
>> - which solves one of the two usage models - how about:
>>
>> - Cloud has a shared: True, external:routable: True neutron network.
>> I don't care what it's called  ext-net, public, whatever. the
>> "shared" part is the key, that's the part that lets someone boot a
>> vm on it directly.
>>
>> - Each person can then make a private network, router, gateway, etc.
>> and get floating-ips from the same public network if they prefer
>> that model.
>>
>> Are there any good reasons to not push to get all of the public
>> networks marked as "shared"?
>>
>>
>> The reason is simple: not every cloud deployment is the same: private is
>> different from public and even within the same cloud model, the network
>> topology may vary greatly.
>>
>
> Yes. Many things may be different.
>
> Perhaps Neutron fails in the sense that it provides you with too much
>> choice, and perhaps we have to standardize on the type of networking
>> profile expected by a user of OpenStack public clouds before making
>> changes that would fragment this landscape even further.
>>
>> If you are advocating for more flexibility without limiting the existing
>> one, we're only making the problem worse.
>>
>
> I am not. I am arguing for a different arbitrary 'default' deployment.
> Right now the verbiage around things is "floating IPs is the 'right' way to
> get access to public networks"
>
> I'm not arguing for code changes, or more options, or new features.
>
> I'm saying that there a set of public clouds that provide a default
> experience out of the box that is pleasing with neutron today, and we
> should have the "I don't know what I want tell me what to do" option behave
> like those clouds.
>
> Yes. You can do other things.
> Yes. You can get fancy.
> Yes. You can express all of the things.
>
> Those are things I LOVE about neutron and one of the reasons I think that
> the arguments around neutron and nova-net are insane.
>
> I'm just saying that "I want a computer on the externally facing network
> from this cloud" is almost never well served by floating-ips unless you
> know what you're doing, so rather than leading people down the road towards
> that as the default behavior, since it's the HARDER thing to deal with -
> let's lead them to the behavior which makes the simple thing simple and
> then clearly open the door to them to increasingly complex and powerful
> things over time.


I can get behind this statement, but all I am trying to say is that Neutron
gives you the toolkit. How you, as a deployer, use it is up to you. A
deployer can today implement a shared publicly facing network to which VMs
can connect without problems. Now the issue may come from a user point
of view: does the user need to specify the network? Or create the
topology ahead of time? It's my understanding that this is what this thread
is about. If not, then I clearly need a crash course in English :)
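For what it's worth, the "choice on the user's behalf" question can be thought
of as a small selection heuristic. A rough, purely illustrative sketch (not
the get-me-a-network blueprint's actual algorithm):

```python
# Hypothetical default-network heuristic: boot on the one shared network
# if the deployer exposes exactly one, otherwise make the ambiguity (or
# the need to build a private topology) explicit to the caller.

def pick_default_network(networks):
    """Pick a bootable network without user input, or signal why we can't.
    Each network is a dict with at least 'name' and 'shared' keys."""
    shared = [n for n in networks if n["shared"]]
    if len(shared) == 1:
        return shared[0]  # e.g. a deployer-provided shared "public" net
    if not shared:
        raise LookupError("no shared network; create a private net + router")
    raise LookupError("ambiguous: more than one shared network")
```

The interesting policy debate in this thread is precisely what each branch of
that heuristic should do on a given deployment.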


>
>
>> OH - well, one thing - that's that once there are two networks in an
>> account you have to specify which one. This is really painful in
>> nova clent. Say, for instance, you have a public network called
>> "public" and a private network called "private" ...
>>
>> You can't just say "nova boot --network=public" - nope, you need to
>> say "nova boot --nics net-id=$uuid_of_my_public_network"
>>
>> So I'd suggest 2 more things:
>>
>> a) an 

Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Fox, Kevin M
+1

From: Monty Taylor [mord...@inaugust.com]
Sent: Tuesday, September 15, 2015 4:33 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' 
network model

On 09/15/2015 08:28 PM, Matt Riedemann wrote:
>
>
> On 9/15/2015 10:27 AM, Mike Spreitzer wrote:
>> Monty Taylor  wrote on 09/15/2015 11:04:07 AM:
>>
>>  > a) an update to python-novaclient to allow a named network to be
>> passed
>>  > to satisfy the "you have more than one network" - the nics argument is
>>  > still useful for more complex things
>>
>> I am not using the latest, but rather Juno.  I find that in many places
>> the Neutron CLI insists on a UUID when a name could be used.  Three
>> cheers for any campaign to fix that.
>
> It's my understanding that network names in neutron, like security
> groups, are not unique; that's why you have to specify a UUID.

Yah.

EXCEPT - we already error when the user does not specify the network
specifically enough, so there is nothing stopping us from trying the
obvious thing and then moving on. Such as:

nova boot

ERROR: There is more than one network, please specify one

nova boot --network public

\o/

OR

nova boot

ERROR: There is more than one network, please specify one

nova boot --network public

ERROR: There is more than one network named 'public', please specify one

nova boot --network ecc967b6-5c01-11e5-b218-4c348816caa1

\o/

These are successive attempts at an operation that should stay simple, and
as the situation becomes increasingly complex, so does the specificity
required of the user's response.
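That escalation amounts to a simple resolution routine on the client side:
try the name first, demand a UUID only on a real collision. A hypothetical
sketch (not novaclient's actual code):

```python
# Illustrative name-or-UUID resolution mirroring the UX above: accept a
# unique name, fall back to requiring a UUID only when names collide.

def resolve_network(networks, identifier):
    """Resolve a user-supplied network identifier.
    'networks' is a list of (uuid, name) pairs; returns the matching pair."""
    by_uuid = {u: (u, n) for u, n in networks}
    if identifier in by_uuid:
        return by_uuid[identifier]          # exact UUID always wins
    matches = [(u, n) for u, n in networks if n == identifier]
    if not matches:
        raise LookupError("no network named %r" % identifier)
    if len(matches) > 1:
        raise LookupError("more than one network named %r, "
                          "please specify a UUID" % identifier)
    return matches[0]
```

The point is that ambiguity should be surfaced only when it actually exists,
rather than forcing every user to look up a UUID up front.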



Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Armando M.
On 15 September 2015 at 14:04, Mike Spreitzer  wrote:

> "Armando M."  wrote on 09/15/2015 03:50:24 PM:
>
> > On 15 September 2015 at 10:02, Doug Hellmann 
> wrote:
> > Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> ...
> > As with the Glance image upload API discussion, this is an example
> > of an extremely common use case that is either complex for the end
> > user or for which they have to know something about the deployment
> > in order to do it at all. The usability of an OpenStack cloud running
> > neutron would be enhanced greatly if there was a simple, clear, way
> > for the user to get a new VM with a public IP on any cloud without
> > multiple steps on their part.
>
> <>
>
> ...
> >
> > So this boils down to: in light of the possible ways of providing VM
> > connectivity, how can we make a choice on the user's behalf? Can we
> > assume that he/she always want a publicly facing VM connected to
> > Internet? The answer is 'no'.
>
> While it may be true that in some deployments there is no good way for the
> code to choose, I think that is not the end of the story here.  The
> motivation to do this is that in *some* deployments there *is* a good way
> for the code to figure out what to do.


Agreed, I wasn't dismissing this entirely. I was simply saying that if we
don't put constraints in place it's difficult to come up with a good
'default' answer.


>
>
> Regards,
> Mike
>


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Armando M.
On 15 September 2015 at 15:11, Mathieu Gagné  wrote:

> On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
> > We run several clouds where there are multiple external networks. the
> "just run it in on THE public network" doesn't work. :/
> >
> > I also strongly recommend to users to put vms on a private network and
> use floating ip's/load balancers. For many reasons. Such as, if you don't,
> the ip that gets assigned to the vm helps it become a pet. you can't
> replace the vm and get the same IP. Floating IP's and load balancers can
> help prevent pets. It also prevents security issues with DNS and IP's.
> Also, for every floating ip/lb I have, I usually have 3x or more the number
> of instances that are on the private network. Sure its easy to put
> everything on the public network, but it provides much better security if
> you only put what you must on the public network. Consider the internet.
> would you want to expose every device in your house directly on the
> internet? No. you put them in a private network and poke holes just for the
> stuff that does. we should be encouraging good security practices. If we
> encourage bad ones, then it will bite us later when OpenStack gets a
> reputation for being associated with compromises.
> >
>
> Sorry but I feel this kind of reply explains why people are still using
> nova-network over Neutron. People want simplicity and they are denied it
> at every corner because (I feel) Neutron thinks it knows better.
>

I am sorry, but how can you associate a person's opinion with a project,
which is a collective effort? Surely everyone is entitled to his/her opinion,
but I honestly don't believe these are fair statements to make.


> The original statement by Monty Taylor is clear to me:
>
> I wish to boot an instance that is on a public network and reachable
> without madness.
>
> As of today, you can't unless you implement a deployer/provider specific
> solution (to scale said network). Just take a look at what actual public
> cloud providers are doing:
>
> - Rackspace has a "magic" public network
> - GoDaddy has custom code in their nova-scheduler (AFAIK)
> - iWeb (which I work for) has custom code in front of nova-api.
>
> We are all writing our own custom code to implement what (we feel)
> Neutron should be providing right off the bat.
>

What is it that you think Neutron should be providing right off the bat? I
personally have never seen you publicly report usability issues that
developers could go and look into. Let's escalate these so that the Neutron
team can be aware.


>
> By reading the openstack-dev [1], openstack-operators [2] lists, Neutron
> specs [3] and the Large Deployment Team meeting notes [4], you will see
> that what is suggested here (a scalable public shared network) is an
> objective we wish but are struggling hard to achieve.
>

There are many ways to skin this cat IMO, and scalable public shared
network can really have multiple meanings, I appreciate the pointers
nonetheless.


>
> People keep asking for simplicity and Neutron looks to not be able to
> offer it due to philosophical conflicts between Neutron developers and
> actual public users/operators. We can't force our users to adhere to ONE
> networking philosophy: use NAT, floating IPs, firewall, routers, etc.
> They just don't buy it. Period. (see monty's list of public providers
> attaching VMs to public network)
>

Public providers' networking needs are not the only needs that Neutron tries
to address. There's a balance to be struck, and I appreciate that the
balance may need to be adjusted, but being so dismissive is being myopic
about the entire industry landscape.


>
> If we can accept and agree that not everyone wishes to adhere to the
> "full stack of networking good practices" (TBH, I don't know how to call
> this thing), it will be a good start. Otherwise I feel we won't be able
> to achieve anything.
>
> What Monty is explaining and suggesting is something we (my team) have
> been struggling with for *years* and just didn't have bandwidth (we are
> operators, not developers) or public charisma to change.
>
> I'm glad Monty brought up this subject so we can officially address it.
>
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html
> [2]
>
> http://lists.openstack.org/pipermail/openstack-operators/2015-August/007857.html
> [3]
>
> http://specs.openstack.org/openstack/neutron-specs/specs/liberty/get-me-a-network.html
> [4]
>
> http://lists.openstack.org/pipermail/openstack-operators/2015-June/007427.html
>
> --
> Mathieu
>

Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Doug Wiegley
Hi all,

If I can attempt to summarize this thread:

“We want simple networks with VMs”

Ok, in progress, start here:

https://blueprints.launchpad.net/neutron/+spec/get-me-a-network 


“It should work with multiple networks”

Same spec, click above.

“It should work with just ‘nova boot’”

Yup, you guessed it, starting point is the same spec, click above.

“It should still work in the face of N-tiered ambiguity.”

Umm, how, exactly? I think if you have a super complicated setup, your boot 
might be a bit harder, too. Please look at the cases that are covered before 
getting upset, and then provide feedback on the spec.

“Networks should be accessible by name.”

Yup, if they don’t, it’s a bug. The client a few cycles ago was particularly 
bad at this. If you find more cases, please file a bug.

“Neutron doesn’t get it and never will.”

I’m not sure how all ‘yes’ above keeps translating to this old saw, but is 
there any tiny chance we can stop living in the past and instead focus on the 
use cases that we want to solve?

Thanks,
doug



> On Sep 15, 2015, at 5:44 PM, Armando M.  wrote:
> 
> 
> 
> On 15 September 2015 at 15:11, Mathieu Gagné wrote:
> On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
> > We run several clouds where there are multiple external networks. the "just 
> > run it in on THE public network" doesn't work. :/
> >
> > I also strongly recommend to users to put vms on a private network and use 
> > floating ip's/load balancers. For many reasons. Such as, if you don't, the 
> > ip that gets assigned to the vm helps it become a pet. you can't replace 
> > the vm and get the same IP. Floating IP's and load balancers can help 
> > prevent pets. It also prevents security issues with DNS and IP's. Also, for 
> > every floating ip/lb I have, I usually have 3x or more the number of 
> > instances that are on the private network. Sure its easy to put everything 
> > on the public network, but it provides much better security if you only put 
> > what you must on the public network. Consider the internet. would you want 
> > to expose every device in your house directly on the internet? No. you put 
> > them in a private network and poke holes just for the stuff that does. we 
> > should be encouraging good security practices. If we encourage bad ones, 
> > then it will bite us later when OpenStack gets a reputation for being 
> > associated with compromises.
> >
> 
> Sorry but I feel this kind of reply explains why people are still using
> nova-network over Neutron. People want simplicity and they are denied it
> at every corner because (I feel) Neutron thinks it knows better.
> 
> I am sorry, but how can you associate a person's opinion to a project, which 
> is a collectivity? Surely everyone is entitled to his/her opinion, but I 
> don't honestly believe these are fair statements to make.
> 
> 
> The original statement by Monty Taylor is clear to me:
> 
> I wish to boot an instance that is on a public network and reachable
> without madness.
> 
> As of today, you can't unless you implement a deployer/provider specific
> solution (to scale said network). Just take a look at what actual public
> cloud providers are doing:
> 
> - Rackspace has a "magic" public network
> - GoDaddy has custom code in their nova-scheduler (AFAIK)
> - iWeb (which I work for) has custom code in front of nova-api.
> 
> We are all writing our own custom code to implement what (we feel)
> Neutron should be providing right off the bat.
> 
> What is that you think Neutron should be providing right off the bat? I 
> personally have never seen you publicly report usability issues that 
> developers could go and look into. Let's escalate these so that the Neutron 
> team can be aware.
>  
> 
> By reading the openstack-dev [1], openstack-operators [2] lists, Neutron
> specs [3] and the Large Deployment Team meeting notes [4], you will see
> that what is suggested here (a scalable public shared network) is an
> objective we wish but are struggling hard to achieve.
> 
> There are many ways to skin this cat IMO, and scalable public shared network 
> can really have multiple meanings, I appreciate the pointers nonetheless.
>  
> 
> People keep asking for simplicity and Neutron looks to not be able to
> offer it due to philosophical conflicts between Neutron developers and
> actual public users/operators. We can't force our users to adhere to ONE
> networking philosophy: use NAT, floating IPs, firewall, routers, etc.
> They just don't buy it. Period. (see monty's list of public providers
> attaching VMs to public network)
> 
> Public providers networking needs are not the only needs that Neutron tries 
> to gather. There's a balance to be struck, and I appreciate that the balance 
> may need to be adjusted, but being so dismissive is being myopic of the 
> entire industry landscape.
>  
> 
> If we can 

Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Mathieu Gagné
On 2015-09-15 7:44 PM, Armando M. wrote:
> 
> 
> On 15 September 2015 at 15:11, Mathieu Gagné wrote:
> 
> On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
> > We run several clouds where there are multiple external networks. the 
> "just run it in on THE public network" doesn't work. :/
> >
> > I also strongly recommend to users to put vms on a private network and 
> use floating ip's/load balancers. For many reasons. Such as, if you don't, 
> the ip that gets assigned to the vm helps it become a pet. you can't replace 
> the vm and get the same IP. Floating IP's and load balancers can help prevent 
> pets. It also prevents security issues with DNS and IP's. Also, for every 
> floating ip/lb I have, I usually have 3x or more the number of instances that 
> are on the private network. Sure its easy to put everything on the public 
> network, but it provides much better security if you only put what you must 
> on the public network. Consider the internet. would you want to expose every 
> device in your house directly on the internet? No. you put them in a private 
> network and poke holes just for the stuff that does. we should be encouraging 
> good security practices. If we encourage bad ones, then it will bite us later 
> when OpenStack gets a reputation for being associated with compromises.
> >
> 
> Sorry but I feel this kind of reply explains why people are still using
> nova-network over Neutron. People want simplicity and they are denied it
> at every corner because (I feel) Neutron thinks it knows better.
> 
> 
> I am sorry, but how can you associate a person's opinion to a project,
> which is a collectivity? Surely everyone is entitled to his/her opinion,
> but I don't honestly believe these are fair statements to make.

You are right, this is not fair. I apologize for that.


> The original statement by Monty Taylor is clear to me:
> 
> I wish to boot an instance that is on a public network and reachable
> without madness.
> 
> As of today, you can't unless you implement a deployer/provider specific
> solution (to scale said network). Just take a look at what actual public
> cloud providers are doing:
> 
> - Rackspace has a "magic" public network
> - GoDaddy has custom code in their nova-scheduler (AFAIK)
> - iWeb (which I work for) has custom code in front of nova-api.
> 
> We are all writing our own custom code to implement what (we feel)
> Neutron should be providing right off the bat.
> 
> 
> What is that you think Neutron should be providing right off the bat? I
> personally have never seen you publicly report usability issues that
> developers could go and look into. Let's escalate these so that the
> Neutron team can be aware.

Please understand that I'm an operator and don't have the luxury to
contribute as much as I did before. I however participate to OpenStack
Ops meetup and this is the kind of things we discuss. You can read the
use cases below to understand what I'm referring to. I don't feel the
need to add yet another version of it since there are already multiple
ones identifying my needs.

People (such as Monty) are already voicing my concerns and I didn't feel
the need to voice mine too.


> By reading the openstack-dev [1], openstack-operators [2] lists, Neutron
> specs [3] and the Large Deployment Team meeting notes [4], you will see
> that what is suggested here (a scalable public shared network) is an
> objective we wish but are struggling hard to achieve.
> 
> 
> There are many ways to skin this cat IMO, and scalable public shared
> network can really have multiple meanings, I appreciate the pointers
> nonetheless.
>  
> 
> 
> People keep asking for simplicity and Neutron looks to not be able to
> offer it due to philosophical conflicts between Neutron developers and
> actual public users/operators. We can't force our users to adhere to ONE
> networking philosophy: use NAT, floating IPs, firewall, routers, etc.
> They just don't buy it. Period. (see monty's list of public providers
> attaching VMs to public network)
> 
> 
> Public providers networking needs are not the only needs that Neutron
> tries to gather. There's a balance to be struck, and I appreciate that
> the balance may need to be adjusted, but being so dismissive is being
> myopic of the entire industry landscape.

We (my employer) also maintain private clouds and I'm fully aware of the
difference between those needs. Therefore I don't think it's fair to say
that my opinion is nearsighted. Nonetheless, I would like this balance
to be adjusted, and that's what I'm asking for and glad to see.


> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html
> [2]
> 
> http://lists.openstack.org/pipermail/openstack-operators/2015-August/007857.html
> [3]
> 
> 

Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Doug Wiegley
Sorry, didn’t mean that to come down as a triple pile-on, with me doing it 
twice. My bad, I’m sorry.

Thanks,
doug


> On Sep 15, 2015, at 6:11 PM, Mathieu Gagné  wrote:
> 
> On 2015-09-15 8:06 PM, Doug Wiegley wrote:
>> 
>> “Neutron doesn’t get it and never will.”
>> 
>> I’m not sure how all ‘yes’ above keeps translating to this old saw, but
>> is there any tiny chance we can stop living in the past and instead
>> focus on the use cases that we want to solve?
>> 
> 
> I apologized for my unfair statement where I very wrongly associated a
> person's opinion to a whole project. I would like to move on. Thanks
> 
> -- 
> Mathieu
> 


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Monty Taylor

On 09/16/2015 02:06 AM, Doug Wiegley wrote:

Hi all,

If I can attempt to summarize this thread:

“We want simple networks with VMs”

Ok, in progress, start here:

https://blueprints.launchpad.net/neutron/+spec/get-me-a-network


\o/


“It should work with multiple networks”

Same spec, click above.


\o/


“It should work with just ‘nova boot’”

Yup, you guessed it, starting point is the same spec, click above.


\o/


“It should still work in the face of N-tiered ambiguity.”

Umm, how, exactly? I think if you have a super complicated setup, your
boot might be a bit harder, too. Please look at the cases that are
covered before getting upset, and then provide feedback on the spec.


Yeah. For the record, I have never thought the simple case should handle 
anything but the simple case.



“Networks should be accessible by name.”

Yup, if they don’t, it’s a bug. The client a few cycles ago was
particularly bad at this. If you find more cases, please file a bug.

“Neutron doesn’t get it and never will.”

I’m not sure how all ‘yes’ above keeps translating to this old saw, but
is there any tiny chance we can stop living in the past and instead
focus on the use cases that we want to solve?


Also for the record, I find neutron very pleasant to work with. The 
clouds that have chosen to mark their "public" network as "shared" are 
the most pleasant, but even the ones that have not chosen to do that are 
still pretty darned good.


The clouds without neutron at all are the worst to work with.




On Sep 15, 2015, at 5:44 PM, Armando M. wrote:



On 15 September 2015 at 15:11, Mathieu Gagné wrote:

On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
> We run several clouds where there are multiple external networks. the "just 
run it in on THE public network" doesn't work. :/
>
> I also strongly recommend to users to put vms on a private network and 
use floating ip's/load balancers. For many reasons. Such as, if you don't, the ip 
that gets assigned to the vm helps it become a pet. you can't replace the vm and 
get the same IP. Floating IP's and load balancers can help prevent pets. It also 
prevents security issues with DNS and IP's. Also, for every floating ip/lb I have, 
I usually have 3x or more the number of instances that are on the private network. 
Sure its easy to put everything on the public network, but it provides much better 
security if you only put what you must on the public network. Consider the 
internet. would you want to expose every device in your house directly on the 
internet? No. you put them in a private network and poke holes just for the stuff 
that does. we should be encouraging good security practices. If we encourage bad 
ones, then it will bite us later when OpenStack gets a reputation for being 
associated with compromises.
>

Sorry but I feel this kind of reply explains why people are still
using
nova-network over Neutron. People want simplicity and they are
denied it
at every corner because (I feel) Neutron thinks it knows better.


I am sorry, but how can you associate a person's opinion to a project,
which is a collectivity? Surely everyone is entitled to his/her
opinion, but I don't honestly believe these are fair statements to make.


The original statement by Monty Taylor is clear to me:

I wish to boot an instance that is on a public network and reachable
without madness.

As of today, you can't unless you implement a deployer/provider
specific
solution (to scale said network). Just take a look at what actual
public
cloud providers are doing:

- Rackspace has a "magic" public network
- GoDaddy has custom code in their nova-scheduler (AFAIK)
- iWeb (which I work for) has custom code in front of nova-api.

We are all writing our own custom code to implement what (we feel)
Neutron should be providing right off the bat.


What is that you think Neutron should be providing right off the bat?
I personally have never seen you publicly report usability issues that
developers could go and look into. Let's escalate these so that the
Neutron team can be aware.


By reading the openstack-dev [1], openstack-operators [2] lists, Neutron
specs [3] and the Large Deployment Team meeting notes [4], you will see
that what is suggested here (a scalable public shared network) is an
objective we wish but are struggling hard to achieve.


There are many ways to skin this cat IMO, and scalable public shared
network can really have multiple meanings, I appreciate the pointers
nonetheless.


People keep asking for simplicity and Neutron looks to not be able to
offer it due to philosophical conflicts between Neutron developers and
actual public users/operators. We can't force our users to adhere to ONE
networking philosophy: use NAT, floating IPs, firewall, routers, etc.
They just don't buy it. Period. (see Monty's list of public providers
attaching VMs to public network)

Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Matt Kassawara
Monty,

The architectural changes to the installation guide for Liberty [1] support
booting VMs on both the public/external/provider and
private/project/self-service networks.

Also, we should consider including similar "hybrid" scenarios in the
networking guide [2] so deployers don't have to choose between these
architectures.

[1] https://review.openstack.org/#/c/221560/
[2] http://docs.openstack.org/networking-guide/deploy.html

Matt

On Tue, Sep 15, 2015 at 9:04 AM, Monty Taylor  wrote:

> Hey all!
>
> If any of you have ever gotten drunk with me, you'll know I hate floating
> IPs more than I hate being stabbed in the face with a very angry fish.
>
> However, that doesn't really matter. What should matter is "what is the
> most sane thing we can do for our users"
>
> As you might have seen in the glance thread, I have a bunch of OpenStack
> public cloud accounts. Since I wrote that email this morning, I've added
> more - so we're up to 13.
>
> auro
> citycloud
> datacentred
> dreamhost
> elastx
> entercloudsuite
> hp
> ovh
> rackspace
> runabove
> ultimum
> unitedstack
> vexxhost
>
> Of those public clouds, 5 of them require you to use a floating IP to get
> an outbound address, the others directly attach you to the public network.
> Most of those 8 allow you to create a private network, to boot vms on the
> private network, and ALSO to create a router with a gateway and put
> floating IPs on your private ip'd machines if you choose.
>
> Which brings me to the suggestion I'd like to make.
>
> Instead of having our default in devstack and our default when we talk
> about things be "you boot a VM and you put a floating IP on it" - which
> solves one of the two usage models - how about:
>
> - Cloud has a shared: True, external:routable: True neutron network. I
> don't care what it's called  ext-net, public, whatever. the "shared" part
> is the key, that's the part that lets someone boot a vm on it directly.
>
> - Each person can then make a private network, router, gateway, etc. and
> get floating-ips from the same public network if they prefer that model.
>
> Are there any good reasons to not push to get all of the public networks
> marked as "shared"?
>
> OH - well, one thing - that's that once there are two networks in an
> account you have to specify which one. This is really painful in nova
> clent. Say, for instance, you have a public network called "public" and a
> private network called "private" ...
>
> You can't just say "nova boot --network=public" - nope, you need to say
> "nova boot --nics net-id=$uuid_of_my_public_network"
>
> So I'd suggest 2 more things;
>
> a) an update to python-novaclient to allow a named network to be passed to
> satisfy the "you have more than one network" - the nics argument is still
> useful for more complex things
>
> b) ability to say "vms in my cloud should default to being booted on the
> public network" or "vms in my cloud should default to being booted on a
> network owned by the user"
>
> Thoughts?
>
> Monty
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
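The name-versus-UUID pain Monty describes above is, at its core, a lookup step the client could do itself. A minimal sketch of that resolution in plain Python — the data shapes and names here are illustrative, not the actual novaclient code:

```python
def resolve_network(networks, name_or_id):
    """Resolve a network name (or UUID) to a UUID, the way a client
    could before calling 'nova boot --nics net-id=<uuid>'.

    'networks' is a list of dicts shaped like a network listing:
    [{"id": ..., "name": ...}, ...] (illustrative, not the real API).
    """
    # An exact ID match wins immediately.
    for net in networks:
        if net["id"] == name_or_id:
            return net["id"]
    # Neutron network names are not unique, so an ambiguous name
    # must be an error rather than a silent guess.
    matches = [net["id"] for net in networks if net["name"] == name_or_id]
    if len(matches) == 1:
        return matches[0]
    if not matches:
        raise LookupError("no network named %r" % name_or_id)
    raise LookupError("name %r is ambiguous: %s" % (name_or_id, matches))


nets = [
    {"id": "uuid-public-1", "name": "public"},
    {"id": "uuid-private-1", "name": "private"},
]
print(resolve_network(nets, "public"))  # -> uuid-public-1
```

Because names are not unique in Neutron, any name-based convenience like this has to fail loudly on duplicates rather than pick one.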


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Mathieu Gagné
On 2015-09-15 6:49 PM, Doug Wiegley wrote:
> 
> 
>> On Sep 15, 2015, at 4:11 PM, Mathieu Gagné  wrote:
>>
>>> On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
>>> We run several clouds where there are multiple external networks. the "just 
>>> run it in on THE public network" doesn't work. :/
>>>
>>> I also strongly recommend to users to put vms on a private network and use 
>>> floating ip's/load balancers. For many reasons. Such as, if you don't, the 
>>> ip that gets assigned to the vm helps it become a pet. you can't replace 
>>> the vm and get the same IP. Floating IP's and load balancers can help 
>>> prevent pets. It also prevents security issues with DNS and IP's. Also, for 
>>> every floating ip/lb I have, I usually have 3x or more the number of 
>>> instances that are on the private network. Sure its easy to put everything 
>>> on the public network, but it provides much better security if you only put 
>>> what you must on the public network. Consider the internet. would you want 
>>> to expose every device in your house directly on the internet? No. you put 
>>> them in a private network and poke holes just for the stuff that does. we 
>>> should be encouraging good security practices. If we encourage bad ones, 
>>> then it will bite us later when OpenStack gets a reputation for being 
>>> associated with compromises.
>>
>> Sorry but I feel this kind of reply explains why people are still using
>> nova-network over Neutron. People want simplicity and they are denied it
>> at every corner because (I feel) Neutron thinks it knows better.
> 
> Please stop painting such generalizations.  Go to the third or fourth email 
> in this thread and you will find a spec, worked on by neutron and nova, that 
> addresses exactly this use case.
> 
> It is a valid use case, and neutron does care about it. It has wrinkles. That 
> has not stopped work on it for the common cases.
> 

I've read the neutron spec you are referring (which I mentioned in my
email) and I'm glad the subject is discussed. This was not my intention
to diminish the work done by the Neutron team to address those issues. I
wrongly associated a person's opinion with a whole project; this is not
fair, and I apologize for that.

Jeremy Stanley replied to Kevin with much better words than mine.

-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Mathieu Gagné
On 2015-09-15 8:06 PM, Doug Wiegley wrote:
> 
> “Neutron doesn’t get it and never will.”
> 
> I’m not sure how all ‘yes’ above keeps translating to this old saw, but
> is there any tiny chance we can stop living in the past and instead
> focus on the use cases that we want to solve?
> 

I apologized for my unfair statement where I very wrongly associated a
person's opinion to a whole project. I would like to move on. Thanks

-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Mathieu Gagné
On 2015-09-15 2:00 PM, Fox, Kevin M wrote:
> We run several clouds where there are multiple external networks. the "just 
> run it in on THE public network" doesn't work. :/
> 
> I also strongly recommend to users to put vms on a private network and use 
> floating ip's/load balancers. For many reasons. Such as, if you don't, the ip 
> that gets assigned to the vm helps it become a pet. you can't replace the vm 
> and get the same IP. Floating IP's and load balancers can help prevent pets. 
> It also prevents security issues with DNS and IP's. Also, for every floating 
> ip/lb I have, I usually have 3x or more the number of instances that are on 
> the private network. Sure its easy to put everything on the public network, 
> but it provides much better security if you only put what you must on the 
> public network. Consider the internet. would you want to expose every device 
> in your house directly on the internet? No. you put them in a private network 
> and poke holes just for the stuff that does. we should be encouraging good 
> security practices. If we encourage bad ones, then it will bite us later when 
> OpenStack gets a reputation for being associated with compromises.
> 

Sorry but I feel this kind of reply explains why people are still using
nova-network over Neutron. People want simplicity and they are denied it
at every corner because (I feel) Neutron thinks it knows better.

The original statement by Monty Taylor is clear to me:

I wish to boot an instance that is on a public network and reachable
without madness.

As of today, you can't unless you implement a deployer/provider specific
solution (to scale said network). Just take a look at what actual public
cloud providers are doing:

- Rackspace has a "magic" public network
- GoDaddy has custom code in their nova-scheduler (AFAIK)
- iWeb (which I work for) has custom code in front of nova-api.

We are all writing our own custom code to implement what (we feel)
Neutron should be providing right off the bat.

By reading the openstack-dev [1], openstack-operators [2] lists, Neutron
specs [3] and the Large Deployment Team meeting notes [4], you will see
that what is suggested here (a scalable public shared network) is an
objective we wish but are struggling hard to achieve.

People keep asking for simplicity and Neutron looks to not be able to
offer it due to philosophical conflicts between Neutron developers and
actual public users/operators. We can't force our users to adhere to ONE
networking philosophy: use NAT, floating IPs, firewall, routers, etc.
They just don't buy it. Period. (see monty's list of public providers
attaching VMs to public network)

If we can accept and agree that not everyone wishes to adhere to the
"full stack of networking good practices" (TBH, I don't know how to call
this thing), it will be a good start. Otherwise I feel we won't be able
to achieve anything.

What Monty is explaining and suggesting is something we (my team) have
been struggling with for *years* and just didn't have bandwidth (we are
operators, not developers) or public charisma to change.

I'm glad Monty brought up this subject so we can officially address it.


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070028.html
[2]
http://lists.openstack.org/pipermail/openstack-operators/2015-August/007857.html
[3]
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/get-me-a-network.html
[4]
http://lists.openstack.org/pipermail/openstack-operators/2015-June/007427.html

-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] How to enable devstack plugins

2015-09-15 Thread Chris Dent

On Tue, 15 Sep 2015, Zhai, Edwin wrote:

I saw some patches from Chris Dent to enable functions in devstack/*. But it
conflicts with devstack upstream so that it starts each ceilometer service
twice. Is there any official way to set up ceilometer as a devstack plugin?


What I've been doing is checking out the devstack branch associated
with this review that removes ceilometer from devstack [1] (with a
`git review -d 196383`) and then stacking from there. It's cumbersome
but gets the job done.

This pain point should go away very soon. We've just been waiting on
the necessary infra changes to get various jobs that use ceilometer
prepared to use the ceilometer devstack plugin[2]. I think that's
ready to go now so we ought to see that merge soon.

[1] https://review.openstack.org/#/c/196383/
[2] https://review.openstack.org/#/c/196446/ and dependent reviews.
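Once the plugin support merges, enabling ceilometer should reduce to a local.conf fragment — a sketch assuming the standard devstack `enable_plugin` convention (repo URL illustrative):

```shell
[[local|localrc]]
# enable_plugin <name> <git-url> [branch]
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
```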

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] PTL Non-Candidacy

2015-09-15 Thread Silvan Kaiser
Thanks Mike!
That was really demanding work!

2015-09-15 9:27 GMT+02:00 陈莹 :

> Thanks Mike. Thank you for doing a great job.
>
>
> > From: sxmatch1...@gmail.com
> > Date: Tue, 15 Sep 2015 10:05:22 +0800
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [cinder] PTL Non-Candidacy
>
> >
> > Thanks Mike ! Your help is very important to me to get started in
> > cinder and we do a lot of proud work with your leadership.
> >
> > 2015-09-15 6:36 GMT+08:00 John Griffith :
> > >
> > >
> > > On Mon, Sep 14, 2015 at 11:02 AM, Sean McGinnis wrote:
> > >>
> > >> On Mon, Sep 14, 2015 at 09:15:44AM -0700, Mike Perez wrote:
> > >> > Hello all,
> > >> >
> > >> > I will not be running for Cinder PTL this next cycle. Each cycle I
> ran
> > >> > was for a reason [1][2], and the Cinder team should feel proud of
> our
> > >> > accomplishments:
> > >>
> > >> Thanks for a couple of awesome cycles Mike!
> > >>
> > >>
> __
> > >> OpenStack Development Mailing List (not for usage questions)
> > >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > > You did a fantastic job Mike, thank you very much for the hard work and
> > > dedication.
> > >
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> >
> >
> > --
> > Best Wishes For You!
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Dr. Silvan Kaiser
Quobyte GmbH
Hardenbergplatz 2, 10623 Berlin - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender



Re: [openstack-dev] [tempest] Is there a sandbox project how to use tempest test plugin interface?

2015-09-15 Thread Marc Koderer

Am 14.09.2015 um 10:01 schrieb Lajos Katona :

> Hi Matthew,
> 
> Finally I made it working, so now I have a dummy plugin.
> 
> Few questions, remarks:
> - For me it was little hard to merge my weak knowledge from python packaging 
> with the documentation for tempest plugins, do you mind if I push an example 
> to github, and I add the link to that to the documentation.

Can we add it directly into the documentation? I am fine with more code
snippets there, and external links can be broken over time.

> - From this point the generation of the idempotent id is not clear for me. I 
> was able to use the check_uuid.py, and as I used a virtenv, the script edited 
> the .tox/venv/local/lib/python2.7/site-packages/dummyplugin/ file.
> Would be good maybe to add an extra path option to the check_uuid.py to make 
> it possible to edit the real source files in similar cases not the ones in 
> the venv.

Idempotent id’s aren’t covered in the first drop of tempest plugin interface.
I am wondering if there is a need from refstack..?

Regards
Marc

> Regards
> Lajos
> 
> On 09/11/2015 08:50 AM, Lajos Katona wrote:
>> Hi Matthew,
>> 
>> Thanks for the help, this helped a lot a start the work.
>> 
>> regards
>> Lajos
>> 
>> On 09/10/2015 04:13 PM, Matthew Treinish wrote:
>>> On Thu, Sep 10, 2015 at 02:56:31PM +0200, Lajos Katona wrote:
>>> 
 Hi,
 
 I just noticed that from tag 6, the test plugin interface considered ready,
 and I am eager to start to use it.
 I have some questions:
 
 If I understand well in the future the plugin interface will be moved to
 tempest-lib, but now I have to import module(s) from tempest to start to 
 use
 the interface.
 Is there a plan for this, I mean when the whole interface will be moved to
 tempest-lib?
 
>>> The only thing which will eventually move to tempest-lib is the abstract 
>>> class
>>> that defines the expected methods of a plugin class [1] The other pieces 
>>> will
>>> remain in tempest. Honestly this won't likely happen until sometime during
>>> Mitaka. Also when it does move to tempest-lib we'll deprecate the tempest
>>> version and keep it around to allow for a graceful switchover.
>>> 
>>> The rationale behind this is we really don't provide any stability 
>>> guarantees
>>> on tempest internals (except for a couple of places which are documented, 
>>> like
>>> this plugin class) and we want any code from tempest that's useful to 
>>> external
>>> consumers to really live in tempest-lib.
>>> 
>>> 
 If I start to create a test plugin now (from tag 6), what should be the 
 best
 solution to do this?
 I thought to create a repo for my plugin and add that as a subrepo to my
 local tempest repo, and than I can easily import stuff from tempest, but I
 can keep my test code separated from other parts of tempest.
 Is there a better way of doing this?
 
>>> To start I'd take a look at the documentation for tempest plugins:
>>> 
>>> 
>>> http://docs.openstack.org/developer/tempest/plugin.html
>>> 
>>> 
>>> From tempest's point of view a plugin is really just an entry point that points
>>> to a class that exposes certain methods. So the Tempest plugin can live anywhere
>>> as long as it's installed as an entry point in the proper namespace. 
>>> Personally
>>> I feel like including it as a subrepo in a local tempest tree is a bit 
>>> strange,
>>> but I don't think it'll cause any issues if you do that.
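A skeleton of such a plugin class — the three method names follow the plugin interface documented at the time, but the stand-in base class below exists only so this sketch runs without tempest installed (the real one is `tempest.test_discover.plugins.TempestPlugin`), and `MyPlugin`/`myplugin` are illustrative names:

```python
import abc
import os


class TempestPlugin(abc.ABC):
    """Stand-in for tempest.test_discover.plugins.TempestPlugin,
    which defines the same three abstract methods."""

    @abc.abstractmethod
    def load_tests(self):
        """Return (test_dir, top_level_dir) for test discovery."""

    @abc.abstractmethod
    def register_opts(self, conf):
        """Register any plugin config options on 'conf'."""

    @abc.abstractmethod
    def get_opt_lists(self):
        """Return a list of (group, options) tuples."""


class MyPlugin(TempestPlugin):
    """Illustrative plugin exposing tests shipped next to this file."""

    def load_tests(self):
        base = os.path.dirname(os.path.abspath(__file__))
        return os.path.join(base, "tests"), base

    def register_opts(self, conf):
        pass  # this example has no options to register

    def get_opt_lists(self):
        return []


print(MyPlugin().get_opt_lists())  # -> []
```

The entry point registered in setup.cfg would then point at that class (names illustrative):

    [entry_points]
    tempest.test_plugins =
        my_plugin = myplugin.plugin:MyPlugin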
>>> 
>>> 
 If there would be an example plugin somewhere, that would be the most
 preferable maybe.
 
>>> There is a cookiecutter repo in progress. [2] Once that's ready it'll let 
>>> you
>>> create a blank plugin dir that'll be ready for you to populate. (similar to 
>>> the
>>> devstack plugin cookiecutter that already exists)
>>> 
>>> For current examples the only project I know of that's using a plugin 
>>> interface
>>> is manila [3] so maybe take a look at what they're doing.
>>> 
>>> -Matt Treinish
>>> 
>>> [1] 
>>> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/test_discover/plugins.py#n26
>>> 
>>> [2] 
>>> https://review.openstack.org/208389
>>> 
>>> [3] 
>>> https://review.openstack.org/#/c/201955
>>> 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for 

[openstack-dev] [fuel] [plugin] Release tagging - possible problem

2015-09-15 Thread Irina Povolotskaya
Hi John,

To put a tag, you need to:
- be a member of release group (that has Push Signed Tag rights) - and you
are https://review.openstack.org/#/admin/groups/956,members
- use gpg key using console gnupg - you might have missed this.


-- 
Best regards,

Irina
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL Non-Candidacy

2015-09-15 Thread Somanchi Trinath
Kyle –

I see a good thing and a sad thing here.

The good one being that you lighted the path for a new PTL to come up. The sad
thing is that we are missing your leadership.
Hope you still lead the team in a dotted line. ☺

-
Trinath

From: Irena Berezovsky [mailto:irenab@gmail.com]
Sent: Sunday, September 13, 2015 10:58 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [neutron] PTL Non-Candidacy

Kyle,
Thank you for the hard work you did making the neutron project and neutron
community better!
You have been open and very supportive as a neutron community lead.
Hope you will stay involved.


On Fri, Sep 11, 2015 at 11:12 PM, Kyle Mestery wrote:
I'm writing to let everyone know that I do not plan to run for Neutron PTL for 
a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan 
recently put it in his non-candidacy email [1]. But it goes further than that 
for me. As Flavio put it in his post about "Being a PTL" [2], it's a full time 
job. In the case of Neutron, it's more than a full time job, it's literally an 
always on job.

I've tried really hard over my three cycles as PTL to build a stronger web of 
trust so the project can grow, and I feel that's been accomplished. We have a 
strong bench of future PTLs and leaders ready to go, I'm excited to watch them 
lead and help them in any way I can.

As was said by Zane in a recent email [3], while Heat may have pioneered the
concept of rotating PTL duties with each cycle, I'd like to highly encourage
Neutron and other projects to do the same. Having a deep bench of leaders
supporting each other is important for the future of all projects.

See you all in Tokyo!

Kyle

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074157.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073986.html
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] PTL Non-Candidacy

2015-09-15 Thread 陈莹
Thanks Mike. Thank you for doing a great job.

> From: sxmatch1...@gmail.com
> Date: Tue, 15 Sep 2015 10:05:22 +0800
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [cinder] PTL Non-Candidacy
> 
> Thanks Mike ! Your help is very important to me to get started in
> cinder and we do a lot of proud work with your leadership.
> 
> 2015-09-15 6:36 GMT+08:00 John Griffith :
> >
> >
> > On Mon, Sep 14, 2015 at 11:02 AM, Sean McGinnis 
> > wrote:
> >>
> >> On Mon, Sep 14, 2015 at 09:15:44AM -0700, Mike Perez wrote:
> >> > Hello all,
> >> >
> >> > I will not be running for Cinder PTL this next cycle. Each cycle I ran
> >> > was for a reason [1][2], and the Cinder team should feel proud of our
> >> > accomplishments:
> >>
> >> Thanks for a couple of awesome cycles Mike!
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > You did a fantastic job Mike, thank you very much for the hard work and
> > dedication.
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> 
> -- 
> Best Wishes For You!
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [tripleo] [kolla] Possible to support multiple compute drivers?

2015-09-15 Thread Kevin Benton
>I'm no Neutron expert, but I suspect that one could use either the
LinuxBridge *or* the OVS ML2 mechanism driver for the L2 agent, along with
a single flat provider network for your baremetal nodes.

If it's a baremetal node, it wouldn't be running an agent at all, would it?

On Mon, Sep 14, 2015 at 8:12 AM, Jay Pipes  wrote:

> On 09/10/2015 12:00 PM, Jeff Peeler wrote:
>
>> On Wed, Sep 9, 2015 at 10:25 PM, Steve Gordon wrote:
>>
>> - Original Message -
>> > From: "Jeff Peeler"
>> > To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org
>> >
>> >
>> > I'd greatly prefer using availability zones/host aggregates as I'm
>> trying
>> > to keep the footprint as small as possible. It does appear that in
>> the
>> > section "configure scheduler to support host aggregates" [1], that
>> I can
>> > configure filtering using just one scheduler (right?). However,
>> perhaps
>> > more importantly, I'm now unsure with the network configuration
>> changes
>> > required for Ironic that deploying normal instances along with
>> baremetal
>> > servers is possible.
>> >
>> > [1]
>> >
>> http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html
>>
>> Hi Jeff,
>>
>> I assume your need for a second scheduler is spurred by wanting to
>> enable different filters for baremetal vs virt (rather than
>> influencing scheduling using the same filters via image properties,
>> extra specs, and boot parameters (hints)?
>>
>> I ask because if not you should be able to use the hypervisor_type
>> image property to ensure that images intended for baremetal are
>> directed there and those intended for kvm etc. are directed to those
>> hypervisors. The documentation [1] doesn't list ironic as a valid
>> value for this property but I looked into the code for this a while
>> ago and it seemed like it should work... Apologies if you had
>> already considered this.
>>
>> Thanks,
>>
>> Steve
>>
>> [1]
>>
>> http://docs.openstack.org/cli-reference/content/chapter_cli-glance-property.html
>>
>>
>> I hadn't considered that, thanks.
>>
>
> Yes, that's the recommended way to direct scheduling requests -- via the
> hypervisor_type image property.
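The effect of that image property is easy to picture as a host filter. A toy sketch of the idea behind nova's image-properties filtering — data shapes here are illustrative, not nova's internals:

```python
def hosts_for_image(hosts, image_props):
    """Keep only hosts whose hypervisor type matches the image's
    'hypervisor_type' property; with no property set, any host is fine."""
    wanted = image_props.get("hypervisor_type")
    if wanted is None:
        return list(hosts)
    return [h for h in hosts if h["hypervisor_type"] == wanted]


hosts = [
    {"name": "kvm-1", "hypervisor_type": "qemu"},
    {"name": "ironic-1", "hypervisor_type": "ironic"},
]
print(hosts_for_image(hosts, {"hypervisor_type": "ironic"}))
# -> [{'name': 'ironic-1', 'hypervisor_type': 'ironic'}]
```

An image destined for baremetal would carry the property set with something like `glance image-update --property hypervisor_type=ironic <image>` (standard glance CLI property syntax; exact valid values are what the scheduling code matches on, as discussed above).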
>
> > It's still unknown to me though if a
>
>> separate compute service is required. And if it is required, how much
>> segregation is required to make that work.
>>
>
> Yes, a separate nova-compute worker daemon is required to manage the
> baremetal Ironic nodes.
>
> Not being a networking guru, I'm also unsure if the Ironic setup
>> instructions to use a flat network is a requirement or is just a sample
>> of possible configuration.
>>
>
> AFAIK, flat DHCP networking is currently the only supported network
> configuration for Ironic.
>
> > In a brief out of band conversation I had, it
>
>> does sound like Ironic can be configured to use linuxbridge too, which I
>> didn't know was possible.
>>
>
> Well, LinuxBridge vs. OVS isn't really about whether you have a flat
> network topology or not. It's just a different way of doing the actual
> switching (virtual bridging vs. standard linux bridges).
>
> I'm no Neutron expert, but I suspect that one could use either the
> LinuxBridge *or* the OVS ML2 mechanism driver for the L2 agent, along with
> a single flat provider network for your baremetal nodes.
>
> Hopefully an Ironic + Neutron expert will confirm or deny this?
>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Policy][Group-based-policy]

2015-09-15 Thread Sumit Naiksatam
Hi Sagar,

GBP has a single REST API interface. The CLI, Horizon and Heat are
merely clients of the same REST API.

There was a similar question on this which I had responded to in a
different mailer:
http://lists.openstack.org/pipermail/openstack/2015-September/013952.html

and I believe you are cc'ed on that thread. I have provided more
information on how you can run the CLI in the verbose mode to explore
the REST request and responses. Hope that will be helpful, and we are
happy to guide you through this exercise (catch us on #openstack-gbp
for real time help).

Thanks,
~Sumit.

On Tue, Sep 15, 2015 at 3:45 AM, Sagar Pradhan  wrote:
>
>  Hello ,
>
> We were exploring group based policy for some project.We could find CLI and
> REST API documentation for GBP.
> Do we have separate REST API for GBP which can be called separately ?
> From documentation it seems that we can only use CLI , Horizon and Heat.
> Please point us to CLI or REST API documentation for GBP.
>
>
> Regards,
> Sagar
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Matt Riedemann



On 9/15/2015 10:27 AM, Mike Spreitzer wrote:

Monty Taylor  wrote on 09/15/2015 11:04:07 AM:

 > a) an update to python-novaclient to allow a named network to be passed
 > to satisfy the "you have more than one network" - the nics argument is
 > still useful for more complex things

I am not using the latest, but rather Juno.  I find that in many places
the Neutron CLI insists on a UUID when a name could be used.  Three
cheers for any campaign to fix that.


It's my understanding that network names in neutron, like security
group names, are not unique; that's why you have to specify a UUID.




And, yeah, creating VMs on a shared public network is good too.

Thanks,
mike


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann




Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-15 Thread Morgan Fainberg
On Mon, Sep 14, 2015 at 2:46 PM, Sofer Athlan-Guyot 
wrote:

> Morgan Fainberg  writes:
>
> > On Mon, Sep 14, 2015 at 1:53 PM, Rich Megginson 
> > wrote:
> >
> >
> > On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
> >
> > Hi,
> >
> > Gilles Dubreuil  writes:
> >
> > A. The 'composite namevar' approach:
> >
> > keystone_tenant {'projectX::domainY': ... }
> > B. The 'meaningless name' approach:
> >
> > keystone_tenant {'myproject': name='projectX',
> domain=>'domainY',
> > ...}
> >
> > Notes:
> > - Actually using both combined should work too with the
> > domain
> > supposedly overriding the name part of the domain.
> > - Please look at [1] this for some background between the
> > two approaches:
> >
> > The question
> > -
> > Decide between the two approaches, the one we would like
> > to retain for
> > puppet-keystone.
> >
> > Why it matters?
> > ---
> > 1. Domain names are mandatory in every user, group or
> > project. Besides
> > the backward compatibility period mentioned earlier, where
> > no domain
> > means using the default one.
> > 2. Long term impact
> > 3. Both approaches are not completely equivalent which
> > different
> > consequences on the future usage.
> > I can't see why they couldn't be equivalent, but I may be
> > missing
> > something here.
> >
> >
> > I think we could support both. I don't see it as an either/or
> > situation.
> >
> >
> > 4. Being consistent
> > 5. Therefore the community to decide
> >
> > Pros/Cons
> > --
> > A.
> > I think it's the B: meaningless approach here.
> >
> > Pros
> > - Easier names
> > That's subjective; creating unique and meaningful names doesn't
> > look easy
> > to me.
> >
> > The point is that this allows choice - maybe the user already has
> > some naming scheme, or wants to use a more "natural" meaningful
> > name - rather than being forced into a possibly "awkward" naming
> > scheme with "::"
> >
> > keystone_user { 'heat domain admin user':
> > name => 'admin',
> > domain => 'HeatDomain',
> > ...
> > }
> >
> > keystone_user_role {'heat domain admin user@::HeatDomain':
> > roles => ['admin']
> > ...
> > }
> >
> >
> > Cons
> > - Titles have no meaning!
> >
> > They have meaning to the user, not necessarily to Puppet.
> >
> > - Cases where 2 or more resources could exists
> >
> > This seems to be the hardest part - I still cannot figure out how
> > to use "compound" names with Puppet.
> >
> > - More difficult to debug
> >
> > More difficult than it is already? :P
> >
> >
> >
> > - Titles mismatch when listing the resources
> > (self.instances)
> >
> > B.
> > Pros
> > - Unique titles guaranteed
> > - No ambiguity between resource found and their title
> > Cons
> > - More complicated titles
> > My vote
> > 
> > I would love to have the approach A for easier name.
> > But I've seen the challenge of maintaining the providers
> > behind the
> > curtains and the confusion it creates with name/titles and
> > when not sure
> > about the domain we're dealing with.
> > Also I believe that supporting self.instances consistently
> > with
> > meaningful name is saner.
> > Therefore I vote B
> > +1 for B.
> >
> > My view is that this should be the advertised way, but the
> > other method
> > (meaningless) should be there if the user need it.
> >
> > So as far as I'm concerned the two idioms should co-exist.
> > This would
> > mimic what is possible with all puppet resources. For instance
> > you can:
> >
> > file { '/tmp/foo.bar': ensure => present }
> >
> > and you can
> >
> > file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
> > present }
> >
> > The two refer to the same resource.
> >
> >
> > Right.
> >
> >
> > But, If that's indeed not possible to have them both, then I
> > would keep
> > only the meaningful name.
> >
> >
> > As a side note, someone raised an issue about the delimiter
> > being
> > hardcoded to "::". 
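
For what it's worth, the parsing at the heart of the composite-namevar
approach is small. A language-neutral sketch in Python (illustrative only;
'::' as the delimiter and the default-domain fallback are the assumptions
under discussion, and the real code would live in the Puppet type/provider):

```python
def parse_title(title, default_domain='Default', delimiter='::'):
    """Split a composite resource title like 'projectX::domainY' into
    (name, domain).

    A title without the delimiter falls back to the default domain,
    which is the backward-compatible behaviour discussed in this thread.
    """
    if delimiter in title:
        # rpartition so only the *last* delimiter separates the domain,
        # leaving earlier '::' sequences as part of the name.
        name, _, domain = title.rpartition(delimiter)
        return name, domain
    return title, default_domain
```

The 'meaningless name' approach (B) sidesteps this parsing entirely by
taking name and domain as explicit attributes, which is part of why it is
easier to keep consistent with self.instances.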

Re: [openstack-dev] [puppet] monasca,murano,mistral governance

2015-09-15 Thread Serg Melikyan
Hi Emilien,

I don't think Murano team needs to be core on a Puppet module ether,
current scheme was implemented per your proposal [1] and I am happy
that we are ready to move Murano to the same scheme that is used with
other projects.

I've updated ACL for puppet-murano project:
https://review.openstack.org/223694


References:
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-June/067260.html

On Tue, Sep 15, 2015 at 6:32 AM, Emilien Macchi  wrote:
>
>
> On 09/15/2015 07:39 AM, Ivan Berezovskiy wrote:
>> Emilien,
>>
>> puppet-murano module have a bunch of patches from Alexey Deryugin on
>> review [0], which implements most of all Murano deployment stuff.
>> Murano project was added to OpenStack namespace not so long ago, that's
>> why I suggest to have murano-core rights on puppet-murano as they
>> are till all these patches will be merged.
>> Anyway, murano-core team doesn't merge any patches without OpenStack
>> Puppet team approvals.
>
> [repeating what I said on IRC so it's official and public]
>
> I don't think Murano team needs to be core on a Puppet module.
> All OpenStack modules are managed by one group; this is how we have worked
> until now and I don't think we want to change that.
> Project teams (Keystone, Nova, Neutron, etc) already use -1/+1 to review
> Puppet code when they want to share feedback and they are very valuable,
> we actually need it.
> I don't see why we would do an exception for Murano. I would like Murano
> team to continue to give their valuable feedback by -1/+1 patches but
> it's the Puppet OpenStack team duty to decide if they merge the code or not.
>
> This collaboration is important and we need your experience to create
> new modules, but please understand how Puppet OpenStack governance works
> now.
>
> Thanks,
>
>
>> [0]
>> - 
>> https://review.openstack.org/#/q/status:open+project:openstack/puppet-murano+owner:%22Alexey+Deryugin+%253Caderyugin%2540mirantis.com%253E%22,n,z
>>
>> 2015-09-15 1:01 GMT+03:00 Matt Fischer > >:
>>
>> Emilien,
>>
>> I've discussed this with some of the Monasca puppet guys here who
>> are doing most of the work. I think it probably makes sense to move
>> to that model now, especially since the pace of development has
>> slowed substantially. One blocker to having it in the "big tent" was
>> the lack of test coverage, so as long as we know that's a work in
>> progress...  I'd also like to get Brad Kiein's thoughts on this, but
>> he's out of town this week. I'll ask him to reply when he is back.
>>
>>
>> On Mon, Sep 14, 2015 at 3:44 PM, Emilien Macchi > > wrote:
>>
>> Hi,
>>
>> As a reminder, Puppet modules that are part of OpenStack are
>> documented
>> here [1].
>>
>> I can see puppet-murano & puppet-mistral Gerrit permissions
>> different
>> from other modules, because Mirantis helped to bootstrap the
>> module a
>> few months ago.
>>
>> I think [2] the modules should be consistent in governance and only
>> Puppet OpenStack group should be able to merge patches for these
>> modules.
>>
>> Same question for puppet-monasca: if Monasca team wants their module
>> under the big tent, I think they'll have to change Gerrit
>> permissions to
>> only have Puppet OpenStack able to merge patches.
>>
>> [1]
>> 
>> http://governance.openstack.org/reference/projects/puppet-openstack.html
>> [2] https://review.openstack.org/223313
>>
>> Any feedback is welcome,
>> --
>> Emilien Macchi
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> --
>> Thanks, Ivan Berezovskiy
>> MOS Puppet Team Lead
>> at Mirantis 
>>
>> slack: iberezovskiy
>> skype: bouhforever
>> phone: + 7-960-343-42-46
>>
>>
>>

Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Doug Hellmann
Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> On 15 September 2015 at 08:04, Monty Taylor  wrote:
> 
> > Hey all!
> >
> > If any of you have ever gotten drunk with me, you'll know I hate floating
> > IPs more than I hate being stabbed in the face with a very angry fish.
> >
> > However, that doesn't really matter. What should matter is "what is the
> > most sane thing we can do for our users"
> >
> > As you might have seen in the glance thread, I have a bunch of OpenStack
> > public cloud accounts. Since I wrote that email this morning, I've added
> > more - so we're up to 13.
> >
> > auro
> > citycloud
> > datacentred
> > dreamhost
> > elastx
> > entercloudsuite
> > hp
> > ovh
> > rackspace
> > runabove
> > ultimum
> > unitedstack
> > vexxhost
> >
> > Of those public clouds, 5 of them require you to use a floating IP to get
> > an outbound address, the others directly attach you to the public network.
> > Most of those 8 allow you to create a private network, to boot vms on the
> > private network, and ALSO to create a router with a gateway and put
> > floating IPs on your private ip'd machines if you choose.
> >
> > Which brings me to the suggestion I'd like to make.
> >
> > Instead of having our default in devstack and our default when we talk
> > about things be "you boot a VM and you put a floating IP on it" - which
> > solves one of the two usage models - how about:
> >
> > - Cloud has a shared: True, external:routable: True neutron network. I
> > don't care what it's called  ext-net, public, whatever. the "shared" part
> > is the key, that's the part that lets someone boot a vm on it directly.
> >
> > - Each person can then make a private network, router, gateway, etc. and
> > get floating-ips from the same public network if they prefer that model.
> >
> > Are there any good reasons to not push to get all of the public networks
> > marked as "shared"?
> >
> 
> The reason is simple: not every cloud deployment is the same: private is
> different from public and even within the same cloud model, the network
> topology may vary greatly.
> 
> Perhaps Neutron fails in the sense that it provides you with too much
> choice, and perhaps we have to standardize on the type of networking
> profile expected by a user of OpenStack public clouds before making changes
> that would fragment this landscape even further.
> 
> If you are advocating for more flexibility without limiting the existing
> one, we're only making the problem worse.

As with the Glance image upload API discussion, this is an example
of an extremely common use case that is either complex for the end
user or for which they have to know something about the deployment
in order to do it at all. The usability of an OpenStack cloud running
neutron would be enhanced greatly if there was a simple, clear, way
for the user to get a new VM with a public IP on any cloud without
multiple steps on their part. There are a lot of ways to implement
that "under the hood" (what you call "networking profile" above)
but the users don't care about "under the hood" so we should provide
a way for them to ignore it. That's *not* the same as saying we
should only support one profile. Think about the API from the use
case perspective, and build it so if there are different deployment
configurations available, the right action can be taken based on
the deployment choices made without the user providing any hints.

Doug

> 
> >
> > OH - well, one thing - that's that once there are two networks in an
> > account you have to specify which one. This is really painful in nova
> > clent. Say, for instance, you have a public network called "public" and a
> > private network called "private" ...
> >
> > You can't just say "nova boot --network=public" - nope, you need to say
> > "nova boot --nics net-id=$uuid_of_my_public_network"
> >
> > So I'd suggest 2 more things;
> >
> > a) an update to python-novaclient to allow a named network to be passed to
> > satisfy the "you have more than one network" - the nics argument is still
> > useful for more complex things
> >
> > b) ability to say "vms in my cloud should default to being booted on the
> > public network" or "vms in my cloud should default to being booted on a
> > network owned by the user"
> >
> > Thoughts?
> >
> 
> As I implied earlier, I am not sure how healthy this choice is. As a user
> of multiple clouds I may end up having a different user experience based on
> which cloud I am using...I thought you were partially complaining about
> lack of consistency?
> 
> >
> > Monty
> >


[openstack-dev] [ironic] [tripleo] Deprecating the bash ramdisk

2015-09-15 Thread Jim Rollenhagen
Hi all,

We have a spec[0] for deprecating our bash ramdisk during Liberty, so we
can remove it in Mitaka and have everything using IPA.

The last patch remaining[1] is to mark it deprecated in
disk-image-builder. This has been sitting with 4x +2 votes for 3 weeks
or so now. Is there any reason we aren't landing it? It's failing tests,
but I'm fairly certain they're unrelated and I'm waiting for a recheck
right now.

// jim

[0] 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/deprecate-bash-ramdisk.html
[1] https://review.openstack.org/#/c/209079/



Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-15 Thread stuart . mclaren

I've been looking into the existing task-based-upload that Doug mentions:
can anyone clarify the following?

On a default devstack install you can do this 'task' call:

http://paste.openstack.org/show/462919


Yup. That's the one.


as an alternative to the traditional image upload (the bytes are streamed
from the URL).

It's not clear to me if this is just an interesting example of the kind
of operator specific thing you can configure tasks to do, or a real
attempt to define an alternative way to upload images.

The change which added it [1] calls it a 'sample'.

Is it just an example, or is it a second 'official' upload path?


It's how you have to upload images on Rackspace.


Ok, so Rackspace have a task called image_import. But it seems to take
different json input than the devstack version. (A Swift container/object
rather than a URL.)

That seems to suggest that tasks really are operator specific, that there
is no standard task based upload ... and it should be ok to try
again with a clean slate.


If you want to see the
full fun:

https://github.com/openstack-infra/shade/blob/master/shade/__init__.py#L1335-L1510

Which is "I want to upload an image to an OpenStack Cloud"

I've listed it on this slide in CLI format too:

http://inaugust.com/talks/product-management/index.html#/27

It should be noted that once you create the task, you need to poll the
task with task-show, and then the image id will be in the completed
task-show output.
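
That create-then-poll flow can be sketched as below. `get_task` stands in
for whatever call fetches the task (e.g. glance task-show); the 'status'
values and the 'result'/'image_id' fields follow the Glance v2 tasks API:

```python
import time


def wait_for_image(get_task, task_id, interval=1, timeout=600):
    """Poll a Glance task until it finishes and return the new image id.

    `get_task` is any callable returning the task as a dict with a
    'status' field ('pending'/'processing'/'success'/'failure') and,
    on success, a 'result' dict containing 'image_id'.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        task = get_task(task_id)
        if task['status'] == 'success':
            return task['result']['image_id']
        if task['status'] == 'failure':
            raise RuntimeError('task %s failed: %s'
                               % (task_id, task.get('message')))
        time.sleep(interval)
    raise RuntimeError('task %s did not finish in %ds' % (task_id, timeout))
```

Which is a lot of client-side machinery for "I want to upload an image" --
exactly the complexity the shade code linked above has to hide.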

Monty




Re: [openstack-dev] [openstack-ansible][compass] Support of Offline Install

2015-09-15 Thread Weidong Shao
Jesse, thanks for the information! I will look into this. The proxy server
and local repo option might just work for us.
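
For reference, once the local repo exists, pointing the to-be-installed
nodes at it can amount to two small config fragments. The hostname
`compass-server` and the ports here are illustrative assumptions, not
values from OSAD:

```
# /etc/apt/apt.conf.d/95local-mirror -- send all apt traffic through the
# proxy/cache on the build host (assumed hostname and port)
Acquire::http::Proxy "http://compass-server:3128/";

# /etc/pip.conf -- install Python packages from the wheel repo built
# during the "build" phase instead of PyPI
[global]
index-url = http://compass-server:8181/simple
trusted-host = compass-server
```

With something like this in place, apt-get and pip installs during the
install phase never need to reach the Internet directly.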

On Tue, Sep 15, 2015 at 2:53 AM Jesse Pretorius 
wrote:

> On 15 September 2015 at 05:36, Weidong Shao  wrote:
>
>> Compass, an openstack deployment project, is in process of using osad
>> project in the openstack deployment. We need to support a use case where
>> there is no Internet connection. The way we handle this is to split the
>> deployment into "build" and "install" phase. In Build phase, the Compass
>> server node can have Internet connection and can build local repo and other
>> necessary dynamic artifacts that requires Internet connection. In "install"
>> phase, the to-be-installed nodes do not have Internet connection, and they
>> only download necessary data from Compass server and other services
>> constructed in Build phase.
>>
>> Now, is "offline install" something that OSAD project shall also support?
>> If yes, what is the scope of work for any changes, if required.
>>
>
> Currently we don't have an offline install paradigm - but that doesn't mean
> that we couldn't shift things around to support it if it makes sense. I
> think this is something that we could discuss via the ML, via a spec
> review, or at the summit.
>
> Some notes which may be useful:
>
> 1. We have support for the use of a proxy server [1].
> 2. As you probably already know, we build the python wheels for the
> environment on the repo-server - so all python wheel installs (except
> tempest venv requirements) are done directly from the repo server.
> 3. All apt-key and apt-get actions are done against online repositories.
> If you wish to have these be done offline then there would need to be an
> addition of some sort of apt-key and apt package mirror which we currently
> do not have. If there is a local repo in the environment, the functionality
> to direct all apt-key and apt-get install actions against an internal
> mirror is all there.
>
> [1]
> http://git.openstack.org/cgit/openstack/openstack-ansible/commit/?id=ed7f78ea5689769b3a5e1db444f4c16f3cc06060


Re: [openstack-dev] [ironic] [tripleo] Deprecating the bash ramdisk

2015-09-15 Thread James Slagle
On Tue, Sep 15, 2015 at 1:05 PM, Jim Rollenhagen  
wrote:
> Hi all,
>
> We have a spec[0] for deprecating our bash ramdisk during Liberty, so we
> can remove it in Mitaka and have everything using IPA.
>
> The last patch remaining[1] is to mark it deprecated in
> disk-image-builder. This has been sitting with 4x +2 votes for 3 weeks
> or so now. Is there any reason we aren't landing it? It's failing tests,
> but I'm fairly certain they're unrelated and I'm waiting for a recheck
> right now.

Failing tests...that's the exact reason it hasn't landed.

Exceptions can be made, but we don't always go hunting for them.
Someone just needs to take another look at it and push it through.

>
> // jim
>
> [0] 
> http://specs.openstack.org/openstack/ironic-specs/specs/approved/deprecate-bash-ramdisk.html
> [1] https://review.openstack.org/#/c/209079/
>



-- 
-- James Slagle
--



Re: [openstack-dev] [oslo][oslo.config] Reloading configuration of service

2015-09-15 Thread Joshua Harlow
Sounds like a useful idea if projects can plug themselves into the 
reloading process. I definitely think there needs to be a way for 
services to plug in to this, although I'm not quite sure it will be 
sufficient at the current time.


An example of why:

- 
https://github.com/openstack/cinder/blob/stable/kilo/cinder/volume/__init__.py#L24 
(unless this module is purged from python and reloaded it will likely 
not reload correctly).


Likely these can all be easily fixed (I just don't know how many of 
those exist in the various projects); but I guess we have to start 
somewhere so getting the underlying code able to be reloaded is a first 
step of likely many.
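
The cinder link above is an instance of a general failure mode: a value
copied out of the config object at import time never sees a reload. A
minimal illustration in plain Python (no oslo.config, just the pattern):

```python
class Conf(object):
    """Stand-in for an oslo.config ConfigOpts object."""

    def __init__(self, **opts):
        self._opts = dict(opts)

    def __getattr__(self, name):
        return self._opts[name]

    def reload(self, **opts):
        # Stands in for conf.reload_config_files() picking up new values.
        self._opts.update(opts)


conf = Conf(volume_driver='lvm')

# Module-level snapshot: evaluated once at import and never re-read,
# so it stays stale across a SIGHUP-triggered reload.
SNAPSHOT = conf.volume_driver


def current_driver():
    # Re-reading through the conf object *does* see the reload.
    return conf.volume_driver


conf.reload(volume_driver='rbd')
```

Unless such modules are purged and re-imported (or rewritten to read
through the conf object lazily), a reload hook alone won't fix them.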


- Josh

mhorban wrote:

Hi guys,

I would like to talk about reloading config during reloading service.
Now we have the ability to reload a service's config with the SIGHUP signal.
Right now SIGHUP just causes conf.reload_config_files() to be called.
As a result the configuration is updated, but services don't know about it;
there is no way to notify them.
I've created review https://review.openstack.org/#/c/213062/ to allow
service code to be executed on a config-reload event.
A possible usage can be seen in https://review.openstack.org/#/c/223668/.

Any ideas or suggestions?





Re: [openstack-dev] [TripleO] Current meeting timeslot

2015-09-15 Thread James Slagle
On Tue, Sep 15, 2015 at 9:04 AM, Derek Higgins  wrote:
>
>
> On 15/09/15 12:38, Derek Higgins wrote:
>>
>> On 10/09/15 15:12, Derek Higgins wrote:
>>>
>>> Hi All,
>>>
>>> The current meeting slot for TripleO is every second Tuesday @ 1900 UTC,
>>> since that time slot was chosen a lot of people have joined the team and
>>> others have moved on, I like to revisit the timeslot to see if we can
>>> accommodate more people at the meeting (myself included).
>>>
>>> Sticking with Tuesday I see two other slots available that I think will
>>> accommodate more people currently working on TripleO,
>>>
>>> Here is the etherpad[1], can you please add your name under the time
>>> slots that would suit you so we can get a good idea how a change would
>>> effect people
>>
>>
>> Looks like moving the meeting to 1400 UTC will best accommodate
>> everybody, I've proposed a patch to change our slot
>>
>> https://review.openstack.org/#/c/223538/
>
>
> This has merged, so as of next Tuesday the TripleO meeting will be at
> 1400 UTC
>
> Hope to see ye there

Thanks for running this down. I'm looking forward to seeing more folks
at the meeting next week. We have a standing wiki page where anyone
can add one-off agenda items that they'd like to discuss at the
meeting:
https://wiki.openstack.org/wiki/Meetings/TripleO

-- 
-- James Slagle
--



[openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?

2015-09-15 Thread Chris Friesen
I'm currently trying to work around an issue where activating LVM snapshots 
created through cinder takes potentially a long time.  (Linearly related to the 
amount of data that differs between the original volume and the snapshot.)  On 
one system I tested it took about one minute per 25GB of data, so the worst-case 
boot delay can become significant.


According to Zdenek Kabelac on the LVM mailing list, LVM snapshots were not 
intended to be kept around indefinitely, they were supposed to be used only 
until the backup was taken and then deleted.  He recommends using thin 
provisioning for long-lived snapshots due to differences in how the metadata is 
maintained.  (He also says he's heard reports of volume activation taking half 
an hour, which is clearly crazy when instances are waiting to access their volumes.)


Given the above, is there any reason why we couldn't make thin provisioning the 
default?


Chris



[openstack-dev] [Solum] PTL Candidacy

2015-09-15 Thread Devdatta Kulkarni
Hi,

I would like to announce my candidacy for the PTL position of Solum for Mitaka.

In my view the challenges in front of us are twofold, which I have outlined
below. I believe that I will be able to help us take concrete steps towards
addressing these challenges in this cycle.

Our first challenge is to continue developing and evolving Solum's feature set
so that the project becomes a valuable option for operators to offer in 
their OpenStack installations.

Particularly, in my opinion, the following features need to be completed in 
this regard:


(a) Consistency between API and CLI:
Currently Solum API and CLI have slightly different abstractions preventing
consistency of usage when using CLI vs. directly using the REST API.
We have started working on changing this[1], which needs to be completed.

(b) Ability to scale application instances:
For this, we need to investigate how we can use Magnum for satisfying
Solum's application-instance scaling and scheduling requirements.

(c) Insight into the application building and deployment process:
This includes collecting fine-grained logs in various Solum services,
the ability to correlate logs across Solum services, and collecting and
maintaining historical information about user actions to build and deploy
applications on Solum.

(d) Ability to build and deploy multi-tier applications:
One idea here is to investigate if something like Magnum's Pod abstraction
can be leveraged for this.

As PTL, I will work towards helping the team move the story forward on these 
features.
Also, whenever required, I will work closely with other OpenStack projects, 
particularly Magnum,
to ensure that our team's requirements are adequately represented in their 
forum.


The second challenge for us is to increase community involvement in and around 
Solum.
Some ideas that I have in this regard are as follows:

(1) Bug squash days:
My involvement with OpenStack started three years ago when I participated
in a bug squash day organized by Anne Gentle at Rackspace Austin[2].
I believe we could organize similar bug squash days for Solum to attract 
new contributors.

Also, there could be experienced Solum contributors whose current
priorities might not be allowing them to participate in Solum development,
but who might still like to continue contributing to Solum. Bug squash
days would provide them a dedicated time and place to participate again
in Solum development.

(2) Increasing project visibility:
Some of the actionable items here are:
- Periodic emails to the openstack-dev mailing list giving updates on the 
project's status, achievements, etc.
- Periodic blog posts
- Presentations at meetup events

(3) Growing community:
- Reaching out to folks who are interested in the application build and 
deployment story on OpenStack,
  and inviting them to join Solum IRC meetings and mailing list discussions.
- Reviving mid-cycle meetups

As PTL, I will take actions on some of the above ideas towards helping build 
and grow our community.

About my background -- I have been involved with Solum since the beginning of 
the project.
In this period, I have contributed to the project in several ways including,
designing and implementing different features, fixing bugs, performing code 
reviews,
helping debug gate issues, helping maintain a working Vagrant environment,
maintaining community engagement through various avenues (IRC channel, IRC 
meetings, emails), and so on.
More details about my participation and involvement in the project can be found 
here:
http://stackalytics.com/?module=solum
http://stackalytics.com/?module=python-solumclient

I hope you will give me an opportunity to serve as Solum's PTL for Mitaka.

Best regards,
Devdatta Kulkarni

[1] 
https://github.com/openstack/solum-specs/blob/master/specs/liberty/app-resource.rst
[2] http://www.meetup.com/OpenStack-Austin/events/48406252/



Re: [openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?

2015-09-15 Thread Eric Harney
On 09/15/2015 01:00 PM, Chris Friesen wrote:
> I'm currently trying to work around an issue where activating LVM
> snapshots created through cinder takes potentially a long time. 
> (Linearly related to the amount of data that differs between the
> original volume and the snapshot.)  On one system I tested it took about
> one minute per 25GB of data, so the worst-case boot delay can become
> significant.
> 
> According to Zdenek Kabelac on the LVM mailing list, LVM snapshots were
> not intended to be kept around indefinitely, they were supposed to be
> used only until the backup was taken and then deleted.  He recommends
> using thin provisioning for long-lived snapshots due to differences in
> how the metadata is maintained.  (He also says he's heard reports of
> volume activation taking half an hour, which is clearly crazy when
> instances are waiting to access their volumes.)
> 
> Given the above, is there any reason why we couldn't make thin
> provisioning the default?
> 


My intention is to move toward thin-provisioned LVM as the default -- it
is definitely better suited to our use of LVM.  Previously this was less
easy, since some older Ubuntu platforms didn't support it, but in
Liberty we added the ability to specify lvm_type = "auto" [1] to use
thin if it is supported on the platform.

The other issue preventing using thin by default is that we default the
max oversubscription ratio to 20.  IMO that isn't a safe thing to do for
the reference implementation, since it means that people who deploy
Cinder LVM on smaller storage configurations can easily fill up their
volume group and have things grind to halt.  I think we want something
closer to the semantics of thick LVM for the default case.
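
To make that concern concrete: the virtual capacity a thin-provisioned
backend advertises to the scheduler is roughly its free physical space
multiplied by the ratio (this is a simplification of the real Cinder
calculation, which also accounts for already-provisioned space):

```python
def apparent_free(physical_free_gb, max_over_subscription_ratio):
    """Rough virtual capacity a thin-provisioned backend reports to the
    scheduler.  Simplified: the real formula also subtracts space that
    is provisioned but not yet written.
    """
    return physical_free_gb * max_over_subscription_ratio


# With the default ratio of 20, a 500 GB volume group advertises ~10 TB
# of virtual space -- easy to fill up on a small reference deployment.
```

That is why a ratio of 20 behaves so differently from the thick-LVM
semantics users of the reference implementation expect.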

We haven't thought through a reasonable migration strategy for how to
handle that.  I'm not sure we can change the default oversubscription
ratio without breaking deployments using other drivers.  (Maybe I'm
wrong about this?)

If we sort out that issue, I don't see any reason we can't switch over
in Mitaka.

[1] https://review.openstack.org/#/c/104653/



Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Fox, Kevin M
Most projects let you specify a name, and only force you to use a UUID if and 
only if there is a conflict, leaving it up to the user to decide whether they 
want the ease of use of names (and the care in naming things that goes with 
it), or would rather just use UUIDs.

Neutron also has the odd wrinkle that if you're a cloud admin, a listing 
always gives you back every tenant's resources, rather than just the current 
tenant's unless you pass a flag asking for all.

This means if you try to use the "default" security group for example, it may 
work as a user and then fail as an admin on the same tenant. Very annoying. :/

I've had to work around that in heat templates before.

Thanks,
Kevin



From: Matt Riedemann [mrie...@linux.vnet.ibm.com]
Sent: Tuesday, September 15, 2015 11:28 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' 
network model

On 9/15/2015 10:27 AM, Mike Spreitzer wrote:
> Monty Taylor  wrote on 09/15/2015 11:04:07 AM:
>
>  > a) an update to python-novaclient to allow a named network to be passed
>  > to satisfy the "you have more than one network" - the nics argument is
>  > still useful for more complex things
>
> I am not using the latest, but rather Juno.  I find that in many places
> the Neutron CLI insists on a UUID when a name could be used.  Three
> cheers for any campaign to fix that.

It's my understanding that network names in neutron, like security
group names, are not unique; that's why you have to specify a UUID.
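
The name-to-UUID resolution clients typically do can be sketched in a few
lines (a standalone illustration, not novaclient's or neutronclient's
actual logic):

```python
def resolve_network(networks, name_or_id):
    """Resolve a network reference by name, falling back to ID.

    Raises ValueError if the name is ambiguous -- duplicate names
    force the user back to UUIDs, as described above.
    networks: list of dicts with 'id' and 'name' keys, as a Neutron
    list call would return them.
    """
    matches = [n for n in networks if n["name"] == name_or_id]
    if len(matches) == 1:
        return matches[0]["id"]
    if len(matches) > 1:
        raise ValueError("Multiple networks named %r; use the UUID" % name_or_id)
    # No name match: treat the argument as an ID.
    for n in networks:
        if n["id"] == name_or_id:
            return n["id"]
    raise LookupError("No network matching %r" % name_or_id)

nets = [{"id": "11aa", "name": "private"},
        {"id": "22bb", "name": "shared"},
        {"id": "33cc", "name": "shared"}]
# "private" resolves by name; "shared" is ambiguous and needs a UUID.
```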

>
> And, yeah, creating VMs on a shared public network is good too.
>
> Thanks,
> mike
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using non "admin" tenant?

2015-09-15 Thread Adam Harwell
There is not really good documentation for this yet…
When I say Neutron-LBaaS tenant, I am maybe using the wrong word — I guess the 
user that is configured as the service-account in neutron.conf.
The user will hit the ACL API themselves to set up the ACLs on their own 
secrets/containers, we won’t do it for them. So, workflow is like:


  *   User creates Secrets in Barbican.
  *   User creates CertificateContainer in Barbican.
  *   User sets ACLs on Secrets and Container in Barbican, to allow the LBaaS 
user (right now using whatever user-id we publish in our docs) to read their 
data.
  *   User creates a LoadBalancer in Neutron-LBaaS.
  *   LBaaS hits Barbican using its standard configured service-account to 
retrieve the Container/Secrets from the user’s Barbican account.
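
Sketched as CLI steps, the user side of the workflow above might look like
this (command names and flags are assumptions based on the
python-barbicanclient and neutron LBaaS v2 clients -- verify against your
client versions; $CERT_REF, $KEY_REF and $CONTAINER_REF stand for the hrefs
returned by the preceding commands, and $LBAAS_USER_ID is the published
service user ID):

```shell
# Store the certificate and key as Barbican secrets
barbican secret store --name my-cert --payload "$(cat cert.pem)"
barbican secret store --name my-key --payload "$(cat key.pem)"

# Bundle them in a certificate container
barbican secret container create --name my-tls --type certificate \
    --secret "certificate=$CERT_REF" --secret "private_key=$KEY_REF"

# Grant the LBaaS service user read access on each ref via ACLs
barbican acl user add --user "$LBAAS_USER_ID" "$CERT_REF"
barbican acl user add --user "$LBAAS_USER_ID" "$KEY_REF"
barbican acl user add --user "$LBAAS_USER_ID" "$CONTAINER_REF"

# Create the TERMINATED_HTTPS listener pointing at the container
neutron lbaas-listener-create --loadbalancer my-lb \
    --protocol TERMINATED_HTTPS --protocol-port 443 \
    --default-tls-container-ref "$CONTAINER_REF"
```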

This honestly hasn’t even been *fully* tested yet, but it SHOULD work. The 
question is whether right now in devstack the admin user is allowed to read all 
user secrets just because it is the admin user (which I think might be the 
case), in which case we won’t actually know if ACLs are working as intended 
(but I think we assume that Barbican has tested that feature and we can just 
rely on it working).

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, September 14, 2015 at 9:12 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible 
using non "admin" tenant?

Is there documentation which records this step by step?

What is the Neutron-LBaaS tenant?

Is it the tenant that is configuring the listener? *OR* is it some tenant 
created for the lbaas plugin that holds all the secrets for all tenants 
configuring lbaas?

>>You need to set up ACLs on the Barbican side for that container, to make it 
>>readable to the Neutron-LBaaS tenant.
I checked the ACL docs
http://docs.openstack.org/developer/barbican/api/quickstart/acls.html

The ACL API is to allow "users" (not "tenants") access to secrets/containers. 
What is the API or CLI that the admin will use to allow the Neutron-LBaaS 
tenant access to the tenant's secret and container?


From: Adam Harwell [mailto:adam.harw...@rackspace.com]
Sent: 15 September 2015 03:00
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible 
using non "admin" tenant?

You need to set up ACLs on the Barbican side for that container, to make it 
readable to the Neutron-LBaaS tenant. For now, the tenant-id should just be 
documented, but we are looking into making an API call that would expose the 
admin tenant-id to the user so they can make an API call to discover it.

Once the user has the neutron-lbaas tenant ID, they use the Barbican ACL system 
to add that ID as a readable user of the container and all of the secrets. Then 
Neutron-LBaaS hits barbican with the credentials of the admin tenant, and is 
granted access to the user’s container.

--Adam

https://keybase.io/rm_you


From: Vijay Venkatachalam
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, September 11, 2015 at 2:35 PM
To: "OpenStack Development Mailing List (openstack-dev@lists.openstack.org)"
Subject: [openstack-dev] [neutron][lbaas] Is SSL offload config possible using 
non "admin" tenant?

Hi,
  Has anyone tried configuring SSL Offload as a tenant?
  During listener creation there is an error thrown saying ‘could 
not locate/find container’.
  The lbaas plugin is not able to fetch the tenant’s certificate.

          From the code it looks like the lbaas plugin is trying to connect 
to barbican with the keystone details provided in neutron.conf,
          which by default are username = "admin" and tenant_name = "admin".
          This means the lbaas plugin is looking for the tenant's certificate 
in the "admin" tenant, which it will never be able to find.

  What is the procedure for the lbaas plugin to get hold of the 
tenant’s certificate?

          Assuming the "admin" user has access to all tenants' certificates, 
should the lbaas plugin connect to barbican with username='admin' and 
tenant_name = the listener's tenant_name?

Is this, the way forward ? *OR* Am I missing something?


Thanks,
Vijay V.

Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-15 Thread Flavio Percoco

On 14/09/15 15:51 -0400, Doug Hellmann wrote:

Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:

On 14/09/15 08:10 -0400, Doug Hellmann wrote:

[snip]


The task upload process you're referring to is the one that uses the
`import` task, which allows you to download an image from an external
source, asynchronously, and import it in Glance. This is the old
`copy-from` behavior that was moved into a task.

The "fun" thing about this - and I'm sure other folks in the Glance
community will disagree - is that I don't consider tasks to be a
public API. That is to say, I would expect tasks to be an internal API
used by cloud admins to perform some actions (based on its current
implementation). Eventually, some of these tasks could be triggered
from the external API but as background operations that are triggered
by the well-known public ones and not through the task API.


Does that mean it's more of an "admin" API?


As it is right now, yes. I don't think it's suitable for public use
and the current supported features are more useful for admins than
end-users.

Could it be improved to be a public API? Sure.

[snip]


This is definitely unfortunate. I believe a good step forward for this
discussion would be to create a list of issues related to uploading
images and see how those issues can be addressed. The result from that
work might be that it's not recommended to make that endpoint public
but again, without going through the issues, it'll be hard to
understand how we can improve this situation. I expect most of these
issues to have a security impact.


A report like that would be good to have. Can someone on the Glance team
volunteer to put it together?


Here's an attempt from someone that uses clouds but doesn't run any:

- Image authenticity (we recently landed code that allows for having
 signed images)
- Quota management: Glance's quota management is very basic and it
 allows for setting quota at a per-user level [1]
- Bandwidth requirements to upload images
- (add more here)

[0] 
http://specs.openstack.org/openstack/glance-specs/specs/liberty/image-signing-and-verification-support.html
[1] 
http://docs.openstack.org/developer/glance/configuring.html#configuring-glance-user-storage-quota

[snip]

This is, indeed, an interesting interpretation of what tasks are for.
I'd probably just blame us (Glance team) for not communicating
properly what tasks are meant to be. I don't believe tasks are a way
to extend the *public* API and I'd be curious to know if others see it
that way. I fully agree that just breaks interoperability and as I've
mentioned a couple of times in this reply already, I don't even think
tasks should be part of the public API.


Whether or not they are intended to be an extension mechanism, they
effectively are one right now, as far as I can tell.


Sorry, I probably didn't express myself correctly. What I meant to say
is that I don't see them as a way to extend the *public* API but
rather as a way to add functionality to glance that is useful for
admins.


The mistake here could be that the library should've been refactored
*before* adopting it in Glance.


The fact that there is disagreement over the intent of the library makes
me think the plan for creating it wasn't sufficiently circulated or
detailed.


There wasn't much disagreement when it was created. Some folks think
the use-cases for the library don't exist anymore and some folks that
participated in this effort are not part of OpenStack anymore.

[snip]

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-15 Thread Flavio Percoco

On 14/09/15 16:46 -0400, Doug Hellmann wrote:

Excerpts from Clint Byrum's message of 2015-09-14 13:25:43 -0700:

Excerpts from Doug Hellmann's message of 2015-09-14 12:51:24 -0700:
> Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:
> > On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> > >
> > >After having some conversations with folks at the Ops Midcycle a
> > >few weeks ago, and observing some of the more recent email threads
> > >related to glance, glance-store, the client, and the API, I spent
> > >last week contacting a few of you individually to learn more about
> > >some of the issues confronting the Glance team. I had some very
> > >frank, but I think constructive, conversations with all of you about
> > >the issues as you see them. As promised, this is the public email
> > >thread to discuss what I found, and to see if we can agree on what
> > >the Glance team should be focusing on going into the Mitaka summit
> > >and development cycle and how the rest of the community can support
> > >you in those efforts.
> > >
> > >I apologize for the length of this email, but there's a lot to go
> > >over. I've identified 2 high priority items that I think are critical
> > >for the team to be focusing on starting right away in order to use
> > >the upcoming summit time effectively. I will also describe several
> > >other issues that need to be addressed but that are less immediately
> > >critical. First the high priority items:
> > >
> > >1. Resolve the situation preventing the DefCore committee from
> > >   including image upload capabilities in the tests used for trademark
> > >   and interoperability validation.
> > >
> > >2. Follow through on the original commitment of the project to
> > >   provide an image API by completing the integration work with
> > >   nova and cinder to ensure V2 API adoption.
> >
> > Hi Doug,
> >
> > First and foremost, I'd like to thank you for taking the time to dig
> > into these issues, and for reaching out to the community seeking for
> > information and a better understanding of what the real issues are. I
> > can imagine how much time you had to dedicate on this and I'm glad you
> > did.
> >
> > Now, to your email, I very much agree with the priorities you
> > mentioned above and I'd like for, whomever will win Glance's PTL
> > election, to bring focus back on that.
> >
> > Please, find some comments in-line for each point:
> >
> > >
> > >I. DefCore
> > >
> > >The primary issue that attracted my attention was the fact that
> > >DefCore cannot currently include an image upload API in its
> > >interoperability test suite, and therefore we do not have a way to
> > >ensure interoperability between clouds for users or for trademark
> > >use. The DefCore process has been long, and at times confusing,
> > >even to those of us following it sort of closely. It's not entirely
> > >surprising that some projects haven't been following the whole time,
> > >or aren't aware of exactly what the whole thing means. I have
> > >proposed a cross-project summit session for the Mitaka summit to
> > >address this need for communication more broadly, but I'll try to
> > >summarize a bit here.
> >
> > +1
> >
> > I think it's quite sad that some projects, especially those considered
> > to be part of the `starter-kit:compute`[0], don't follow closely
> > what's going on in DefCore. I personally consider this a task PTLs
> > should incorporate in their role duties. I'm glad you proposed such
> > session, I hope it'll help raising awareness of this effort and it'll
> > help moving things forward on that front.
>
> Until fairly recently a lot of the discussion was around process
> and priorities for the DefCore committee. Now that those things are
> settled, and we have some approved policies, it's time to engage
> more fully.  I'll be working during Mitaka to improve the two-way
> communication.
>
> >
> > >
> > >DefCore is using automated tests, combined with business policies,
> > >to build a set of criteria for allowing trademark use. One of the
> > >goals of that process is to ensure that all OpenStack deployments
> > >are interoperable, so that users who write programs that talk to
> > >one cloud can use the same program with another cloud easily. This
> > >is a *REST API* level of compatibility. We cannot insert cloud-specific
> > >behavior into our client libraries, because not all cloud consumers
> > >will use those libraries to talk to the services. Similarly, we
> > >can't put the logic in the test suite, because that defeats the
> > >entire purpose of making the APIs interoperable. For this level of
> > >compatibility to work, we need well-defined APIs, with a long support
> > >period, that work the same no matter how the cloud is deployed. We
> > >need the entire community to support this effort. From what I can
> > >tell, that is going to require some changes to the current Glance
> > >API to meet the requirements. I'll list those requirements, and I
> > >hope we can 

Re: [openstack-dev] [Fuel] Nominate Denis Dmitriev for fuel-qa(devops) core

2015-09-15 Thread Alexander Kostrikov
+1

On Mon, Sep 14, 2015 at 10:19 PM, Anastasia Urlapova  wrote:

> Folks,
> I would like to nominate Denis Dmitriev[1] for fuel-qa/fuel-devops core.
>
> Denis spent three months on the Fuel BugFix team; his velocity was between
> 150-200% per week. Thanks to his efforts we have overcome those old issues
> with time sync and Ceph's clock skew. Denis's ideas constantly help us to
> improve our functional system suite.
>
> Fuelers, please vote for Denis!
>
> Nastya.
>
> [1]
> http://stackalytics.com/?user_id=ddmitriev=all_type=all=fuel-qa
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Kind Regards,

Alexandr Kostrikov,

Mirantis, Inc.

35b/3, Vorontsovskaya St., 109147, Moscow, Russia


Tel.: +7 (495) 640-49-04
Tel.: +7 (925) 716-64-52

Skype: akostrikov_mirantis

E-mail: akostri...@mirantis.com 

www.mirantis.com
www.mirantis.ru
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-15 Thread Kuvaja, Erno
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com]
> Sent: Monday, September 14, 2015 5:40 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [glance] proposed priorities for Mitaka
> 
> Excerpts from Kuvaja, Erno's message of 2015-09-14 15:02:59 +:
> > > -Original Message-
> > > From: Flavio Percoco [mailto:fla...@redhat.com]
> > > Sent: Monday, September 14, 2015 1:41 PM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [glance] proposed priorities for Mitaka
> > >
> > > On 14/09/15 08:10 -0400, Doug Hellmann wrote:

> > > >
> > > >I. DefCore
> > > >
> > > >The primary issue that attracted my attention was the fact that
> > > >DefCore cannot currently include an image upload API in its
> > > >interoperability test suite, and therefore we do not have a way to
> > > >ensure interoperability between clouds for users or for trademark
> > > >use. The DefCore process has been long, and at times confusing,
> > > >even to those of us following it sort of closely. It's not entirely
> > > >surprising that some projects haven't been following the whole
> > > >time, or aren't aware of exactly what the whole thing means. I have
> > > >proposed a cross-project summit session for the Mitaka summit to
> > > >address this need for communication more broadly, but I'll try to
> summarize a bit here.
> > >
> >
> > Looking at how different OpenStack based public clouds limit or fully
> > prevent their users from uploading images to their deployments, I'm not
> > convinced Image Upload should be included in this definition.
> 
> The problem with that approach is that it means end consumers of those
> clouds cannot write common tools that include image uploads, which is a
> frequently used/desired feature. What makes that feature so special that we
> don't care about it for interoperability?
> 

I'm not sure it really is so special, API- or technically-wise; it's just the 
one that was lifted onto the pedestal in this discussion.

> >
> > > +1
> > >
> > > I think it's quite sad that some projects, especially those
> > > considered to be part of the `starter-kit:compute`[0], don't follow
> > > closely what's going on in DefCore. I personally consider this a
> > > task PTLs should incorporate in their role duties. I'm glad you
> > > proposed such session, I hope it'll help raising awareness of this effort
> and it'll help moving things forward on that front.
> > >
> > >
> > > >
> > > >DefCore is using automated tests, combined with business policies,
> > > >to build a set of criteria for allowing trademark use. One of the
> > > >goals of that process is to ensure that all OpenStack deployments
> > > >are interoperable, so that users who write programs that talk to
> > > >one cloud can use the same program with another cloud easily. This
> > > >is a *REST
> > > >API* level of compatibility. We cannot insert cloud-specific
> > > >behavior into our client libraries, because not all cloud consumers
> > > >will use those libraries to talk to the services. Similarly, we
> > > >can't put the logic in the test suite, because that defeats the
> > > >entire purpose of making the APIs interoperable. For this level of
> > > >compatibility to work, we need well-defined APIs, with a long
> > > >support period, that work the same no matter how the cloud is
> > > >deployed. We need the entire community to support this effort. From
> > > >what I can tell, that is going to require some changes to the
> > > >current Glance API to meet the requirements. I'll list those
> > > >requirements, and I hope we can discuss them to a degree that
> > > >ensures everyone understands them. I don't want this email thread
> > > >to get bogged down in implementation details or API designs,
> > > >though, so let's try to keep the discussion at a somewhat high
> > > >level, and leave the details for specs and summit discussions. I do
> > > >hope you will correct any misunderstandings or misconceptions,
> > > >because unwinding this as an outside observer has been quite a
> challenge and it's likely I have some details wrong.
> >
> > This just reinforces my doubt above. Including upload in the DefCore
> > requirements would probably just close out lots of the public clouds out
> > there. Is that the intention here?
> 
> No, absolutely not. The intention is to provide clear technical direction 
> about
> what we think the API for uploading images should be.
> 

Gr8, that's an easy goal to stand behind and support!

> >
> > > >

> > >
> > > The task upload process you're referring to is the one that uses the
> > > `import` task, which allows you to download an image from an
> > > external source, asynchronously, and import it in Glance. This is
> > > the old `copy-from` behavior that was moved into a task.
> > >
> > > The "fun" thing about this - and I'm sure other folks in the Glance
> > > community will disagree - is that I don't consider tasks to be a
> > > public API. That is to say, 

Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-15 Thread Sofer Athlan-Guyot
Gilles Dubreuil  writes:

> On 15/09/15 06:53, Rich Megginson wrote:
>> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>>> Hi,
>>>
>>> Gilles Dubreuil  writes:
>>>
 A. The 'composite namevar' approach:

 keystone_tenant {'projectX::domainY': ... }

 B. The 'meaningless name' approach:

 keystone_tenant {'myproject': name => 'projectX', domain => 'domainY', ... }

 Notes:
   - Actually using both combined should work too, with the domain
 parameter supposedly overriding the domain part of the title.
   - Please look at [1] this for some background between the two
 approaches:

 The question
 -
 Decide between the two approaches, the one we would like to retain for
 puppet-keystone.

 Why it matters?
 ---
 1. Domain names are mandatory in every user, group or project. Besides
 the backward compatibility period mentioned earlier, where no domain
 means using the default one.
 2. Long term impact
 3. Both approaches are not completely equivalent, which has different
 consequences for future usage.
>>> I can't see why they couldn't be equivalent, but I may be missing
>>> something here.
>> 
>> I think we could support both.  I don't see it as an either/or situation.
>> 
>>>
 4. Being consistent
 5. Therefore the community to decide

 Pros/Cons
 --
 A.
>>> I think it's the B: meaningless approach here.
>>>
Pros
  - Easier names
>>> That's subjective; creating unique and meaningful names doesn't look easy
>>> to me.
>> 
>> The point is that this allows choice - maybe the user already has some
>> naming scheme, or wants to use a more "natural" meaningful name - rather
>> than being forced into a possibly "awkward" naming scheme with "::"
>> 
>>   keystone_user { 'heat domain admin user':
>> name => 'admin',
>> domain => 'HeatDomain',
>> ...
>>   }
>> 
>>   keystone_user_role {'heat domain admin user@::HeatDomain':
>> roles => ['admin']
>> ...
>>   }
>> 
>>>
Cons
  - Titles have no meaning!
>> 
>> They have meaning to the user, not necessarily to Puppet.
>> 
  - Cases where 2 or more resources could exists
>> 
>> This seems to be the hardest part - I still cannot figure out how to use
>> "compound" names with Puppet.
>> 
  - More difficult to debug
>> 
>> More difficult than it is already? :P
>> 
  - Titles mismatch when listing the resources (self.instances)

 B.
Pros
  - Unique titles guaranteed
  - No ambiguity between resource found and their title
Cons
  - More complicated titles
 My vote
 
 I would love to have the approach A for easier name.
 But I've seen the challenge of maintaining the providers behind the
 curtains and the confusion it creates with name/titles and when not sure
 about the domain we're dealing with.
 Also I believe that supporting self.instances consistently with
 meaningful name is saner.
 Therefore I vote B
>>> +1 for B.
>>>
>>> My view is that this should be the advertised way, but the other method
>>> (meaningless) should be there if the user need it.
>>>
>>> So as far as I'm concerned the two idioms should co-exist.  This would
>>> mimic what is possible with all puppet resources.  For instance you can:
>>>
>>>file { '/tmp/foo.bar': ensure => present }
>>>
>>> and you can
>>>
>>>file { 'meaningless_id': name => '/tmp/foo.bar', ensure => present }
>>>
>>> The two refer to the same resource.
>> 
>> Right.
>> 
>
> I disagree, using the name for the title is not creating a composite
> name. The latter requires adding at least another parameter to be part
> of the title.
>
> Also in the case of the file resource, a path/filename is a unique name,
> which is not the case for an OpenStack user, which might exist in several
> domains.
>
> I actually added the meaningful name case in:
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html
>
> But that doesn't work very well because without adding the domain to the
> name, the following fails:
>
> keystone_tenant {'project_1': domain => 'domain_A', ...}
> keystone_tenant {'project_1': domain => 'domain_B', ...}
>
> And adding the domain makes it a de-facto 'composite name'.

I agree that my example is not similar to what the keystone provider has
to do.  What I wanted to point out is that users in puppet are used
to having this kind of *interface*: one where you put something
meaningful in the title and one where you put something meaningless.
The fact that the meaningful one is a compound one shouldn't matter to
the user.
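
To make that concrete for keystone_tenant, the coexisting idioms might look
like this (a hypothetical sketch; it assumes the provider can parse composite
'name::domain' titles, which is exactly what is being debated):

```puppet
# Meaningful (composite) title: name and domain are parsed out of the title
keystone_tenant { 'project_1::domain_A':
  ensure => present,
}

# Meaningless title: the same kind of resource, with name and domain explicit
keystone_tenant { 'tenant_for_heat':
  ensure => present,
  name   => 'project_1',
  domain => 'domain_B',
}
```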

>>>
>>> But, If that's indeed not possible to have them both,
>
> There are cases where having both won't be possible like the trusts, but
> why not for the resources supporting it.
>
> That said, I think we need to make a choice, at 

Re: [openstack-dev] [openstack-ansible][compass] Support of Offline Install

2015-09-15 Thread Jesse Pretorius
On 15 September 2015 at 05:36, Weidong Shao  wrote:

> Compass, an openstack deployment project, is in the process of using the osad
> project in openstack deployments. We need to support a use case where
> there is no Internet connection. The way we handle this is to split the
> deployment into "build" and "install" phase. In Build phase, the Compass
> server node can have Internet connection and can build local repo and other
> necessary dynamic artifacts that requires Internet connection. In "install"
> phase, the to-be-installed nodes do not have Internet connection, and they
> only download necessary data from Compass server and other services
> constructed in Build phase.
>
> Now, is "offline install" something that OSAD project shall also support?
> If yes, what is the scope of work for any changes, if required.
>

Currently we don't have an offline install paradigm - but that doesn't mean
that we couldn't shift things around to support it if it makes sense. I
think this is something that we could discuss via the ML, via a spec
review, or at the summit.

Some notes which may be useful:

1. We have support for the use of a proxy server [1].
2. As you probably already know, we build the python wheels for the
environment on the repo-server - so all python wheel installs (except
tempest venv requirements) are done directly from the repo server.
3. All apt-key and apt-get actions are done against online repositories. If
you wish to have these be done offline then there would need to be an
addition of some sort of apt-key and apt package mirror, which we currently
do not have. If there is a local repo in the environment, the functionality
to direct all apt-key and apt-get install actions against an internal
mirror is all there.

[1]
http://git.openstack.org/cgit/openstack/openstack-ansible/commit/?id=ed7f78ea5689769b3a5e1db444f4c16f3cc06060
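
If offline apt support were added, the target-host side would presumably
amount to a config fragment along these lines (hostname, path, and release
are purely hypothetical, pointing at a mirror built on the Compass server
during the build phase):

```ini
# /etc/apt/sources.list.d/local-mirror.list on each to-be-installed node;
# the mirror's signing key would also need to be imported with apt-key.
deb http://compass-server.local/mirror/ubuntu trusty main universe
```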
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][SR-IOV] Hardware changes and shifting PCI addresses

2015-09-15 Thread Daniel P. Berrange
On Mon, Sep 14, 2015 at 09:34:31PM -0400, Jay Pipes wrote:
> On 09/10/2015 05:23 PM, Brent Eagles wrote:
> >Hi,
> >
> >I was recently informed of a situation that came up when an engineer
> >added an SR-IOV nic to a compute node that was hosting some guests that
> >had VFs attached. Unfortunately, adding the card shuffled the PCI
> >addresses causing some degree of havoc. Basically, the PCI addresses
> >associated with the previously allocated VFs were no longer valid.
> >
> >I tend to consider this a non-issue. The expectation that hosts have
> >relatively static hardware configuration (and kernel/driver configs for
> >that matter) is the price you pay for having pets with direct hardware
> >access. That being said, this did come as a surprise to some of those
> >involved and I don't think we have any messaging around this or advice
> >on how to deal with situations like this.
> >
> >So what should we do? I can't quite see altering OpenStack to deal with
> >this situation (or even how that could work). Has anyone done any
> >research into this problem, even if it is how to recover or extricate
> >a guest that is no longer valid? It seems that at the very least we
> >could use some stern warnings in the docs.
> 
> Hi Brent,
> 
> Interesting issue. We have code in the PCI tracker that ostensibly handles
> this problem:
> 
> https://github.com/openstack/nova/blob/master/nova/pci/manager.py#L145-L164
> 
> But the note from yjiang5 is telling:
> 
> # Pci properties may change while assigned because of
> # hotplug or config changes. Although normally this should
> # not happen.
> # As the devices have been assigned to a instance, we defer
> # the change till the instance is destroyed. We will
> # not sync the new properties with database before that.
> # TODO(yjiang5): Not sure if this is a right policy, but
> # at least it avoids some confusion and, if
> # we can add more action like killing the instance
> # by force in future.
> 
> Basically, if the PCI device tracker notices that an instance is assigned a
> PCI device with an address that no longer exists in the PCI device addresses
> returned from libvirt, it will (eventually, in the _free_instance() method)
> remove the PCI device assignment from the Instance object, but it will make
> no attempt to assign a new PCI device that meets the original PCI device
> specification in the launch request.
> 
> Should we handle this case and attempt a "hot re-assignment of a PCI
> device"? Perhaps. Is it high priority? Not really, IMHO.

Hotplugging new PCI devices to a running host should not have any impact
on existing PCI device addresses - it'll merely add new addresses for
new devices - existing devices are unchanged. So everything should "just
work" in that case. IIUC, Brent's Q was around turning off the host and
cold-plugging/unplugging hardware, which /is/ liable to arbitrarily
re-arrange existing PCI device addresses.
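
The cold-plug failure mode is easy to see in isolation with a small
standalone sketch (illustrative only, not Nova's actual PCI tracker code):

```python
def find_stale_assignments(assigned, present_addresses):
    """Return instance->device assignments whose PCI address is gone.

    assigned: dict mapping instance id -> assigned PCI address string
    present_addresses: iterable of PCI addresses currently reported
    (e.g. by libvirt) after the host's hardware changed.
    """
    present = set(present_addresses)
    return {inst: addr for inst, addr in assigned.items()
            if addr not in present}

# Before the new card was added, instance-a held 0000:05:10.1.
assigned = {"instance-a": "0000:05:10.1", "instance-b": "0000:05:10.2"}
# After cold-plugging hardware, addresses were re-enumerated.
present = ["0000:06:10.1", "0000:05:10.2"]
stale = find_stale_assignments(assigned, present)
# instance-a's VF address no longer exists, so its assignment is stale.
```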

> If you'd like to file a bug against Nova, that would be cool, though.

I think it is explicitly out of scope for Nova to deal with this
scenario.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-15 Thread Thierry Carrez
Sean Dague wrote:
> On 09/08/2015 03:32 PM, Doug Hellmann wrote:
>> Excerpts from Sean Dague's message of 2015-09-08 14:11:48 -0400:
>>> On 09/08/2015 01:07 PM, Doug Hellmann wrote:
 Excerpts from Dean Troyer's message of 2015-09-08 11:20:47 -0500:
> On Tue, Sep 8, 2015 at 9:10 AM, Doug Hellmann
>>
>> I'd like to come up with some way to express the time other than
>> N+M because in the middle of a cycle it can be confusing to know
>> what that means (if I want to deprecate something in August am I
>> far enough through the current cycle that it doesn't count?).
>>
>> Also, as we start moving more projects to doing intermediate releases
>> the notion of a "release" vs. a "cycle" will drift apart, so we
>> want to talk about "stable releases" not just any old release.
>
> I've always thought the appropriate equivalent for projects not following
> the (old) integrated release cadence was for N == six months.  It sets
> approx. the same pace and expectation with users/deployers.
>
> For those deployments tracking trunk, a similar approach can be taken, in
> that deprecating a config option in M3 then removing it in N1 might be too
> quick, but rather wait at least the same point in the following release
> cycle to increment 'N'.

 Making it explicitly date-based would simplify tracking, to be sure.
>>>
>>> I would agree that the M3 -> N0 drop can be pretty quick, it can be 6
>>> weeks (which I've seen happen). However N == six months might make FFE
>>> deprecation lands in one release run into FFE in the next. For the CD
>>> case my suggestion is > 3 months. Because if you aren't CDing in
>>> increments smaller than that, and hence seeing the deprecation, you
>>> aren't really doing the C part of CDing.
>>
>> Do those 3 months need to span more than one stable release? For
>> projects doing intermediary releases, there may be several releases
>> within a 3 month period.
> 
> Yes. 1 stable release branch AND 3 months linear time is what I'd
> consider reasonable.

OK, so it seems we have convergence around:

"config options and features will have to be marked deprecated for a
minimum of one stable release branch and a minimum of 3 months"

I'll add some language in there to encourage major features to be marked
deprecated for at least two stable release branches, rather than come
with a hard rule defining what a "major" feature is.
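
Expressed as a check, the rule converged on above - at least one stable
release branch and at least 3 months of deprecation, with two branches
encouraged for major features - looks roughly like this (an illustrative
sketch, not actual project tooling):

```python
def removal_allowed(stable_branches_deprecated, months_deprecated,
                    major_feature=False):
    """Apply the proposed deprecation policy.

    Config options and minor features: deprecated for >= 1 stable
    release branch AND >= 3 months.  Major features are encouraged to
    wait for two stable release branches.
    """
    required_branches = 2 if major_feature else 1
    return (stable_branches_deprecated >= required_branches
            and months_deprecated >= 3)


print(removal_allowed(1, 3))                      # True
print(removal_allowed(1, 2))                      # False: under 3 months
print(removal_allowed(1, 6, major_feature=True))  # False: wants 2 branches
```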

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [openstack-ansible] PTL Non-Candidacy

2015-09-15 Thread Jesse Pretorius
On 14 September 2015 at 22:02, Kevin Carter 
wrote:

>
> TL;DR - I'm sending this out to announce that I won't be running for PTL
> of the OpenStack-Ansible project in the upcoming cycle. Although I won't be
> running for PTL, with community support, I intend to remain an active
> contributor, just with more time spent cross-project and in other
> upstream communities.
>
> Being a PTL has been difficult, fun, and rewarding and is something I
> think everyone should strive to do at least once. In the upcoming cycle
> I believe our project has reached the point of maturity where it's time for
> the leadership to change. OpenStack-Ansible was recently moved into the
> "big-tent" and I consider this to be the perfect juncture for me to step
> aside and allow the community to evolve under the guidance of a new team
> lead. I share the opinions of current and former PTLs that having a
> revolving door of leadership is key to the success of any project [0].
> While OpenStack-Ansible has only recently been moved out of Stackforge and
> into the OpenStack namespace as a governed project (I'm really excited
> about that) I've had the privilege of working as the project technical
> lead ever since its inception at Rackspace with the initial proof of
> concept known as "Ansible-LXC-RPC". It's been an amazing journey so far and
> I'd like to thank everyone that's helped make OpenStack-Ansible (formerly
> OSAD) possible; none of this would have happened without the contributions
> made by our devout and ever-growing community of deployers and developers.
>
> Thank you again and I look forward to seeing you all online and in Tokyo.
>
> [0] -
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074242.html
>

Thank you Kevin for your leadership through the journey thus far! It's been
fantastic to work with you driving the vision and execution of being Open
[1].

[1] https://wiki.openstack.org/wiki/Open


Re: [openstack-dev] [OpenStack-docs] [docs][ptl] Docs PTL Candidacy

2015-09-15 Thread Christian Berendt

On 09/14/2015 04:31 AM, Lana Brindley wrote:

I'd love to have your support for the PTL role for Mitaka, and I'm
looking forward to continuing to grow the documentation team.


You have my support. Thanks for your great work during the current cycle.

Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537



Re: [openstack-dev] [neutron] PTL Non-Candidacy

2015-09-15 Thread Neil Jerram
On 15/09/15 10:59, Ihar Hrachyshka wrote:
>> On 11 Sep 2015, at 23:12, Kyle Mestery  wrote:
>>
>> I'm writing to let everyone know that I do not plan to run for Neutron PTL 
>> for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan 
>> recently put it in his non-candidacy email [1]. But it goes further than 
>> that for me. As Flavio put it in his post about "Being a PTL" [2], it's a 
>> full time job. In the case of Neutron, it's more than a full time job, it's 
>> literally an always on job.
>>
>> I've tried really hard over my three cycles as PTL to build a stronger web 
>> of trust so the project can grow, and I feel that's been accomplished. We 
>> have a strong bench of future PTLs and leaders ready to go, I'm excited to 
>> watch them lead and help them in anyway I can.
> Wow, it took me by surprise. :( I want you to know that your leadership for 
> the last cycles was really game changing, and I am sure you should feel good 
> leaving the position with such a live and bright and open community as it is 
> now. Thanks a lot for what you did to the neutron island!
>
> I am also very happy that you don’t step back from neutron, and maybe we’ll 
> see you serve the position one more time in the future. ;)
>
> However hard it is - given that you set the bar so high - I believe this
> community will find someone to replace you in this role, and we’ll see no
> earthquakes and disasters.
>
> See you in Tokyo! [You should give a talk about how to be an awesome PTL!]

Kyle, I'm afraid I won't be original, but I'd like to add my words to
the stream of thanks to you.  Although I've only been in the Neutron
community for a short time so far, I have felt very welcomed, and I
think that's largely due to the positive and open-minded feeling that
you have embodied.

Neil




Re: [openstack-dev] [nova][neutron][SR-IOV] Hardware changes and shifting PCI addresses

2015-09-15 Thread Daniel P. Berrange
On Thu, Sep 10, 2015 at 06:53:06PM -0230, Brent Eagles wrote:
> Hi,
> 
> I was recently informed of a situation that came up when an engineer
> added an SR-IOV nic to a compute node that was hosting some guests that
> had VFs attached. Unfortunately, adding the card shuffled the PCI
> addresses causing some degree of havoc. Basically, the PCI addresses
> associated with the previously allocated VFs were no longer valid.

This seems to be implying that they took the host offline to make
hardware changes, and then tried to re-start the originally running
guests directly, without letting the scheduler re-run.

If correct, then IMHO that is an unsupported approach. After making
any hardware changes you should essentially consider that to be a
new compute host. There is no expectation that previously running
guests on that host can be restarted. You must let the compute
host report its new hardware capabilities, and let the scheduler
place guests on it from scratch, using the new PCI address info.

> I tend to consider this a non-issue. The expectation that hosts have
> relatively static hardware configuration (and kernel/driver configs for
> that matter) is the price you pay for having pets with direct hardware
> access. That being said, this did come as a surprise to some of those
> involved and I don't think we have any messaging around this or advice
> on how to deal with situations like this.
> 
> So what should we do? I can't quite see altering OpenStack to deal with
> this situation (or even how that could work). Has anyone done any
> research into this problem, even if it is how to recover or extricate
> a guest that is no longer valid? It seems that at the very least we
> could use some stern warnings in the docs.

Taking a host offline for maintenance, should be considered
equivalent to throwing away the existing host and deploying a new
host. There should be zero state carry-over from OpenStack POV,
since both the software and hardware changes can potentially
invalidate previous informationm used by the schedular for deploying
on that host.  The idea of recovering a previously running guest
should be explicitly unsupported.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-15 Thread Sofer Athlan-Guyot
Rich Megginson  writes:

>>> This seems to be the hardest part - I still cannot figure out how to
>>> use "compound" names with Puppet.
>> I don't get this point.  what is "2 or more resource could exists" and
>> how it relates to compound names ?
>
> I would like to uniquely specify a resource by the _combination_ of
> the name + the domain.  For example:
>
>   keystone_user { 'domain A admin user':
> name => 'admin',
> domain => 'domainA',
>   }
>
>   keystone_user { 'domain B admin user':
> name => 'admin',
> domain => 'domainB',
>   }
>
> Puppet doesn't like this - the value of the 'name' property of
> keystone_user is not unique throughout the manifest/catalog, even
> though both users are distinct and unique because they exist in
> different domains (and will have different UUIDs assigned by
> Keystone).
>
> Gilles posted links to discussions about how to use isnamevar and
> title_patterns with Puppet Ruby providers, but I could not get it to
> work.  I was using Puppet 3.8 - perhaps it only works in Puppet 4.0 or
> later.  At any rate, this is an area for someone to do some research

Thanks for the explanation.  I will definitely look into it.

>   - More difficult to debug
>>> More difficult than it is already? :P
>> require 'pry';binding.pry :)
>
> Tried that on Fedora 22 (actually - debugger pry because pry by itself
> isn't a debugger, but a REPL inspector).  Didn't work.
>
> Also doesn't help you when someone hands you a pile of Puppet logs . . .

Agreed, more useful for creating providers than for debugging Puppet logs.
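
The crux of the discussion is that Keystone users are unique by the
combination of (name, domain), while Puppet wants a single unique title.
Conceptually (plain Python just to show the composite key, not Puppet or
provider code):

```python
# Users are unique by (name, domain), not by bare name alone - the
# "composite namevar" Rich is trying to express with isnamevar and
# title_patterns.
users = {}

def ensure_user(name, domain):
    key = (name, domain)  # the composite key
    users.setdefault(key, {"name": name, "domain": domain})
    return users[key]


ensure_user("admin", "domainA")
ensure_user("admin", "domainB")   # distinct resource, same bare name
print(len(users))                 # 2
```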

-- 
Sofer Athlan-Guyot



Re: [openstack-dev] [neutron] PTL Non-Candidacy

2015-09-15 Thread Ihar Hrachyshka
> On 11 Sep 2015, at 23:12, Kyle Mestery  wrote:
> 
> I'm writing to let everyone know that I do not plan to run for Neutron PTL 
> for a fourth cycle. Being a PTL is a rewarding but difficult job, as Morgan 
> recently put it in his non-candidacy email [1]. But it goes further than that 
> for me. As Flavio put it in his post about "Being a PTL" [2], it's a full 
> time job. In the case of Neutron, it's more than a full time job, it's 
> literally an always on job.
> 
> I've tried really hard over my three cycles as PTL to build a stronger web of 
> trust so the project can grow, and I feel that's been accomplished. We have a 
> strong bench of future PTLs and leaders ready to go, I'm excited to watch 
> them lead and help them in anyway I can.

Wow, it took me by surprise. :( I want you to know that your leadership for the 
last cycles was really game changing, and I am sure you should feel good 
leaving the position with such a live and bright and open community as it is 
now. Thanks a lot for what you did to the neutron island!

I am also very happy that you don’t step back from neutron, and maybe we’ll see 
you serve the position one more time in the future. ;)

However hard it is - given that you set the bar so high - I believe this
community will find someone to replace you in this role, and we’ll see no
earthquakes and disasters.

See you in Tokyo! [You should give a talk about how to be an awesome PTL!]

Ihar




Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-15 Thread Flavio Percoco

On 15/09/15 02:46 +0200, Monty Taylor wrote:

On 09/15/2015 02:06 AM, Clint Byrum wrote:

Excerpts from Doug Hellmann's message of 2015-09-14 13:46:16 -0700:

Excerpts from Clint Byrum's message of 2015-09-14 13:25:43 -0700:

Excerpts from Doug Hellmann's message of 2015-09-14 12:51:24 -0700:

Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:

On 14/09/15 08:10 -0400, Doug Hellmann wrote:


After having some conversations with folks at the Ops Midcycle a
few weeks ago, and observing some of the more recent email threads
related to glance, glance-store, the client, and the API, I spent
last week contacting a few of you individually to learn more about
some of the issues confronting the Glance team. I had some very
frank, but I think constructive, conversations with all of you about
the issues as you see them. As promised, this is the public email
thread to discuss what I found, and to see if we can agree on what
the Glance team should be focusing on going into the Mitaka summit
and development cycle and how the rest of the community can support
you in those efforts.

I apologize for the length of this email, but there's a lot to go
over. I've identified 2 high priority items that I think are critical
for the team to be focusing on starting right away in order to use
the upcoming summit time effectively. I will also describe several
other issues that need to be addressed but that are less immediately
critical. First the high priority items:

1. Resolve the situation preventing the DefCore committee from
  including image upload capabilities in the tests used for trademark
  and interoperability validation.

2. Follow through on the original commitment of the project to
  provide an image API by completing the integration work with
  nova and cinder to ensure V2 API adoption.


Hi Doug,

First and foremost, I'd like to thank you for taking the time to dig
into these issues, and for reaching out to the community seeking for
information and a better understanding of what the real issues are. I
can imagine how much time you had to dedicate on this and I'm glad you
did.

Now, to your email, I very much agree with the priorities you
mentioned above and I'd like for, whomever will win Glance's PTL
election, to bring focus back on that.

Please, find some comments in-line for each point:



I. DefCore

The primary issue that attracted my attention was the fact that
DefCore cannot currently include an image upload API in its
interoperability test suite, and therefore we do not have a way to
ensure interoperability between clouds for users or for trademark
use. The DefCore process has been long, and at times confusing,
even to those of us following it sort of closely. It's not entirely
surprising that some projects haven't been following the whole time,
or aren't aware of exactly what the whole thing means. I have
proposed a cross-project summit session for the Mitaka summit to
address this need for communication more broadly, but I'll try to
summarize a bit here.


+1

I think it's quite sad that some projects, especially those considered
to be part of the `starter-kit:compute`[0], don't follow closely
what's going on in DefCore. I personally consider this a task PTLs
should incorporate in their role duties. I'm glad you proposed such
session, I hope it'll help raising awareness of this effort and it'll
help moving things forward on that front.


Until fairly recently a lot of the discussion was around process
and priorities for the DefCore committee. Now that those things are
settled, and we have some approved policies, it's time to engage
more fully.  I'll be working during Mitaka to improve the two-way
communication.





DefCore is using automated tests, combined with business policies,
to build a set of criteria for allowing trademark use. One of the
goals of that process is to ensure that all OpenStack deployments
are interoperable, so that users who write programs that talk to
one cloud can use the same program with another cloud easily. This
is a *REST API* level of compatibility. We cannot insert cloud-specific
behavior into our client libraries, because not all cloud consumers
will use those libraries to talk to the services. Similarly, we
can't put the logic in the test suite, because that defeats the
entire purpose of making the APIs interoperable. For this level of
compatibility to work, we need well-defined APIs, with a long support
period, that work the same no matter how the cloud is deployed. We
need the entire community to support this effort. From what I can
tell, that is going to require some changes to the current Glance
API to meet the requirements. I'll list those requirements, and I
hope we can discuss them to a degree that ensures everyone understands
them. I don't want this email thread to get bogged down in
implementation details or API designs, though, so let's try to keep
the discussion at a somewhat high level, and leave the details for
specs and summit discussions. 

Re: [openstack-dev] [glance] tasks (following "proposed priorities for Mitaka")

2015-09-15 Thread Flavio Percoco

On 14/09/15 16:09 -0400, Doug Hellmann wrote:

Excerpts from Monty Taylor's message of 2015-09-14 20:41:38 +0200:

On 09/14/2015 04:58 PM, Brian Rosmaita wrote:

[snip]

If "glance import-from http://example.com/my-image.qcow2' always worked,
and in the back end generated a task with the task workflow, and one of
the task workflows that a deployer could implement was one to do
conversions to the image format of the cloud provider's choice, that
would be teh-awesome. It's still a bit annoying to me that I, as a user,
need to come up with a place to put the image so that it can be
imported, but honestly, I'll take it. It's not _that_ hard of a problem.


This is more or less what I'm thinking we want, too. As a user, I want
to know how to import an image by having that documented clearly and by
using an obvious UI. As a deployer, I want to sometimes do things to an
image as they are imported, and background tasks may make that easier to
implement. As a user, I don't care if my image upload is a task or not.


IMHO, as much as it's not a _hard_ problem to solve, as a user I'd
hate to be asked to upload my image somewhere to create an image in my
cloud account. Not all users create scripts and not all users have the
same resources.

Simplest scenario. I'm a new user and I want to test your cloud. I
want to know how it works and I want to run *my* distro which is not
available in the list of public images. If I were to be asked to
upload the image somewhere to then use it, I'd be really sad and, at
the very least, I'd expect this cloud to provide *the* place where I
should put the image, which is not always the case.
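
For reference, the "import-from" flow Monty describes maps onto Glance's
v2 tasks API, where the client posts a task body along these lines (a
sketch of the payload shape; the exact input keys honoured depend on the
task flows the deployer has enabled):

```python
def build_import_task(source_url, disk_format="qcow2",
                      container_format="bare"):
    """Build a Glance v2 task request body for an image import.

    POSTed to /v2/tasks; 'import_from' points at the user-supplied
    location - the part Monty and Flavio would rather the cloud not
    require the user to provide themselves.
    """
    return {
        "type": "import",
        "input": {
            "import_from": source_url,
            "import_from_format": disk_format,
            "image_properties": {
                "disk_format": disk_format,
                "container_format": container_format,
            },
        },
    }


task = build_import_task("http://example.com/my-image.qcow2")
print(task["type"])                  # import
print(task["input"]["import_from"])  # http://example.com/my-image.qcow2
```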

Cheers,
Flavio

--
@flaper87
Flavio Percoco




[openstack-dev] [Ironic] [Inspector] Finishing Liberty (was: final liberty cycle client library releases needed)

2015-09-15 Thread Dmitry Tantsur

Hi folks!

As you can see below, we have to make the final release of 
python-ironic-inspector-client really soon. We have 2 big missing parts:


1. Introspection rules support.
   I'm working on it: https://review.openstack.org/#/c/223096/
   This required a substantial refactoring, so that our client does not 
become a complete mess: https://review.openstack.org/#/c/223490/


2. Support for getting introspection data. John (trown) volunteered to 
do this work.


I'd like to ask the inspector team to pay close attention to these 
patches, as the deadline for them is Friday (preferably European time).


Next, please have a look at the milestone page for ironic-inspector 
itself: https://launchpad.net/ironic-inspector/+milestone/2.2.0
There are things that require review, and there are things without an 
assignee. If you'd like to volunteer for something there, please assign 
it to yourself. Our deadline is next Thursday, but it would be really 
good to finish it earlier next week to dedicate some time to testing.


Thanks all, I'm looking forward to this release :)


 Forwarded Message 
Subject: Re: [openstack-dev] [all][ptl][release] final liberty cycle 
client library releases needed

Date: Tue, 15 Sep 2015 10:45:45 -0400
From: Doug Hellmann 
Reply-To: OpenStack Development Mailing List (not for usage questions) 


To: openstack-dev 

Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:

On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
>> PTLs and release liaisons,
>>
>> In order to keep the rest of our schedule for the end-of-cycle release
>> tasks, we need to have final releases for all client libraries in the
>> next day or two.
>>
>> If you have not already submitted your final release request for this
>> cycle, please do that as soon as possible.
>>
>> If you *have* already submitted your final release request for this
>> cycle, please reply to this email and let me know that you have so I can
>> create your stable/liberty branch.
>>
>> Thanks!
>> Doug
>
> I forgot to mention that we also need the constraints file in
> global-requirements updated for all of the releases, so we're actually
> testing with them in the gate. Please take a minute to check the version
> specified in openstack/requirements/upper-constraints.txt for your
> libraries and submit a patch to update it to the latest release if
> necessary. I'll do a review later in the week, too, but it's easier to
> identify the causes of test failures if we have one patch at a time.

Hi Doug!

When is the last and final deadline for doing all this for
not-so-important and non-release:managed projects like ironic-inspector?
We still lack some Liberty features covered in
python-ironic-inspector-client. Do we have time until end of week to
finish them?


We would like for the schedule to be the same for everyone. We need the
final versions for all libraries this week, so we can update
requirements constraints by early next week before the RC1.

https://wiki.openstack.org/wiki/Liberty_Release_Schedule

Doug



Sorry if you hear this question too often :)

Thanks!

>
> Doug
>
>








Re: [openstack-dev] [openstack-ansible] Security hardening

2015-09-15 Thread Jeff Keopp
This is a very interesting proposal and one I believe is needed.  I'm
currently looking at hardening the controller nodes from unwanted access
and discovered that every time the controller node is booted/rebooted, it
flushes the iptables and writes only those rules that neutron believes
should be there.  This behavior would render this proposal ineffective
once the node is rebooted.

So I believe neutron needs to be fixed to not flush the iptables on each
boot, but to write the iptables to /etc/sysconfig/iptables and then
restore them as a normal Linux box should do. It should be a good citizen
with other processes.

A sysadmin should be allowed to use whatever iptables handlers they wish
to implement security policies and not have an OpenStack process undo what
they have set.

I should mention this is on a system using a flat network topology and
bare metal nodes.  No VMs.
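
As a sketch of the kind of separation being asked for: neutron's chains
carry a "neutron-" prefix, so a deployer-side tool could in principle
persist everything else across a flush (illustrative only - this is not
an existing OpenStack or neutron utility):

```python
def split_rules(iptables_save_lines):
    """Split an iptables-save dump into neutron-managed and other rules.

    Rules that reference a chain starting with 'neutron-' are treated as
    neutron's; everything else is the administrator's to persist, e.g.
    to /etc/sysconfig/iptables.
    """
    neutron, admin = [], []
    for line in iptables_save_lines:
        target = neutron if "neutron-" in line else admin
        target.append(line)
    return neutron, admin


dump = [
    "-A INPUT -j neutron-openvswi-INPUT",
    "-A INPUT -p tcp --dport 22 -j ACCEPT",  # admin-managed SSH rule
]
neutron_rules, admin_rules = split_rules(dump)
print(admin_rules)  # ['-A INPUT -p tcp --dport 22 -j ACCEPT']
```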

--
Jeff Keopp | Sr. Software Engineer, ES Systems.
380 Jackson Street | St. Paul, MN 55101 | USA  | www.cray.com





-Original Message-
From: Major Hayden 
Reply-To: "OpenStack Development Mailing List (not for usage questions)"

Date: Monday, September 14, 2015 at 11:34
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [openstack-ansible] Security hardening

>On 09/14/2015 03:28 AM, Jesse Pretorius wrote:
>> I agree with Clint that this is a good approach.
>> 
>> If there is an automated way that we can verify the security of an
>>installation at a reasonable/standardised level then I think we should
>>add a gate check for it too.
>
>Here's a rough draft of a spec.  Feel free to throw some darts.
>
>  https://review.openstack.org/#/c/222619/
>
>--
>Major Hayden
>




Re: [openstack-dev] [all][ptl][release] final liberty cycle client library releases needed

2015-09-15 Thread Doug Hellmann
Excerpts from Sergey Lukjanov's message of 2015-09-15 18:12:23 +0300:
> We're in a good shape with sahara client. 0.11.0 is the final minor release
> for it. Constraints are up to date.

Thanks, Sergey! We appreciate you and the rest of the Sahara team staying
on track with the work needed for the release schedule.

Doug



Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Doug Wiegley
Hi all,

One solution to this was a neutron spec that was added for a “get me a network” 
api, championed by Jay Pipes, which would auto-assign a public network on vm 
boot. It looks like it was resource starved in Liberty, though:

https://blueprints.launchpad.net/neutron/+spec/get-me-a-network 


Thanks,
doug
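
The name-to-UUID friction Mike mentions is essentially this lookup, which
clients could do on the user's behalf (a sketch of the logic only; a real
client must handle the ambiguous-name case, as shown):

```python
def network_id_by_name(networks, name):
    """Resolve a network name to its UUID from a list_networks()-style
    result, failing loudly when the name is missing or ambiguous."""
    matches = [n["id"] for n in networks if n["name"] == name]
    if not matches:
        raise LookupError("no network named %r" % name)
    if len(matches) > 1:
        raise LookupError("name %r is ambiguous: %s" % (name, matches))
    return matches[0]


nets = [
    {"id": "11111111-2222-3333-4444-555555555555", "name": "private"},
    {"id": "66666666-7777-8888-9999-000000000000", "name": "public"},
]
print(network_id_by_name(nets, "public"))  # prints the id of 'public'
```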


> On Sep 15, 2015, at 9:27 AM, Mike Spreitzer  wrote:
> 
> Monty Taylor  wrote on 09/15/2015 11:04:07 AM:
> 
> > a) an update to python-novaclient to allow a named network to be passed 
> > to satisfy the "you have more than one network" - the nics argument is 
> > still useful for more complex things
> 
> I am not using the latest, but rather Juno.  I find that in many places the 
> Neutron CLI insists on a UUID when a name could be used.  Three cheers for 
> any campaign to fix that.
> 
> And, yeah, creating VMs on a shared public network is good too.
> 
> Thanks,
> mike



Re: [openstack-dev] [Ironic] [Inspector] Finishing Liberty

2015-09-15 Thread Dmitry Tantsur

On 09/15/2015 05:02 PM, Dmitry Tantsur wrote:

Hi folks!

As you can see below, we have to make the final release of
python-ironic-inspector-client really soon. We have 2 big missing parts:

1. Introspection rules support.
I'm working on it: https://review.openstack.org/#/c/223096/
This required a substantial refactoring, so that our client does not
become a complete mess: https://review.openstack.org/#/c/223490/

2. Support for getting introspection data. John (trown) volunteered to
do this work.

I'd like to ask the inspector team to pay close attention to these
patches, as the deadline for them is Friday (preferably European time).

Next, please have a look at the milestone page for ironic-inspector
itself: https://launchpad.net/ironic-inspector/+milestone/2.2.0
There are things that require review, and there are things without an
assignee. If you'd like to volunteer for something there, please assign
it to yourself. Our deadline is next Thursday, but it would be really
good to finish it earlier next week to dedicate some time to testing.


Forgot an important thing: we have 2 outstanding IPA patches as well:
https://review.openstack.org/#/c/222605/
https://review.openstack.org/#/c/223054



Thanks all, I'm looking forward to this release :)


 Forwarded Message 
Subject: Re: [openstack-dev] [all][ptl][release] final liberty cycle
client library releases needed
Date: Tue, 15 Sep 2015 10:45:45 -0400
From: Doug Hellmann 
Reply-To: OpenStack Development Mailing List (not for usage questions)

To: openstack-dev 

Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:

On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
>> PTLs and release liaisons,
>>
>> In order to keep the rest of our schedule for the end-of-cycle release
>> tasks, we need to have final releases for all client libraries in the
>> next day or two.
>>
>> If you have not already submitted your final release request for this
>> cycle, please do that as soon as possible.
>>
>> If you *have* already submitted your final release request for this
>> cycle, please reply to this email and let me know that you have so
I can
>> create your stable/liberty branch.
>>
>> Thanks!
>> Doug
>
> I forgot to mention that we also need the constraints file in
> global-requirements updated for all of the releases, so we're actually
> testing with them in the gate. Please take a minute to check the
version
> specified in openstack/requirements/upper-constraints.txt for your
> libraries and submit a patch to update it to the latest release if
> necessary. I'll do a review later in the week, too, but it's easier to
> identify the causes of test failures if we have one patch at a time.

Hi Doug!

When is the last and final deadline for doing all this for
not-so-important and non-release:managed projects like ironic-inspector?
We still lack some Liberty features covered in
python-ironic-inspector-client. Do we have time until end of week to
finish them?


We would like for the schedule to be the same for everyone. We need the
final versions for all libraries this week, so we can update
requirements constraints by early next week before the RC1.

https://wiki.openstack.org/wiki/Liberty_Release_Schedule

Doug



Sorry if you hear this question too often :)

Thanks!

>
> Doug
>
>
__

> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>











Re: [openstack-dev] [all][ptl][release] final liberty cycle client library releases needed

2015-09-15 Thread Doug Hellmann
Excerpts from Renat Akhmerov's message of 2015-09-15 18:11:58 +0300:
> 
> > On 15 Sep 2015, at 17:45, Doug Hellmann  wrote:
> > 
> > We would like for the schedule to be the same for everyone. We need the
> > final versions for all libraries this week, so we can update
> > requirements constraints by early next week before the RC1.
> 
> 
> “for everyone” meaning “for all Big Tent projects”? I’m trying to figure out 
> if that affects projects like Mistral that are not massively interdependent 
> with other projects.

If you plan to have a Liberty release, we would like for you to follow
the same release schedule as everyone else.

Doug



Re: [openstack-dev] [glance] The current state of glance v2 in public clouds

2015-09-15 Thread Flavio Percoco

On 15/09/15 15:30 +, Mark Voelker wrote:

As another data point, I took a poke around the OpenStack Marketplace [1] this 
morning and found:

* 1 distro/appliance claims v1 support
* 3 managed services claim v1 support
* 3 public clouds claim v1 support

And everyone else claims v2 support.  I’d encourage vendors to check their 
Marketplace data for accuracy…if something’s wrong there, reach out to 
ecosys...@openstack.org to enquire about fixing it.  If you simply aren’t 
listed on the Marketplace and would like to be, check out [2].

[1] https://www.openstack.org/marketplace/
[2] http://www.openstack.org/assets/marketplace/join-the-marketplace.pdf


Great!


On Sep 15, 2015, at 7:32 AM, Monty Taylor  wrote:

Hi!

In some of our other discussions, there have been musings such as "people want to..." or 
"people are concerned about..." Those are vague and unsubstantiated. Instead of "people" 
- I thought I'd enumerate actual data that I have personally empirically gathered.

I currently have an account on 12 different public clouds:

Auro
CityCloud
Dreamhost
Elastx
EnterCloudSuite
HP
OVH
Rackspace
RunAbove
Ultimum
UnitedStack
Vexxhost


(if, btw, you have a public cloud that I did not list above, please poke me and 
let's get me an account so that I can make sure you're listed/supported in 
os-client-config and also so that I don't make sweeping generalizations without 
you)

In case you care- those clouds cover US, Canada, Sweden, UK, France, Germany, 
Netherlands, Czech Republic and China.

Here's the rundown:

11 of the 12 clouds run Glance v2, 1 only has Glance v1
11 of the 12 clouds support image-create, 1 uses tasks
8 of the 12 support qcow2, 3 require raw, 1 requires vhd


Thanks for taking the time. This is actually good info and I won't
hide my surprise since I thought most of the clouds were disabling
Glance's v2.

Cheers,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [all] [ptl] Troubleshooting cross-project communications

2015-09-15 Thread Anita Kuno
On 09/15/2015 08:50 AM, Anne Gentle wrote:
> Hi all,
> 
> What can we do to make the cross-project meeting more helpful and useful
> for cross-project communications? I started with a proposal to move it to a
> different time, which morphed into an idea to alternate times. But, knowing
> that we need to layer communications I wonder if we should troubleshoot
> cross-project communications further? These are the current ways
> cross-project communications happen:
> 
> 1. The weekly meeting in IRC
> 2. The cross-project specs and reviewing those
> 3. Direct connections between team members
> 4. Cross-project talks at the Summits
> 
> What are some of the problems with each layer?
> 
> 1. weekly meeting: time zones, global reach, size of cross-project concerns
> due to multiple projects being affected, another meeting for PTLs to attend
> and pay attention to
> 2. specs: don't seem to get much attention unless they're brought up at
> weekly meeting, finding owners for the work needing to be done in a spec is
> difficult since each project team has its own priorities
> 3. direct communications: decisions from these comms are difficult to then
> communicate more widely, it's difficult to get time with busy PTLs
> 4. Summits: only happens twice a year, decisions made then need to be
> widely communicated
> 
> I'm sure there are more details and problems I'm missing -- feel free to
> fill in as needed.
> 
> Lastly, what suggestions do you have for solving problems with any of these
> layers?
> 
> Thanks,
> Anne
> 
> 
> 

Hi Anne,

Thanks for starting the conversation as I think it is an area we can
really benefit from improving.

For my part, I have been trying to attend multiple mid-cycles in order
to do what I can to alleviate the effect of decisions made by one group
either duplicating work with another or directly contravening it.

It is one small part of the over communication issue which affects us
all but I do believe it has some benefit to those with whom I am able to
interact. One of the things I look for is ways for folks who need to be
discussing something specific, since they are both involved in the same
problem area, to become aware of the efforts of the other and to
encourage direct communication between those involved.

I know due to financial and personal effects this strategy doesn't
scale, but I did want to bring awareness to my efforts in your status
report.

Thanks for initiating the discussion, Anne,
Anita.



Re: [openstack-dev] [all][ptl][release] final liberty cycle client library releases needed

2015-09-15 Thread Renat Akhmerov

> On 15 Sep 2015, at 17:45, Doug Hellmann  wrote:
> 
> We would like for the schedule to be the same for everyone. We need the
> final versions for all libraries this week, so we can update
> requirements constraints by early next week before the RC1.


“for everyone” meaning “for all Big Tent projects”? I’m trying to figure out if 
that affects projects like Mistral that are not massively interdependent with 
other projects.

Thnx

Renat Akhmerov
@ Mirantis Inc.



Re: [openstack-dev] [glance] The current state of glance v2 in public clouds

2015-09-15 Thread Mark Voelker
As another data point, I took a poke around the OpenStack Marketplace [1] this 
morning and found:

* 1 distro/appliance claims v1 support
* 3 managed services claim v1 support
* 3 public clouds claim v1 support

And everyone else claims v2 support.  I’d encourage vendors to check their 
Marketplace data for accuracy…if something’s wrong there, reach out to 
ecosys...@openstack.org to enquire about fixing it.  If you simply aren’t 
listed on the Marketplace and would like to be, check out [2].

[1] https://www.openstack.org/marketplace/
[2] http://www.openstack.org/assets/marketplace/join-the-marketplace.pdf

At Your Service,

Mark T. Voelker



> On Sep 15, 2015, at 7:32 AM, Monty Taylor  wrote:
> 
> Hi!
> 
> In some of our other discussions, there have been musings such as "people 
> want to..." or "people are concerned about..." Those are vague and 
> unsubstantiated. Instead of "people" - I thought I'd enumerate actual data 
> that I have personally empirically gathered.
> 
> I currently have an account on 12 different public clouds:
> 
> Auro
> CityCloud
> Dreamhost
> Elastx
> EnterCloudSuite
> HP
> OVH
> Rackspace
> RunAbove
> Ultimum
> UnitedStack
> Vexxhost
> 
> 
> (if, btw, you have a public cloud that I did not list above, please poke me 
> and let's get me an account so that I can make sure you're listed/supported 
> in os-client-config and also so that I don't make sweeping generalizations 
> without you)
> 
> In case you care- those clouds cover US, Canada, Sweden, UK, France, Germany, 
> Netherlands, Czech Republic and China.
> 
> Here's the rundown:
> 
> 11 of the 12 clouds run Glance v2, 1 only has Glance v1
> 11 of the 12 clouds support image-create, 1 uses tasks
> 8 of the 12 support qcow2, 3 require raw, 1 requires vhd
> 
> Use this data as you will.
> 
> Monty
> 
> Monty
> 


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Mike Spreitzer
Monty Taylor  wrote on 09/15/2015 11:04:07 AM:

> a) an update to python-novaclient to allow a named network to be passed 
> to satisfy the "you have more than one network" - the nics argument is 
> still useful for more complex things

I am not using the latest, but rather Juno.  I find that in many places 
the Neutron CLI insists on a UUID when a name could be used.  Three cheers 
for any campaign to fix that.

And, yeah, creating VMs on a shared public network is good too.

Thanks,
mike



Re: [openstack-dev] [neutron][L3][QA] DVR job failure rate and maintainability

2015-09-15 Thread Carl Baldwin
Sean,

Thank you for writing this.  It is clear that we have some work to do
and we need more attention on this.  We were able to get the job
voting a few months ago when the failure rates for all the jobs were
at a low point.  However, we never really addressed the fact that this
job has always had a little bit higher rate than its non-DVR
counter-part.  DVR is a supported feature now and we need to be behind
it.

I'm adding this to the agenda for the L3 meeting this Thursday [1].
Let's dedicate real talent and time to getting to the bottom of the
higher failure rate, driving the bugs out, and making the enhancements
needed to make this feature what it should be.

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam

On Mon, Sep 14, 2015 at 4:01 PM, Sean M. Collins  wrote:
> Hi,
>
> Carl Baldwin, Doug Wiegley, Matt Kassawara, Ryan Moats, and myself are
> at the QA sprint in Fort Collins. Earlier today there was a discussion
> about the failure rate about the DVR job, and the possible impact that
> it is having on the gate.
>
> Ryan has a good patch up that shows the failure rates over time:
>
> https://review.openstack.org/223201
>
> To view the graphs, you go over into your neutron git repo, and open the
> .html files that are present in doc/dashboards - which should open up
> your browser and display the Graphite query.
>
> Doug put up a patch to change the DVR job to be non-voting while we
> determine the cause of the recent spikes:
>
> https://review.openstack.org/223173
>
> There was a good discussion after pushing the patch, revolving around
> the need for Neutron to have DVR, to fit operational and reliability
> requirements, and help transition away from Nova-Network by providing
> one of many solutions similar to Nova's multihost feature.  I'm skipping
> over a huge amount of context about the Nova-Network and Neutron work,
> since that is a big and ongoing effort.
>
> DVR is an important feature to have, and we need to ensure that the job
> that tests DVR has a high pass rate.
>
> One thing that I think we need, is to form a group of contributors that
> can help with the DVR feature in the immediate term to fix the current
> bugs, and longer term maintain the feature. It's a big task and I don't
> believe that a single person or company can or should do it by themselves.
>
> The L3 group is a good place to start, but I think that even within the
> L3 team we need dedicated and diverse group of people who are interested
> in maintaining the DVR feature.
>
> Without this, I think the DVR feature will start to bit-rot and that
> will have a significant impact on our ability to recommend Neutron as a
> replacement for Nova-Network in the future.
>
> --
> Sean M. Collins
>


[openstack-dev] [Infra] Meeting Tuesday September 15th at 19:00 UTC

2015-09-15 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday September 15th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-08-19.01.log.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-08-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-08-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2



Re: [openstack-dev] [all] [ptl] Troubleshooting cross-project communications

2015-09-15 Thread Christopher Aedo
On Tue, Sep 15, 2015 at 7:50 AM, Anne Gentle
 wrote:
> Hi all,
>
> What can we do to make the cross-project meeting more helpful and useful for
> cross-project communications? I started with a proposal to move it to a
> different time, which morphed into an idea to alternate times. But, knowing
> that we need to layer communications I wonder if we should troubleshoot
> cross-project communications further? These are the current ways
> cross-project communications happen:
>
> 1. The weekly meeting in IRC
> 2. The cross-project specs and reviewing those
> 3. Direct connections between team members
> 4. Cross-project talks at the Summits

5. This mailing list

>
> What are some of the problems with each layer?
>
> 1. weekly meeting: time zones, global reach, size of cross-project concerns
> due to multiple projects being affected, another meeting for PTLs to attend
> and pay attention to
> 2. specs: don't seem to get much attention unless they're brought up at
> weekly meeting, finding owners for the work needing to be done in a spec is
> difficult since each project team has its own priorities
> 3. direct communications: decisions from these comms are difficult to then
> communicate more widely, it's difficult to get time with busy PTLs
> 4. Summits: only happens twice a year, decisions made then need to be widely
> communicated

5. There's tremendous volume on the mailing list, and it can be very
difficult to stay on top of all that traffic.

>
> I'm sure there are more details and problems I'm missing -- feel free to
> fill in as needed.
>
> Lastly, what suggestions do you have for solving problems with any of these
> layers?

Unless I missed it, I'm really not sure why the mailing list didn't
make the list here?  My take at least is that we should be
coordinating with each other through the mailing list when real-time
isn't possible (due time zone issues, etc.)  At the very least, it
keeps people from holding on to information or issues until the next
weekly meeting, or for a few months until the next mid-cycle or
summit.

I personally would like to see more coordination happening on the ML,
and would be curious to hear opinions on how that can be improved.
Maybe a tag on the subject line to draw attention in this case makes
this a little easier, since we are by nature talking about issues that
span all projects?  [cross-project] rather than [all]?

-Christopher



Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Armando M.
On 15 September 2015 at 08:04, Monty Taylor  wrote:

> Hey all!
>
> If any of you have ever gotten drunk with me, you'll know I hate floating
> IPs more than I hate being stabbed in the face with a very angry fish.
>
> However, that doesn't really matter. What should matter is "what is the
> most sane thing we can do for our users"
>
> As you might have seen in the glance thread, I have a bunch of OpenStack
> public cloud accounts. Since I wrote that email this morning, I've added
> more - so we're up to 13.
>
> auro
> citycloud
> datacentred
> dreamhost
> elastx
> entercloudsuite
> hp
> ovh
> rackspace
> runabove
> ultimum
> unitedstack
> vexxhost
>
> Of those public clouds, 5 of them require you to use a floating IP to get
> an outbound address, the others directly attach you to the public network.
> Most of those 8 allow you to create a private network, to boot vms on the
> private network, and ALSO to create a router with a gateway and put
> floating IPs on your private ip'd machines if you choose.
>
> Which brings me to the suggestion I'd like to make.
>
> Instead of having our default in devstack and our default when we talk
> about things be "you boot a VM and you put a floating IP on it" - which
> solves one of the two usage models - how about:
>
> - Cloud has a shared: True, external:routable: True neutron network. I
> don't care what it's called  ext-net, public, whatever. the "shared" part
> is the key, that's the part that lets someone boot a vm on it directly.
>
> - Each person can then make a private network, router, gateway, etc. and
> get floating-ips from the same public network if they prefer that model.
>
> Are there any good reasons to not push to get all of the public networks
> marked as "shared"?
>

The reason is simple: not every cloud deployment is the same: private is
different from public and even within the same cloud model, the network
topology may vary greatly.

Perhaps Neutron fails in the sense that it provides you with too much
choice, and perhaps we have to standardize on the type of networking
profile expected by a user of OpenStack public clouds before making changes
that would fragment this landscape even further.

If you are advocating for more flexibility without limiting the existing
one, we're only making the problem worse.


>
> OH - well, one thing - that's that once there are two networks in an
> account you have to specify which one. This is really painful in nova
> client. Say, for instance, you have a public network called "public" and a
> private network called "private" ...
>
> You can't just say "nova boot --network=public" - nope, you need to say
> "nova boot --nics net-id=$uuid_of_my_public_network"
>
> So I'd suggest 2 more things;
>
> a) an update to python-novaclient to allow a named network to be passed to
> satisfy the "you have more than one network" - the nics argument is still
> useful for more complex things
>
> b) ability to say "vms in my cloud should default to being booted on the
> public network" or "vms in my cloud should default to being booted on a
> network owned by the user"
>
> Thoughts?
>

As I implied earlier, I am not sure how healthy this choice is. As a user
of multiple clouds I may end up having a different user experience based on
which cloud I am using...I thought you were partially complaining about
lack of consistency?


>
> Monty
>


[openstack-dev] [Fuel] fuel-createmirror "command not found"

2015-09-15 Thread Adam Lawson
Hi guys,
Is there a trick to get the fuel-createmirror command to work? Customer
Fuel environment was at 6.0, upgraded to 6.1, tried to create a local mirror
and failed. Not working from master node.


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-15 Thread Flavio Percoco

On 15/09/15 08:30 -0400, Doug Hellmann wrote:

Excerpts from Kuvaja, Erno's message of 2015-09-15 09:43:26 +:

[snip]

I'm not sure it really is so special API- or technical-wise; it's just the one 
that was lifted onto the pedestal in this discussion.


OK. I'm concerned that my message of "we need an interoperable image
upload API" is sometimes being met with various versions of "that's not
possible." I think that's wrong, and we should fix it. I also think it's
possible to make the API consistent and still support background tasks,
image scanning, and other things deployers want.


Yes, this is a discussion that started in this cycle as part of
this[0] proposed spec. The discussion was put on hold until Mitaka.
One of the concerns raised was whether it's ok to make tasks part of
the upload process or not since that changes some of the existing
behavior.

For example, right now, when an image is uploaded, it can be used
right away. If we make async tasks part of the upload workflow, then
images won't be available until all tasks are executed.

Personally, I think the above is fine and it'd give the user a better
experience in comparison w/ the current task API. There are other
issues related to this that require a lenghtier discussion and are not
strictly related to the API.

[0] https://review.openstack.org/#/c/188388/

Flavio

--
@flaper87
Flavio Percoco




[openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Monty Taylor

Hey all!

If any of you have ever gotten drunk with me, you'll know I hate 
floating IPs more than I hate being stabbed in the face with a very 
angry fish.


However, that doesn't really matter. What should matter is "what is the 
most sane thing we can do for our users"


As you might have seen in the glance thread, I have a bunch of OpenStack 
public cloud accounts. Since I wrote that email this morning, I've added 
more - so we're up to 13.


auro
citycloud
datacentred
dreamhost
elastx
entercloudsuite
hp
ovh
rackspace
runabove
ultimum
unitedstack
vexxhost

Of those public clouds, 5 of them require you to use a floating IP to 
get an outbound address, the others directly attach you to the public 
network. Most of those 8 allow you to create a private network, to boot 
vms on the private network, and ALSO to create a router with a gateway 
and put floating IPs on your private ip'd machines if you choose.


Which brings me to the suggestion I'd like to make.

Instead of having our default in devstack and our default when we talk 
about things be "you boot a VM and you put a floating IP on it" - which 
solves one of the two usage models - how about:


- Cloud has a shared: True, external:routable: True neutron network. I 
don't care what it's called  ext-net, public, whatever. the "shared" 
part is the key, that's the part that lets someone boot a vm on it directly.


- Each person can then make a private network, router, gateway, etc. and 
get floating-ips from the same public network if they prefer that model.


Are there any good reasons to not push to get all of the public networks 
marked as "shared"?


OH - well, one thing - that's that once there are two networks in an 
account you have to specify which one. This is really painful in nova 
client. Say, for instance, you have a public network called "public" and 
a private network called "private" ...


You can't just say "nova boot --network=public" - nope, you need to say 
"nova boot --nics net-id=$uuid_of_my_public_network"


So I'd suggest 2 more things;

a) an update to python-novaclient to allow a named network to be passed 
to satisfy the "you have more than one network" - the nics argument is 
still useful for more complex things


b) ability to say "vms in my cloud should default to being booted on the 
public network" or "vms in my cloud should default to being booted on a 
network owned by the user"


Thoughts?

Monty
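To make suggestion (a) concrete, a minimal sketch of the name-or-UUID lookup a client would need is below. The network records and helper name are invented for illustration; the ambiguity check is the reason the API currently insists on UUIDs, since Neutron network names are not unique.

```python
import uuid

def resolve_network(networks, name_or_id):
    """Return the UUID of the network whose name or id matches name_or_id.

    `networks` is a list of dicts with 'id' and 'name' keys, standing in
    for the result of a network listing call.
    """
    matches = [n for n in networks if name_or_id in (n["id"], n["name"])]
    if not matches:
        raise LookupError("no network matches %r" % name_or_id)
    if len(matches) > 1:
        # Network names are not unique in Neutron, which is why the
        # client has to fall back to UUIDs in the ambiguous case.
        raise LookupError("%r is ambiguous, use the UUID" % name_or_id)
    return matches[0]["id"]

# Invented sample data standing in for a real network listing.
nets = [
    {"id": str(uuid.uuid4()), "name": "public"},
    {"id": str(uuid.uuid4()), "name": "private"},
]
assert resolve_network(nets, "public") == nets[0]["id"]
assert resolve_network(nets, nets[1]["id"]) == nets[1]["id"]
```

With something like this in place, "nova boot --network=public" becomes a thin wrapper over the existing --nics path.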



Re: [openstack-dev] [all][ptl][release] final liberty cycle client library releases needed

2015-09-15 Thread Sergey Lukjanov
We're in good shape with the sahara client. 0.11.0 is the final minor release
for it. Constraints are up to date.

On Tue, Sep 15, 2015 at 5:45 PM, Doug Hellmann 
wrote:

> Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:
> > On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> > > Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
> > >> PTLs and release liaisons,
> > >>
> > >> In order to keep the rest of our schedule for the end-of-cycle release
> > >> tasks, we need to have final releases for all client libraries in the
> > >> next day or two.
> > >>
> > >> If you have not already submitted your final release request for this
> > >> cycle, please do that as soon as possible.
> > >>
> > >> If you *have* already submitted your final release request for this
> > >> cycle, please reply to this email and let me know that you have so I
> can
> > >> create your stable/liberty branch.
> > >>
> > >> Thanks!
> > >> Doug
> > >
> > > I forgot to mention that we also need the constraints file in
> > > global-requirements updated for all of the releases, so we're actually
> > > testing with them in the gate. Please take a minute to check the
> version
> > > specified in openstack/requirements/upper-constraints.txt for your
> > > libraries and submit a patch to update it to the latest release if
> > > necessary. I'll do a review later in the week, too, but it's easier to
> > > identify the causes of test failures if we have one patch at a time.
> >
> > Hi Doug!
> >
> > When is the last and final deadline for doing all this for
> > not-so-important and non-release:managed projects like ironic-inspector?
> > We still lack some Liberty features covered in
> > python-ironic-inspector-client. Do we have time until end of week to
> > finish them?
>
> We would like for the schedule to be the same for everyone. We need the
> final versions for all libraries this week, so we can update
> requirements constraints by early next week before the RC1.
>
> https://wiki.openstack.org/wiki/Liberty_Release_Schedule
>
> Doug
>
> >
> > Sorry if you hear this question too often :)
> >
> > Thanks!
> >
> > >
> > > Doug
> > >
> > >
> > >
> >
>
>



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
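The upper-constraints.txt update Doug asks for in the quoted message amounts to a one-line pin bump. A sketch, using made-up library names and versions rather than any real project's:

```shell
# Create a stand-in upper-constraints.txt (contents are invented for
# the example; the real file lives in openstack/requirements).
cat > upper-constraints.txt <<'EOF'
python-exampleclient===0.10.0
python-otherclient===1.2.0
EOF

# Bump only the line for the library that was just released.
sed -i 's/^python-exampleclient===.*/python-exampleclient===0.11.0/' upper-constraints.txt

grep python-exampleclient upper-constraints.txt
```

In practice this one-line change is submitted as its own patch to openstack/requirements, which keeps gate failures attributable to a single release.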


Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread Dulko, Michal
> From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
> Sent: Tuesday, September 15, 2015 4:54 PM
> 
> Hi,
> 
> Let me see if I got this:
> - running 3 (multiple) c-vols won't automatically give you failover
> - each c-vol is "master" of a certain number of volumes
> -- if the c-vol is "down" then those volumes cannot be managed by another
> c-vol
> 
> What i'm trying to achieve is making sure ANY volume is managed
> (manageable) by WHICHEVER c-vol is running (and gets the call first) - sort of
> A/A - so this means i need to look into Pacemaker and virtual-ips, or i should
> try first the "same name".
> 

I think you should try Pacemaker A/P configuration with same hostname in 
cinder.conf. That's the only safe option here.
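
For illustration, the shared-hostname setup is just the `host` option set to 
the same value on every node that can run cinder-volume (the value below is 
an arbitrary example, not a required name):

```ini
# /etc/cinder/cinder.conf on every node eligible to run cinder-volume.
# All nodes must agree on this string: whichever node Pacemaker starts
# the service on then picks up the volumes owned by this logical host.
[DEFAULT]
host = cinder-volume-cluster
```

Pacemaker then ensures only one node runs the service at a time (A/P), so 
there is never more than one c-vol claiming that hostname on the queue.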

I don't quite understand John's idea of how a virtual IP can help with c-vol, as 
this service only listens on an AMQP queue. I think a VIP is useful only for 
running the c-api service. 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Inspector] Finishing Liberty (was: final liberty cycle client library releases needed)

2015-09-15 Thread Doug Hellmann
Excerpts from Dmitry Tantsur's message of 2015-09-15 17:02:52 +0200:
> Hi folks!
> 
> As you can see below, we have to make the final release of 
> python-ironic-inspector-client really soon. We have 2 big missing parts:
> 
> 1. Introspection rules support.
> I'm working on it: https://review.openstack.org/#/c/223096/
> This required a substantial requirement, so that our client does not 
> become a complete mess: https://review.openstack.org/#/c/223490/

At this point in the schedule, I'm not sure it's a good idea to be
doing anything that's considered a "substantial" rewrite (what I
assume you meant instead of a "substantial requirement").

What depends on python-ironic-inspector-client? Are all of the things
that depend on it working for liberty right now? If so, that's your
liberty release and the rewrite should be considered for mitaka.

> 
> 2. Support for getting introspection data. John (trown) volunteered to 
> do this work.
> 
> I'd like to ask the inspector team to pay close attention to these 
> patches, as the deadline for them is Friday (preferably European time).

You should definitely not be trying to write anything new at this point.
The feature freeze was *last* week. The releases for this week are meant
to include bug fixes and any needed requirements updates.

> 
> Next, please have a look at the milestone page for ironic-inspector 
> itself: https://launchpad.net/ironic-inspector/+milestone/2.2.0
> There are things that require review, and there are things without an 
> assignee. If you'd like to volunteer for something there, please assign 
> it to yourself. Our deadline is next Thursday, but it would be really 
> good to finish it earlier next week to dedicate some time to testing.
> 
> Thanks all, I'm looking forward to this release :)
> 
> 
>  Forwarded Message 
> Subject: Re: [openstack-dev] [all][ptl][release] final liberty cycle 
> client library releases needed
> Date: Tue, 15 Sep 2015 10:45:45 -0400
> From: Doug Hellmann 
> Reply-To: OpenStack Development Mailing List (not for usage questions) 
> 
> To: openstack-dev 
> 
> Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:
> > On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> > > Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
> > >> PTLs and release liaisons,
> > >>
> > >> In order to keep the rest of our schedule for the end-of-cycle release
> > >> tasks, we need to have final releases for all client libraries in the
> > >> next day or two.
> > >>
> > >> If you have not already submitted your final release request for this
> > >> cycle, please do that as soon as possible.
> > >>
> > >> If you *have* already submitted your final release request for this
> > >> cycle, please reply to this email and let me know that you have so I can
> > >> create your stable/liberty branch.
> > >>
> > >> Thanks!
> > >> Doug
> > >
> > > I forgot to mention that we also need the constraints file in
> > > global-requirements updated for all of the releases, so we're actually
> > > testing with them in the gate. Please take a minute to check the version
> > > specified in openstack/requirements/upper-constraints.txt for your
> > > libraries and submit a patch to update it to the latest release if
> > > necessary. I'll do a review later in the week, too, but it's easier to
> > > identify the causes of test failures if we have one patch at a time.
> >
> > Hi Doug!
> >
> > When is the last and final deadline for doing all this for
> > not-so-important and non-release:managed projects like ironic-inspector?
> > We still lack some Liberty features covered in
> > python-ironic-inspector-client. Do we have time until end of week to
> > finish them?
> 
> We would like for the schedule to be the same for everyone. We need the
> final versions for all libraries this week, so we can update
> requirements constraints by early next week before the RC1.
> 
> https://wiki.openstack.org/wiki/Liberty_Release_Schedule
> 
> Doug
> 
> >
> > Sorry if you hear this question too often :)
> >
> > Thanks!
> >
> > >
> > > Doug
> > >
> > > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] PTL candidacy

2015-09-15 Thread gord chung

hi folks,

less than six months ago, i decided to run for PTL of Ceilometer where 
my main goal was to support the community of contributors that exists 
within OpenStack with interests in telemetry[1]. it is under that tenet 
that i will run again for team lead of Ceilometer. as mentioned 
previously, we have a diverse set of contributors from across the globe 
working on various aspects of metering and monitoring and it is my goal 
to ensure nothing slows them down (myself included).


that said, as we look forward to Mitaka, i hope to follow along the path 
of stability, simplicity and usability. some items i'd like to target are:


- rolling upgrades - having a fluid upgrade path for operators is 
critical to providing a highly available cloud environment for their 
users. i would like to have a viable solution in Ceilometer that can 
provide this functionality with zero/minimal performance degradation.


- building up events - we started work on adding inline event alarming 
in Aodh during Liberty, this is something i'd like to improve upon by 
adding multiple worker support and broader alarming evaluations. also, a 
common use case for events is to analyse the data for BI. while we 
already allow the ability to query and alarm on events, one useful tool 
would be the ability to run statistics on events, such as the number of 
instances launched.


- optimising collection - we improved ease of use by adding declarative 
notification support in Liberty. it'd be great if this work could be 
adopted by projects producing metrics. additionally, we currently have an 
extremely tight coupling between resource metadata and measurement data. 
i'd like to evaluate how to loosen this so our data collection and 
storage is more flexible.


- continuing the refactoring - removing deprecated/redundant 
functionality; it was nice deprecating/deleting stuff, let's keep doing 
it (within reason)! one possible target would be splitting storage/api, 
while starting the initial deprecation of v2 metering api.


- functional and integration testing - we now have integration and 
functional tests living within our repositories. this should allow us to 
develop tests more easily, so it'd be good to broaden the coverage.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-April/060536.html


cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-15 Thread Monty Taylor

On 09/15/2015 03:13 PM, stuart.mcla...@hp.com wrote:


After having some conversations with folks at the Ops Midcycle a
few weeks ago, and observing some of the more recent email threads
related to glance, glance-store, the client, and the API, I spent
last week contacting a few of you individually to learn more about
some of the issues confronting the Glance team. I had some very
frank, but I think constructive, conversations with all of you about
the issues as you see them. As promised, this is the public email
thread to discuss what I found, and to see if we can agree on what
the Glance team should be focusing on going into the Mitaka summit
and development cycle and how the rest of the community can support
you in those efforts.


Doug, thanks for reaching out here.

I've been looking into the existing task-based-upload that Doug mentions:
can anyone clarify the following?

On a default devstack install you can do this 'task' call:

http://paste.openstack.org/show/462919


Yup. That's the one.


as an alternative to the traditional image upload (the bytes are streamed
from the URL).

It's not clear to me if this is just an interesting example of the kind
of operator specific thing you can configure tasks to do, or a real
attempt to define an alternative way to upload images.

The change which added it [1] calls it a 'sample'.

Is it just an example, or is it a second 'official' upload path?


It's how you have to upload images on Rackspace. If you want to see the 
full fun:


https://github.com/openstack-infra/shade/blob/master/shade/__init__.py#L1335-L1510

Which is "I want to upload an image to an OpenStack Cloud"

I've listed it on this slide in CLI format too:

http://inaugust.com/talks/product-management/index.html#/27

It should be noted that once you create the task, you need to poll the 
task with task-show, and then the image id will be in the completed 
task-show output.
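
That create-then-poll flow can be sketched as a small helper (names like 
`session` and `GLANCE` below, and the image URL, are illustrative 
assumptions, not Rackspace specifics):

```python
import time

def wait_for_task(fetch, task_id, interval=2.0, max_polls=300):
    """Poll a Glance v2 task until it reaches a terminal state.

    `fetch` is any callable returning the task as a dict, e.g. a thin
    wrapper around GET /v2/tasks/{task_id}.
    """
    for _ in range(max_polls):
        task = fetch(task_id)
        if task["status"] in ("success", "failure"):
            return task
        time.sleep(interval)
    raise RuntimeError("task %s did not finish in time" % task_id)

# Rough shape of the whole flow (assuming `session` is an authenticated
# requests-style session and GLANCE is the image service endpoint):
#
#   task = session.post(GLANCE + "/v2/tasks", json={
#       "type": "import",
#       "input": {"import_from": "http://example.com/img.qcow2",
#                 "import_from_format": "qcow2",
#                 "image_properties": {"name": "img"}}}).json()
#   done = wait_for_task(
#       lambda tid: session.get(GLANCE + "/v2/tasks/" + tid).json(),
#       task["id"])
#   if done["status"] == "success":
#       image_id = done["result"]["image_id"]   # id only appears here
```

The point Monty makes is visible in the last line: unlike a direct upload, 
the image id is only available from the completed task's result.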


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][oslo.config] Reloading configuration of service

2015-09-15 Thread mhorban

Hi guys,

I would like to talk about reloading a service's config while the service 
is running.
Now we have the ability to reload the config of a service with the SIGHUP 
signal. Right now SIGHUP just causes conf.reload_config_files() to be 
called. As a result the configuration is updated, but services don't know 
about it; there is no way to notify them.
I've created review https://review.openstack.org/#/c/213062/ to allow 
executing service code on the config-reload event.
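
A minimal sketch of the idea, with the hook registry as an illustrative 
stand-in for whatever interface the review settles on (only 
reload_config_files() is existing oslo.config API; the rest is assumed):

```python
import signal

# Illustrative registry: services add callbacks they want run on reload.
_reload_hooks = []

def register_reload_hook(hook):
    _reload_hooks.append(hook)

def make_sighup_handler(conf):
    """Build a SIGHUP handler for an oslo.config ConfigOpts instance.

    `conf` is expected to expose reload_config_files(), as cfg.CONF does.
    """
    def _handle_sighup(signum, frame):
        conf.reload_config_files()   # what SIGHUP already does today
        for hook in _reload_hooks:   # the proposed addition: notify services
            hook(conf)
    return _handle_sighup

# In a service's setup code (conf would be oslo_config.cfg.CONF):
#   signal.signal(signal.SIGHUP, make_sighup_handler(conf))
```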

Possible usage can be https://review.openstack.org/#/c/223668/.

Any ideas or suggestions?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Help with stable/juno branches / releases

2015-09-15 Thread Matt Riedemann



On 8/27/2015 10:22 PM, Tony Breeds wrote:

On Fri, Aug 28, 2015 at 11:12:43AM +1200, Robert Collins wrote:


I'm pretty sure it *will* be EOL'd. OTOH thats 10 weeks of fixes folk
can get. I think you should do it if you've the stomach for it, and if
its going to help someone. I can aid by cutting library releases for
you I think (haven't checked stable releases yet, and I need to update
myself on the tooling changes in the last fortnight...).


Okay, I certainly have the stomach for it; in some perverse way it's fun, as I'm
learning about parts of the process / code base that are new to me :)

My concerns were mostly around other people's time (like the PTLs/cores I need
to hassle and thems with the release power :))

So I'll keep going on it.

Right now there aren't any library releases to be done.

Thanks Robert.

Yours Tony.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



This seems to have stalled.

For python-ceilometerclient (blocking Horizon), this is where we are at:

1. python-ceilometerclient needs g-r sync from stable/juno 
https://review.openstack.org/#/c/173126/ and then released as 1.0.15 per 
bug 1494516.  However, that's failing tests due to:


2. oslo.utils needs a stable/juno branch created from the 1.4.0 tag so 
we can sync g-r stable/juno to oslo.utils and then release that as 1.4.1.


--

We can work on the other libraries in Tony's original email when we get 
one thing done.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Ceilometer M Midcycle

2015-09-15 Thread gord chung

thanks for organising this Jason.

for those who can't attend but want to, i think it'd also be good to 
know if location is the blocker here.


retagging with [ceilometer].

On 15/09/2015 9:50 AM, Jason Myers wrote:

Hello Everyone,
We are setting up a few polls to determine the possibility of 
meeting face to face for a ceilometer midcycle in Dublin, IE. We'd 
like to gather for three days to discuss all the work we are currently 
doing; however, we have access to space for 5 so you could also use 
that space for co working outside of the meeting dates.  We have two 
date polls: one for Nov 30-Dec 18 at 
http://doodle.com/poll/hmukqwzvq7b54cef, and one for Jan 11-22 at 
http://doodle.com/poll/kbkmk5v2vass249i. You can vote for any of the 
days in there that work for you.  If we don't get enough interest in 
either poll, we will do a virtual midcycle like we did last year.  
Please vote for your favorite days in the two polls if you are 
interested in attending in person. If we don't get many votes, we'll 
circulate another poll for the virtual dates.


--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

