Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-16 Thread Dmitry Tantsur

On 06/17/2015 03:35 AM, Ken'ichi Ohmichi wrote:

2015-06-16 21:16 GMT+09:00 Jay Pipes :

On 06/16/2015 08:00 AM, Dmitry Tantsur wrote:



On 16 June 2015 at 13:52, "Jay Pipes" <jaypi...@gmail.com> wrote:
  >
  > On 06/16/2015 04:36 AM, Alex Xu wrote:
  >>
  >> So if our min_version is 2.1 and the max_version is 2.50, that means
  >> alternative implementations need to implement all 50 versions of the
  >> API... that sounds painful...
  >
  >
  > Yes, it's a pain, but it's no different from someone following
the Amazon EC2 API, which cuts releases at a regular (sometimes every
2-3 weeks) clip.
  >
  > In Amazon-land, the releases are date-based, instead of
microversion/incrementing version-based, but the idea is essentially the
same.
  >
  > There is GREAT value to having an API mean ONE thing and ONE thing
only. It means that developers can code against something that isn't
like quicksand -- constantly changing meanings.

Being one of those developers, I only see this "value" for breaking
changes.



Sorry, Dmitry, I'm not quite following you. Could you elaborate on what you
mean by the above?


I guess he may be thinking that the value of microversions is just for
backwards-incompatible changes, and that backwards-compatible changes
don't need to be managed by microversions, since that is what he is
proposing in his Ironic patch.


Exactly. That's not only my thinking, that's my experience from Kilo as 
both an Ironic developer and a developer *for* Ironic (i.e. the very person 
you're trying to make happy).
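For context, the mechanism being debated is version negotiation carried in an HTTP header. A minimal client-side sketch, assuming the Ironic-style header name `X-OpenStack-Ironic-API-Version` (services differ in header name and negotiation details, so treat this only as an illustration):

```python
def parse_version(version):
    """Parse a microversion string like '1.6' into a comparable tuple
    (plain string comparison would wrongly order '1.10' before '1.9')."""
    major, minor = version.split(".")
    return (int(major), int(minor))


def build_headers(requested, server_min, server_max):
    """Return request headers pinning a microversion, after checking it
    falls inside the server's advertised [min, max] range."""
    if not (parse_version(server_min) <= parse_version(requested)
            <= parse_version(server_max)):
        raise ValueError("server supports %s..%s, requested %s"
                         % (server_min, server_max, requested))
    # Header name assumed here; each service defines its own.
    return {"X-OpenStack-Ironic-API-Version": requested}
```

A client pinned to 1.6 keeps working unchanged as the server's maximum grows toward 2.50 — which is the stability property under discussion above.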




Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] [Ironic] [Inspector] Where should integration tests for non-core projects live now? (Was: Toward 2.0.0 release)

2015-06-16 Thread Dmitry Tantsur

On 06/17/2015 06:54 AM, Ken'ichi Ohmichi wrote:

2015-06-17 12:38 GMT+09:00 Yuiko Takada :


Then, as you and Matt and Dmitry talked about on IRC a few days ago,
we can still add Ironic/ironic-inspector tests to Tempest, right?
So I've started to implement a test in Tempest,
but I'm facing another issue.
As you know, the Ironic API has microversions, and ironic-inspector can only run
with microversion > 1.6.
But currently there is no feature for testing specific Ironic API microversions
in Tempest, right?

So we have to think about some solutions.

(1) Make testing specific Ironic API microversions on Tempest possible
adam_g is posting this patch set.
https://review.openstack.org/166386

(2)Using tempest_lib instead of adding tests into Tempest
Is tempest_lib available already?
Or do we need to wait for something will be merged?


I guess the above question mixes multiple factors.
You want to test ironic-inspector behaviors by
   * using ironic-inspector REST APIs directly, without Ironic
   * using Ironic REST APIs which need a newer microversion
right?


Hi, thanks for clarifying, let me jump in :)

The former is more or less covered by functional testing, so I'd like us 
to concentrate on the latter, and run it voting on the inspector repo and 
non-voting on Ironic for the time being.




For the first test, you can implement it without considering microversions.
The test just calls ironic-inspector REST APIs directly and checks their behavior.
You can implement the test in the Tempest or ironic-inspector repository.
The current tempest-lib seems sufficient for implementing tests in the
ironic-inspector repository, but it is better to wait for approval of
Tempest's external interface spec [1].
That spec defines the directory structure for Tempest-like tests in each
project repository, so that Tempest can discover tests based on that
structure and run them.
So if you implement tests in the ironic-inspector repository before the
spec is approved, you may need to change the directory structure again
later.


This "wait" part bothers me to some extent, because the absence of a gate 
will hurt us for a while, but fine. Thanks for the heads-up anyway.




For the second test, microversion support is necessary on the Tempest
side, and adam_g's patch seems like a good way to implement it.
My main concern about microversion tests is how to run multiple
microversions on the gate.
We discussed this in the Nova design session at the Vancouver summit, and
the conclusion was to run
  * the minimum microversion
  * the maximum microversion
  * "interesting" microversions
as the gate test.


Facepalm. That's what I was talking about (and where we actually ended up 
in Ironic): we're introducing a ton of untested (and thus presumably 
broken) microversions, because it's cool to do. OK, that's another 
thread :)



IMO the "interesting microversions" would be the last microversion of
each release (Kilo, Liberty, ...).


With Ironic's intermediate releases it will be more; I estimate 5-6 per 
year, but of course I can't tell for sure.



I have a qa-spec [2] for testing microversions on the gate, but it is
not complete yet.
It will affect how microversion tests are specified and run in Tempest,
so I'm not yet sure that the way the current adam_g patch specifies
microversions is the best.

So my recommendation/hope is that we concentrate on Tempest's external
interface spec [1] and improve it together; then we can implement
Tempest-like tests in each repository.
As the next step, we can test microversions in the same way across
projects, based on the conclusion of that spec [2].


What I'd prefer us to start with is a gate job which just sets up 
devstack with our plugin and runs a shell script testing a couple of 
basic things. This will be a HUGE leap forward for inspector, compared 
to the limited functional testing we have now.


So maybe we should start with that, and keep an eye on the tempest-lib 
stuff, wdyt?





(3)Make Ironic-inspector available even if microversion < 1.6
Dmitry is posting this patch set.
https://review.openstack.org/192196
# I don't mean asking you to review this, don't worry :p


I've reviewed it already :)

Thanks
Ken Ohmichi

---
[1]: https://review.openstack.org/#/c/184992/
[2]: https://review.openstack.org/#/c/169126/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Security] the need about implementing a MAC security hook framework for OpenStack

2015-06-16 Thread Yang Luo
Hi list,

  I'd like to gauge the need for implementing a MAC (Mandatory Access
Control) security hook framework for OpenStack, analogous to the Linux
Security Modules (LSM) framework in Linux. It could be used to build a
security module that mediates the communications between OpenStack nodes
and controls the distribution of resources (e.g., images, networks, shared
disks). This security hook framework should be cluster-wide, support
dynamic policy updates, be implemented non-intrusively, and impose low
performance overhead. SELinux, the best-known LSM module, could also be
ported onto this framework. In my view, as OpenStack has become a leading
cloud operating system, it needs the kind of security architecture that a
standard OS has.
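To make the proposal concrete, an LSM-style hook framework might look roughly like the following sketch. All names here are invented for illustration; this is not an existing OpenStack API:

```python
class PolicyDenied(Exception):
    """Raised when a registered security module denies an operation."""


class SecurityHookRegistry:
    """Invented sketch of an LSM-style hook framework: security modules
    register callbacks that mediate operations before they execute."""

    def __init__(self):
        self._hooks = {}  # operation name -> list of mediation callbacks

    def register(self, operation, callback):
        self._hooks.setdefault(operation, []).append(callback)

    def authorize(self, operation, context):
        # Deny wins: every registered module must allow the operation,
        # mirroring how stacked LSM modules compose their decisions.
        for callback in self._hooks.get(operation, []):
            if not callback(context):
                raise PolicyDenied("%s denied for %r" % (operation, context))


registry = SecurityHookRegistry()
# Example policy: only the image owner may trigger a cross-node transfer.
registry.register("image:transfer",
                  lambda ctx: ctx["tenant"] == ctx["image_owner"])

registry.authorize("image:transfer", {"tenant": "a", "image_owner": "a"})
```

The cluster-wide, dynamic-policy, and performance aspects the proposal calls for are exactly what such a sketch leaves out, and where the real design work would be.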

I am a Ph.D. student who has been following OpenStack security closely for
nearly a year. This is just my initial idea, and I know this project won't
be small, so before I actually work on it, I'd like to hear your
suggestions or objections. Thanks!

Best,
Yang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] slimming down (the RFE) process for small internal design stuff

2015-06-16 Thread Miguel Angel Ajo
A few days ago, I was talking to Rossella about the need for an RFE for 
this [1]. In this case, it's an internal library/nicety for Neutron 
development, not a feature from the user point of view, and not a very 
big change.

So we thought that starting with a simple devref to agree on, and then 
adding the code that goes with the devref to the same patch, may be 
enough. Kyle agreed with the idea, but we wanted to share it on the list.

The advantages of this approach are interesting:

1) We end up with a devref we can keep in-tree directly and that serves 
as developer documentation.

2) We save time and simplify the process by doing it all in the same place.

I agree that RFEs/specs are valuable in the context of the wider 
OpenStack project, but they are better suited to user-facing 
functionality or big design changes.

TL;DR

Regarding [1]: it's something to be consumed by the QoS design, but made
with reusability in mind.

We expect to reuse it for security groups in the future, improving the 
design and, in the end, optimizing the way agents consume messages thanks 
to object-type + id directed fan-outs.
(Maybe that second step would deserve an RFE + a little spec to be evaluated.)

[1] https://review.openstack.org/#/c/190635/ (Generic RPC mechanism which could be reused)
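The "object-type + id directed fan-out" idea can be illustrated with plain topic naming and an in-memory bus. All names below are invented for illustration; this is not the API proposed in [1]:

```python
class FakeBus:
    """In-memory stand-in for the messaging layer (illustration only)."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers.get(topic, []):
            callback(message)


def fanout_topic(obj_type, obj_id):
    """Directed fan-out: agents subscribe per object-type + id, so only
    the agents actually consuming a given resource receive its updates."""
    return "%s-%s-updates" % (obj_type, obj_id)


bus = FakeBus()
received = []
# An agent that only cares about QoS policy 42 subscribes to its topic.
bus.subscribe(fanout_topic("qos-policy", 42), received.append)

bus.publish(fanout_topic("qos-policy", 42), {"max_kbps": 1000})
bus.publish(fanout_topic("qos-policy", 7), {"max_kbps": 500})  # not received
```

The payoff is the one described in the message: agents stop receiving (and deserializing) updates for resources they do not consume.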


Best,
Miguel Ángel Ajo


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-16 Thread Sam Morrison

> On 17 Jun 2015, at 10:56 am, Armando M.  wrote:
> 
> 
> 
> On 16 June 2015 at 17:31, Sam Morrison wrote:
> We at NeCTAR are starting the transition to neutron from nova-net and neutron 
> almost does what we want.
> 
> We have 10 "public" networks and 10 "service" networks and depending on which 
> compute node you land on you get attached to one of them.
> 
> In neutron speak we have multiple shared externally routed provider networks. 
> We don’t have any tenant networks or any other fancy stuff yet.
> How I’ve currently got this set up is by creating 10 networks and subsequent 
> subnets eg. public-1, public-2, public-3 … and service-1, service-2, 
> service-3 and so on.
> 
> In nova we have made a slight change in allocate for instance [1] whereby the 
> compute node has a designated hardcoded network_ids for the public and 
> service network it is physically attached to.
> We have also made changes in the nova API so users can’t select a network and 
> the neutron endpoint is not registered in keystone.
> 
> That all works fine but ideally I want a user to be able to choose if they 
> want a public and or service network. We can’t let them as we have 10 public 
> networks, we almost need something in neutron like a "network group” or 
> something that allows a user to select “public” and it allocates them a port 
> in one of the underlying public networks.
> 
> I tried going down the route of having 1 public and 1 service network in 
> neutron then creating 10 subnets under each. That works until you get to 
> things like dhcp-agent and metadata agent although this looks like it could 
> work with a few minor changes. Basically I need a dhcp-agent to be spun up 
> per subnet and ensure they are spun up in the right place.
> 
> I’m not sure what the correct way of doing this is. What are other people 
> doing in the interim until this kind of use case can be done in Neutron?
> 
> Would something like [1] be adequate to address your use case? If not, I'd 
> suggest you to file an RFE bug (more details in [2]), so that we can keep the 
> discussion focused on this specific case.
> 
> HTH
> Armando
> 
> [1] https://blueprints.launchpad.net/neutron/+spec/rbac-networks 
> 
That’s not applicable in this case; we don’t care about tenants here.

> [2] https://github.com/openstack/neutron/blob/master/doc/source/policies/blueprints.rst#neutron-request-for-feature-enhancements
The bug Kris mentioned outlines all I want too I think.

Sam
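The "network group" behaviour Sam describes (a user asks for "public" and gets a port on one of the underlying provider networks the scheduled host is attached to) could be sketched at the orchestration layer roughly like this. All names are invented; this is not an existing Neutron feature:

```python
import random

# Invented mapping: logical "network group" -> concrete provider networks.
NETWORK_GROUPS = {
    "public": ["public-1", "public-2", "public-3"],
    "service": ["service-1", "service-2"],
}


def pick_network(group, host_networks):
    """Resolve a logical group name to a concrete network that the
    scheduled compute host is physically attached to."""
    candidates = [net for net in NETWORK_GROUPS[group]
                  if net in host_networks]
    if not candidates:
        raise LookupError("host has no %r network attached" % group)
    return random.choice(candidates)
```

A host attached to public-2 and service-1 would get its port on public-2 when the user asks for "public"; the user never sees the ten underlying networks.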


> 
>  
> 
> Cheers,
> Sam
> 
> [1] 
> https://github.com/NeCTAR-RC/nova/commit/1bc2396edc684f83ce471dd9dc9219c4635afb12
>  
> 
> 
> 
> 
> > On 17 Jun 2015, at 12:20 am, Jay Pipes wrote:
> >
> > Adding -dev because of the reference to the Neutron "Get me a network 
> > spec". Also adding [nova] and [neutron] subject markers.
> >
> > Comments inline, Kris.
> >
> > On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
> >> During the Openstack summit this week I got to talk to a number of other
> >> operators of large Openstack deployments about how they do networking.
> >>  I was happy, surprised even, to find that a number of us are using a
> >> similar type of networking strategy.  That we have similar challenges
> >> around networking and are solving it in our own but very similar way.
> >>  It is always nice to see that other people are doing the same things
> >> as you or see the same issues as you are and that "you are not crazy".
> >> So in that vein, I wanted to reach out to the rest of the Ops Community
> >> and ask one pretty simple question.
> >>
> >> Would it be accurate to say that most of your end users want almost
> >> nothing to do with the network?
> >
> > That was my experience at AT&T, yes. The vast majority of end users could 
> > not care less about networking, as long as the connectivity was reliable, 
> > performed well, and they could connect to the Internet (and have others 
> > connect from the Internet to their VMs) when needed.
> >
> >> In my experience what the majority of them (both internal and external)
> >> want is to consume from Openstack a compute resource, a property of
> >> which is it that resource has an IP address.  They, at most, care about
> >> which "network" they are on.  Where a "network" is usually an arbitrary
> >> definition around a set of real networks, that are constrained to a
> >> location, in which the company has attached some sort of policy.  For
> >> example, I want to be in the production network vs. the xyz lab
> >> network, vs. the backup network, vs. the corp network.  I would say
> >> for Godaddy, 99% of our use cases would be defined as: I want a compute
> >> re

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-16 Thread Adrian Otto
Clint,

Hi! It’s good to hear from you!

On Jun 16, 2015, at 8:58 PM, Clint Byrum <cl...@fewbar.com> wrote:

I don't understand at all what you said there.

If my kubernetes minions are attached to a gateway which has a direct
route to Magnum, let's say they're at, 192.0.2.{100,101,102}, and
Magnum is at 198.51.100.1, then as long as the minions' gateway knows
how to find 198.51.100.0/24, and Magnum's gateway knows how to route to
192.0.2.0/24, then you can have two-way communication and no floating
ips or NAT. This seems orthogonal to how external users find the minions.

That’s correct. Keep in mind that large clouds use layer 3 routing protocols to 
get packets around, especially for north/south traffic where public IP 
addresses are typically used. Injecting new routes into the network fabric each 
time we create a bay might cause reluctance from network administrators to 
allow the adoption of Magnum. Pre-allocating tons of RFC-1918 addresses to 
Magnum may also be impractical on networks that use those addresses 
extensively. Steve’s explanation of using routable addresses as floating IP 
addresses is one approach to leverage the prevailing SDN in the cloud’s network 
to address this concern.

Let’s not get too far off topic on this thread. We are discussing the 
implementation of TLS as a mechanism of access control for API services that 
run on networks that are reachable by the public. We got a good suggestion to 
use an approach that can work regardless of network connectivity between the 
Magnum control plane and the Nova instances (Magnum Nodes) and the containers 
that run on them. I’d like to see if we could use cloud-init to get the keys 
into the bay nodes (docker hosts). That way we can avoid the requirement for 
end-to-end network connectivity between bay nodes and the Magnum control plane.
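As a concrete illustration of the cloud-init idea, the TLS material could be rendered into the instance user-data at bay creation time. This is a sketch under invented paths and field names, not Magnum's actual implementation:

```python
import textwrap


def build_user_data(ca_pem, cert_pem, key_pem):
    """Render a #cloud-config document that writes TLS material onto a
    bay node at boot; no inbound connectivity to the node is required."""
    def write_file(path, mode, pem):
        return ("- path: %s\n  permissions: '%s'\n  content: |\n%s"
                % (path, mode, textwrap.indent(pem.strip(), "    ")))

    entries = "\n".join([
        write_file("/etc/docker/ca.pem", "0644", ca_pem),
        write_file("/etc/docker/server.pem", "0644", cert_pem),
        write_file("/etc/docker/server-key.pem", "0600", key_pem),
    ])
    return "#cloud-config\nwrite_files:\n%s\n" % entries
```

Because the user-data travels through the Nova metadata path, the keys arrive even when there is no route from the bay nodes back to the Magnum control plane — which is exactly the property discussed above.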

Thanks,

Adrian

Excerpts from Steven Dake (stdake)'s message of 2015-06-16 19:40:25 -0700:
Clint,

Answering Clint’s question, yes there is a reason all nodes must expose a 
floating IP address.

In a Kubernetes cluster, each minion has a port address space.  When an 
external service contacts the floating IP’s port, the request is routed over 
the internal network to the correct container using a proxy mechanism.  The 
problem then is, how do you know which minion to connect to with your external 
service?  The answer is you can connect to any of them.  Kubernetes only has 
one port address space, so Kubernetes suffers from a single namespace problem 
(which Magnum solves with Bays).

Longer term it may make sense to put the minion external addresses on a RFC1918 
network, and put a floating VIF with a load balancer to connect to them.  Then 
no need for floating address per node.  We are blocked behind kubernetes 
implementing proper support for load balancing in OpenStack to even consider 
this work.

Regards
-steve

From: Fox, Kevin M <kevin@pnnl.gov>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, June 16, 2015 at 6:36 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Out of the box, VMs can usually contact the controllers through the router's 
NAT, but not vice versa. So it's preferable for guest agents to make the 
connection, rather than the controller connecting to the guest agents. No 
floating IPs, security group rules, or special networks are needed then.

Thanks,
Kevin


From: Clint Byrum
Sent: Monday, June 15, 2015 6:10:27 PM
To: openstack-dev
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Excerpts from Fox, Kevin M's message of 2015-06-15 15:59:18 -0700:
No, I was confused by your statement:
"When we create a bay, we have an ssh keypair that we use to inject the ssh 
public key onto the nova instances we create."

It sounded like you were using that keypair to inject a public key. I just 
misunderstood.

It does raise the question though: are you using ssh between the controller and 
the instance anywhere? If so, we will still run into issues when we go to try 
and test it at our site. Sahara does currently, and we're forced to put a 
floating IP on every instance. It's less than ideal...


Why not just give each instance a port on a network which can route
directly to the controller's network? Is there some reason you feel
"forced" to use a floating IP?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___

Re: [openstack-dev] [QA] [Ironic] [Inspector] Where should integration tests for non-core projects live now? (Was: Toward 2.0.0 release)

2015-06-16 Thread Ken'ichi Ohmichi
2015-06-17 12:38 GMT+09:00 Yuiko Takada :
>
> Then, as you and Matt and Dmitry talked about on IRC a few days ago,
> we can still add Ironic/ironic-inspector tests to Tempest, right?
> So I've started to implement a test in Tempest,
> but I'm facing another issue.
> As you know, the Ironic API has microversions, and ironic-inspector can only run
> with microversion > 1.6.
> But currently there is no feature for testing specific Ironic API microversions
> in Tempest, right?
>
> So we have to think about some solutions.
>
> (1) Make testing specific Ironic API microversions on Tempest possible
> adam_g is posting this patch set.
> https://review.openstack.org/166386
>
> (2)Using tempest_lib instead of adding tests into Tempest
> Is tempest_lib available already?
> Or do we need to wait for something will be merged?

I guess the above question mixes multiple factors.
You want to test ironic-inspector behaviors by
  * using ironic-inspector REST APIs directly, without Ironic
  * using Ironic REST APIs which need a newer microversion
right?

For the first test, you can implement it without considering microversions.
The test just calls ironic-inspector REST APIs directly and checks their behavior.
You can implement the test in the Tempest or ironic-inspector repository.
The current tempest-lib seems sufficient for implementing tests in the
ironic-inspector repository, but it is better to wait for approval of
Tempest's external interface spec [1].
That spec defines the directory structure for Tempest-like tests in each
project repository, so that Tempest can discover tests based on that
structure and run them.
So if you implement tests in the ironic-inspector repository before the
spec is approved, you may need to change the directory structure again
later.

For the second test, microversion support is necessary on the Tempest
side, and adam_g's patch seems like a good way to implement it.
My main concern about microversion tests is how to run multiple
microversions on the gate.
We discussed this in the Nova design session at the Vancouver summit, and
the conclusion was to run
 * the minimum microversion
 * the maximum microversion
 * "interesting" microversions
as the gate test.
IMO the "interesting microversions" would be the last microversion of
each release (Kilo, Liberty, ...).
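The min/max/"interesting" scheme amounts to computing a small test matrix. As an illustrative sketch only (not the mechanism in the qa-spec or in adam_g's patch):

```python
def gate_versions(minimum, maximum, interesting):
    """Pick the microversions actually exercised on the gate: the two
    endpoints of the supported range plus any 'interesting' versions
    that fall inside it."""
    def as_tuple(version):  # '1.10' must compare numerically, not as text
        return tuple(int(part) for part in version.split("."))

    low, high = as_tuple(minimum), as_tuple(maximum)
    picked = {minimum, maximum}
    picked.update(v for v in interesting if low <= as_tuple(v) <= high)
    return sorted(picked, key=as_tuple)
```

For example, with a 1.1..1.22 range and the last microversions of two releases marked interesting, the gate would run four versions instead of twenty-two.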
I have a qa-spec [2] for testing microversions on the gate, but it is
not complete yet.
It will affect how microversion tests are specified and run in Tempest,
so I'm not yet sure that the way the current adam_g patch specifies
microversions is the best.

So my recommendation/hope is that we concentrate on Tempest's external
interface spec [1] and improve it together; then we can implement
Tempest-like tests in each repository.
As the next step, we can test microversions in the same way across
projects, based on the conclusion of that spec [2].

> (3)Make Ironic-inspector available even if microversion < 1.6
> Dmitry is posting this patch set.
> https://review.openstack.org/192196
> # I don't mean asking you to review this, don't worry :p

I've reviewed it already :)

Thanks
Ken Ohmichi

---
[1]: https://review.openstack.org/#/c/184992/
[2]: https://review.openstack.org/#/c/169126/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][Horizon][Tuskar-ui] Making a dashboard for Ironic‏

2015-06-16 Thread NiuZhenguo
hi folks,

I'm planning to propose a new Horizon plugin, ironic-dashboard, to fill
the gap that Ironic doesn't have Horizon support. I know there's a nodes
panel on the "infrastructure" dashboard handled by tuskar-ui, but it's
specifically geared towards TripleO. Ironic needs a separate dashboard to
present an interface for querying and managing Ironic's resources
(Drivers, Nodes, and Ports).

After discussion with the Ironic community, I pushed an ironic-dashboard
project to stackforge [1]. There's also an existing JS UI for Ironic in
development now [2]; we may be trying to solve the same goals, but as an
integrated OpenStack project there's a clear need for Horizon support.

I'd like to hear your suggestions. Thanks in advance.

[1] https://review.openstack.org/#/c/191131/
[2] https://github.com/krotscheck/ironic-webclient

Regards,
-zhenguo


Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-16 Thread Carl Baldwin
On Tue, Jun 16, 2015 at 5:17 PM, Kevin Benton  wrote:
> There seems to be confusion on what causes deadlocks. Can one of you explain
> to me how an optimistic locking strategy (a.k.a. compare-and-swap)  results
> in deadlocks?
>
> Take the following example where two workers want to update a record:
>
> Worker1: "UPDATE items set value=newvalue1 where value=oldvalue"
> Worker2: "UPDATE items set value=newvalue2 where value=oldvalue"
>
> Then each worker checks the count of rows affected by the query. The one
> that modified 1 gets to proceed, the one that modified 0 must retry.

Here's my understanding:  In a Galera cluster, if the two are run in
parallel on different masters, then the second one gets a write
certification failure after believing that it had succeeded *and*
reading that 1 row was modified.  The transaction -- when it was all
prepared for commit -- is aborted because the server finds out from
the other masters that it doesn't really work.  This failure is
manifested as a deadlock error from the server that lost.  The code
must catch this "deadlock" error and retry the entire thing.
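The pattern Carl describes, catching the "deadlock" error and re-running the whole optimistic update, looks roughly like the following generic sketch (sqlite3 stands in for the real database, and a single exception type stands in for both the zero-rowcount miss and the Galera certification failure; Neutron's actual code differs):

```python
import random
import sqlite3
import time


class RetryRequest(Exception):
    """Raised when the optimistic update lost the race: either zero rows
    matched the WHERE clause, or (on Galera) the server reported the
    write-certification failure as a deadlock."""


def run_with_retry(transaction, attempts=5):
    """Re-run the whole compare-and-swap transaction until it wins."""
    for attempt in range(attempts):
        try:
            return transaction()
        except RetryRequest:
            if attempt == attempts - 1:
                raise
            time.sleep(random.uniform(0, 0.01 * (attempt + 1)))  # jitter


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (value TEXT)")
conn.execute("INSERT INTO items VALUES ('old')")


def swap():
    cursor = conn.execute(
        "UPDATE items SET value = 'new' WHERE value = 'old'")
    if cursor.rowcount != 1:
        # Real code would also re-read the current state before retrying.
        raise RetryRequest()
    conn.commit()


run_with_retry(swap)
```

The key point of the thread is the first comment in `swap`: on Galera the retry trigger can arrive as a deadlock error at commit time even after the rowcount check passed, so the retry must wrap the entire transaction, not just the UPDATE.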

I just learned about Mike Bayer's DBFacade from this thread, which will
apparently make the DB behave as active/passive for writes, which
should clear this up. This is new information to me.

I hope my understanding is sound and that it makes sense.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-16 Thread Clint Byrum
I don't understand at all what you said there.

If my kubernetes minions are attached to a gateway which has a direct
route to Magnum, let's say they're at, 192.0.2.{100,101,102}, and
Magnum is at 198.51.100.1, then as long as the minions' gateway knows
how to find 198.51.100.0/24, and Magnum's gateway knows how to route to
192.0.2.0/24, then you can have two-way communication and no floating
ips or NAT. This seems orthogonal to how external users find the minions.

Excerpts from Steven Dake (stdake)'s message of 2015-06-16 19:40:25 -0700:
> Clint,
> 
> Answering Clint’s question, yes there is a reason all nodes must expose a 
> floating IP address.
> 
> In a Kubernetes cluster, each minion has a port address space.  When an 
> external service contacts the floating IP’s port, the request is routed over 
> the internal network to the correct container using a proxy mechanism.  The 
> problem then is, how do you know which minion to connect to with your 
> external service?  The answer is you can connect to any of them.  Kubernetes 
> only has one port address space, so Kubernetes suffers from a single 
> namespace problem (which Magnum solves with Bays).
> 
> Longer term it may make sense to put the minion external addresses on a 
> RFC1918 network, and put a floating VIF with a load balancer to connect to 
> them.  Then no need for floating address per node.  We are blocked behind 
> kubernetes implementing proper support for load balancing in OpenStack to 
> even consider this work.
> 
> Regards
> -steve
> 
> From: Fox, Kevin M <kevin@pnnl.gov>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
> Date: Tuesday, June 16, 2015 at 6:36 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
> Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
> 
> Out of the box, VMs can usually contact the controllers through the router's 
> NAT, but not vice versa. So it's preferable for guest agents to make the 
> connection, rather than the controller connecting to the guest agents. No 
> floating IPs, security group rules, or special networks are needed then.
> 
> Thanks,
> Kevin
> 
> 
> From: Clint Byrum
> Sent: Monday, June 15, 2015 6:10:27 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
> 
> Excerpts from Fox, Kevin M's message of 2015-06-15 15:59:18 -0700:
> > No, I was confused by your statement:
> > "When we create a bay, we have an ssh keypair that we use to inject the ssh 
> > public key onto the nova instances we create."
> >
> > It sounded like you were using that keypair to inject a public key. I just 
> > misunderstood.
> >
> > It does raise the question though: are you using ssh between the controller 
> > and the instance anywhere? If so, we will still run into issues when we go 
> > to try and test it at our site. Sahara does currently, and we're forced to 
> > put a floating IP on every instance. It's less than ideal...
> >
> 
> Why not just give each instance a port on a network which can route
> directly to the controller's network? Is there some reason you feel
> "forced" to use a floating IP?
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] [Ironic] [Inspector] Where should integration tests for non-core projects live now? (Was: Toward 2.0.0 release)

2015-06-16 Thread Yuiko Takada
 Hi,

Dmitry, thank you for putting it up, and also
Ken'ichi, thank you for your reply.

> > Will it be possible to run tests on Ironic as well using a plugin from
> > ironic-inspector?
>
> Yeah, it will be possible.
> but I'm guessing ironic-inspector is optional and Ironic should not
> depend on the gate test result of ironic-inspector.
> So maybe you just need to run Ironic tests on ironic-inspector gate
> tests, right?

Exactly. All we want to do is run Ironic + ironic-inspector tests on the gate.

Then, as you and Matt and Dmitry talked about on IRC a few days ago,
we can still add Ironic/ironic-inspector tests to Tempest, right?
So I've started to implement a test in Tempest,
but I'm facing another issue.
As you know, the Ironic API has microversions, and ironic-inspector can only run
with microversion > 1.6.
But currently there is no feature for testing specific Ironic API microversions
in Tempest, right?

So we have to think about some solutions.

(1) Make testing specific Ironic API microversions on Tempest possible
adam_g is posting this patch set.
https://review.openstack.org/166386

(2)Using tempest_lib instead of adding tests into Tempest
Is tempest_lib available already?
Or do we need to wait for something will be merged?

(3)Make Ironic-inspector available even if microversion < 1.6
Dmitry is posting this patch set.
https://review.openstack.org/192196
# I don't mean asking you to review this, don't worry :p

Could you please help us think about the best and fastest solution?


Best Regards,
Yuiko Takada

2015-06-10 17:57 GMT+09:00 Ken'ichi Ohmichi :

> 2015-06-10 16:48 GMT+09:00 Dmitry Tantsur :
> > On 06/10/2015 09:40 AM, Ken'ichi Ohmichi wrote:
> >> To solve it, we have decided the scope of Tempest as the etherpad
> >> mentioned.
> >>
> >>> Are there any hints now on where we can start with our integration
> tests?
> >>
> >>
> >> For the other projects, we are migrating the test framework of Tempest
> >> to tempest-lib which is a library.
> >> So each project can implement their own tests in each repository by
> >> using the test framework of tempest-lib.
> >
> >
> > So in my case we can start with putting test code to ironic-inspector
> tree
> > using tempest-lib, right?
>
> Yeah, right.
> Neutron is already doing that.
> maybe neutron/tests/api/ of Neutron repository will be a hint for it.
>
> > Will it be possible to run tests on Ironic as well using plugin from
> > ironic-inspector?
>
> Yeah, it will be possible.
> but I'm guessing ironic-inspector is optional and Ironic should not
> depend on the gate test result of ironic-inspector.
> So maybe you just need to run Ironic tests on ironic-inspector gate
> tests, right?
>
> >>> After a quick look at devstack-gate I got an impression that it's
> >>> expecting
> >>> tests as part of tempest:
> >>>
> >>>
> https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L600
> >>>
> >>> Our final goal is to have devstack gate test for Ironic and Inspector
> >>> projects working together.
> >>
> >>
> >> We have discussed external interfaces of Tempest on the summit, so
> >> that Tempest gathers tests from each project repository and runs them
> >> at the same time.
> >> There is a qa-spec for https://review.openstack.org/#/c/184992/
> >
> >
> > Cool, thanks! Does it mean that devstack-gate will also be updated to
> allow
> > something like DEVSTACK_GATE_TEMPEST_PLUGINS="https://github.com/...";?
>
> Yeah, will be.
> The idea of this external interface is based on DevStack's one.
> I think we will be able to use it on the gate like that.
>
> Thanks
> Ken'ichi Ohmichi
>
> ---
>
> >>> On 06/10/2015 08:07 AM, Yuiko Takada wrote:
> 
> 
>  Hi, Dmitry,
> 
>   I guess the whole idea of new release models is NOT to tie
> projects
>   to each other any more except for The Big Release twice a year :)
>  So
>   I think no, we don't need to. We still can do it, if we have
>   something to release by the time Ironic releases, but I suggest
>   deciding it on case-by-case basis.
> 
>  OK, I see.
> 
>  One more concern, about Tempest integration test which I will
> implement
>  in V2.1.0,
>  it seems like that we cannot add Ironic-inspector's tests into Tempest
>  even if integration tests.
>  Please see:
>  https://etherpad.openstack.org/p/YVR-QA-in-the-big-tent
> >>>
> >>>
> >>>
> >>> Good catch. I guess the answer depends on where Ironic integration
> tests
> >>> are
> >>> going to live - we're going to live with them. Let me retarget this
> >>> thread
> >>> to a wider audience.
> >>>
> 
>  But I heard from you that Devananda thinks we need this in tempest
>  itself. [3]
>  Do you know something like current situation?
> 
> 
>  Best Regards,
>  Yuiko Takada
> 
>  2015-06-09 15:59 GMT+09:00 Dmitry Tantsur   >:
> 
>   On 06/09/2015 03:49 AM, Yuiko Takada wrote:
> 
>

Re: [openstack-dev] [Mistral] Proposal for the Resume Feature

2015-06-16 Thread Lingxian Kong
Winson, thanks for the etherpad to gather different opinions.

Dmitri, I think it's OK to discuss here so that more people get
involved; we could use the etherpad for a summary.

On Wed, Jun 17, 2015 at 2:22 AM, W Chan  wrote:
> Here's the etherpad link.  I replied to the comments/feedbacks there.
> Please feel free to continue the conversation there.
> https://etherpad.openstack.org/p/mistral-resume
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-16 Thread Steven Dake (stdake)
Clint,

Answering Clint’s question, yes there is a reason all nodes must expose a 
floating IP address.

In a Kubernetes cluster, each minion has a port address space.  When an 
external service contacts the floating IP’s port, the request is routed over 
the internal network to the correct container using a proxy mechanism.  The 
problem then is, how do you know which minion to connect to with your external 
service?  The answer is you can connect to any of them.  Kubernetes only has 
one port address space, so Kubernetes suffers from a single namespace problem 
(which Magnum solves with Bays).

Longer term it may make sense to put the minion external addresses on an RFC 1918 
network, and put a floating VIF with a load balancer to connect to them.  Then 
no need for floating address per node.  We are blocked behind kubernetes 
implementing proper support for load balancing in OpenStack to even consider 
this work.

Regards
-steve

From: , Kevin M mailto:kevin@pnnl.gov>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, June 16, 2015 at 6:36 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Out of the box, VMs usually can contact the controllers through the router's NAT, 
but not vice versa. So it's preferable for guest agents to make the connection, 
rather than the controller connecting to the guest agents. No floating IPs, 
security group rules, or special networks are needed then.

Thanks,
Kevin


From: Clint Byrum
Sent: Monday, June 15, 2015 6:10:27 PM
To: openstack-dev
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Excerpts from Fox, Kevin M's message of 2015-06-15 15:59:18 -0700:
> No, I was confused by your statement:
> "When we create a bay, we have an ssh keypair that we use to inject the ssh 
> public key onto the nova instances we create."
>
> It sounded like you were using that keypair to inject a public key. I just 
> misunderstood.
>
> It does raise the question though, are you using ssh between the controller 
> and the instance anywhere? If so, we will still run into issues when we go to 
> try and test it at our site. Sahara does currently, and we're forced to put a 
> floating ip on every instance. It's less than ideal...
>

Why not just give each instance a port on a network which can route
directly to the controller's network? Is there some reason you feel
"forced" to use a floating IP?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Keith Bray
CLIs should get versioned like any other contract and allow for change (not be 
set in stone to what's already out there).  With Solum, we have less to worry 
about as we are at the early phases of adoption and growth.  To someone's 
earlier point, you can have --non-interactive flags which allow shell 
scripting, or --interactive which provides a more positive human interaction 
experience (defaulting either way, but my $0.02 is you default to human 
interaction, as even the shell scripters start there to learn/test the 
capabilities manually before scripting).  I think projects can solve for both; 
it just takes a willingness to do so.  To the extent that can be tackled in the 
new unified OpenStack client, that would be fantastic!
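The two modes Keith describes can coexist in one command. A rough argparse sketch, where the `solum` program name and the delete behavior are placeholders rather than the real client:

```python
import argparse

def build_parser():
    # Hypothetical CLI supporting both modes: interactive confirmation
    # by default, --non-interactive for shell scripts.
    parser = argparse.ArgumentParser(prog="solum")
    parser.add_argument("--non-interactive", action="store_true",
                        help="never prompt; suitable for shell scripting")
    return parser

def delete_app(args, confirm=input):
    """Delete an app, prompting first unless --non-interactive was given."""
    if not args.non_interactive:
        answer = confirm("Delete app and its logs? [y/N] ")
        if answer.strip().lower() != "y":
            return False  # user declined; nothing deleted
    # ... the actual delete API call would go here ...
    return True
```

Shell scripts pass `--non-interactive` and never see a prompt, so existing automation keeps working while interactive users get the confirmation step.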

-Keith

From: , Kevin M mailto:kevin@pnnl.gov>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, June 16, 2015 7:05 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

It sounded like the push was: CLIs for interactive use; if you want to script, use 
python. My assertion was: developers script in python, while users/admins usually 
script in shell. I'm not arguing against making the CLI user experience more 
pleasant for interactive users, but realize shell is the way most users/admins 
will script, since that is what they are accustomed to.

Now, unfortunately there are probably a lot of scripts out there today, and if 
you make things more interactive, you risk breaking them horribly if you start 
requiring them to be interactive by default :/ That's not an easily solved 
problem. The best way I can think of is to fix it in the new unified openstack 
client, and give the interactive binary a new name to run interactive mode. Shell 
scripts can continue to use the existing stuff without fear of breakage.

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Tuesday, June 16, 2015 4:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Kevin, I agree with your breakout, except I think you are missing a 3rd 
category.   Hundreds of public cloud support specialists, developers, and product 
management folks use the CLI without scripts every day in supporting the 
OpenStack services and customers.  Using and interacting with the CLI is how 
folks learn the OpenStack services. The CLIs can be painful for those users 
when they actually want to learn the service, not shell-script around it.

-Keith

From: , Kevin M mailto:kevin@pnnl.gov>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, June 16, 2015 6:28 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

-1. There are developers and there are users/admins. The former tend to write 
in python; the latter, in shell.

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Tuesday, June 16, 2015 2:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Isn't that what the SDK is for?   To chip in with a Product Management type hat 
on, I'd think the CLI should be primarily focused on user experience 
interaction, and the SDK should be primarily targeted for developer automation 
needs around programmatically interacting with the service.   So, I would argue 
that the target market for the CLI should not be the developer who wants to 
script.

-Keith

From: Adrian Otto mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, June 16, 2015 12:24 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Interactive choices like that one can make it more confusing for developers who 
want to script with the CLI. My preference would be to label the app delete 
help text to clearly indicate that it deletes logs
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-16 Thread Ken'ichi Ohmichi
2015-06-16 21:14 GMT+09:00 Sean Dague :
> On 06/16/2015 07:38 AM, Alex Xu wrote:
>>
>>
>> 2015-06-16 18:57 GMT+08:00 Sean Dague > >:
>>
>> On 06/15/2015 03:45 PM, Kevin L. Mitchell wrote:
>> > On Mon, 2015-06-15 at 13:07 -0400, Jay Pipes wrote:
>> >> The original spec said that the HTTP header should contain the name of
>> >> the service type returned by the Keystone service catalog (which is 
>> also
>> >> the official name of the REST API). I don't understand why the spec 
>> was
>> >> changed retroactively and why Nova has been changed to return
>> >> X-OpenStack-Nova-API-Version instead of 
>> X-OpenStack-Compute-API-Version
>> >> HTTP headers [4].
>> >
>> > Given the disagreement evinced by the responses to this thread, let me
>> > ask a question: Would there be any particular problem with using
>> > "X-OpenStack-API-Version"?
>>
>> So, here is my concern with not having the project namespacing at all:
>>
>> Our expectation is that services are going to move towards real wsgi on
>> their API instead of eventlet. Which is, hopefully, naturally going to
>> give you things like this:
>>
>> GET api.server/compute/servers
>> GET api.server/baremetal/chassis
>>
>> In such a world it will end up possibly confusing that
>> OpenStack-API-Version 2.500 is returned from api.server/compute/servers,
>> but OpenStack-API-Version 1.200 is returned from
>> api.server/baremetal/chassis.
>>
>>
>> Client should get those url from keystone SC, that means client should
>> know what he request to.
>
> Sure, there is a lot of should in there though. But by removing a level
> of explicitness in this we potentially introduce more confusion. The
> goal of a good interface is not just to make it easy to use, but make it
> hard to misuse. Being explicit about the service in the return header
> will eliminate a class of errors where the client code got confused
> about which service they were talking to (because to setup a VM with a
> network in a neutron case you have to jump back and forth between Nova /
> Neutron quite a bit).

Does this mean Nova will be able to pass Neutron's microversion to
Neutron on a single Nova API call?
I feel Nova should not do that, because in this case Neutron is a
backend, and Neutron should be invisible from the end user's point of
view on the Nova API.
If backend services are not visible to end users, users cannot know
the range of microversions a backend service supports.
And if it were acceptable to pass a microversion to a backend service,
out-of-range microversion errors would happen, which would make users
even more confused.
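To make the naming question concrete: with a single generic header, a client juggling two services behind one endpoint only sees bare numbers, while service-namespaced headers carry the service name in the header itself. A small sketch, where the header names follow the two forms discussed in this thread and the version values are made up:

```python
# Responses from two services behind one API server (values made up).
compute_resp = {"X-OpenStack-API-Version": "2.50",
                "X-OpenStack-Nova-API-Version": "2.50"}
baremetal_resp = {"X-OpenStack-API-Version": "1.20",
                  "X-OpenStack-Ironic-API-Version": "1.20"}

def service_version(headers):
    """With namespaced headers, the service a version belongs to is
    explicit in the header name itself; the generic form carries no
    such hint."""
    for name, value in headers.items():
        parts = name.split("-")
        if len(parts) == 5:  # X-OpenStack-<Service>-API-Version
            return parts[2], value
    return None
```

This is the class of client-side confusion Sean describes: under the generic header alone, nothing ties "2.50" to compute or "1.20" to baremetal.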

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-16 Thread Ken'ichi Ohmichi
2015-06-16 21:16 GMT+09:00 Jay Pipes :
> On 06/16/2015 08:00 AM, Dmitry Tantsur wrote:
>>
>>
>> On 16 June 2015 at 13:52, "Jay Pipes" > > wrote:
>>  >
>>  > On 06/16/2015 04:36 AM, Alex Xu wrote:
>>  >>
>>  >> So if our min_version is 2.1 and the max_version is 2.50. That means
>>  >> alternative implementations need implement all the 50 versions
>>  >> api...that sounds pain...
>>  >
>>  >
>>  > Yes, it's pain, but it's no different than someone who is following
>> the Amazon EC2 API, which cuts releases at a regular (sometimes every
>> 2-3 weeks) clip.
>>  >
>>  > In Amazon-land, the releases are date-based, instead of
>> microversion/incrementing version-based, but the idea is essentially the
>> same.
>>  >
>>  > There is GREAT value to having an API mean ONE thing and ONE thing
>> only. It means that developers can code against something that isn't
>> like quicksand -- constantly changing meanings.
>>
>> Being one of such developers, I only see this "value" for breaking
>> changes.
>
>
> Sorry, Dmitry, I'm not quite following you. Could you elaborate on what you
> mean by above?

I guess he is thinking that the value of microversions is just for
backwards-incompatible changes, and that backwards-compatible changes
do not need to be managed by microversions, because that is what he is
proposing in an Ironic patch.

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] What code structure is recommended for a Neutron plugin( L2 and L3)?

2015-06-16 Thread Armando M.
On 15 June 2015 at 19:34, Sam Su  wrote:

> Hi stackers,
>
>
>
> I am going to implement a Neutron plugin, however when I checked the
> current Neutron code (master) structure, I found there are two ways to
> organize a Neutron plugin:
>
> 1.   The first one is to implement all L2 and L3 functions under the
> folder ../neutron/plugins/xxx, e.g. vmware, plumgrid, and ibm…
>
> 2.   The second way is to put L2 functions under the folder
> ../neutron/plugins/ml2/drivers/xxx and L3 functions under the folder
> ../neutron/services/l3_router/xxx, e.g. brocade, cisco.
>
>
>
> If my understanding is correct, which way is more desirable for a neutron
> plugin? If I am wrong, what is the recommended neutron plugin code structure?
>
>
>
> Any help will be much appreciated!
>

These requests are better directed to the -dev ML. Anyway, both structures
are perfectly fine, and they both have pros and cons. Check out these
resources that might help you decide:

[1]
https://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/how-to-write-a-neutron-plugin-if-you-really-need-to
[2]
https://github.com/openstack/neutron/blob/master/doc/source/devref/contribute.rst

HTH
Armando


>
>
> Thanks,
>
> Sam
>
>
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openst...@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Mid-cycle sprint

2015-06-16 Thread Zhou, Zhenzan
Hi, Tim

Is this the oslo.messaging integration task? I’m interested in participating. 
Actually, I am working on a blueprint to receive notifications from external 
services in the datasource driver first. I’m OK with changing direction if the 
plan is to integrate oslo.messaging thoroughly (even replacing the DSE).
Thanks.
Thanks.

BR
Zhou Zhenzan

From: Tim Hinrichs [mailto:t...@styra.com]
Sent: Wednesday, June 17, 2015 05:14
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Congress] Mid-cycle sprint

Hi all,

In the last couple of IRCs we've been talking about running a mid-cycle sprint 
focused on enabling our message bus to span multiple processes and multiple 
hosts.  The message bus is what allows the Congress policy engine to 
communicate with the Congress wrappers around external services like Nova, 
Neutron.  This cross-process, cross-host message bus is the platform we'll use 
to build version 2.0 of our distributed architecture.

If you're interested in participating, drop me a note.  Once we know who's 
interested we'll work out date/time/location details.

Thanks!
Tim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-16 Thread Armando M.
On 16 June 2015 at 17:31, Sam Morrison  wrote:

> We at NeCTAR are starting the transition to neutron from nova-net and
> neutron almost does what we want.
>
> We have 10 “public" networks and 10 “service" networks and depending on
> which compute node you land on you get attached to one of them.
>
> In neutron speak we have multiple shared externally routed provider
> networks. We don’t have any tenant networks or any other fancy stuff yet.
> How I’ve currently got this set up is by creating 10 networks and
> subsequent subnets eg. public-1, public-2, public-3 … and service-1,
> service-2, service-3 and so on.
>
> In nova we have made a slight change in allocate for instance [1] whereby
> the compute node has a designated hardcoded network_ids for the public and
> service network it is physically attached to.
> We have also made changes in the nova API so users can’t select a network
> and the neutron endpoint is not registered in keystone.
>
> That all works fine but ideally I want a user to be able to choose if they
> want a public and or service network. We can’t let them as we have 10
> public networks, we almost need something in neutron like a "network group”
> or something that allows a user to select “public” and it allocates them a
> port in one of the underlying public networks.
>
> I tried going down the route of having 1 public and 1 service network in
> neutron then creating 10 subnets under each. That works until you get to
> things like dhcp-agent and metadata agent although this looks like it could
> work with a few minor changes. Basically I need a dhcp-agent to be spun up
> per subnet and ensure they are spun up in the right place.
>
> I’m not sure what the correct way of doing this is. What are other people
> doing in the interim until this kind of use case can be done in Neutron?
>

Would something like [1] be adequate to address your use case? If not, I'd
suggest you file an RFE bug (more details in [2]), so that we can keep
the discussion focused on this specific case.

HTH
Armando

[1] https://blueprints.launchpad.net/neutron/+spec/rbac-networks
[2]
https://github.com/openstack/neutron/blob/master/doc/source/policies/blueprints.rst#neutron-request-for-feature-enhancements



>
> Cheers,
> Sam
>
> [1]
> https://github.com/NeCTAR-RC/nova/commit/1bc2396edc684f83ce471dd9dc9219c4635afb12
>
>
>
> > On 17 Jun 2015, at 12:20 am, Jay Pipes  wrote:
> >
> > Adding -dev because of the reference to the Neutron "Get me a network
> spec". Also adding [nova] and [neutron] subject markers.
> >
> > Comments inline, Kris.
> >
> > On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
> >> During the Openstack summit this week I got to talk to a number of other
> >> operators of large Openstack deployments about how they do networking.
> >>  I was happy, surprised even, to find that a number of us are using a
> >> similar type of networking strategy.  That we have similar challenges
> >> around networking and are solving it in our own but very similar way.
> >>  It is always nice to see that other people are doing the same things
> >> as you or see the same issues as you are and that "you are not crazy".
> >> So in that vein, I wanted to reach out to the rest of the Ops Community
> >> and ask one pretty simple question.
> >>
> >> Would it be accurate to say that most of your end users want almost
> >> nothing to do with the network?
> >
> > That was my experience at AT&T, yes. The vast majority of end users
> could not care less about networking, as long as the connectivity was
> reliable, performed well, and they could connect to the Internet (and have
> others connect from the Internet to their VMs) when needed.
> >
> >> In my experience what the majority of them (both internal and external)
> >> want is to consume from Openstack a compute resource, a property of
> >> which is it that resource has an IP address.  They, at most, care about
> >> which "network" they are on.  Where a "network" is usually an arbitrary
> >> definition around a set of real networks, that are constrained to a
> >> location, in which the company has attached some sort of policy.  For
> >> example, I want to be in the production network vs's the xyz lab
> >> network, vs's the backup network, vs's the corp network.  I would say
> >> for Godaddy, 99% of our use cases would be defined as: I want a compute
> >> resource in the production network zone, or I want a compute resource in
> >> this other network zone.  The end user only cares that the IP the vm
> >> receives works in that zone, outside of that they don't care any other
> >> property of that IP.  They do not care what subnet it is in, what vlan
> >> it is on, what switch it is attached to, what router its attached to, or
> >> how data flows in/out of that network.  It just needs to work. We have
> >> also found that by giving the users a floating ip address that can be
> >> moved between vm's (but still constrained within a "network" zone) we
> >> can solve almost all of our u

Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-16 Thread Sam Morrison
We at NeCTAR are starting the transition to neutron from nova-net and neutron 
almost does what we want.

We have 10 “public" networks and 10 “service" networks and depending on which 
compute node you land on you get attached to one of them.

In neutron speak we have multiple shared externally routed provider networks. 
We don’t have any tenant networks or any other fancy stuff yet.
How I’ve currently got this set up is by creating 10 networks and subsequent 
subnets eg. public-1, public-2, public-3 … and service-1, service-2, service-3 
and so on.

In nova we have made a slight change in allocate for instance [1] whereby the 
compute node has a designated hardcoded network_ids for the public and service 
network it is physically attached to.
We have also made changes in the nova API so users can’t select a network and 
the neutron endpoint is not registered in keystone.

That all works fine but ideally I want a user to be able to choose if they want 
a public and or service network. We can’t let them as we have 10 public 
networks, we almost need something in neutron like a "network group” or 
something that allows a user to select “public” and it allocates them a port in 
one of the underlying public networks.

I tried going down the route of having 1 public and 1 service network in 
neutron then creating 10 subnets under each. That works until you get to things 
like dhcp-agent and metadata agent although this looks like it could work with 
a few minor changes. Basically I need a dhcp-agent to be spun up per subnet and 
ensure they are spun up in the right place.

I’m not sure what the correct way of doing this is. What are other people doing 
in the interim until this kind of use case can be done in Neutron?
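For illustration, the setup described above (ten "public" plus ten "service" networks, each with its own subnet) can be generated mechanically. A dry-run sketch that only prints the CLI calls; the command flags and CIDRs are illustrative, not NeCTAR's actual values:

```python
# Dry-run generator for the ten "public" and ten "service" provider
# networks described above.  It returns the neutron CLI calls as
# strings instead of executing them; flags and CIDRs are illustrative.
def network_commands(count=10):
    cmds = []
    for i in range(1, count + 1):
        for offset, zone in ((0, "public"), (100, "service")):
            cmds.append("neutron net-create --shared %s-%d" % (zone, i))
            cmds.append("neutron subnet-create %s-%d 10.%d.0.0/24"
                        % (zone, i, i + offset))
    return cmds

for cmd in network_commands():
    print(cmd)
```

The pain point, of course, is exactly that users should not have to know these twenty names; a "network group" abstraction would let them just ask for "public" and have Neutron pick an underlying network.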

Cheers,
Sam
 
[1] 
https://github.com/NeCTAR-RC/nova/commit/1bc2396edc684f83ce471dd9dc9219c4635afb12



> On 17 Jun 2015, at 12:20 am, Jay Pipes  wrote:
> 
> Adding -dev because of the reference to the Neutron "Get me a network spec". 
> Also adding [nova] and [neutron] subject markers.
> 
> Comments inline, Kris.
> 
> On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
>> During the Openstack summit this week I got to talk to a number of other
>> operators of large Openstack deployments about how they do networking.
>>  I was happy, surprised even, to find that a number of us are using a
>> similar type of networking strategy.  That we have similar challenges
>> around networking and are solving it in our own but very similar way.
>>  It is always nice to see that other people are doing the same things
>> as you or see the same issues as you are and that "you are not crazy".
>> So in that vein, I wanted to reach out to the rest of the Ops Community
>> and ask one pretty simple question.
>> 
>> Would it be accurate to say that most of your end users want almost
>> nothing to do with the network?
> 
> That was my experience at AT&T, yes. The vast majority of end users could not 
> care less about networking, as long as the connectivity was reliable, 
> performed well, and they could connect to the Internet (and have others 
> connect from the Internet to their VMs) when needed.
> 
>> In my experience what the majority of them (both internal and external)
>> want is to consume from Openstack a compute resource, a property of
>> which is it that resource has an IP address.  They, at most, care about
>> which "network" they are on.  Where a "network" is usually an arbitrary
>> definition around a set of real networks, that are constrained to a
>> location, in which the company has attached some sort of policy.  For
>> example, I want to be in the production network vs's the xyz lab
>> network, vs's the backup network, vs's the corp network.  I would say
>> for Godaddy, 99% of our use cases would be defined as: I want a compute
>> resource in the production network zone, or I want a compute resource in
>> this other network zone.  The end user only cares that the IP the vm
>> receives works in that zone, outside of that they don't care any other
>> property of that IP.  They do not care what subnet it is in, what vlan
>> it is on, what switch it is attached to, what router its attached to, or
>> how data flows in/out of that network.  It just needs to work. We have
>> also found that by giving the users a floating ip address that can be
>> moved between vm's (but still constrained within a "network" zone) we
>> can solve almost all of our users asks.  Typically, the internal need
>> for a floating ip is when a compute resource needs to talk to another
>> protected internal or external resource. Where it is painful (read:
>> slow) to have the acl's on that protected resource updated. The external
>> need is from our hosting customers who have a domain name (or many) tied
>> to an IP address and changing IP's/DNS is particularly painful.
> 
> This is precisely my experience as well.
> 
>> Since the vast majority of our end users don't care about any of the
>> technical network stuff, we spend a large amount of

Re: [openstack-dev] [Foundation Board] [OpenStack] OpenStack Diversity Working Group - Meeting Information

2015-06-16 Thread Barrett, Carol L
Roland – Thanks for your comments. Hope you can join us to help move this 
effort forward.
Carol

From: Roland Chan [mailto:rol...@aptira.com]
Sent: Tuesday, June 16, 2015 4:44 PM
To: Barrett, Carol L
Cc: Greenberg, Suzy M; foundation-bo...@lists.openstack.org; 
openst...@lists.openstack.org; OpenStack Development Mailing List (not for 
usage questions); commun...@lists.openstack.org
Subject: Re: [Foundation Board] [OpenStack] OpenStack Diversity Working Group - 
Meeting Information


I'd say meeting time accessibility is not an interest, it's a non-negotiable 
requirement. Unconscious bias creates exclusion and all that.

This group must lead by example, and it would be sad for the first step to be a 
misstep.

Roland

On 17/06/2015 8:36 AM, "Barrett, Carol L" 
mailto:carol.l.barr...@intel.com>> wrote:
The initial meeting for this work group will be: Friday,  June 19, 2015 at 
18:00 UTC, on IRC: #openstack-meeting

The Agenda is:

• Introductions

• Mission Discussion and definition of Diversity

• Discuss proposal to engage a Consultant/Coach to assist this work 
group

• Review proposed work plan, gather feedback, and owners

• Next Steps

o   Meeting Frequency

o   Interest/Need for alternating times to make the meetings globally accessible

Moving forward we’ll use the foundat...@lists.openstack.org mail list 
for work group discussions and meeting communications.

Thanks
Carol


From: Sousou, Imad [mailto:imad.sou...@intel.com]
Sent: Tuesday, June 09, 2015 1:28 PM
To: 
openstack-dev@lists.openstack.org; 
community-ow...@lists.openstack.org;
 openst...@lists.openstack.org
Subject: [openstack-dev] OpenStack Diversity Working Group

Stackers – We’re happy to announce the creation of a Diversity Working Group. 
The genesis for this work group was a discussion at the May meeting of the 
OpenStack Board of Directors ahead of the Vancouver Summit.

The Board is committed to fostering an inclusive and welcoming place for all 
people to collaborate to drive innovation and design cutting-edge data center 
capabilities, while finding the best answers to our most pressing challenges. 
To achieve this, the Board formed this Work Group to determine what actions are 
required to fulfill this commitment. Three Board members volunteered to work 
with community members in this Work Group and bring updates/requests to the 
Board for discussion and action on a regular basis, starting with the July 
meeting.

If you’re interested in joining this effort, please:

• Join the Foundation mail list to participate in discussions and shape 
the direction: click here

• Visit the wiki page for this work group to learn more about the 
charter: click here

• Plan to join the kick-off IRC meeting and let us know what day/times 
work for you by accessing the Doodle: click here

We will send out the results of the Doodle to the mail list and look forward to 
working with you to foster a strong and diverse community.

Thanks
Imad Sousou (Intel), Egle Sigler (Rackspace), Kavit Munshi (Aptira)








___
Foundation-board mailing list
foundation-bo...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation-board
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Fox, Kevin M
It sounded like the push was: CLIs for interactive use; if you want to script,
use Python. My assertion was that developers usually script in Python, while
users/admins usually script in shell. I'm not arguing against making the CLI
experience more pleasant for interactive users, but realize that shell is how
most users/admins will script, since that is what they are accustomed to.

Now, unfortunately there are probably a lot of scripts out there today, and you
risk breaking them horribly if you start making things interactive by default.
:/ That's not an easily solved problem. The best way I can think of is to fix
it in the new unified openstack client and give the interactive binary a new
name for interactive mode. Shell scripts can then continue to use the existing
commands without fear of breakage.

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Tuesday, June 16, 2015 4:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Kevin, I agree with your breakout, except I think you are missing a third
category: hundreds of public cloud support specialists, developers, and product
management folks use the CLI without scripts every day in supporting the
OpenStack services and customers. Using and interacting with the CLI is how
folks learn the OpenStack services. The CLIs can be painful for those users
when they actually want to learn the service, not shell-script around it.

-Keith

From: Fox, Kevin M <kevin@pnnl.gov>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, June 16, 2015 6:28 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

-1. There are developers and there are users/admins. The former tend to write
in Python; the latter, shell.

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Tuesday, June 16, 2015 2:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Isn't that what the SDK is for?   To chip in with a Product Management type hat 
on, I'd think the CLI should be primarily focused on user experience 
interaction, and the SDK should be primarily targeted for developer automation 
needs around programmatically interacting with the service.   So, I would argue 
that the target market for the CLI should not be the developer who wants to 
script.

-Keith

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, June 16, 2015 12:24 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Interactive choices like that one can make it more confusing for developers who 
want to script with the CLI. My preference would be to label the app delete 
help text to clearly indicate that it deletes logs.


Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-16 Thread Ken'ichi Ohmichi
2015-06-16 20:52 GMT+09:00 Jay Pipes :
>
>> but I have the same question with Dmitry.
>> If using service names in the header, how to define these name before
>> that?
>> Current big-tent situation can make duplications between projects like
>> X-OpenStack-Container-API-Version or something.
>> Project names are unique even if they are just implementations.
>
>
> Well, I actually like Kevin's suggestion of just removing the
> project/service-type altogether and using OpenStack-API-Version, but to
> answer your question above, I'd just say that having a single API for
> "OpenStack Containers" has value. See my previous responses about why having
> the API mean a single thing allows developers to better use our APIs.

Thanks for your reply, I got it.
I also prefer Kevin's idea, that will be nice to use in all projects.

Thanks
Ken Ohmichi



Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-16 Thread Ken'ichi Ohmichi
2015-06-16 6:30 GMT+09:00 Michael Davies :
> On Tue, Jun 16, 2015 at 5:15 AM, Kevin L. Mitchell
>  wrote:
>>
>> Given the disagreement evinced by the responses to this thread, let me
>> ask a question: Would there be any particular problem with using
>> "X-OpenStack-API-Version"?
>
>
> Well, perhaps we should consider "OpenStack-API-Version" instead and drop
> the "X-".  Ref https://tools.ietf.org/html/rfc6648.

OpenStack-API-Version seems short, simple and consistent.
So +1

Thanks
Ken Ohmichi
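The "OpenStack-API-Version" header the thread is converging on could be negotiated against a service's supported range roughly as follows. This is a hedged sketch: the header name and the 2.1/2.50 bounds are taken from this thread, not a finalized specification, and the function name is invented.

```python
# Sketch of version negotiation for a proposed "OpenStack-API-Version"
# header. Bounds and header name are assumptions from the thread.
MIN_VERSION = (2, 1)
MAX_VERSION = (2, 50)

def negotiate(headers):
    """Return the (major, minor) version a request should be served at."""
    raw = headers.get('OpenStack-API-Version')
    if raw is None:
        return MIN_VERSION  # no header: fall back to the minimum version
    major, minor = (int(part) for part in raw.strip().split('.'))
    requested = (major, minor)
    if not MIN_VERSION <= requested <= MAX_VERSION:
        # A real service would return HTTP 406 with the supported range.
        raise ValueError('Unsupported API version %s' % raw)
    return requested

print(negotiate({'OpenStack-API-Version': '2.10'}))  # (2, 10)
```

A real implementation would also handle "latest" and malformed values; this only shows the tuple-comparison core.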



Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Keith Bray
Kevin, I agree with your breakout, except I think you are missing a third
category: hundreds of public cloud support specialists, developers, and product
management folks use the CLI without scripts every day in supporting the
OpenStack services and customers. Using and interacting with the CLI is how
folks learn the OpenStack services. The CLIs can be painful for those users
when they actually want to learn the service, not shell-script around it.

-Keith

From: Fox, Kevin M <kevin@pnnl.gov>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, June 16, 2015 6:28 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

-1. There are developers and there are users/admins. The former tend to write
in Python; the latter, shell.

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Tuesday, June 16, 2015 2:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Isn't that what the SDK is for?   To chip in with a Product Management type hat 
on, I'd think the CLI should be primarily focused on user experience 
interaction, and the SDK should be primarily targeted for developer automation 
needs around programmatically interacting with the service.   So, I would argue 
that the target market for the CLI should not be the developer who wants to 
script.

-Keith

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, June 16, 2015 12:24 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Interactive choices like that one can make it more confusing for developers who 
want to script with the CLI. My preference would be to label the app delete 
help text to clearly indicate that it deletes logs.


[openstack-dev] [neutron][vpnaas] No VPNaaS weekly IRC meeting next week (June 23rd)

2015-06-16 Thread Sridhar Ramaswamy
As some of us are attending Neutron mid-cycle we are skipping the VPNaaS
weekly IRC meeting for next week Tuesday June 23rd. The meeting will resume
the week after.

As usual please update agenda on the wiki page:
https://wiki.openstack.org/wiki/Meetings/VPNaaS before the meeting.

See you all on Tuesday June 30th!


Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Fox, Kevin M
-1. There are developers and there are users/admins. The former tend to write
in Python; the latter, shell.

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Tuesday, June 16, 2015 2:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Isn't that what the SDK is for?   To chip in with a Product Management type hat 
on, I'd think the CLI should be primarily focused on user experience 
interaction, and the SDK should be primarily targeted for developer automation 
needs around programmatically interacting with the service.   So, I would argue 
that the target market for the CLI should not be the developer who wants to 
script.

-Keith

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, June 16, 2015 12:24 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Interactive choices like that one can make it more confusing for developers who 
want to script with the CLI. My preference would be to label the app delete 
help text to clearly indicate that it deletes logs.


Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-16 Thread Kevin Benton
There seems to be confusion on what causes deadlocks. Can one of you
explain to me how an optimistic locking strategy (a.k.a.
compare-and-swap)  results in deadlocks?

Take the following example where two workers want to update a record:

Worker1: "UPDATE items set value=newvalue1 where value=oldvalue"
Worker2: "UPDATE items set value=newvalue2 where value=oldvalue"

Then each worker checks the count of rows affected by the query. The one
that modified 1 gets to proceed, the one that modified 0 must retry.

Do those statements also risk throwing deadlock exceptions? If so, why? I
haven't seen a clear article explaining deadlock conditions not related to
"FOR UPDATE".



On Tue, Jun 16, 2015 at 4:01 PM, Carl Baldwin  wrote:

> On Tue, Jun 16, 2015 at 2:18 PM, Salvatore Orlando 
> wrote:
> > But zzzeek (Mike Bayer) is coming to our help; as a part of his DBFacade
> > work, we should be able to treat active/active cluster as active/passive
> for
> > writes, and active/active for reads. This means that the write set
> > certification issue just won't show up, and the benefits of active/active
> > clusters will still be attained for most operations (I don't think
> there's
> > any doubt that SELECT operations represent the majority of all DB
> > statements).
>
> Okay, so we stop worrying about the write certification failures?
> Lock for update would work as expected?  That would certainly simplify
> the Galera concern.  Maybe everyone already knew this and I have just
> been behind on the latest news again.
>
> > DBDeadlocks without multiple workers also suggest we should look closely
> at
> > what eventlet is doing before placing the blame on pymysql. I don't think
> > that the switch to pymysql is changing the behaviour of the database
> > interface; I think it's changing the way in which neutron interacts to
> the
> > database thus unveiling concurrency issues that we did not spot before
> as we
> > were relying on a sort of implicit locking triggered by the fact that
> some
> > parts of Mysql-Python were implemented in C.
>
> ++
>
> Carl
>



-- 
Kevin Benton


Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-16 Thread Carl Baldwin
On Tue, Jun 16, 2015 at 2:18 PM, Salvatore Orlando  wrote:
> But zzzeek (Mike Bayer) is coming to our help; as a part of his DBFacade
> work, we should be able to treat active/active cluster as active/passive for
> writes, and active/active for reads. This means that the write set
> certification issue just won't show up, and the benefits of active/active
> clusters will still be attained for most operations (I don't think there's
> any doubt that SELECT operations represent the majority of all DB
> statements).

Okay, so we stop worrying about the write certification failures?
Lock for update would work as expected?  That would certainly simplify
the Galera concern.  Maybe everyone already knew this and I have just
been behind on the latest news again.

> DBDeadlocks without multiple workers also suggest we should look closely at
> what eventlet is doing before placing the blame on pymysql. I don't think
> that the switch to pymysql is changing the behaviour of the database
> interface; I think it's changing the way in which neutron interacts to the
> database thus unveiling concurrency issues that we did not spot before as we
> were relying on a sort of implicit locking triggered by the fact that some
> parts of Mysql-Python were implemented in C.

++

Carl



Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-16 Thread Michael Still
I don't think you need a spec for this (it's a refactor). That said,
I'd be interested in exploring how you deprecate the old flags. Can
you have more than one deprecated name for a single flag?

Michael

On Wed, Jun 17, 2015 at 7:29 AM, Matt Riedemann
 wrote:
>
>
> On 6/16/2015 4:21 PM, Matt Riedemann wrote:
>>
>> The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all
>> very similar.
>>
>> I want to extract a common base class that abstracts some of the common
>> code and then let the sub-classes provide overrides where necessary.
>>
>> As part of this, I'm wondering if we could just have a single
>> 'mount_point_base' config option rather than one per backend like we
>> have today:
>>
>> nfs_mount_point_base
>> glusterfs_mount_point_base
>> smbfs_mount_point_base
>> quobyte_mount_point_base
>>
>> With libvirt you can only have one of these drivers configured per
>> compute host right?  So it seems to make sense that we could have one
>> option used for all 4 different driver implementations and reduce some
>> of the config option noise.
>>
>> I checked the os-brick change [1] proposed to nova to see if there would
>> be any conflicts there and so far that's not touching any of these
>> classes so seems like they could be worked in parallel.
>>
>> Are there any concerns with this?
>>
>> Is a blueprint needed for this refactor?
>>
>> [1] https://review.openstack.org/#/c/175569/
>>
>
> I threw together a quick blueprint [1] just for tracking.
>
> I'm assuming I don't need a spec for this.
>
> [1]
> https://blueprints.launchpad.net/nova/+spec/consolidate-libvirt-fs-volume-drivers
>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>



-- 
Rackspace Australia



Re: [openstack-dev] Targeting icehouse-eol?

2015-06-16 Thread Alan Pevec
> let's release this last one (codename: Farewell ?) point release. I
> can do this next week after we finish pending reviews.

Remaining stable/icehouse reviews[1] have -2 or -1 except
https://review.openstack.org/176019 which I've asked
neutron-stable-maint to review.
Matt, anything else before we can tag 2014.1.5 and icehouse-eol ?

Cheers,
Alan

[1]
https://review.openstack.org/#/q/status:open+AND+branch:stable/icehouse+AND+%28project:openstack/nova+OR+project:openstack/keystone+OR+project:openstack/glance+OR+project:openstack/cinder+OR+project:openstack/neutron+OR+project:openstack/horizon+OR+project:openstack/heat+OR+project:openstack/ceilometer+OR+project:openstack/trove%29,n,z



[openstack-dev] [OpenStack] OpenStack Diversity Working Group - Meeting Information

2015-06-16 Thread Barrett, Carol L
The initial meeting for this work group will be: Friday,  June 19, 2015 at 
18:00 UTC, on IRC: #openstack-meeting

The Agenda is:

* Introductions

* Mission Discussion and definition of Diversity

* Discuss proposal to engage a Consultant/Coach to assist this work 
group

* Review proposed work plan, gather feedback, and owners

* Next Steps

o   Meeting Frequency

o   Interest/Need for alternating times to make the meetings globally accessible

Moving forward we'll use the 
foundat...@lists.openstack.org mail list 
for work group discussions and meeting communications.

Thanks
Carol


From: Sousou, Imad [mailto:imad.sou...@intel.com]
Sent: Tuesday, June 09, 2015 1:28 PM
To: openstack-dev@lists.openstack.org; community-ow...@lists.openstack.org; 
openst...@lists.openstack.org
Subject: [openstack-dev] OpenStack Diversity Working Group

Stackers - We're happy to announce the creation of a Diversity Working Group. 
The genesis for this work group was a discussion at the May meeting of the 
OpenStack Board of Directors ahead of the Vancouver Summit.

The Board is committed to fostering an inclusive and welcoming place for all 
people to collaborate to drive innovation and design cutting-edge data center 
capabilities, while finding the best answers to our most pressing challenges. 
To achieve this, the Board formed this Work Group to determine what actions are 
required to fulfill this commitment. Three Board members volunteered to work 
with community members in this Work Group and bring updates/requests to the 
Board for discussion and action on a regular basis, starting with the July 
meeting.

If you're interested in joining this effort, please:

* Join the Foundation mail list to participate in discussions and shape 
the direction: click 
here

* Visit the wiki page for this work group to learn more about the 
charter: click here

* Plan to join the kick-off IRC meeting and let us know what day/times 
work for you by accessing the Doodle here: click 
here

We will send out the results of the Doodle to the mail list and look forward to 
working with you to foster a strong and diverse community.

Thanks
Imad Sousou (Intel), Egle Sigler (Rackspace), Kavit Munshi (Aptira)









Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Keith Bray
That makes sense, Randall... a sort of "Novice mode" vs. "Expert mode."
I definitely want to see OpenStack get easier to use, and lower the
barrier to entry. If projects only cater to developers, progress will be
slower than it could be.

-Keith

On 6/16/15 4:52 PM, "Randall Burt"  wrote:

>While I agree with what you're saying, the way the OpenStack clients are
>traditionally written/designed, the CLI *is* the SDK for those users who
>want to do scripting in a shell rather than in Python. If we go with your
>suggestion, we'd probably also want to have the ability to suppress those
>prompts for folks that want to shell script.
>
>On Jun 16, 2015, at 4:42 PM, Keith Bray 
> wrote:
>
>> Isn't that what the SDK is for?   To chip in with a Product Management
>>type hat on, I'd think the CLI should be primarily focused on user
>>experience interaction, and the SDK should be primarily targeted for
>>developer automation needs around programmatically interacting with the
>>service.   So, I would argue that the target market for the CLI should
>>not be the developer who wants to script.
>> 
>> -Keith
>> 
>> From: Adrian Otto 
>> Reply-To: "OpenStack Development Mailing List (not for usage
>>questions)" 
>> Date: Tuesday, June 16, 2015 12:24 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>>
>> Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we
>>delete an app?
>> 
>>> Interactive choices like that one can make it more confusing for
>>>developers who want to script with the CLI. My preference would be to
>>>label the app delete help text to clearly indicate that it deletes logs
>> 
>
>




Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-16 Thread Thomas Goirand
On 06/16/2015 12:06 PM, Thierry Carrez wrote:
>>> It also removes the stupid encouragement to use all components from the
>>> same date. With everything tagged at the same date, you kinda send the
>>> message that those various things should be used together. With
>>> everything tagged separately, you send te message that you can mix and
>>> match components from stable/* as you see fit. I mean, it's totally
>>> valid to use stable branch components from various points in time
>>> together, since they are all supposed to work.
>>
>> Though there's now zero guidance at what should be the speed of
>> releasing server packages to our users.
> 
> I really think it should be a distribution decision. You could release
> all commits, release every 2 months, release after each CVE, release
> as-needed when a bug in Debian BTS is fixed. I don't see what "guidance"
> upstream should give, apart from enabling all models. Currently we make
> most models more difficult than they should be, to promote an arbitrary
> time-based model. With plan D, we enable all models.

Let me put this another way: with plan D, I'll be lost, and won't
ever know when to release a new stable version in Debian. I don't know
better than anyone else. If each upstream project said
individually: "OK, now we have gathered enough bugfixes that it's
important to get them into downstream distributions", I'd happily follow
that kind of guidance. But the plan is to just commit bugfixes, and hope
that downstream distros (i.e. me, in this case) just notice when a new
release is worth the effort.

> As pointed elsewhere, plan D assumes we move to generating release notes
> for each commit. So you won't lose track of what is fixed in each
> version. If anything, that will give you proper release notes for
> CVE-fix commits, something you didn't have before, since we wouldn't cut
> a proper point release after a CVE fix but on a pre-determined
> time-based schedule.
> 
> Overall, I think even your process stands to benefit from the proposed
> evolution.

I just hope so. If any core / PTL is reading me in this thread, I would
strongly encourage you guys to get in touch and ping me when you think
some commits in the stable release should be uploaded to Debian. A quick
message on IRC can be enough.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Randall Burt
While I agree with what you're saying, the way the OpenStack clients are 
traditionally written/designed, the CLI *is* the SDK for those users who want 
to do scripting in a shell rather than in Python. If we go with your 
suggestion, we'd probably also want to have the ability to suppress those 
prompts for folks that want to shell script.
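The prompt suppression mentioned above is usually done with a flag plus a TTY check: prompt only when attached to a terminal and not overridden. A sketch, with all command and flag names invented for illustration:

```python
import argparse
import sys

# Hypothetical 'app delete' command: interactive users get a confirmation
# that warns about log deletion; scripts pass --yes (or run without a TTY)
# to suppress the prompt entirely.
def main(argv=None):
    parser = argparse.ArgumentParser(prog='app')
    parser.add_argument('command', choices=['delete'])
    parser.add_argument('name')
    parser.add_argument('-y', '--yes', action='store_true',
                        help='answer yes to all prompts (for scripting)')
    args = parser.parse_args(argv)

    if args.command == 'delete':
        if not args.yes and sys.stdin.isatty():
            answer = input("Deleting '%s' also deletes its logs. "
                           "Proceed? [y/N] " % args.name)
            if answer.lower() != 'y':
                return 1
        print("deleted %s (and its logs)" % args.name)
        return 0

if __name__ == '__main__':
    sys.exit(main())
```

Checking `isatty()` means piped invocations never hang waiting for input, which addresses the fear of breaking existing shell scripts.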

On Jun 16, 2015, at 4:42 PM, Keith Bray 
 wrote:

> Isn't that what the SDK is for?   To chip in with a Product Management type 
> hat on, I'd think the CLI should be primarily focused on user experience 
> interaction, and the SDK should be primarily targeted for developer 
> automation needs around programmatically interacting with the service.   So, 
> I would argue that the target market for the CLI should not be the developer 
> who wants to script.
> 
> -Keith
> 
> From: Adrian Otto 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Tuesday, June 16, 2015 12:24 PM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
> app?
> 
>> Interactive choices like that one can make it more confusing for developers 
>> who want to script with the CLI. My preference would be to label the app 
>> delete help text to clearly indicate that it deletes logs




Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Keith Bray
Isn't that what the SDK is for?   To chip in with a Product Management type hat 
on, I'd think the CLI should be primarily focused on user experience 
interaction, and the SDK should be primarily targeted for developer automation 
needs around programmatically interacting with the service.   So, I would argue 
that the target market for the CLI should not be the developer who wants to 
script.

-Keith

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, June 16, 2015 12:24 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Interactive choices like that one can make it more confusing for developers who 
want to script with the CLI. My preference would be to label the app delete 
help text to clearly indicate that it deletes logs.


Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-16 Thread Matt Riedemann



On 6/16/2015 4:21 PM, Matt Riedemann wrote:

The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all
very similar.

I want to extract a common base class that abstracts some of the common
code and then let the sub-classes provide overrides where necessary.

As part of this, I'm wondering if we could just have a single
'mount_point_base' config option rather than one per backend like we
have today:

nfs_mount_point_base
glusterfs_mount_point_base
smbfs_mount_point_base
quobyte_mount_point_base

With libvirt you can only have one of these drivers configured per
compute host right?  So it seems to make sense that we could have one
option used for all 4 different driver implementations and reduce some
of the config option noise.

I checked the os-brick change [1] proposed to nova to see if there would
be any conflicts there and so far that's not touching any of these
classes so seems like they could be worked in parallel.

Are there any concerns with this?

Is a blueprint needed for this refactor?

[1] https://review.openstack.org/#/c/175569/



I threw together a quick blueprint [1] just for tracking.

I'm assuming I don't need a spec for this.

[1] 
https://blueprints.launchpad.net/nova/+spec/consolidate-libvirt-fs-volume-drivers
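As a rough illustration of the consolidation being proposed, the common base class could pull the shared mount logic up and leave only the backend-specific pieces to subclasses. Class and method names here are invented; nova's eventual implementation may differ:

```python
from abc import ABC, abstractmethod

# Hedged sketch of the refactor: one base class for FS-style libvirt
# volume drivers, sharing a single mount_point_base, with subclasses
# overriding only what differs per backend.
class BaseFileSystemVolumeDriver(ABC):
    def __init__(self, mount_point_base):
        # Single shared config value instead of four per-backend options.
        self.mount_point_base = mount_point_base

    @abstractmethod
    def mount_command(self, export, mount_path):
        """Return the mount command for this backend."""

class NfsVolumeDriver(BaseFileSystemVolumeDriver):
    def mount_command(self, export, mount_path):
        return ['mount', '-t', 'nfs', export, mount_path]

class GlusterfsVolumeDriver(BaseFileSystemVolumeDriver):
    def mount_command(self, export, mount_path):
        return ['mount', '-t', 'glusterfs', export, mount_path]
```

Since only one of these drivers is configured per compute host, a single `mount_point_base` passed to whichever subclass is active covers all four backends.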


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-16 Thread Chris Dent

On Tue, 16 Jun 2015, Sean Dague wrote:


> I was just looking at the patches that put Nova under apache wsgi for
> the API, and there are a few things that I think are going in the wrong
> direction. Largely I think because they were copied from the
> lib/keystone code, which we've learned is kind of the wrong direction.


Yes, that's certainly what I've done the few times I've done it.
devstack is deeply encouraging of cargo culting for reasons that are
not entirely clear.


> The first is the fact that a big reason for putting {SERVICES} under
> apache wsgi is we aren't running on a ton of weird unregistered ports.
> We're running on 80 and 443 (when appropriate). In order to do this we
> really need to namespace the API urls. Which means that service catalog
> needs to be updated appropriately.


So:

a) I'm very glad to hear of this. I've been bristling about the weird
   ports thing for the last year.

b) You make it sound like there's been a plan in place to not use
   those ports for quite some time and we'd get to that when we all
   had some spare time. Where do I go to keep abreast of such plans?


I also think this -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
is completely wrong.

The Apache configs should instead specify access rules such that the
installed console entry point of nova-api can be used in place as the
WSGIScript.


I'm not able to parse this paragraph in any actionable way. The lines
you reference are one of several ways of telling mod wsgi where the
virtualenv is, which has to happen in some fashion if you are using
a virtualenv.

This doesn't appear to have anything to do with locating the module
that contains the WSGI app, so I'm missing the connection. Can you
explain please?

(Basically I'm keen on getting gnocchi and ceilometer wsgi servers
in devstack aligned with whatever the end game is, so knowing the plan
makes it a bit easier.)


This should also make lines like -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
L274 unneeded. (The WSGI Script will be in a known place). It will also
make upgrades much more friendly.


It sounds like maybe you are saying that the api console script and
the module containing the wsgi 'application' variable ought to be the
same thing. I don't reckon that's a great idea as the api console
scripts will want to import a bunch of stuff that the wsgi application
will not.

Or I may be completely misreading you. It's been a long day, etc.
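For what it's worth, the split being discussed can be sketched in a few
lines: a module whose only job is to expose the 'application' callable that
mod_wsgi points at, kept apart from the console script that parses options
and spawns workers. The module path and body below are illustrative, not
nova's actual code:

```python
# Hypothetical nova/api/wsgi_app.py: exposes only the 'application'
# callable that an Apache WSGIScriptAlias would reference.
def application(environ, start_response):
    # A real module would load configuration and build the paste
    # pipeline here; this stub just demonstrates the WSGI contract.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'compute API placeholder']
```

The console entry point would then keep its CLI and eventlet imports to
itself without dragging them into the Apache-hosted process.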


I think that we need to get these things sorted before any further
progression here. Volunteers welcomed to help get us there.


Find me, happy to help. The sooner we can kill wacky port weirdness
the better.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-16 Thread Matt Riedemann
The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all 
very similar.


I want to extract a common base class that abstracts some of the common 
code and then let the sub-classes provide overrides where necessary.


As part of this, I'm wondering if we could just have a single 
'mount_point_base' config option rather than one per backend like we 
have today:


nfs_mount_point_base
glusterfs_mount_point_base
smbfs_mount_point_base
quobyte_mount_point_base

With libvirt you can only have one of these drivers configured per 
compute host right?  So it seems to make sense that we could have one 
option used for all 4 different driver implementations and reduce some 
of the config option noise.
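As a rough illustration of the refactor (class names and layout are my
guesses, not the eventual nova code), the shared base could own the single
mount-point option while subclasses only declare their filesystem type:

```python
import hashlib
import os


class LibvirtBaseFileSystemVolumeDriver(object):
    """Hypothetical common base for the NFS/GlusterFS/SMBFS/Quobyte drivers."""

    fs_type = None  # subclasses override

    def __init__(self, mount_point_base):
        # One shared option instead of four *_mount_point_base options.
        self.mount_point_base = mount_point_base

    def _get_mount_path(self, export):
        # Hash the export string so each share gets a stable, unique
        # directory under the shared base, as the existing drivers do.
        digest = hashlib.sha256(export.encode('utf-8')).hexdigest()
        return os.path.join(self.mount_point_base, self.fs_type, digest)


class LibvirtNFSVolumeDriver(LibvirtBaseFileSystemVolumeDriver):
    fs_type = 'nfs'


class LibvirtGlusterfsVolumeDriver(LibvirtBaseFileSystemVolumeDriver):
    fs_type = 'glusterfs'
```

Per-backend overrides (mount commands, connection parsing) would stay in
the subclasses; only the mount-point bookkeeping moves up.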


I checked the os-brick change [1] proposed to nova to see if there would 
be any conflicts there; so far it isn't touching any of these 
classes, so it seems like they could be worked on in parallel.


Are there any concerns with this?

Is a blueprint needed for this refactor?

[1] https://review.openstack.org/#/c/175569/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] Mid-cycle sprint

2015-06-16 Thread Tim Hinrichs
Hi all,

In the last couple of IRCs we've been talking about running a mid-cycle
sprint focused on enabling our message bus to span multiple processes and
multiple hosts.  The message bus is what allows the Congress policy engine
to communicate with the Congress wrappers around external services like
Nova, Neutron.  This cross-process, cross-host message bus is the platform
we'll use to build version 2.0 of our distributed architecture.

If you're interested in participating, drop me a note.  Once we know who's
interested we'll work out date/time/location details.

Thanks!
Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-16 Thread Salvatore Orlando
Some more comments inline.

Salvatore

On 16 June 2015 at 19:00, Carl Baldwin  wrote:

> On Tue, Jun 16, 2015 at 12:33 AM, Kevin Benton  wrote:
> >>Do these kinds of test even make sense? And are they feasible at all? I
> >> doubt we have any framework for injecting anything in neutron code under
> >> test.
> >
> > I was thinking about this in the context of a lot of the fixes we have
> for
> > other concurrency issues with the database. There are several exception
> > handlers that aren't exercised in normal functional, tempest, and API
> tests
> > because they require a very specific order of events between workers.
> >
> > I wonder if we could write a small shim DB driver that wraps the python
> one
> > for use in tests that just makes a desired set of queries take a long
> time
> > or fail in particular ways? That wouldn't require changes to the neutron
> > code, but it might not give us the right granularity of control.
>
> Might be worth a look.
>

It's a solution for pretty much mocking out the DB interactions. This would
work for fault injection on most neutron-server scenarios, both for RESTful
and RPC interfaces, but we'll need something else to "mock" interactions
with the data plane that are performed by agents. I think we already have
a mock for the AMQP bus on which we could simply install hooks for injecting
faults.
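As a sketch of the shim idea (purely illustrative, not tied to any real
driver API), a wrapper cursor could match queries against a list of
(regex, delay, exception) rules and otherwise delegate to the real cursor:

```python
import re
import time


class FaultInjectingCursor:
    """Illustrative shim wrapping a DB-API cursor.

    Queries matching a rule are delayed and/or failed, which lets tests
    reproduce specific interleavings between workers.
    """

    def __init__(self, real_cursor, rules):
        self._cursor = real_cursor
        self._rules = rules  # list of (regex, delay_seconds, exception-or-None)

    def execute(self, sql, params=None):
        for pattern, delay, exc in self._rules:
            if re.search(pattern, sql, re.IGNORECASE):
                time.sleep(delay)
                if exc is not None:
                    raise exc
        return self._cursor.execute(sql, params or ())

    def __getattr__(self, name):
        # Everything else (fetchone, fetchall, ...) passes through.
        return getattr(self._cursor, name)
```

The granularity problem Kevin mentions is visible here: rules key off SQL
text, not off which worker issued the query.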


> >>Finally, please note I am using DB-level locks rather than non-locking
> >> algorithms for making reservations.
> >
> > I thought these were effectively broken in Galera clusters. Is that not
> > correct?
>
> As I understand it, if two writes to two different masters end up
> violating some db-level constraint then the operation will cause a
> failure regardless if there is a lock.
>


> Basically, on Galera, instead of waiting for the lock, each will
> proceed with the transaction.  Finally, on commit, a write
> certification will double check constraints with the rest of the
> cluster (with a write certification).  It is at this point where
> Galera will fail one of them as a deadlock for violating the
> constraint.  Hence the need to retry.  To me, non-locking just means
> that you embrace the fact that the lock won't work and you don't
> bother to apply it in the first place.
>

This is correct.

DB-level locks are broken in Galera. As Carl says, write sets are sent out
for certification after a transaction is committed.
So the write intent lock, or even primary key constraint violations cannot
be verified before committing the transaction.
As a result you incur a write set certification failure, which is notably
more expensive than an instance-level rollback, and manifests as a
DBDeadlock exception to the OpenStack service.

Retrying a transaction is also a way of embracing this behaviour... you
just accept the idea of having to go through write set certification.
Non-locking approaches instead aim at avoiding write set certifications.
The downside is that, especially in high-concurrency scenarios, the
operation is retried many times, and this might become even more expensive
than dealing with the write set certification failure.
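To illustrate the retry side of that trade-off (oslo.db ships its own
retry helpers; this standalone sketch just shows the shape), each
certification failure surfaces as a deadlock-style exception and the whole
transaction is re-run with jittered backoff:

```python
import functools
import random
import time


class DBDeadlock(Exception):
    """Stand-in for the deadlock error raised on certification failure."""


def retry_on_deadlock(max_retries=5, base_delay=0.01):
    """Re-run a transactional function when certification fails.

    Illustrative only: a real deployment would use the retry decorator
    provided by its database library rather than this sketch.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries - 1:
                        raise
                    # Jittered backoff so colliding workers desynchronize.
                    time.sleep(base_delay * (2 ** attempt) * random.random())
        return wrapper
    return decorator
```

Note that every retry repeats the whole transaction, which is exactly the
cost being weighed against avoiding certification in the first place.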

But zzzeek (Mike Bayer) is coming to our help; as a part of his DBFacade
work, we should be able to treat active/active cluster as active/passive
for writes, and active/active for reads. This means that the write set
certification issue just won't show up, and the benefits of active/active
clusters will still be attained for most operations (I don't think there's
any doubt that SELECT operations represent the majority of all DB
statements).


> If my understanding is incorrect, please set me straight.
>

You're already straight enough ;)


>
> > If you do go that route, I think you will have to contend with DBDeadlock
> > errors when we switch to the new SQL driver anyway. From what I've
> observed,
> > it seems that if someone is holding a lock on a table and you try to grab
> > it, pymysql immediately throws a deadlock exception.
>

> I'm not familiar with pymysql to know if this is true or not.  But,
> I'm sure that it is possible not to detect the lock at all on galera.
> Someone else will have to chime in to set me straight on the details.
>

DBDeadlocks without multiple workers also suggest we should look closely at
what eventlet is doing before placing the blame on pymysql. I don't think
that the switch to pymysql is changing the behaviour of the database
interface; I think it's changing the way in which neutron interacts with the
database, thus unveiling concurrency issues that we did not spot before as
we were relying on a sort of implicit locking triggered by the fact that
some parts of Mysql-Python were implemented in C.


>
> Carl
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Re: [openstack-dev] [Cinder] Volume creation fails in Horizon

2015-06-16 Thread Mike Perez
On 13:00 Jun 15, Jayanthi, Swaroop wrote:
> Hi All,
> 
> I am trying to create a Volume for VMFS with a Volume Type (the selected 
> Volume Type has extra_specs). I am receiving an error "Volume creation 
> failed" in case the volume-type has extra-specs.
> 
> Does Cinder not support Volume creation if the volume-type has extra-specs?
> Is this expected behavior? Can you please let me know your thoughts.
> 
> If not, how can I overcome this issue from the Horizon UI in case the 
> Volume-Type has extra-specs?
> 
> Thanks and Regards,

Cinder does support volume creation if the volume type has extra specs. Volume
types with extra specs is information the Cinder scheduler uses in picking
a Cinder volume host. Can you please provide the Cinder scheduler log, as well
as information on the volume type's extra specs?

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-16 Thread Salvatore Orlando
On 16 June 2015 at 18:49, Carl Baldwin  wrote:

> On Thu, Jun 11, 2015 at 2:45 PM, Salvatore Orlando 
> wrote:
> > I have then been following a different approach. And a set of patches,
> > including a devref one [2], is up for review [3]. This hardly completes
> the
> > job: more work is required on the testing side, both as unit and
> functional
> > tests.
> >
> > As for the spec, since I honestly would like to spare myself the hassle
> of
> > rewriting it, I would kindly ask our glorious drivers team if they're ok
> > with me submitting a spec in the shorter format approved for Liberty
> without
> > going through the RFE process, as the spec is however in the Kilo
> backlog.
>
> It took me a second read through to realize that you're talking to me
> among the drivers team.  Personally, I'm okay with this and our
> currently documented policy seems to allow for this until Liberty-1.
>

Great!


>
> I just hope that this isn't an indication that we're requiring too
> much in this new RFE process and scaring potential filers away.  I'm
> trying to learn how to write good RFEs, so let me give it a shot:
>
>   Summary:  "Need robust quota enforcement in Neutron."
>
>   Further Information:  "Neutron can allow exceeding the quota in
> certain cases.  Some investigation revealed that quotas in Neutron are
> subject to a race where parallel requests can each check quota and
> find there is just enough left to fulfill its individual request.
> Each request proceeds to fulfillment with no more regard to the quota.
> When all of the requests are eventually fulfilled, we find that they
> have exceeded the quota."
>
> Given my current knowledge of the RFE process, that is what I would
> file as a bug in launchpad and tag it with 'rfe.'
>

The RFE process is fine and relatively simple. I was just luring somebody
into giving me the exact text to put in it!
Jokes aside, I was suggesting this because, since it was a "backlog" spec,
it was already assumed to be something we wanted for Neutron,
and thus we could skip the RFE approval step.


> > For testing I wonder what strategy do you advice for implementing
> functional
> > tests. I could do some black-box testing and verifying quota limits are
> > correctly enforced. However, I would also like to go a bit white-box and
> > also verify that reservation entries are created and removed as
> appropriate
> > when a reservation is committed or cancelled.
> > Finally it would be awesome if I was able to run in the gate functional
> > tests on multi-worker servers, and inject delays or faults to verify the
> > systems behaves correctly when it comes to quota enforcement.
>
> Full black box testing would be impossible to achieve without multiple
> workers, right?  We've proposed adding multiple worker processes to
> the gate a couple of times if I recall including a recent one to .
>

Yeah but Neutron was not as stable with multiple workers, and we had to
revert it (I think I did the revert)


> Fixing the failures has not yet been seen as a priority.
>

I wonder if this is because developers are too busy bikeshedding or chasing
unicorns, or because the issues we saw are mostly due to the way we run
tests in the gate and are not found by operators in real deployments
(another option is that operators are too afraid of neutron's
unpredictability and do not even try turning on multiple workers).


> I agree that some whitebox testing should be added.  It may sound a
> bit double-entry to some but I don't mind, especially given the
> challenges around black-box testing.  Maybe Assaf can chime in here
> and set us straight.
>

I want white-box testing. I think it's important. Unit tests to an extent
do this, but they don't test the whole functionality. On the other hand,
black-box testing tests the functionality, but it does not tell you whether
the system is actually behaving as you expect. If it's not, it means you
have a fault. And that fault will eventually emerge as a failure. So we
need this kind of testing. However, I need hooks in Neutron in order to
achieve this. Like a sqlalchemy event listener that informs me of completed
transactions, for instance. Or hooks to perform fault injection - like
adding a delay, or altering the return value of a function. It would be
good for me to know whether this is in the testing roadmap for Liberty.
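The hook being asked for could be as thin as an observer that tests
register with; in a real harness it would be fed by a SQLAlchemy
after_commit event listener, but this standalone sketch shows the
test-facing shape (names are mine, not an existing Neutron API):

```python
class TransactionHooks:
    """Illustrative white-box test hook.

    Tests register callbacks and are notified whenever the code under
    test reports a completed transaction.
    """

    def __init__(self):
        self._listeners = []

    def on_commit(self, callback):
        self._listeners.append(callback)

    def commit(self, txn_info):
        # In a real harness this would be driven by a SQLAlchemy
        # after_commit event listener rather than called directly.
        for cb in self._listeners:
            cb(txn_info)
```

A fault-injection hook would be the mirror image: a registered callback
that raises or sleeps instead of merely recording.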


>
> > Do these kinds of test even make sense? And are they feasible at all? I
> > doubt we have any framework for injecting anything in neutron code under
> > test.
>
> Dunno.


> > Finally, please note I am using DB-level locks rather than non-locking
> > algorithms for making reservations. I can move to a non-locking
> algorithm,
> > Jay proposed one for nova for Kilo, and I can just implement that one,
> but
> > first I would like to be convinced with a decent proof (or sort of) that
> the
> > extra cost deriving from collision among workers is overshadowed by the
> cost
> > for having to handle a write-set certification failure 

Re: [openstack-dev] [all][release] summit session summary: Release Versioning for Server Applications

2015-06-16 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2015-06-16 11:45:51 +0200:
> Doug Hellmann wrote:
> > [...]
> > I put together a little script [1] to try to count the previous
> > releases for projects, to use that as the basis for their first
> > SemVer-based version number. I pasted the output into an etherpad
> > [2] and started making notes about proposed release numbers at the
> > top. For now, I'm only working with the projects that have been
> > managed by the release team (have the "release:managed" tag in the
> > governance repository), but it should be easy enough for other projects
> > to use the same idea to pick a version number.
> 
> Your script missed 2015.1 tags for some reason...
> 
> I still think we should count the number of "integrated" releases
> instead of the number of releases (basically considering pre-integration
> releases as 0.x releases). That would give:
> 
> ceilometer 5.0.0
> cinder 7.0.0
> glance 11.0.0
> heat 5.0.0
> horizon 8.0.0
> ironic 2.0.0
> keystone 8.0.0
> neutron* 7.0.0
> nova 12.0.0
> sahara 3.0.0
> trove 4.0.0
> 
> We also traditionally "managed" the previously-incubated projects. That
> would add the following to the mix:
> 
> barbican 1.0.0
> designate 1.0.0
> manila 1.0.0
> zaqar 1.0.0
> 

I have submitted patches to update all of these projects to the versions
listed here.

See https://review.openstack.org/#/q/topic:semver-releases,n,z

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-16 Thread Jay Pipes

On 06/15/2015 10:55 AM, James Page wrote:

We understand and have communicated from the start of this
conversation that we will need to be able to maintain deltas between
Debian and Ubuntu - for both technical reasons, in the way the
distributions work (think Ubuntu main vs universe), as well as
objectives that each distribution has in terms of the way packaging
should work.


Hi James,

For the benefit of the TC members (such as myself) that do not have a 
great background in packaging internals, would you mind describing one 
or two of the deltas you describe above? I'm really wondering what these 
things look like and how big the difference is from the Debian packaging 
recipes (is that the right word, even?)


All the best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-16 Thread Alec Hothan (ahothan)
Gordon,

These are all great points for RPC messages (also called "CALL" in oslo
messaging). There are similar ambiguous contracts for the other types of
messages (CAST and FANOUT).
I am worried about the general lack of interest from the community to fix
this as it looks like most people assume that oslo messaging is good
enough (with rabbitMQ) and hence there is no need to invest any time on an
alternative transport (not mentioning that people generally prefer to work
on newer trending areas in OpenStack than contribute on a lower-level
messaging layer).
I saw Sean Dague mention in another email that RabbitMQ is used by 95% of
OpenStack users - and therefore does it make sense to invest in ZMQ (legit
question). RabbitMQ has had a lot of issues, but there have been several
commits fixing some of them, so it would make sense IMHO to make
another status update to reevaluate the situation.

For OpenStack to be really production grade at scale, there is a need for
a very strong messaging layer, and this cannot be achieved with such
loose API definitions (regardless of what transport is used). This will be
what distinguishes a great cloud OS platform from a so-so one.
There is also a need for defining more clearly the roadmap for oslo
messaging because it is far from over. I see a need for clarifying the
following areas:
- validation at scale and HA
- security and encryption on the control plane

  Alec



On 6/16/15, 11:25 AM, "Gordon Sim"  wrote:

>On 06/12/2015 09:41 PM, Alec Hothan (ahothan) wrote:
>> One long standing issue I can see is the fact that the oslo messaging
>>API
>> documentation is sorely lacking details on critical areas such as API
>> behavior during fault conditions, load conditions and scale conditions.
>
>I very much agree, particularly on the contract/expectations in the face
>of different failure conditions. Even for those who are critical of the
>pluggability of oslo.messaging, greater clarity here would be of benefit.
>
>As I understand it, the intention is that RPC calls are invoked on a
>server at-most-once, meaning that in the event of any failure, the call
>will only be retried by the olso.messaging layer if it believes it can
>ensure the invocation is not made twice.
>
>If that is correct, stating so explicitly and prominently would be
>worthwhile. The expectation for services using the API would then be to
>decide on any retry themselves. An idempotent call could retry for a
>configured number of attempts perhaps. A non-idempotent call might be
>able to check the result via some other call and decide based on that
>whether to retry. Giving up would then be a last resort. This would help
>increase robustness of the system overall.
>
>Again if the assumption of at-most-once is correct, and explicitly
>stated, the design of the code can be reviewed to ensure it logically
>meets that guarantee and of course it can also be explicitly tested for
>in stress tests at the oslo.messaging level, ensuring there are no
>unintended duplicate invocations. An explicit contract also allows
>different approaches to be assessed and compared.
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
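A caller-side retry policy along the lines described above might look like
the following sketch (names are illustrative; oslo.messaging's real API
differs). Given at-most-once delivery, the caller only retries when it
knows the call is idempotent:

```python
import time


class RPCTimeout(Exception):
    """Stand-in for the timeout an at-most-once RPC layer surfaces."""


def call_with_retry(rpc_call, attempts=3, delay=0.0, idempotent=False):
    """Retry a timed-out RPC call only when the caller knows it is safe.

    With at-most-once semantics, a timeout means the call may or may not
    have executed, so non-idempotent calls get a single attempt and the
    service must check the result via some other call before retrying.
    Assumes attempts >= 1.
    """
    last_exc = None
    for _ in range(attempts if idempotent else 1):
        try:
            return rpc_call()
        except RPCTimeout as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

Giving up (re-raising) remains the last resort, which is exactly the
robustness argument made above.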


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-16 Thread Adam Young

On 06/16/2015 12:48 PM, Morgan Fainberg wrote:

Long term we want to see Keystone move to http:///identity. However the reason for choosing 
5000/35357 for ports was compatibility and avoiding breaking horizon. At the time we did the initial 
change over, sharing the root 80/443 ports with horizon was more than "challenging" since 
horizon needed to be based at "/".

If that issue/assumption for horizon is no longer present, moving keystone to 
be on port 80/443 would be doable. The last factor is that the keystone endpoint 
was a priori knowledge for discovering other services. As long as we update docs 
(possibly 302? For a cycle in devstack from the alternate ports) I think we're 
good to make the change.


The change to do this made its way into Horizon (courtesy of Matt Runge) 
and is in devstack as well, I think.  You need to specify WEBROOT for 
the Horizon install.




--Morgan

Sent via mobile


On Jun 16, 2015, at 09:25, Sean Dague  wrote:

I was just looking at the patches that put Nova under apache wsgi for
the API, and there are a few things that I think are going in the wrong
direction. Largely I think because they were copied from the
lib/keystone code, which we've learned is kind of the wrong direction.

The first is the fact that a big reason for putting {SERVICES} under
apache wsgi is we aren't running on a ton of weird unregistered ports.
We're running on 80 and 443 (when appropriate). In order to do this we
really need to namespace the API urls. Which means that service catalog
needs to be updated appropriately.

I'd expect nova to be running on http://localhost/compute not
http://localhost:8774 when running under wsgi. That's going to probably
interestingly break a lot of weird assumptions by different projects,
but that's part of the reason for doing this exercise. Things should be
using the service catalog, and when they aren't, we need to figure it out.

(Exceptions can be made for third party APIs that don't work this way,
like the metadata server).

I also think this -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
is completely wrong.

The Apache configs should instead specify access rules such that the
installed console entry point of nova-api can be used in place as the
WSGIScript.

This should also make lines like -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
L274 unneeded. (The WSGI Script will be in a known place). It will also
make upgrades much more friendly.

I think that we need to get these things sorted before any further
progression here. Volunteers welcomed to help get us there.

-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-16 Thread Adam Young

On 06/16/2015 12:25 PM, Sean Dague wrote:

I was just looking at the patches that put Nova under apache wsgi for
the API, and there are a few things that I think are going in the wrong
direction. Largely I think because they were copied from the
lib/keystone code, which we've learned is kind of the wrong direction.

The first is the fact that a big reason for putting {SERVICES} under
apache wsgi is we aren't running on a ton of weird unregistered ports.
We're running on 80 and 443 (when appropriate). In order to do this we
really need to namespace the API urls. Which means that service catalog
needs to be updated appropriately.

I'd expect nova to be running on http://localhost/compute not

YES!

I had written this up for just this reason:

https://wiki.openstack.org/URLs

Let's make that the canonical list.

Keystone suffers from the fact that the AUTH_URL is composed in lots of 
places, and people hard-coded port 5000 in... I would like that to die.

http://localhost:8774 when running under wsgi. That's going to probably
interestingly break a lot of weird assumptions by different projects,
but that's part of the reason for doing this exercise. Things should be
using the service catalog, and when they aren't, we need to figure it out.

Amen!


(Exceptions can be made for third party APIs that don't work this way,
like the metadata server).

I also think this -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
is completely wrong.

The Apache configs should instead specify access rules such that the
installed console entry point of nova-api can be used in place as the
WSGIScript.

This should also make lines like -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
L274 unneeded. (The WSGI Script will be in a known place). It will also
make upgrades much more friendly.

I think that we need to get these things sorted before any further
progression here. Volunteers welcomed to help get us there.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [javascript] [horizon] [merlin] [refstack] Javascript Linting

2015-06-16 Thread Michael Krotscheck
On Tue, Jun 16, 2015 at 10:22 AM Tripp, Travis S 
wrote:

> I think agreeing on rules is the bigger problem here and I don’t think all
> the projects should have to agree on rules.


I believe we agree there, mostly. I personally feel there is some benefit
to setting some rules, likely published as an openstack linting plugin,
which enforce things like "Do not use fuzzy versions in your package.json"
and other things that make things unstable. That should be a very carefully
reserved list of rules though.

I've created an eslint configuration file that includes every single rule,
it's high level purpose, and a link to the details on it, and provided it
in a patch against horizon. The intent is that it's a good starting point
from which to activate and deactivate rules that make sense for horizon.

https://review.openstack.org/#/c/192327/


> We’ve spent a good portion of liberty 1 getting the code base cleaned up
> to meet the already adopted horizon rules and it is still in progress.
>

As a side note, the non-voting horizon linting job for javascript things is
waiting for review here: https://review.openstack.org/#/c/16/

My preference would be to see if we can use eslint to accomplish all of
> our currently adopted horizon rules [3][4] AND to also add in the angular
> specific plugin [1][2]. But we can’t do this at the expense of the entire
> liberty release.
>

Again, I agree. The patch I've provided above sets up the horizon eslint
build, and adds about... 10K additional style violations. Since neither of
the builds pass, it's difficult to see the difference, yet either way you
should probably tweak the rules to match horizon's personal preferences.

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposal for changing 1600UTC meeting to 1700 UTC

2015-06-16 Thread Harm Weites

I'm ok with moving to 16:30 UTC instead of staying at 16:00.

I actually prefer it in my evening schedule :) Moving to 16:30 would 
already be a great improvement to the current schedule and should at 
least allow me to not miss everything.


- harmw

Op 12-06-15 om 15:44 schreef Steven Dake (stdake):

Even though 7am is not ideal for the west coast, I'd be willing to go back
that far.  That would put the meeting at the morning school rush for the
west coast folks though (although we are in summer break in the US and we
could renegotiate a time in 3 months when school starts up again if it's a
problem) - so creating a different set of problems for a different set of
people :)

This would be a 1400 UTC meeting.

While I wake up prior to 7am (usually around 5:30), I am not going to put
people through the torture of a 6am meeting in any timezone if I can help
it, so 1400 is the earliest we can go :)

Regards
-steve


On 6/12/15, 4:37 AM, "Paul Bourke"  wrote:


I'm fairly easy on this but, if the issue is that the meeting is running
into people's evening schedules (in EMEA), would it not make sense to
push it back an hour or two into office hours, rather than forward?

On 10/06/15 18:20, Ryan Hallisey wrote:

After some upstream discussion, moving the meeting from 1600 to 1700
UTC does not seem very popular.
It was brought up that changing the time to 16:30 UTC could accommodate
more people.

For the people that attend the 1600 UTC meeting slot, can you post
further feedback to address this?

Thanks,
Ryan

- Original Message -
From: "Jeff Peeler" 
To: "OpenStack Development Mailing List (not for usage questions)"

Sent: Tuesday, June 9, 2015 2:19:00 PM
Subject: Re: [openstack-dev] [kolla] Proposal for changing 1600UTC
meeting to 1700 UTC

On Mon, Jun 08, 2015 at 05:15:54PM +, Steven Dake (stdake) wrote:

Folks,

Several people have messaged me from EMEA timezones that 1600UTC falls
right into the middle of their family life (ferrying kids from school
and what-not), and 1700UTC, while not perfect, would be a better fit
time-wise.

For all people that intend to attend the 1600 UTC meeting, could I get your
feedback on this thread on whether a change of the 1600UTC timeslot to 1700UTC
would be acceptable?  If it wouldn't be acceptable, please chime in as
well.

Both 1600 and 1700 UTC are fine for me.

Jeff


_
_
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev











Re: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

2015-06-16 Thread Harm Weites

Thanks guys, both for all the nice words and the acceptance!

harmw

On 16-06-15 at 16:32, Steven Dake (stdake) wrote:

Its unanimous!  Welcome to the core reviewer team Harm!

Regards
-steve


From: Steven Dake mailto:std...@cisco.com>>
Reply-To: "OpenStack Development Mailing List (not for usage 
questions)" >

Date: Sunday, June 14, 2015 at 10:48 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kolla] Proposal for new core-reviewer Harm 
Waites


Hey folks,

I am proposing Harm Waites for the Kolla core team.  He did a
fantastic job implementing Designate in a container[1] which I’m
sure was incredibly difficult and never gave up even though there
were 13 separate patch reviews :)  Beyond Harm’s code
contributions, he is responsible for 32% of the “independent”
reviews[2], where independents compose 20% of our total reviewer
output.  I think we should judge core reviewers on more than
output, and I knew Harm was core-reviewer material with his
fantastic review of the cinder container, where he picked out 26
specific things that could be broken that other core reviewers may
have missed ;) [3].  His other reviews are also as thorough as
this particular review was.  Harm is active in IRC and in our
meetings for which his TZ fits.  Finally Harm has agreed to
contribute to the ansible-multi implementation that we will finish
in the liberty-2 cycle.

Consider my proposal to count as one +1 vote.

Any Kolla core is free to vote +1, abstain, or vote –1.  A –1 vote
is a veto for the candidate, so if you are on the fence, best to
abstain :)  Since our core team has grown a bit, I’d like 3 core
reviewer +1 votes this time around (vs Sam’s 2 core reviewer
votes).  I will leave the voting open until June 21  UTC.  If
the vote is unanimous prior to that time or a veto vote is
received, I’ll close voting and make appropriate adjustments to
the gerrit groups.

Regards
-steve

[1] https://review.openstack.org/#/c/182799/
[2]

http://stackalytics.com/?project_type=all&module=kolla&company=%2aindependent
[3] https://review.openstack.org/#/c/170965/







Re: [openstack-dev] Getting rid of suds, which is unmaintained, and which we want out of Debian

2015-06-16 Thread Pete Zaitcev
On Thu, 11 Jun 2015 11:08:55 +0300
Duncan Thomas  wrote:

> There's only one cinder driver using it (Nimble Storage), and it seems to
> be using only very basic features. There are half a dozen suds forks on
> PyPI, or there's pysimplesoap, which the Debian maintainer recommends. None
> of the above are currently packaged for Ubuntu that I can see, so can
> anybody in-the-know make a reasoned recommendation as to what to move to?

In the instances I had to deal with (talking to VMware), it was easier and
better to roll your own with python-xml and libhttp.

-- P



Re: [openstack-dev] [Manila] Network path between admin network and shares

2015-06-16 Thread Sturdevant, Mark

Yes, I think this is possible with the HP 3PAR.  I'd have to test more to be 
sure, but if I understand the plan correctly, it'll work.  However, there are 
limited resources for doing this, so it'll only work if resources allow.  I'm 
thinking that the administrator config+startup/setup code would set up admin 
network access and hold those resources to make sure that migration is possible.

I could see a scenario where a backend is usable for shares but can't spare 
the extra resources to allow migration.  That could be a problem.  I'm not sure 
how/if we'd support that.




From: Rodrigo Barbieri [rodrigo.barbieri2...@gmail.com]
Sent: Thursday, June 11, 2015 1:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Manila] Network path between admin network and shares

Hello all,

There has been a lot of discussion around Share Migration lately. This feature 
has two main code paths:

- Driver Migration: optimized migration of shares from backend A to backend B 
where both backends belong to the same driver vendor. The driver is responsible 
for migrating and just returns a model update dictionary with the necessary 
changes to the DB entry.

- Generic Migration: This is the universal fallback for migrating a share from 
backend A to backend B, from any vendor to any vendor. In order to do this we 
have an approach where a machine in the admin network mounts both shares 
(source and destination) and copies the files. The problem is that it has been 
unusual so far in Manila's design for a machine in the admin network to access 
shares served inside the cloud; a network path must exist for this to 
happen.

I was able to code this change for the generic driver in the Share Migration 
prototype (https://review.openstack.org/#/c/179791/).

We are not sure if all driver vendors are able to accomplish this. We would 
like to ask you to reply to this email if you are not able (or even not sure 
you are able) to create a network path from your backend to the admin network, 
so we can better assess the feasibility of this feature.

More information in blueprint: 
https://blueprints.launchpad.net/manila/+spec/share-migration


Regards,
--
Rodrigo Barbieri
Computer Scientist
Federal University of São Carlos
+55 (11) 96889 3412



Re: [openstack-dev] [Mistral] Proposal for the Resume Feature

2015-06-16 Thread W Chan
Here's the etherpad link.  I replied to the comments/feedbacks there.
Please feel free to continue the conversation there.
https://etherpad.openstack.org/p/mistral-resume


Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-16 Thread Gordon Sim

On 06/12/2015 09:41 PM, Alec Hothan (ahothan) wrote:

One long standing issue I can see is the fact that the oslo messaging API
documentation is sorely lacking details on critical areas such as API
behavior during fault conditions, load conditions and scale conditions.


I very much agree, particularly on the contract/expectations in the face 
of different failure conditions. Even for those who are critical of the 
pluggability of oslo.messaging, greater clarity here would be of benefit.


As I understand it, the intention is that RPC calls are invoked on a 
server at-most-once, meaning that in the event of any failure, the call 
will only be retried by the oslo.messaging layer if it believes it can 
ensure the invocation is not made twice.


If that is correct, stating so explicitly and prominently would be 
worthwhile. The expectation for services using the API would then be to 
decide on any retry themselves. An idempotent call could retry for a 
configured number of attempts perhaps. A non-idempotent call might be 
able to check the result via some other call and decide based on that 
whether to retry. Giving up would then be a last resort. This would help 
increase robustness of the system overall.
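The retry expectations described above can be illustrated with a small application-level wrapper. This is a hedged sketch only: `RPCTimeout`, `call_with_retry`, and `check_result` are hypothetical names chosen for illustration, not part of the oslo.messaging API.

```python
import time


class RPCTimeout(Exception):
    """Raised when an RPC call gets no reply (hypothetical stand-in)."""


def call_with_retry(invoke, check_result=None, attempts=3, delay=1.0):
    """Retry an at-most-once RPC call at the application layer.

    invoke: zero-argument callable performing the RPC call.
    check_result: optional callable used for non-idempotent calls; it
        should return the result if the previous (possibly delivered)
        invocation already took effect, or None if it did not.
    """
    last_exc = None
    for _ in range(attempts):
        try:
            return invoke()
        except RPCTimeout as exc:
            last_exc = exc
            if check_result is not None:
                # Non-idempotent call: check via a separate read-only
                # call whether the invocation actually happened.
                result = check_result()
                if result is not None:
                    return result
            time.sleep(delay)
    # Giving up is the last resort, as described above.
    raise last_exc
```

An idempotent call would pass only `invoke`; a non-idempotent one would supply `check_result` to verify the outcome via some other call before retrying.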


Again if the assumption of at-most-once is correct, and explicitly 
stated, the design of the code can be reviewed to ensure it logically 
meets that guarantee and of course it can also be explicitly tested for 
in stress tests at the oslo.messaging level, ensuring there are no 
unintended duplicate invocations. An explicit contract also allows 
different approaches to be assessed and compared.




Re: [openstack-dev] [grenade] future direction on partial upgrade support

2015-06-16 Thread Jeremy Stanley
On 2015-06-16 12:58:18 -0400 (-0400), Sean Dague wrote:
[...]
> I think the only complexity here is the fact that grenade.sh
> implicitly drives stack.sh. Which means one of:
> 
> 1) devstack-gate could build the worker first, then run grenade.sh
> 
> 2) we make it so grenade.sh can execute in parts more easily, so
> it can hand off running stack.sh to something else.
> 
> 3) we make grenade understand the subnode for partial upgrade, so
> it will run the stack phase on the subnode itself (given
> credentials).
[...]

As a point of reference, have a look at Clark's change which
introduced Ansible for driving commands on arbitrary systems in a
devstack-gate based job:

https://review.openstack.org/172614

The idea is that you wrap all relevant commands in calls to ansible,
and then the only additional logic you need to abstract out is the
decision of which node(s) you want running those commands. It
generalizes fine to a single-node solution so that you don't need to
maintain separate multi-node-vs-single-node frameworks.
-- 
Jeremy Stanley



Re: [openstack-dev] stackforge projects are not second class citizens

2015-06-16 Thread Georgy Okrokvertskhov
In the Murano project we do see a positive impact from the Big Tent model. Since
Murano was accepted as part of the Big Tent community we have had a lot of
conversations with potential users. They were driven exactly by the fact
that Murano is now "officially" recognized in the OpenStack community. It might
be a wrong perception, but this is the perception they have.
Most of the folks we met are enterprises for whom catalog functionality is
interesting. The problem with enterprises is that their thinking periods
are often more than 6-9 months. They are not individuals who can start
contributing overnight. They need some time to make the proper org
structure changes to organize a development process. The benefit of that is
more stable and predictable development over time once they start
contributing.

Thanks
Gosha



On Tue, Jun 16, 2015 at 4:44 AM, Jay Pipes  wrote:

> You may also find my explanation about the Big Tent helpful in this
> interview with Niki Acosta and Jeff Dickey:
>
> http://blogs.cisco.com/cloud/ospod-29-jay-pipes
>
> Best,
> -jay
>
>
> On 06/16/2015 06:09 AM, Flavio Percoco wrote:
>
>> On 16/06/15 04:39 -0400, gordon chung wrote:
>>
>>> i won't speak to whether this confirms/refutes the usefulness of the
>>> big tent.
>>> that said, probably as a by-product of being in non-stop meetings with
>>> sales/
>>> marketing/managers for last few days, i think there needs to be better
>>> definitions (or better publicised definitions) of what the goals of
>>> the big
>>> tent are. from my experience, they've heard of the big tent and they
>>> are, to
>>> varying degrees, critical of it. one common point is that they see it as
>>> greater fragmentation of a process that is already too slow.
>>>
>>
>> Not saying this is the final answer to all the questions but at least
>> it's a good place to start from:
>>
>>
>> https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/the-big-tent-a-look-at-the-new-openstack-projects-governance
>>
>>
>>
>> That said, this is great feedback and we may indeed need to do a
>> better job to explain the big tent. That presentation, I believe, was
>> an attempt to do so.
>>
>> Flavio
>>
>>
>>> just giving my fly-on-the-wall view from the other side.
>>>
>>> On 15/06/2015 6:20 AM, Joe Gordon wrote:
>>>
>>>One of the stated problems the 'big tent' is supposed to solve is:
>>>
>>>'The binary nature of the integrated release results in projects
>>> outside
>>>the integrated release failing to get the recognition they deserve.
>>>"Non-official" projects are second- or third-class citizens which
>>> can't get
>>>development resources. Alternative solutions can't emerge in the
>>> shadow of
>>>the blessed approach. Becoming part of the integrated release,
>>> which was
>>>originally designed to be a technical decision, quickly became a
>>>life-or-death question for new projects, and a political/community
>>>minefield.' [0]
>>>
>>>Meaning projects should see an uptick in development once they drop
>>> their
>>>second-class citizenship and join OpenStack. Now that we have been
>>> living
>>>in the world of the big tent for several months now, we can see if
>>> this
>>>claim is true.
>>>
>>>Below is a list of the first few few projects to join OpenStack
>>> after the
>>>big tent, All of which have now been part of OpenStack for at least
>>> two
>>>months.[1]
>>>
>>>* Magnum -  Tue Mar 24 20:17:36 2015
>>>* Murano - Tue Mar 24 20:48:25 2015
>>>* Congress - Tue Mar 31 20:24:04 2015
>>>* Rally - Tue Apr 7 21:25:53 2015
>>>
>>>When looking at stackalytics [2] for each project, we don't see any
>>>noticeably change in number of reviews, contributors, or number of
>>> commits
>>>from before and after each project joined OpenStack.
>>>
>>>So what does this mean? At least in the short term moving from
>>> Stackforge
>>>to OpenStack does not result in an increase in development
>>> resources (too
>>>early to know about the long term).  One of the three reasons for
>>> the big
>>>tent appears to be unfounded, but the other two reasons hold.  The
>>> only
>>>thing I think this information changes is what peoples expectations
>>> should
>>>be when applying to join OpenStack.
>>>
>>>[0] https://github.com/openstack/governance/blob/master/resolutions/
>>>20141202-project-structure-reform-spec.rst
>>>[1] Ignoring OpenStackClent since the repos were always in
>>> OpenStack it
>>>just didn't have a formal home in the governance repo.
>>>[2] http://stackalytics.com/?module=magnum-group&metric=commits
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> gord
>>>
>>>
>>

Re: [openstack-dev] [Security] Nominating Travis McPeak for Security CoreSec

2015-06-16 Thread michael mccune

On 06/16/2015 05:28 AM, Clark, Robert Graham wrote:

I’d like to nominate Travis for a CoreSec position as part of the
Security project. - CoreSec team members support the VMT with extended
consultation on externally reported vulnerabilities.

Travis has been an active member of the Security project for a couple of
years; he’s part of the Bandit subproject and has been very active in
discussions over this time. He’s also found multiple vulnerabilities and
has experience with the VMT process.


+1

i'm not a core member, but Travis is very knowledgeable about the 
security domain and has been welcoming and helpful. he would make a 
great addition.


mike



Re: [openstack-dev] [puppet] (officially) deprecate stable/{grizzly, havana} branches

2015-06-16 Thread Richard Raseley

Matt Fischer wrote:

+1 from me for deprecation.

I'd also like to know or have an official policy for future
deprecations, such as when will we deprecate Icehouse?

On Tue, Jun 16, 2015 at 9:50 AM, Emilien Macchi mailto:emil...@redhat.com>> wrote:

Hi,

Some of our modules have stable/grizzly and stable/havana branches. Some
of them have the CI broken due to rspec issues that would require some
investigation and time if we wanted to fix it.

We would like to know who plans to backport some patches in these
branches?

If nobody plans to do that, we will leave the branches as they are now but
won't officially support them.

By support I mean maintaining the CI jobs green (rspec, syntax, etc),
fixing bugs and adding new features.

Any feedback is welcome!

Regards,
--
Emilien Macchi



I echo your +1.

Perhaps support the most current stable version, plus one stable version back?

In that example, once the Liberty release of modules (or a particular 
module) is cut, we would support Liberty and Kilo. When the same happens 
for M, we would deprecate Kilo and support M and Liberty.


Stable -2 also seems sane - I don't have a good sense of how far people 
are generally behind.




Re: [openstack-dev] [puppet] (officially) deprecate stable/{grizzly, havana} branches

2015-06-16 Thread David Moreau Simard
+1 for deprecation

-- 
David Moreau Simard

On 2015-06-16 11:54 AM, Emilien Macchi wrote:
> Hi,
>
> Some of our modules have stable/grizzly and stable/havana branches. Some
> of them have the CI broken due to rspec issues that would require some
> investigation and time if we wanted to fix it.
>
> We would like to know who plans to backport some patches in these branches?
>
> If nobody plans to do that, we will leave the branches as they are now but
> won't officially support them.
>
> By support I mean maintaining the CI jobs green (rspec, syntax, etc),
> fixing bugs and adding new features.
>
> Any feedback is welcome!
>
> Regards,




Re: [openstack-dev] [Mistral] Proposal for the Resume Feature

2015-06-16 Thread Dmitri Zimine
+1, great write-up Winson,

I propose we move the discussion to an etherpad and flesh out details there so 
it won’t get lost in a long thread. 
Winson, would you care to create one and post it here? 

Re: ‘error state’: I think it’s not absolutely necessary: pause/resume can be 
done without enabling the ‘error->running’ transition, 
by making the default task policy `on-error: pause` so that, if the user 
chooses, the workflow goes into the paused state on errors.
But it may be convenient, so no strong opinion on this yet. 


Re: checkpoints and roll-backs - yes! I see this and pause-resume as 
complementary. To be precise on terminology, workflows don’t “roll back” - 
that is more of a transactional term; they “compensate”, by running a 
‘compensation workflow’ that gets the system back to a checkpoint state. 
At the end of the compensation process the system goes into a “paused” state 
where it can be resumed once the ‘cause of failure’ is fixed. 

DZ. 

On Jun 15, 2015, at 10:25 PM, BORTMAN, Limor (Limor) 
 wrote:

> +1,
> I just have one question. Do we want to enable resume for a WF in error state?
> I mean, it isn't really a "resume"; it should be more of a rerun, don't you think?
> So in an error state we will create a new executor and just re-run it.
> Thanks, Limor
> 
> 
> 
> -Original Message-
> From: Lingxian Kong [mailto:anlin.k...@gmail.com] 
> Sent: Tuesday, June 16, 2015 5:47 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Mistral] Proposal for the Resume Feature
> 
> Thanks Winson for the write-up, very detailed information. (the format was 
> good)
> 
> I'm totally in favor of your idea; actually, I really think your proposal is 
> complementary to my proposal in 
> https://etherpad.openstack.org/p/vancouver-2015-design-summit-mistral,
> please see the 'Workflow rollback/recovery' section.
> 
> What I want to do is configure some 'checkpoints' throughout the workflow, and 
> if some task failed, we could roll back the execution to some checkpoint and 
> resume the whole workflow after we have fixed the problem, as if the 
> execution had never failed.
> 
> It's just an initial idea. I'm waiting for our discussion to see if it really 
> makes sense to users, to get feedback; then we can talk about the 
> implementation and cooperation.
> 
> On Tue, Jun 16, 2015 at 7:51 AM, W Chan  wrote:
>> Resending to see if this fixes the formatting for outlines below.
>> 
>> 
>> I want to continue the discussion on the workflow "resume" feature.
>> 
>> 
>> Resuming from our last conversation @
>> http://lists.openstack.org/pipermail/openstack-dev/2015-March/060265.h
>> tml. I don't think we should limit how users resume. There may be 
>> different possible scenarios. A user can fix the environment or 
>> condition that led to the failure of the current task and then just 
>> re-run the failed task.  Or a user can actually fix the 
>> environment/condition, including fixing what the task was doing, and 
>> then just want to continue with the next set of task(s).
>> 
>> 
>> The following is a list of proposed changes.
>> 
>> 
>> 1. A new CLI operation to resume WF (i.e. mistral workflow-resume).
>> 
>>A. If no additional info is provided, assume this WF is manually 
>> paused and there are no task/action execution errors. The WF state is 
>> updated to RUNNING. Update using the put method @ 
>> ExecutionsController. The put method checks that there's no task/action 
>> execution errors.
>> 
>>B. If WF is in an error state
>> 
>>i. To resume from failed task, the workflow-resume command 
>> requires the WF execution ID, task name, and/or task input.
>> 
>>ii. To resume from failed with-items task
>> 
>>a. Re-run the entire task (re-run all items) requires WF
>> execution ID, task name and/or task input.
>> 
>>b. Re-run a single item requires WF execution ID, task 
>> name, with-items index, and/or task input for the item.
>> 
>>c. Re-run selected items requires WF execution ID, task 
>> name, with-items indices, and/or task input for each items.
>> 
>>- To resume from the next task(s), the workflow-resume 
>> command requires the WF execution ID, failed task name, output for the 
>> failed task, and a flag to skip the failed task.
>> 
>> 
>> 2. Make ERROR -> RUNNING as valid state transition @ 
>> is_valid_transition function.
>> 
>> 
>> 3. Add a comments field to Execution model. Add a note that indicates 
>> the execution is launched by workflow-resume. Auto-populated in this case.
>> 
>> 
>> 4. Resume from failed task.
>> 
>>A. Re-run task with the same task inputs >> POST new action 
>> execution for the task execution @ ActionExecutionsController
>> 
>>B. Re-run task with different task inputs >> POST new action 
>> execution for the task execution, allowed for different input @ 
>> ActionExecutionsController
>> 
>> 
>> 5. Resume from next task(s).
>> 
>>A. Inject a noop task execu
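The state-machine change proposed in item 2 above (making ERROR -> RUNNING a valid transition) could be sketched as follows. The transition table is purely illustrative and assumed for this example; it is not Mistral's actual implementation.

```python
# Illustrative workflow-execution state machine. The ERROR -> RUNNING
# edge is the addition proposed in item 2 so a failed workflow can be
# resumed; the rest of the table is an assumed subset.
VALID_TRANSITIONS = {
    'RUNNING': {'PAUSED', 'SUCCESS', 'ERROR'},
    'PAUSED': {'RUNNING'},
    'ERROR': {'RUNNING'},  # proposed: allow resuming a failed workflow
    'SUCCESS': set(),      # terminal state
}


def is_valid_transition(old_state, new_state):
    """Return True if a workflow execution may move between the states."""
    return new_state in VALID_TRANSITIONS.get(old_state, set())
```

With this table, `workflow-resume` on a failed execution becomes a legal ERROR -> RUNNING move, while terminal states such as SUCCESS stay final.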

Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Adrian Otto

On Jun 15, 2015, at 9:10 PM, Keith Bray 
mailto:keith.b...@rackspace.com>> wrote:

Regardless of what the API defaults to, could we have the CLI prompt/warn so 
that the user easily knows that both options exist?  Is there a precedent 
within OpenStack for a similar situation?

E.g.
> solum app delete MyApp
 Do you want to also delete your logs? (default is Yes):  [YES/no]
  NOTE, if you choose No, application logs will remain on your 
account. Depending on your service provider, you may incur on-going storage 
charges.

Interactive choices like that one can make it more confusing for developers who 
want to script with the CLI. My preference would be to label the app delete 
help text to clearly indicate that it deletes logs. Today the help text is:

solum app delete 
Delete an application and all related artifacts.

Initial alternative:

solum app delete 
Delete an application and all related artifacts, including logs.

We could add the --keep-logs option Murali mentioned and say this instead:

solum app delete [--keep-logs] 
Delete an application and all related artifacts. Logs are kept if 
--keep-logs is used.

This should conform to the principle of least surprise, allow for keeping logs 
around for those who want them, and not interfere with those wanting to script 
with the CLI.
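As a sketch of what that interface could look like, here is a minimal argparse model of the proposed flag. This is purely illustrative; it is not Solum's actual CLI code, and all names are assumptions.

```python
import argparse


def build_parser():
    """Build a toy parser modeling the proposed 'solum app delete' command."""
    parser = argparse.ArgumentParser(prog='solum')
    commands = parser.add_subparsers(dest='command')
    app_cmds = commands.add_parser('app').add_subparsers(dest='action')
    delete = app_cmds.add_parser(
        'delete',
        description='Delete an application and all related artifacts. '
                    'Logs are kept if --keep-logs is used.')
    delete.add_argument('app_name', help='Name of the application.')
    delete.add_argument('--keep-logs', action='store_true',
                        help='Keep application logs instead of deleting them.')
    return parser


# Scripting stays simple: no interactive prompt, just an explicit flag.
args = build_parser().parse_args(['app', 'delete', 'MyApp', '--keep-logs'])
```

Because the override is a plain flag rather than an interactive prompt, the command behaves identically in scripts and at the terminal, which is the "least surprise" property argued for above.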

Cheers,

Adrian

Thanks,
-Keith

From: Devdatta Kulkarni 
mailto:devdatta.kulka...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, June 15, 2015 9:56 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?

Yes, the log deletion should be optional.

The question is what should be the default behavior. Should the default be to 
delete the logs and provide a flag to keep them, or to keep the logs by default 
and provide an override flag to delete them?

Delete-by-default is consistent with the view that when an app is deleted, all 
its artifacts are deleted (the app's meta data, the deployment units (DUs), and 
the logs). This behavior is also useful in our current state when the app 
resource and the CLI are in flux. For now, without a way to specify a flag, 
either to delete the logs or to keep them, delete-by-default behavior helps us 
clean all the log files from the application's cloud files container when an 
app is deleted.
This is very useful for our CI jobs. Without this, we end up with lots of log 
files in the application's container and have to resort to separate scripts to 
delete them after an app is deleted.

Once the app resource and CLI stabilize it should be straightforward to change 
the default behavior if required.

- Devdatta


From: Adrian Otto mailto:adrian.o...@rackspace.com>>
Sent: Friday, June 12, 2015 6:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

Team,

We currently delete logs for an app when we delete the app[1].

https://bugs.launchpad.net/solum/+bug/1463986

Perhaps there should be an optional setting at the tenant level that determines 
whether your logs are deleted or not by default (set to off initially), and an 
optional parameter to our DELETE calls that allows for the opposite action from 
the default to be specified if the user wants to override it at the time of the 
deletion. Thoughts?

Thanks,

Adrian



Re: [openstack-dev] [Glance] [all] Proposal for Weekly Glance Drivers meeting.

2015-06-16 Thread Nikhil Komawar
FYI, We will be closing the vote on Friday, June 19 at 1700 UTC.

On 6/15/15 7:41 PM, Nikhil Komawar wrote:
> Hi,
>
> As per the discussion during the last weekly Glance meeting (14:51:42 at
> http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-06-11-14.00.log.html
> ), we will begin a short drivers' meeting where anyone can come and get
> more feedback.
>
> The purpose is to enable those who need multiple drivers in the same
> place to easily co-ordinate, schedule & collaborate on the specs, get
> core-reviewers assigned to their specs, etc. This will also enable more
> synchronous-style feedback and help with more collaboration, as well as
> with dedicated time for giving quality input on the specs. All are welcome
> to attend, and attendance from drivers is not mandatory but encouraged.
> Initially it would be a 30 min meeting, and if the need persists we will
> extend the period.
>
> Please vote on the proposed time and date:
> https://review.openstack.org/#/c/192008/ (Note: Run the tests for your
> vote to ensure we are considering feasible & non-conflicting times.) We
> will start the meeting next week unless there are strong conflicts.
>

-- 

Thanks,
Nikhil




Re: [openstack-dev] [puppet] (officially) deprecate stable/{grizzly, havana} branches

2015-06-16 Thread Matt Fischer
+1 from me for deprecation.

I'd also like to know or have an official policy for future deprecations,
such as when will we deprecate Icehouse?

On Tue, Jun 16, 2015 at 9:50 AM, Emilien Macchi  wrote:

> Hi,
>
> Some of our modules have stable/grizzly and stable/havana branches. Some
> of them have the CI broken due to rspec issues that would require some
> investigation and time if we wanted to fix it.
>
> We would like to know who plans to backport some patches in these branches?
>
> If nobody plans to do that, we will leave the branches as they are now but
> won't officially support them.
>
> By support I mean maintaining the CI jobs green (rspec, syntax, etc),
> fixing bugs and adding new features.
>
> Any feedback is welcome!
>
> Regards,
> --
> Emilien Macchi
>
>
>
>


Re: [openstack-dev] [javascript] [horizon] [merlin] [refstack] Javascript Linting

2015-06-16 Thread Tripp, Travis S
I’m copying and pasting from the other thread some info below.

I think agreeing on rules is the bigger problem here and I don’t think all
the projects should have to agree on rules. We’ve spent a good portion of
liberty 1 getting the code base cleaned up to meet the already adopted
horizon rules and it is still in progress.

My preference would be to see if we can use eslint to accomplish all of
our currently adopted horizon rules [3][4] AND to also add in the angular
specific plugin [1][2]. But we can’t do this at the expense of the entire
liberty release.

-- My previous email below:

We’ve adopted the John Papa style guide for Angular in horizon [0]. On
cursory inspection ESLint seems to have an Angular-specific plugin [1]
that could be very useful to us, but we’d need to evaluate it in depth. It
looks like there was some discussion about this on the style guide not too
long ago [2]. The jscs rules we have [3] are very generic code-formatting
rules that are helpful but don’t really provide any Angular-specific
help. Here are the jshint rules [4]. It would be quite nice to put all
this goodness across tools into a single tool configuration if possible.

[0] http://docs.openstack.org/developer/horizon/contributing.html#john-papa-style-guide
[1] https://www.npmjs.com/package/eslint-plugin-angular
[2] https://github.com/johnpapa/angular-styleguide/issues/194
[3] https://github.com/openstack/horizon/blob/master/.jscsrc
[4] https://github.com/openstack/horizon/blob/master/.jshintrc


From:  "Rob Cresswell   (rcresswe)" 
Reply-To:  OpenStack List 
Date:  Tuesday, June 16, 2015 at 1:40 AM
To:  OpenStack List 
Subject:  Re: [openstack-dev] [javascript] [horizon] [merlin] [refstack]
Javascript Linting


So my view here is that I don’t particularly mind which plugin/ set of
plugins Horizon uses, but the biggest deterrent is the workload. We’re
already cleaning everything up quite productively, so I’m reluctant to
swap. That said, the cleanup from JSCS/
 JSHint should be largely relevant to ESLint. Michael, do you have any
ideas on the numbers/ workload behind a possible swap?

With regards to licensing, does this mean we must stop using JSHint, or
that we’re still okay to use it as a dev tool? Seems that if the former is
the case, then the decision is made for us.

Rob



From: Michael Krotscheck 
Reply-To: "OpenStack Development Mailing List (not for usage questions)"

Date: Tuesday, 16 June 2015 00:36
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: [openstack-dev] [javascript] [horizon] [merlin] [refstack]
Javascript Linting


I'm restarting this thread with a different subject line to get a broader
audience. Here's the original thread:
http://lists.openstack.org/pipermail/openstack-dev/2015-June/066040.html


The question at hand is "What will be OpenStack's javascript equivalent of
flake8". I'm going to consider the need for common formatting rules to be
self-evident. Here's the lay of the land so far:

* Horizon currently uses JSCS.
* Refstack uses ESLint.
* Merlin doesn't use anything.
* StoryBoard (deprecated) uses ESLint.
* Nobody agrees on rules.


JSCS

JSCS Stands for "JavaScript CodeStyle". Its mission is to enforce a style
guide, yet it does not check for potential bugs, variable overrides, etc.
For those tests, the team usually defers to JSHint (preferred) or ESLint.

JSHint
Ever since JSCS was extracted from JSHint, it has actively removed rules
that enforce code style, and focused on findbug style tests instead.
JSHint still contains the "Do no evil" license, therefore is not an option
for OpenStack, and has been disqualified.

ESLint
ESLint's original mission was to be an OSI compliant replacement for
JSHint, before the JSCS split. It wants to be a one-tool solution.

My personal opinion/recommendation: Based on the above, I recommend we use
ESLint. My reasoning: It's one tool, it's extensible, it does both
codestyle things and bug finding things, and it has a good license. JSHint
is disqualified because of the license.
 JSCS is disqualified because it is too focused, and only partially useful
on its own.

I understand that this will mean some work by the Horizon team to bring
their code in line with a new parser, however I personally consider this
to be a good thing. If the code is good to begin with, it shouldn't be
that difficult.

This thread is not there to argue about which rules to enforce. Right now
I just want to nail down a tool, so that we can (afterwards) have a
discussion about which rules to activate.
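For concreteness, a minimal ESLint configuration along these lines could look like the sketch below (assuming the angular plugin from [1] registers itself under the "angular" name; the rule set is deliberately left empty, since choosing rules is the later discussion):

```json
{
  "extends": "eslint:recommended",
  "plugins": ["angular"],
  "env": {
    "browser": true
  },
  "rules": {}
}
```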

Michael



Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-16 Thread Sean Dague
On 06/16/2015 12:49 PM, Clint Byrum wrote:
> Excerpts from Sean Dague's message of 2015-06-16 06:22:23 -0700:
>> FYI,
>>
>> One of the things that came out of the summit for Devstack plans going
>> forward is to trim it back to something more opinionated and remove a
>> bunch of low use optionality in the process.
>>
>> One of those branches to be trimmed is all the support for things beyond
>> RabbitMQ in the rpc layer. RabbitMQ is what's used by 95%+ of our
>> community, that's what the development environment should focus on.
>>
>> The patch to remove all of this is here -
>> https://review.openstack.org/#/c/192154/. Expect this to merge by the
>> end of the month. If people are interested in non RabbitMQ external
>> plugins, now is the time to start writing them. The oslo.messaging team
>> already moved their functional test installation for alternative
>> platforms off of devstack, so this should impact a very small number of
>> people.
>>
> 
> The recent spec we added to define a policy for oslo.messaging drivers is
> intended as a way to encourage that 5% who feels a different messaging
> layer is critical to participate upstream by adding devstack-gate jobs
> and committing developers to keep them stable. This change basically
> slams the door in their face and says "good luck, we don't actually care
> about accommodating you." This will drive them more into the shadows,
> and push their forks even further away from the core of the project. If
> that's your intention, then we need to have a longer conversation where
> you explain to me why you feel that's a good thing.

I believe it is not the responsibility of the devstack team to support
every possible backend one could imagine and carry that technical debt
in tree, confusing new users in the process that any of these things
might actually work. I believe that if you feel that your spec assumed
that was going to be the case, you made a large incorrect externalities
assumption.

> Also, I take issue with the value assigned to dropping it. If that 95%
> is calculated as orgs_running_on_rabbit/orgs then it's telling a really
> lop-sided story. I'd rather see compute_nodes_on_rabbit/compute_nodes.
> 
> I'd like to propose that we leave all of this in tree to match what is
> in oslo.messaging. I think devstack should follow oslo.messaging and
> deprecate the ones that oslo.messaging deprecates. Otherwise I feel like
> we're Vizzini cutting the rope just as The Dread Pirate 0mq is about to
> climb the last 10 meters to the top of the cliffs of insanity and battle
> RabbitMQ left handed. I know, "inconceivable" right?

We have an external plugin mechanism for devstack. That's a viable
option here. People will have to own and do that work, instead of
expecting the small devstack team to do that for them. I believe I left
enough of a hook in place that it's possible.

That would also let them control the code relevant to their plugin,
because there is no way that devstack was going to gate against other
backends here, so we'd end up breaking them pretty often, and it would
take a while to fix them in tree.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] CLI problem

2015-06-16 Thread Ali Reza Zamani
It is weird. I deleted my devstack and redid everything. I am using the
same command and everything is fine.


Thanks,
Regards,

On 06/16/2015 01:03 PM, Steve Martinelli wrote:
What was the command you used? What was the output? Can you try 
running it with --debug? More information is needed here.


It would also probably be quicker to jump on IRC and ask around.

Thanks,

Steve Martinelli
OpenStack Keystone Core

"Ali Reza Zamani"  wrote on 06/16/2015 
12:46:16 PM:


> From: "Ali Reza Zamani" 
> To: openstack-dev@lists.openstack.org
> Date: 06/16/2015 12:47 PM
> Subject: [openstack-dev] CLI problem
>
> Hi all,
>
> I have a problem in creating the instances. When I create the instances
> using GUI web interface everything is fine. But when I do it using CLI
> after spawning it says Error.
> And the error is: ne
>
> 




Re: [openstack-dev] CLI problem

2015-06-16 Thread Steve Martinelli
What was the command you used? What was the output? Can you try running it 
with --debug? More information is needed here.

It would also probably be quicker to jump on IRC and ask around.

Thanks,

Steve Martinelli
OpenStack Keystone Core

"Ali Reza Zamani"  wrote on 06/16/2015 
12:46:16 PM:

> From: "Ali Reza Zamani" 
> To: openstack-dev@lists.openstack.org
> Date: 06/16/2015 12:47 PM
> Subject: [openstack-dev] CLI problem
> 
> Hi all,
> 
> I have a problem in creating the instances. When I create the instances
> using GUI web interface everything is fine. But when I do it using CLI
> after spawning it says Error.
> And the error is: ne
> 
> 


Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-16 Thread Carl Baldwin
On Tue, Jun 16, 2015 at 12:33 AM, Kevin Benton  wrote:
>>Do these kinds of test even make sense? And are they feasible at all? I
>> doubt we have any framework for injecting anything in neutron code under
>> test.
>
> I was thinking about this in the context of a lot of the fixes we have for
> other concurrency issues with the database. There are several exception
> handlers that aren't exercised in normal functional, tempest, and API tests
> because they require a very specific order of events between workers.
>
> I wonder if we could write a small shim DB driver that wraps the python one
> for use in tests that just makes a desired set of queries take a long time
> or fail in particular ways? That wouldn't require changes to the neutron
> code, but it might not give us the right granularity of control.

Might be worth a look.
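As a rough illustration of that shim idea (not Neutron code; every name here is invented), a wrapper cursor could match queries by regex and run an injected action, such as a sleep or a raised exception, before delegating to the real driver:

```python
import re
import sqlite3


class FaultInjectingCursor:
    """Illustrative shim: wraps a DB-API cursor and runs an injected
    action (delay, exception, ...) before queries matching a pattern."""

    def __init__(self, real_cursor, rules):
        self._cursor = real_cursor
        self._rules = rules  # list of (regex, callable) pairs

    def execute(self, statement, *args, **kwargs):
        for pattern, action in self._rules:
            if re.search(pattern, statement):
                action()  # e.g. time.sleep(5) or raise a fake deadlock
        return self._cursor.execute(statement, *args, **kwargs)

    def __getattr__(self, name):
        # Delegate everything else (fetchone, close, ...) unchanged.
        return getattr(self._cursor, name)


# Demo against sqlite3: record that the injection hook fired on INSERT.
conn = sqlite3.connect(":memory:")
injected = []
cur = FaultInjectingCursor(
    conn.cursor(), [(r"^INSERT", lambda: injected.append("hook fired"))])
cur.execute("CREATE TABLE ports (id INTEGER)")
cur.execute("INSERT INTO ports VALUES (1)")
cur.execute("SELECT count(*) FROM ports")
print(injected, cur.fetchone()[0])  # ['hook fired'] 1
```

Whether that gives fine enough control over ordering between workers is exactly the open question.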

>>Finally, please note I am using DB-level locks rather than non-locking
>> algorithms for making reservations.
>
> I thought these were effectively broken in Galera clusters. Is that not
> correct?

As I understand it, if two writes to two different masters end up
violating some db-level constraint then the operation will cause a
failure regardless if there is a lock.

Basically, on Galera, instead of waiting for the lock, each will
proceed with the transaction.  Finally, on commit, a write-set
certification will double-check constraints with the rest of the
cluster.  It is at this point where
Galera will fail one of them as a deadlock for violating the
constraint.  Hence the need to retry.  To me, non-locking just means
that you embrace the fact that the lock won't work and you don't
bother to apply it in the first place.

If my understanding is incorrect, please set me straight.
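The resulting retry pattern can be sketched like this (illustrative only; DBDeadlock here stands in for the deadlock error surfaced when Galera fails write-set certification at commit):

```python
import functools


class DBDeadlock(Exception):
    """Stand-in for the error raised when certification fails at commit."""


def retry_on_deadlock(max_retries=3):
    """Retry the wrapped operation when it loses write-set certification."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries:
                        raise
        return wrapper
    return decorator


# Demo: an operation that loses certification twice, then succeeds.
attempts = {"count": 0}


@retry_on_deadlock()
def make_reservation():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise DBDeadlock("write-set certification failed")
    return "reserved"


print(make_reservation(), attempts["count"])  # reserved 3
```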

> If you do go that route, I think you will have to contend with DBDeadlock
> errors when we switch to the new SQL driver anyway. From what I've observed,
> it seems that if someone is holding a lock on a table and you try to grab
> it, pymysql immediately throws a deadlock exception.

I'm not familiar with pymysql to know if this is true or not.  But,
I'm sure that it is possible not to detect the lock at all on galera.
Someone else will have to chime in to set me straight on the details.

Carl



[openstack-dev] [grenade] future direction on partial upgrade support

2015-06-16 Thread Sean Dague
Back when Nova first wanted to test partial upgrade, we did a bunch of
slightly odd conditionals inside of grenade and devstack to make it so
that if you were very careful, you could just not stop some of the old
services on a single node, upgrade everything else, and as long as the
old services didn't stop, they'd be running cached code in memory, and
it would look a bit like a 2 node worker not upgraded model. It worked,
but it was weird.

There has been some interest by the Nova team to expand what's not being
touched, as well as the Neutron team to add partial upgrade testing
support. Both are great initiatives, but I think going about it the old
way is going to add a lot of complexity in weird places, and not be as
good of a test as we really want.

Nodepool now supports allocating multiple nodes. We have a multinode job
in Nova regularly testing live migration using this.

If we slice this problem differently, I think we get a better
architecture, a much easier way to add new configs, and a much more
realistic end test.

Conceptually, use devstack-gate multinode support to set up 2 nodes, an
all in one, and a worker. Let grenade upgrade the all in one, leave the
worker alone.

I think the only complexity here is the fact that grenade.sh implicitly
drives stack.sh. Which means one of:

1) devstack-gate could build the worker first, then run grenade.sh

2) we make it so grenade.sh can execute in parts more easily, so it can
hand off running stack.sh to something else.

3) we make grenade understand the subnode for partial upgrade, so it
will run the stack phase on the subnode itself (given credentials).

This kind of approach means deciding which services you don't want to
upgrade doesn't require devstack changes, it's just a change of the
services on the worker.

We need a volunteer for taking this on, but I think all the follow on
partial upgrade support will be much much easier to do after we have
this kind of mechanism in place.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-16 Thread Carl Baldwin
On Thu, Jun 11, 2015 at 2:45 PM, Salvatore Orlando  wrote:
> I have been then following a different approach. And a set of patches,
> including a devref one [2], is up for review [3]. This hardly completes the
> job: more work is required on the testing side, both as unit and functional
> tests.
>
> As for the spec, since I honestly would like to spare myself the hassle of
> rewriting it, I would kindly ask our glorious drivers team if they're ok
> with me submitting a spec in the shorter format approved for Liberty without
> going through the RFE process, as the spec is however in the Kilo backlog.

It took me a second read through to realize that you're talking to me
among the drivers team.  Personally, I'm okay with this and our
currently documented policy seems to allow for this until Liberty-1.

I just hope that this isn't an indication that we're requiring too
much in this new RFE process and scaring potential filers away.  I'm
trying to learn how to write good RFEs, so let me give it a shot:

  Summary:  "Need robust quota enforcement in Neutron."

  Further Information:  "Neutron can allow exceeding the quota in
certain cases.  Some investigation revealed that quotas in Neutron are
subject to a race where parallel requests can each check quota and
find there is just enough left to fulfill its individual request.
Each request proceeds to fulfillment with no more regard to the quota.
When all of the requests are eventually fulfilled, we find that they
have exceeded the quota."

Given my current knowledge of the RFE process, that is what I would
file as a bug in launchpad and tag it with 'rfe.'
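The race is easy to reproduce in miniature (an illustrative sketch, not Neutron code): two workers both pass the quota check before either records its usage, and a barrier forces the bad interleaving deterministically:

```python
import threading

QUOTA = 1
used = []
both_checked = threading.Barrier(2)


def create_resource():
    if len(used) < QUOTA:          # step 1: check quota (both see 0 used)
        both_checked.wait()        # force both workers past the check
        used.append("resource")    # step 2: fulfil the request


workers = [threading.Thread(target=create_resource) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(used))  # 2 -- a quota of 1 has been exceeded
```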

> For testing I wonder what strategy do you advice for implementing functional
> tests. I could do some black-box testing and verifying quota limits are
> correctly enforced. However, I would also like to go a bit white-box and
> also verify that reservation entries are created and removed as appropriate
> when a reservation is committed or cancelled.
> Finally it would be awesome if I was able to run in the gate functional
> tests on multi-worker servers, and inject delays or faults to verify the
> systems behaves correctly when it comes to quota enforcement.

Full black box testing would be impossible to achieve without multiple
workers, right?  We've proposed adding multiple worker processes to
the gate a couple of times, if I recall, including a recent one.
Fixing the failures has not yet been seen as a priority.

I agree that some whitebox testing should be added.  It may sound a
bit double-entry to some but I don't mind, especially given the
challenges around block box testing.  Maybe Assaf can chime in here
and set us straight.

> Do these kinds of test even make sense? And are they feasible at all? I
> doubt we have any framework for injecting anything in neutron code under
> test.

Dunno.

> Finally, please note I am using DB-level locks rather than non-locking
> algorithms for making reservations. I can move to a non-locking algorithm,
> Jay proposed one for nova for Kilo, and I can just implement that one, but
> first I would like to be convinced with a decent proof (or sort of) that the
> extra cost deriving from collision among workers is overshadowed by the cost
> for having to handle a write-set certification failure and retry the
> operation.

Do you have a reference describing the algorithm Jay proposed?

> Please advice.
>
> Regards,
> Salvatore
>
> [1]
> http://specs.openstack.org/openstack/neutron-specs/specs/kilo-backlog/better-quotas.html
> [2] https://review.openstack.org/#/c/190798/
> [3]
> https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/better-quotas,n,z
>



Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-16 Thread Clint Byrum
Excerpts from Sean Dague's message of 2015-06-16 06:22:23 -0700:
> FYI,
> 
> One of the things that came out of the summit for Devstack plans going
> forward is to trim it back to something more opinionated and remove a
> bunch of low use optionality in the process.
> 
> One of those branches to be trimmed is all the support for things beyond
> RabbitMQ in the rpc layer. RabbitMQ is what's used by 95%+ of our
> community, that's what the development environment should focus on.
> 
> The patch to remove all of this is here -
> https://review.openstack.org/#/c/192154/. Expect this to merge by the
> end of the month. If people are interested in non RabbitMQ external
> plugins, now is the time to start writing them. The oslo.messaging team
> already moved their functional test installation for alternative
> platforms off of devstack, so this should impact a very small number of
> people.
> 

The recent spec we added to define a policy for oslo.messaging drivers is
intended as a way to encourage that 5% who feels a different messaging
layer is critical to participate upstream by adding devstack-gate jobs
and committing developers to keep them stable. This change basically
slams the door in their face and says "good luck, we don't actually care
about accommodating you." This will drive them more into the shadows,
and push their forks even further away from the core of the project. If
that's your intention, then we need to have a longer conversation where
you explain to me why you feel that's a good thing.

Also, I take issue with the value assigned to dropping it. If that 95%
is calculated as orgs_running_on_rabbit/orgs then it's telling a really
lop-sided story. I'd rather see compute_nodes_on_rabbit/compute_nodes.

I'd like to propose that we leave all of this in tree to match what is
in oslo.messaging. I think devstack should follow oslo.messaging and
deprecate the ones that oslo.messaging deprecates. Otherwise I feel like
we're Vizzini cutting the rope just as The Dread Pirate 0mq is about to
climb the last 10 meters to the top of the cliffs of insanity and battle
RabbitMQ left handed. I know, "inconceivable" right?



Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-16 Thread Morgan Fainberg
Long term we want to see Keystone move to http://<host>/identity. However the 
reason for choosing 5000/35357 for ports was compatibility and avoiding 
breaking horizon. At the time we did the initial change over, sharing the root 
80/443 ports with horizon was more than "challenging" since horizon needed to 
be based at "/". 

If that issue/assumption for horizon is no longer present, moving keystone to 
be on port 80/443 would be doable. The last factor is that keystone's
location was a priori knowledge for discovering other services. As long as
we update docs (and possibly serve a 302 redirect from the alternate ports
for a cycle in devstack) I think we're good to make the change.

--Morgan

Sent via mobile

> On Jun 16, 2015, at 09:25, Sean Dague  wrote:
> 
> I was just looking at the patches that put Nova under apache wsgi for
> the API, and there are a few things that I think are going in the wrong
> direction. Largely I think because they were copied from the
> lib/keystone code, which we've learned is kind of the wrong direction.
> 
> The first is the fact that a big reason for putting {SERVICES} under
> apache wsgi is we aren't running on a ton of weird unregistered ports.
> We're running on 80 and 443 (when appropriate). In order to do this we
> really need to namespace the API urls. Which means that service catalog
> needs to be updated appropriately.
> 
> I'd expect nova to be running on http://localhost/compute not
> http://localhost:8774 when running under wsgi. That's going to probably
> interestingly break a lot of weird assumptions by different projects,
> but that's part of the reason for doing this exercise. Things should be
> using the service catalog, and when they aren't, we need to figure it out.
> 
> (Exceptions can be made for third party APIs that don't work this way,
> like the metadata server).
> 
> I also think this -
> https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
> is completely wrong.
> 
> The Apache configs should instead specify access rules such that the
> installed console entry point of nova-api can be used in place as the
> WSGIScript.
> 
> This should also make lines like -
> https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
> L274 unneeded. (The WSGI script will be in a known place.) It will also
> make upgrades much more friendly.
> 
> I think that we need to get these things sorted before any further
> progression here. Volunteers welcomed to help get us there.
> 
>-Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 



[openstack-dev] CLI problem

2015-06-16 Thread Ali Reza Zamani
Hi all,

I have a problem in creating the instances. When I create the instances
using GUI web interface everything is fine. But when I do it using CLI
after spawning it says Error.
And the error is: ne



Re: [openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-06-16 Thread Terry Wilson
> Right now I'm leaning toward "parent always does nothing" + PluginWorker.
> Everything is forked, no special case for workers==0, and explicit
> designation of the "only one" case. Of course, it's still early in the day
> and I haven't had any coffee.

I have updated the patch (https://review.openstack.org/#/c/189391/) to 
implement the above. I have it marked WIP because it doesn't have any tests and 
it modifies ServicePluginBase to have a call to get_processes(), but almost no 
service plugins actually inherit from it even though they implement its 
interface. The get_processes stuff in general could be fleshed out a bit as 
well. I just wanted to get something up for the purposes of discussion, so 
anyone interested in this particular problem should take a look and discuss. :)

Terry



Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-16 Thread Paul Belanger

On 06/16/2015 12:41 PM, Allison Randal wrote:
> On 06/15/2015 01:43 PM, Paul Belanger wrote:
>> While I agree those points are valid, and going to be helpful, moving
>> under OpenStack (even Stackforge) does also offer the chance to get more
>> test integration upstream (not saying this was the original scope).
>> However, this could also be achieved by 3rd party integration too.
>
> Nod, 3rd party integration is worth exploring.
>
>> I'm still driving forward with some -infra specific packaging for Debian
>> / Fedora ATM (zuul packaging). Mostly because of -infra needs for
>> packages. Not saying that is a reason to reconsider, but there is the
>> need for -infra to consume packages from upstream.
>
> I suspect that, at least initially, the needs of -infra specific
> packaging will be quite different than the needs of general-purpose
> packaging in Debian/Fedora distros. Trying to tightly couple the two
> will just bog you down in trying to solve far too many problems for far
> too many people. But, I also suspect that -infra packaging will be quite
> minimal and intended for the services to be configured by puppet, so
> there's a very good chance that if you sprint ahead and just do it, your
> style of packaging will end up feeding back into future packaging in the
> distros.
>
> Allison

My thoughts exactly. I believe by the next summit, we should have a base
in -infra for producing packages (unsure about consuming ATM).
Interesting times ahead.







Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-16 Thread Allison Randal
On 06/15/2015 01:43 PM, Paul Belanger wrote:
> While I agree those points are valid, and going to be helpful, moving
> under OpenStack (even Stackforge) does also offer the chance to get more
> test integration upstream (not saying this was the original scope).
> However, this could also be achieved by 3rd party integration too.

Nod, 3rd party integration is worth exploring.

> I'm still driving forward with some -infra specific packaging for Debian
> / Fedora ATM (zuul packaging). Mostly because of -infra needs for
> packages. Not saying that is a reason to reconsider, but there is the
> need for -infra to consume packages from upstream.

I suspect that, at least initially, the needs of -infra specific
packaging will be quite different than the needs of general-purpose
packaging in Debian/Fedora distros. Trying to tightly couple the two
will just bog you down in trying to solve far too many problems for far
too many people. But, I also suspect that -infra packaging will be quite
minimal and intended for the services to be configured by puppet, so
there's a very good chance that if you sprint ahead and just do it, your
style of packaging will end up feeding back into future packaging in the
distros.

Allison



[openstack-dev] [Keystone][OSC] Keystone v3 user create --project $projid does not add user to project?

2015-06-16 Thread Rich Megginson
Using admin token credentials with the Keystone v2.0 API and the 
openstackclient, doing this:


# openstack project create bar --enable
# openstack user create foo --project bar --enable ...

The user will be added to the project.

Using admin token credentials with the Keystone v3 API and the 
openstackclient, using the v3 policy file with is_admin:1 added just 
about everywhere, doing this:


# openstack project create bar --domain Default --enable
# openstack user create foo --domain Default --enable --project 
$project_id_of_bar ...


The user will NOT be added to the project.

Is this intentional?  Am I missing some sort of policy to allow user 
create to add the user to the given project?
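For what it's worth, one workaround sketch, assuming the v3 behavior is intentional and --project only sets the user's default project without granting a role: add an explicit role assignment after creating the user. The role name below is only an example.

```shell
# Hypothetical workaround: explicitly grant a role on the project.
openstack project create bar --domain Default --enable
openstack user create foo --domain Default --enable \
    --project $project_id_of_bar
openstack role add --user foo --project bar _member_
```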





[openstack-dev] [devstack] apache wsgi application support

2015-06-16 Thread Sean Dague
I was just looking at the patches that put Nova under apache wsgi for
the API, and there are a few things that I think are going in the wrong
direction. Largely I think because they were copied from the
lib/keystone code, which we've learned is kind of the wrong direction.

The first is the fact that a big reason for putting {SERVICES} under
apache wsgi is we aren't running on a ton of weird unregistered ports.
We're running on 80 and 443 (when appropriate). In order to do this we
really need to namespace the API urls. Which means that service catalog
needs to be updated appropriately.

I'd expect nova to be running on http://localhost/compute not
http://localhost:8774 when running under wsgi. That's going to probably
interestingly break a lot of weird assumptions by different projects,
but that's part of the reason for doing this exercise. Things should be
using the service catalog, and when they aren't, we need to figure it out.

(Exceptions can be made for third party APIs that don't work this way,
like the metadata server).

I also think this -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
is completely wrong.

The Apache configs should instead specify access rules such that the
installed console entry point of nova-api can be used in place as the
WSGIScript.

This should also make lines like -
https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
L274 unneeded. (The WSGI script will be in a known place.) It will also
make upgrades much more friendly.
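For illustration, such a vhost fragment might look roughly like this. The /compute prefix matches the proposal above, but the entry-point path, process settings, and script name are assumptions, not devstack's actual configuration:

```apache
# Hypothetical Apache fragment -- paths and names are illustrative only.
<VirtualHost *:80>
    WSGIDaemonProcess nova-api processes=2 threads=10
    # Point mod_wsgi at the installed nova-api console entry point
    # instead of a copied-in .wsgi file, so upgrades pick up new code.
    WSGIScriptAlias /compute /usr/local/bin/nova-api-wsgi
    <Directory /usr/local/bin>
        Require all granted
    </Directory>
</VirtualHost>
```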

I think that we need to get these things sorted before any further
progression here. Volunteers welcomed to help get us there.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] [all] Proposal for Glance Artifacts Sub-Team meeting.

2015-06-16 Thread Nikhil Komawar
Hi all,

We have planned fast-track development for Glance Artifacts, which will
also be our v3 API. To balance pace, knowledge sharing, and synchronous
discussion of ideas and opinions, as well as to see this great feature
through in Liberty:

We hereby propose a non-mandatory, open to all, sub-team meeting for
Glance Artifacts.

Please vote on the time and date:
https://review.openstack.org/#/c/192270/ (Note: Run the tests for your
vote to ensure we are considering feasible & non-conflicting times.) We
will start the meeting next week unless there are strong conflicts.

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][db] online schema upgrades

2015-06-16 Thread Mike Bayer



On 6/16/15 11:41 AM, Ihar Hrachyshka wrote:

- instead of migrating data with alembic rules, migrate it at runtime.
There should be an abstraction layer that will make sure that data is
migrated into new schema fields and objects, while preserving data
originally stored in 'old' schema elements.

That would allow old neutron-server code to run against the new schema
(it will just ignore new additions), and new neutron-server code to
gradually migrate data into new columns/fields/tables while serving
users.

Hi Ihar -

I was in the middle of writing a spec for neutron online schema 
migrations, which maintains the "expand / contract" workflow but also 
keeps Alembic migration scripts.   As I've stated many times in the 
past, there is no reason to abandon migration scripts; abandoning the 
notion of a database in a specific versioned state, as well as the 
ability to script any migrations at all, raises many issues.   The spec 
amends Nova's approach and includes upstream changes to Alembic so that 
both approaches can be supported using the same codebase.


- mike



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][release] summit session summary: Release Versioning for Server Applications

2015-06-16 Thread Thierry Carrez
Doug Hellmann wrote:
> Excerpts from Thierry Carrez's message of 2015-06-16 11:45:51 +0200:
>> We also traditionally "managed" the previously-incubated projects. That
>> would add the following to the mix:
>>
>> barbican 1.0.0
>> designate 1.0.0
>> manila 1.0.0
>> zaqar 1.0.0
>>
> 
> Those didn't have the release:managed tag, so didn't show up in the
> output of the script. [...]

Proposed as https://review.openstack.org/192193

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] ironic-lib library

2015-06-16 Thread Lucas Alvares Gomes
Hi,

> I haven't paid any attention to ironic-lib; I just knew that we wanted to
> have a library of common code so that we didn't cut/paste. I just took a
> look[1] and there are files there from 2 months ago. So far, everything is
> under ironic_lib (ie, no subdirectories to group things). Going forward, are
> there guidelines as to where/what goes into this library?

I don't think we have guidelines for the structure of the project; we
should of course try to organize it well.

As for what goes into this library: AFAICT, this is the place for code
which is used in more than one project under the Ironic umbrella. For
example, both Ironic and IPA (ironic-python-agent) deal with disk
partitioning, so we should create a module for disk partitioning in the
ironic-lib repository which both Ironic and IPA will import and use.


> I think it would be good to note down the process wrt using this library.
> I'm guessing that having this library will most certainly delay things wrt
> development. Changes will need to be made to the library first, then need to
> wait until a new version is released, then possibly update the min version
> in global-requirements, then use (and profit) in ironic-related projects.
>
>
> With the code in ironic, we were able to do things like change the arguments
> to methods etc. With the library -- do we need to worry about backwards
> compatibility?

I would say so; those are things that we have to take into account when
creating a shared library. But it also brings benefits:

1. Code sharing
2. Bugs are fixed in one place only
3. Flexibility: I believe that more projects using the same code will
require it to be more flexible

> How frequently were we thinking of releasing a new version? (Depends on
> whether anything was changed there that is needed really soon?)

Yes, just like the python-ironicclient a release can be cut when needed.

Thanks for starting this thread; it would be good for the community to
evaluate whether we should go forward with ironic-lib or not.

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] (officially) deprecate stable/{grizzly, havana} branches

2015-06-16 Thread Emilien Macchi
Hi,

Some of our modules have stable/grizzly and stable/havana branches. Some
of them have the CI broken due to rspec issues that would require some
investigation and time if we wanted to fix it.

We would like to know: who plans to backport patches to these branches?

If nobody plans to do that, we will leave the branches as they are but
won't officially support them.

By support I mean maintaining the CI jobs green (rspec, syntax, etc),
fixing bugs and adding new features.

Any feedback is welcome!

Regards,
-- 
Emilien Macchi





Re: [openstack-dev] [puppet] weekly meeting #38

2015-06-16 Thread Emilien Macchi


On 06/15/2015 08:06 PM, Emilien Macchi wrote:
> Hi everyone,
> 
> Here's an initial agenda for our weekly meeting tomorrow at 1500 UTC in
> #openstack-meeting-4:
> 
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150616
> 
> Please add additional items you'd like to discuss.

The meeting was short but productive, you can read the notes here:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-06-16-15.00.html

Have a nice day,

> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi





[openstack-dev] [neutron][db] online schema upgrades

2015-06-16 Thread Ihar Hrachyshka

Hi neutron folks,

I'd like to discuss a plan on getting support for online db schema
upgrades in neutron.

*What is it even about?*

Currently, any major version upgrade, or master-to-master upgrade,
requires neutron-server shutdown. After shutdown, operators apply db
migration rules to their database (if any), and when it's complete,
are able to start their neutron-server service(s).

It has several drawbacks:
- while db is upgraded, API endpoints are not available (a user-visible
out-of-service period);
- db upgrade may take a significant time, and the out-of-service
period can become quite long.

For rolling master-based environments, it's especially painful, since
you get the scheduled offline time more often than once per 6 months.
(Though even once per 6 months is not ideal.)

*Proposal*

Make neutron-server resilient to under-the-hood db schema changes.

How can we achieve this? There are multiple things to touch both code-
and culture-wise:
- if we want old neutron-server to continue working with a db that is
potentially upgraded to a newer schema, we should stop applying
non-additive changes to the schema in migration rules. (Note that we
still have a way to collect fossils once they are unused, e.g. during
the next cycle.)
- we should stop applying live data changes to the database as part of
migration rules. The only changes allowed should touch the schema but
not insert/update/delete actual records. (I know neutron has been
especially guilty of this in the past, but I believe we can stop doing
it.)
- instead of migrating data with alembic rules, migrate it at runtime.
There should be an abstraction layer that will make sure that data is
migrated into new schema fields and objects, while preserving data
originally stored in 'old' schema elements.

That would allow old neutron-server code to run against the new schema
(it will just ignore new additions), and new neutron-server code to
gradually migrate data into new columns/fields/tables while serving
users.
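A toy sketch of that runtime-migration idea in plain Python (the `Port` class and column names are made up for illustration; this is not neutron or oslo.versionedobjects code): readers prefer the new column with a fallback to the old one, and writers populate both so old servers keep working during the transition.

```python
class Port:
    """Toy facade over a row whose data may live in a legacy 'name'
    column or a new 'display_name' column (both names invented here)."""

    def __init__(self, row):
        self._row = dict(row)

    @property
    def display_name(self):
        # Readers prefer the new column, falling back to the old one
        # for rows written by pre-upgrade server code.
        return self._row.get("display_name") or self._row.get("name")

    def save(self):
        # Writers populate BOTH columns: the new one for new code, the
        # old one so old servers still see valid data during rollout.
        self._row["display_name"] = self.display_name
        self._row["name"] = self.display_name
        return self._row


old_row = {"name": "port-a"}            # row written by an old server
port = Port(old_row)
assert port.display_name == "port-a"    # new code can read it
migrated = port.save()                  # row is now gradually migrated
assert migrated == {"name": "port-a", "display_name": "port-a"}
```

The fossil columns can then be dropped in a later contract migration, once no running code reads them.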

Note that all neutron-server instances are still expected to restart
at the same time. There should be no neutron-servers of different
versions running; otherwise older instances will undo migration work
applied by new ones, and it may result in data loss, db conflicts, and
general havoc. We may think about how to support iterative controller
restarts without any downtime, but that's out of scope for this
proposal.

*Isn't it too crazy?*

Not really. Other projects have achieved this already. Specifically,
Nova has done it since Liberty. Heat and Cinder are considering it now.

Nova needed to stop doing data migrations or non-additive changes to
schema in Kilo already. It suggests that the nearest possible time we
get actual online migration in neutron is M; that's assuming we adopt
stricter rules for migrations *now*, before anything incompatible is
merged in Liberty.

Also note that I haven't checked *aas migration rules yet: if there
are incompatible changes there, it means that for setups that rely on
those services, online migrations will become reality in Nausea only.

Since neutron joins the game late, we are in better position than nova
was, since a lot of tooling and practices are already implemented.
Specifically, I mean oslo.versionedobjects that would serve as an
abstraction object middleware in between db and the rest of neutron.

*The plan for Liberty*

We can't technically achieve online migrations in Liberty, for reasons
stated above. It does not mean that we have nothing to do this cycle
though.

We should prepare ourselves doing the following:
- adopt stricter rules for migrations;
- adopt oslo.versionedobjects to represent neutron resources. (It will
buy us more benefits, like an object interface instead of passing dicts
around; clear versioning on the RPC side of things; and potentially,
assuming we apply the corresponding practices, transparent remote calls
to the controller from the agent side using the same objects defined on
the neutron-server side.)

===

So, keeping in mind that there can be concerns or conflicts with
existing efforts (f.e. plugin decomp part 2) that I don't fully
realize, or maybe some architectural issues that would not allow us to
start on the road just now, I'd like to hear from others on whether
the strict rules even make sense in context of neutron.

Of course, I especially look forward to hear from our db gods: Henry,
Ann, and others.

Ihar

__

Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Randall Burt
+1 Murali. AFAIK, there is no precedent for what Keith proposes, but that 
doesn't mean it's a bad thing.

On Jun 16, 2015, at 12:21 AM, Murali Allada  wrote:

> I agree, users should have a mechanism to keep logs around.
> 
> I implemented the logs deletion feature after we got a bunch of requests from 
> users to delete logs once they delete an app, so they don't get charged for 
> storage once the app is deleted.
> 
> My implementation deletes the logs by default and I think that is the right 
> behavior. Based on user requests, that is exactly what they were asking for. 
> I'm planning to add a --keep-logs flag in a follow up patch. The command will 
> look as follows
> 
> Solum delete app MyApp --keep-logs
> 
> -Murali
> 
> 
> 
> 
> 
> On Jun 15, 2015, at 11:19 PM, Keith Bray  wrote:
> 
>> Regardless of what the API defaults to, could we have the CLI prompt/warn so 
>> that the user easily knows that both options exist?  Is there a precedent 
>> within OpenStack for a similar situation?
>> 
>> E.g. 
>> > solum app delete MyApp
>>  Do you want to also delete your logs? (default is Yes):  [YES/no]
>>   NOTE, if you choose No, application logs will remain on your 
>> account. Depending on your service provider, you may incur on-going storage 
>> charges.  
>> 
>> Thanks,
>> -Keith
>> 
>> From: Devdatta Kulkarni 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Date: Monday, June 15, 2015 9:56 AM
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete 
>> an app?
>> 
>> Yes, the log deletion should be optional.
>> 
>> 
>> The question is what should be the default behavior. Should the default be 
>> to delete the logs and provide a flag to keep them, or keep the logs by 
>> default and provide a override flag to delete them?
>> 
>> Delete-by-default is consistent with the view that when an app is deleted, 
>> all its artifacts are deleted (the app's meta data, the deployment units 
>> (DUs), and the logs). This behavior is also useful in our current state when 
>> the app resource and the CLI are in flux. For now, without a way to specify 
>> a flag, either to delete the logs or to keep them, delete-by-default 
>> behavior helps us clean all the log files from the application's cloud files 
>> container when an app is deleted.
>> 
>> This is very useful for our CI jobs. Without this, we end up with lots of 
>> log files in the application's container, and have to resort to separate 
>> scripts to delete them after an app is deleted.
>> 
>> 
>> Once the app resource and CLI stabilize it should be straightforward to 
>> change the default behavior if required.
>> 
>> - Devdatta
>> 
>> From: Adrian Otto 
>> Sent: Friday, June 12, 2015 6:54 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [Solum] Should logs be deleted when we delete an 
>> app?
>>  
>> Team,
>> 
>> We currently delete logs for an app when we delete the app[1]. 
>> 
>> https://bugs.launchpad.net/solum/+bug/1463986
>> 
>> Perhaps there should be an optional setting at the tenant level that 
>> determines whether your logs are deleted or not by default (set to off 
>> initially), and an optional parameter to our DELETE calls that allows for 
>> the opposite action from the default to be specified if the user wants to 
>> override it at the time of the deletion. Thoughts?
>> 
>> Thanks,
>> 
>> Adrian
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Cloud Foundry Service Broker Api in Murano

2015-06-16 Thread Nikolay Starodubtsev
Here is a draft spec for this: https://review.openstack.org/#/c/192250/



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-06-16 13:11 GMT+03:00 Nikolay Starodubtsev 
:

> Hi all,
> I've started a work on bp:
> https://blueprints.launchpad.net/murano/+spec/cloudfoundry-api-support
> I plan to publish a spec in a day or two. If anyone interesting to
> cooperate please drop me a message here or in IRC: Nikolay_St
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How does instance's tap device macaddress generate?

2015-06-16 Thread Tapio Tallgren

On 11.06.2015 18:52, Andreas Scheuring wrote:

Maybe this helps (taken from [1])

"Actually there is one way that the MAC address of the tap device
affects
proper operation of guest networking - if you happen to set the tap
device's MAC identical to the MAC used by the guest, you will get errors
from the kernel similar to this:


   kernel: vnet9: received packet with own address as source address"



[1] http://www.redhat.com/archives/libvir-list/2012-July/msg00984.html
I was wondering the same thing myself one day and found this 
explanation on the same mailing list:


vnet0 is the backend of the guest NIC, and its MAC addr
is more or less irrelevant to functioning of the guest
itself, since traffic does not originate on this NIC.
The only important thing is that this TAP device must
have a high value MAC address, to avoid the bridge
device using the TAP device's MAC as its own. Hence
when creating the TAP Device  libvirt takes the guest
MAC addr and simply sets the top byte to 0xFE

http://www.redhat.com/archives/libvir-list/2012-June/msg01330.html
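As an illustration of the rule quoted above, a toy Python helper (`tap_mac` is a made-up name; libvirt does this internally in C) that derives the tap device MAC from a guest MAC by setting the top byte to 0xFE:

```python
def tap_mac(guest_mac):
    """Derive the tap-device MAC for a given guest MAC.

    Per the explanation above, the first (top) octet is set to 0xfe so
    that the bridge, which adopts the numerically lowest MAC among its
    ports, never picks the tap device's MAC as its own.
    """
    octets = guest_mac.lower().split(":")
    octets[0] = "fe"
    return ":".join(octets)


# A typical KVM guest MAC and the tap MAC libvirt would derive from it:
print(tap_mac("52:54:00:12:34:56"))  # fe:54:00:12:34:56
```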



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Add your name and TZ to wiki

2015-06-16 Thread Paul Bourke

Hi all,

Steve suggested adding a new table to the Kolla wiki to help us keep 
track of who's actively working on Kolla along with relevant info such 
as timezones and IRC names.


I'm missing lots of names and timezones so if you'd like to be on this 
please feel free to update it at 
https://wiki.openstack.org/wiki/Kolla#Active_Contributors


Cheers,
-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] When do we import aodh?

2015-06-16 Thread Julien Danjou
On Tue, Jun 16 2015, Chris Dent wrote:

> 5. anything in tempest to worry about?

Yes, we need to adapt and reenable tempest after.

> 6. what's that stuff in the "ceilometer" dir?
>6.1. Looks like migration artifacts, what about migration in
> general?

That's a leftover from one of the many rebases I've made during these
last weeks; I just fixed it.

I removed all the migrations, as we should start fresh on Alembic.

> 7. removing all the rest of the cruft (whatever it might be)

In Ceilometer you mean?

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */




[openstack-dev] [Fuel] Improvement of the blueprint specs template

2015-06-16 Thread Roman Prykhodchenko
Hi folks!

I was reviewing one of the specs for Fuel 7.0 and realized the information 
there is messed up and it's pretty hard to put it all together. The reason 
is basically that Fuel is a multi-component project, but the template does 
not take that into account: there is a Proposed change section used to 
define all the changes in the entire project; then there are API and Data 
impact sections that apply only to specific components but still have to be 
filled in.

Since most new features involve changes to several components, I propose to 
stick to the following structure. It eliminates the need to create several 
specs to describe one feature and allows everything to be organized in one 
document without messing it up:

--> Title
--> Excerpt (short version of the Problem description, proposed solution
    and final results)
--> Problem description
--> Proposed changes
    --> Web UI
    --> Nailgun
        --> General
        --> REST API
        --> Data model
    --> Astute
        --> General
        --> RPC protocol
    --> Fuel Client
    --> Plugins
--> Impact
    --> End-user
    --> QA
    --> Developer
    --> Infrastructure (operations)
    --> Upgrade
    --> Performance
--> Implementation
    --> Assignee
    --> Work items
        --> Web UI
        --> Nailgun
        --> Astute
        --> Fuel Client
        --> Plugins
--> Documentation
--> References


- romcheg







Re: [openstack-dev] [all][requirements] Proposing a slight change in requirements.txt syncing output.

2015-06-16 Thread Joshua Harlow

Doug Hellmann wrote:

Excerpts from Robert Collins's message of 2015-06-16 11:18:55 +1200:

At the moment we copy the global-requirements lines verbatim.

So if we have two lines in global-requirements.txt:
oslotest>=1.5.1  # Apache-2.0
PyECLib>=1.0.7  # BSD
with very different layouts


Most of the inline comments for packages are license indicators that we
started collecting a while back at someone's request. Are we actually
using those? If not, maybe we should clean up that file and reserve
inline comments for something we do actually care about?


Or if it really matters run 
https://github.com/openstack/requirements/blob/master/detail.py (which I 
submitted a while ago) on the requirements file/s and write out all the 
detailed information to a json file (stdout from a run of this @ 
http://paste.openstack.org/show/295537/ with large output detailing 
author information, license information ... @ 
http://paste.openstack.org/show/295538/). Keeping all the license + 
detailed info out of the main file seems to make sense to me...




Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [taskflow] Returning information from reverted flow

2015-06-16 Thread Joshua Harlow

Dulko, Michal wrote:

-Original Message-
From: Joshua Harlow [mailto:harlo...@outlook.com]
Sent: Friday, June 12, 2015 5:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [taskflow] Returning information from reverted
flow

Dulko, Michal wrote:

Hi,

In Cinder we had merged a complicated piece of code[1] to be able to
return something from flow that was reverted. Basically outside we
needed an information if volume was rescheduled or not. Right now this
is done by injecting information needed into exception thrown from the
flow. Another idea was to use notifications mechanism of TaskFlow.
Both ways are rather workarounds than real solutions.

Unsure about notifications being a workaround (basically you are notifying
some other entities that rescheduling happened, which seems like exactly
what it was made for), but I get the point ;)


Please take a look at this review - https://review.openstack.org/#/c/185545/. 
Notifications cannot help if some further revert decision needs to be based on 
something that happened earlier.


That sounds like conditional reverting, which seems like it should be 
handled differently anyway, or am I misunderstanding something?



I wonder if TaskFlow couldn't provide a mechanism to mark a stored element
so it is not removed when a revert occurs. Or maybe another way of
returning something from a reverted flow?

Any thoughts/ideas?

I have a couple; I'll make some paste(s) and see what people think.

How would this look (as pseudo-code or other) to you? What would be your
ideal? Maybe we can work from there (perhaps you could do some paste(s)
too and we can prototype it). Is just storing information that is returned
from revert() somewhere enough? Or something else? There has been talk
about task 'local storage' (or something along those lines) that could
also be used for this purpose.


I think that the easiest idea from the perspective of an end user would be 
to save items returned from revert into the flow engine's storage *and* not 
remove them from storage when the whole flow gets reverted. This is 
completely backward compatible, because currently revert doesn't return 
anything. And if a revert has to record some information for further 
processing, this will also work.
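A rough sketch of those proposed semantics using a stand-in storage class (this is not the real TaskFlow API; the class, method names, and volume-rescheduling scenario are invented to illustrate keeping revert() results after the normal result is purged):

```python
class Storage:
    """Toy stand-in for a TaskFlow engine's storage (not the real API)."""

    def __init__(self):
        self.results = {}         # normal task results, purged on revert
        self.revert_results = {}  # proposed: kept even after full revert

    def save(self, task_name, value):
        self.results[task_name] = value

    def clear(self, task_name):
        self.results.pop(task_name, None)

    def save_revert(self, task_name, value):
        # Proposed behaviour: whatever revert() returns is stored here
        # and survives the flow-wide revert, so callers can inspect it
        # afterwards (e.g. "was the volume rescheduled?").
        self.revert_results[task_name] = value


def run_and_fail(storage):
    # execute() succeeds and records its result...
    storage.save("create_volume", {"id": "vol-1"})
    # ...then a later task fails, so the engine reverts create_volume:
    # revert()'s return value is preserved, the normal result is purged.
    storage.save_revert("create_volume", {"rescheduled": True})
    storage.clear("create_volume")


st = Storage()
run_and_fail(st)
assert "create_volume" not in st.results
assert st.revert_results["create_volume"] == {"rescheduled": True}
```

The caller could then read the rescheduling decision from storage instead of smuggling it out in an exception.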



Ok, let me see what this looks like; maybe I can have a POC in the 
next few days. I don't think it's impossible to do (obviously), and 
hopefully it will be useful for this.



[1] https://review.openstack.org/#/c/154920/



__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-

requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-
requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][all] Request from Oslo team for Liberty Cycle

2015-06-16 Thread Davanum Srinivas
Hello fellow stackers,

The Oslo team came up with a handful of requests to the projects that
use Oslo-*. Here they are:

0. Check if your project has an Oslo Liaison

Please see https://wiki.openstack.org/wiki/CrossProjectLiaisons#Oslo
and volunteer for your project. We meet once a week to go over specs,
issues with releases, etc. If you can't attend the meetings, review
the logs and send questions/feedback to the -dev mailing list or hop
onto #openstack-oslo channel.

If you filter the -dev mailing list, include the "[oslo]" topic in
your whitelist to ensure you see team announcements.

1. Update files from oslo-incubator

Check what files you have listed in [my_project]/openstack-common.conf
and under [my_project]/openstack/common/* tree. You can run the
update.py script in oslo-incubator
(https://github.com/openstack/oslo-incubator/blob/master/update.py) to
refresh the files in your project. You may see that some of the files
have already graduated into a library, in which case you will need to
switch to the library.

2. Use oslo.context with oslo.log

Several projects still have a custom RequestContext. For oslo.log to
log the details stored in the RequestContext, you will need to extend
your custom RequestContext from the one in oslo.context. See example
in Nova - https://github.com/openstack/nova/blob/master/nova/context.py

3. Switch to oslo-config-generator

The discovery mechanism in the old style generator.py is fragile and
hence we have replaced it with a better (at least in our eyes!)
solution. Please see
http://specs.openstack.org/openstack/oslo-specs/specs/juno/oslo-config-generator.html.
This will help generate configuration files for different services
with different content/options as well.
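For reference, a generator config for the new tool might look roughly like this (the file path and namespace list are illustrative, not any project's actual configuration):

```ini
# etc/oslo-config-generator/nova.conf (illustrative)
[DEFAULT]
output_file = etc/nova/nova.conf.sample
wrap_width = 79
namespace = nova
namespace = oslo.log
namespace = oslo.messaging
```

It would then be consumed with something like `oslo-config-generator --config-file etc/oslo-config-generator/nova.conf`, with each `namespace` line pulling in the options registered under that entry point.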

4. Review new libraries to be added in Liberty and older ones from Kilo

Please see the specs we have for Liberty -
http://specs.openstack.org/openstack/oslo-specs/ We have a handful of
new libraries from existing oslo-incubator code, as well as some brand
new ones, like futurist and automaton, that are not Oslo-specific and
very useful (don't forget debtcollector, tooz, and taskflow from Kilo).
oslo.versionedobjects is getting a lot of traction as well. So please
review what's useful to your project and let us know if you need more
information.

Thanks,
The Oslo Team

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Adopting ironic-lib in Ironic

2015-06-16 Thread Ruby Loo
On 16 June 2015 at 03:12, Dmitry Tantsur  wrote:

> On 06/16/2015 08:58 AM, Ramakrishnan G wrote:
>
>>
>> Hi All,
>>
>> Some time back we created a new repository[1] to move all the reusable
>> code components of Ironic to a separate library.  The branched out code
>> has changed and there has been a review out to sync it [2].  But
>> unfortunately, it has got stale again as some more changes have gone in
>> to the branched out code.  To avoid repeated efforts of such syncing, I
>> suggest we sync the latest code from Ironic to ironic-lib (in
>> appropriate files) and immediately change Ironic to start using it.
>>
>> I suggest we can do the following:
>> 1) Decide on a timeline for the change (1 or 2 days)
>>
>
> Now is a good time, IMO, I don't think we're in pressing need to change
> this code.
>
>  2) Stop +Aing changes in Ironic to the files/code being moved to
>> ironic-lib
>> 3) Sync the latest code in ironic-lib and merge it
>> 4) Make a new release of ironic-lib
>> 5) Make changes in Ironic to use ironic-lib and make sure gate is back
>> up and running again (I can't think of anything that will break gate on
>> switching to ironic-lib as it's just a pip install)
>>
>
> Note that this will need adding ironic-lib to global-requirements, which
> will take time, unless you grab a couple of g-r cores to do it asap.
>
>  6) Make new reviews in ironic-lib for any pending reviews in Ironic
>>
>> If we come to an agreement on #1 and #2 above, Syed Ismail Faizan
>> Barmawer can continue to work on #3 - #5
>>
>> Let me know if it will work out or if there are any better plans (or if I
>> am missing something)
>>
>
> Otherwise plan LGTM
>
>
>> Thanks.
>>
>> [1] https://github.com/openstack/ironic-lib
>> [2] https://review.openstack.org/#/c/162162/
>>
>> Regards,
>> Ramesh
>>
>
If we haven't yet released a version of ironic-lib, I suggest taking a more
conservative (but more work) approach:
0.1. sync the latest code in ironic-lib (this is optional)
0.2. make a first release of ironic-lib
0.3. add ironic-lib to global-requirements

Then the steps you suggested, Ramesh. (Do changes need to be made to IPA too?
Not sure what code is being copied.)

Hopefully that will get any kinks out of the process, and will give us an
idea of how long that process might take. (Eg, only certain people can do
releases, and if we can get things set up in global-requirements sooner
rather than later, that is one less thing to do.)
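A sketch of what 0.2/0.3 amount to mechanically (the version number and the
`git push gerrit` remote are assumptions on my part; a real release goes
through the release team):

```shell
# Step 0.2 is essentially a signed tag pushed to gerrit, e.g.:
#   git tag -s 0.1.0 && git push gerrit 0.1.0
# Step 0.3 then becomes a one-line review against openstack/requirements
# (the version is hypothetical until the release actually exists):
reqs=$(mktemp)
echo 'ironic-lib>=0.1.0' >> "$reqs"
grep -c 'ironic-lib' "$reqs"   # prints 1
rm -f "$reqs"
```

Consuming projects would then add the same line to their own
requirements.txt once the global-requirements change merges.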

--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] When do we import aodh?

2015-06-16 Thread Chris Dent

On Tue, 16 Jun 2015, Julien Danjou wrote:


To me the next step is to:
1. Someone cares and review what I've done in the repository
2. import the code into openstack/aodh


Assuming that we'll do whatever is required to finish things after
moving it under openstack/, then whatever you've done in step one
doesn't matter all that much; it's just a stepping stone in the
process.

My cursory look just now says "yeah, let's do it" assuming the
additional steps below (which we need to clarify) don't disappear.


3. enable gate jobs (unit tests at least)


yah


4. enable and fix devstack gating (probably writing a devstack plugin
  for aodh)


yah

and:

5. anything in tempest to worry about?
6. what's that stuff in the "ceilometer" dir?
   6.1. Looks like migration artifacts; what about migration in general?
7. removing all the rest of the cruft (whatever it might be)
8. awareness of and attention to downstream packaging concerns
9. the inevitable several steps we've forgotten
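On 4, a sketch of the usual shape of a devstack plugin. Function bodies are
stubbed here so the dispatch is runnable; in a real devstack/plugin.sh,
helpers like is_service_enabled come from devstack itself, and
install_aodh/configure_aodh (names assumed) would do real work:

```shell
is_service_enabled() { return 0; }            # stub: real helper comes from devstack
install_aodh()   { echo "installing aodh"; }  # stub
configure_aodh() { echo "configuring aodh"; } # stub

# devstack sources plugin.sh at each phase, passing the phase as "$1 $2":
plugin_dispatch() {
    if is_service_enabled aodh; then
        if [ "$1" = "stack" ] && [ "$2" = "install" ]; then
            install_aodh
        elif [ "$1" = "stack" ] && [ "$2" = "post-config" ]; then
            configure_aodh
        fi
    fi
}

plugin_dispatch stack install      # prints "installing aodh"
plugin_dispatch stack post-config  # prints "configuring aodh"
```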

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] ironic-lib library

2015-06-16 Thread Ruby Loo
Hi,

I haven't paid any attention to ironic-lib; I just knew that we wanted to
have a library of common code so that we didn't cut/paste. I just took a
look[1] and there are files there from 2 months ago. So far, everything is
under ironic_lib (ie, no subdirectories to group things). Going forward,
are there guidelines as to where/what goes into this library?


I think it would be good to write down the process for using this library.
I'm guessing that having this library will almost certainly slow things down
wrt development. Changes will need to be made to the library first, then
we'll need to wait until a new version is released, then possibly update the
min version in global-requirements, then use (and profit) in ironic-related
projects.


With the code in ironic, we were able to do things like change the
arguments to methods etc. With the library -- do we need to worry about
backwards compatibility?


How frequently were we thinking of releasing a new version? (Depends on
whether anything was changed there that is needed really soon?)


Anything else that we should keep in mind when making changes to the
library?

--ruby

[1] https://github.com/openstack/ironic-lib
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

2015-06-16 Thread Steven Dake (stdake)
It's unanimous!  Welcome to the core reviewer team, Harm!

Regards
-steve


From: Steven Dake <std...@cisco.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Sunday, June 14, 2015 at 10:48 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

Hey folks,

I am proposing Harm Waites for the Kolla core team.  He did a fantastic job 
implementing Designate in a container[1] which I’m sure was incredibly 
difficult and never gave up even though there were 13 separate patch reviews :) 
 Beyond Harm’s code contributions, he is responsible for 32% of the 
“independent” reviews[2], where independents compose 20% of our total reviewer 
output.  I think we should judge core reviewers on more than output, and I knew 
Harm was core reviewer material with his fantastic review of the cinder 
container where he picked out 26 specific things that could be broken that 
other core reviewers may have missed ;) [3].  His other reviews are also as 
thorough as this particular review was.  Harm is active in IRC and in our 
meetings for which his TZ fits.  Finally Harm has agreed to contribute to the 
ansible-multi implementation that we will finish in the liberty-2 cycle.

Consider my proposal to count as one +1 vote.

Any Kolla core is free to vote +1, abstain, or vote -1.  A -1 vote is a veto 
for the candidate, so if you are on the fence, best to abstain :)  Since our 
core team has grown a bit, I’d like 3 core reviewer +1 votes this time around 
(vs Sam’s 2 core reviewer votes).  I will leave the voting open until June 21 
 UTC.  If the vote is unanimous prior to that time or a veto vote is 
received, I’ll close voting and make appropriate adjustments to the gerrit 
groups.

Regards
-steve

[1] https://review.openstack.org/#/c/182799/
[2] 
http://stackalytics.com/?project_type=all&module=kolla&company=%2aindependent
[3] https://review.openstack.org/#/c/170965/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [javascript] [horizon] [merlin] [refstack] Javascript Linting

2015-06-16 Thread Michael Krotscheck
Just for the sake of clarity- did the Horizon team discuss the tool
selection for JSCS with the greater community? I can't find anything on the
dev list. Furthermore, there've been situations before (karma) where a
patch was landed without appropriate upstream notifications and/or
discussion, which then resulted in a lot of unnecessary work.

Horizon isn't the only UI project anymore. While it's certainly the
elephant in the room, that doesn't mean its decisions shouldn't be up to
scrutiny.

Michael

On Tue, Jun 16, 2015 at 12:44 AM Rob Cresswell (rcresswe) <
rcres...@cisco.com> wrote:

>  So my view here is that I don’t particularly mind which plugin/ set of
> plugins Horizon uses, but the biggest deterrent is the workload. We’re
> already cleaning everything up quite productively, so I’m reluctant to
> swap. That said, the cleanup from JSCS/ JSHint should be largely relevant
> to ESLint. Michael, do you have any ideas on the numbers/ workload behind a
> possible swap?
>
>  With regards to licensing, does this mean we must stop using JSHint, or
> that we’re still okay to use it as a dev tool? Seems that if the former is
> the case, then the decision is made for us.
>
>  Rob
>
>
>
>   From: Michael Krotscheck 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, 16 June 2015 00:36
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [javascript] [horizon] [merlin] [refstack]
> Javascript Linting
>
>   I'm restarting this thread with a different subject line to get a
> broader audience. Here's the original thread:
> http://lists.openstack.org/pipermail/openstack-dev/2015-June/066040.html
>
>  The question at hand is "What will be OpenStack's javascript equivalent
> of flake8". I'm going to consider the need for common formatting rules to
> be self-evident. Here's the lay of the land so far:
>
>- Horizon currently uses JSCS.
>- Refstack uses Eslint.
>- Merlin doesn't use anything.
>- StoryBoard (deprecated) uses eslint.
>- Nobody agrees on rules.
>
>  *JSCS*
>  JSCS stands for "JavaScript CodeStyle". Its mission is to enforce a
> style guide, yet it does not check for potential bugs, variable overrides,
> etc. For those tests, the team usually defers to (preferred) JSHint, or
> ESLint.
>
>  *JSHint*
> Ever since JSCS was extracted from JSHint, JSHint has actively removed rules
> that enforce code style, and focused on findbug-style tests instead. JSHint
> still contains the "Do no evil" license, therefore is not an option for
> OpenStack, and has been disqualified.
>
>  *ESLint*
> ESLint's original mission was to be an OSI compliant replacement for
> JSHint, before the JSCS split. It wants to be a one-tool solution.
>
>  My personal opinion/recommendation: Based on the above, I recommend we
> use ESLint. My reasoning: It's one tool, it's extensible, it does both
> codestyle things and bug finding things, and it has a good license. JSHint
> is disqualified because of the license. JSCS is disqualified because it is
> too focused, and only partially useful on its own.
>
>  I understand that this will mean some work by the Horizon team to bring
> their code in line with a new parser, however I personally consider this to
> be a good thing. If the code is good to begin with, it shouldn't be that
> difficult.
>
>  This thread is not there to argue about which rules to enforce. Right
> now I just want to nail down a tool, so that we can (afterwards) have a
> discussion about which rules to activate.
>
>  Michael
>
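For what it's worth, the switching cost on the tooling side is small. A
sketch of what wiring ESLint in might look like; the rule names and values
here are purely illustrative placeholders, since the actual rule set is
explicitly the follow-up discussion:

```shell
# Hypothetical minimal .eslintrc (rule values are placeholders, not a proposal):
cat > .eslintrc <<'EOF'
{
  "env": { "browser": true },
  "rules": {
    "semi": 2,
    "no-unused-vars": 2
  }
}
EOF
# typical invocation once eslint is installed via npm:
#   npm install --save-dev eslint && ./node_modules/.bin/eslint static/
grep -c '"rules"' .eslintrc   # prints 1
rm -f .eslintrc
```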
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

