Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-17 Thread Sean Dague
On 06/17/2015 01:29 PM, Clint Byrum wrote:
 Excerpts from Sean Dague's message of 2015-06-16 10:16:34 -0700:
 On 06/16/2015 12:49 PM, Clint Byrum wrote:
 Excerpts from Sean Dague's message of 2015-06-16 06:22:23 -0700:
 FYI,

 One of the things that came out of the summit for Devstack plans going
 forward is to trim it back to something more opinionated and remove a
 bunch of low use optionality in the process.

 One of those branches to be trimmed is all the support for things beyond
 RabbitMQ in the rpc layer. RabbitMQ is what's used by 95%+ of our
 community, that's what the development environment should focus on.

 The patch to remove all of this is here -
 https://review.openstack.org/#/c/192154/. Expect this to merge by the
 end of the month. If people are interested in non RabbitMQ external
 plugins, now is the time to start writing them. The oslo.messaging team
 already moved their functional test installation for alternative
 platforms off of devstack, so this should impact a very small number of
 people.


 The recent spec we added to define a policy for oslo.messaging drivers is
 intended as a way to encourage that 5% who feels a different messaging
 layer is critical to participate upstream by adding devstack-gate jobs
 and committing developers to keep them stable. This change basically
 slams the door in their face and says good luck, we don't actually care
 about accommodating you. This will drive them more into the shadows,
 and push their forks even further away from the core of the project. If
 that's your intention, then we need to have a longer conversation where
 you explain to me why you feel that's a good thing.

 I believe it is not the responsibility of the devstack team to support
 every possible backend one could imagine and carry that technical debt
 in tree, confusing new users in the process that any of these things
 might actually work. I believe that if you feel that your spec assumed
 that was going to be the case, you made a large incorrect externalities
 assumption.

 
 I agree with you, and support your desire to move things into plugins.
 
 However, your timing is problematic and the lack of coordination with
 the ongoing effort to deprecate untested messaging drivers gracefully
 is really frustrating. We've been asking (on this list) zmq interested
 parties to add devstack-gate jobs and identify themselves as contacts
 to support these drivers. Meanwhile this change and the wording around
 it suggest that they're not welcome in devstack.

So there has clearly been some disconnect here. This patch was
originally going to come later in the cycle, but some back and forth on
proton fixes with Flavio made me realize we really needed to get this
direction out in front of more people (which is why it wasn't just a
patch, it was also an email heads up). So there wasn't surprise when it
was merged.

We built the external plugin mechanism in devstack to make it very easy
to extend out of tree, and make it easy to let people consume your out
of tree stuff. It's the only way that devstack works in the big tent
world, because there just is too much stuff for the team to support.

 Also, I take issue with the value assigned to dropping it. If that 95%
 is calculated as orgs_running_on_rabbit/orgs then it's telling a really
 lop-sided story. I'd rather see compute_nodes_on_rabbit/compute_nodes.

 I'd like to propose that we leave all of this in tree to match what is
 in oslo.messaging. I think devstack should follow oslo.messaging and
 deprecate the ones that oslo.messaging deprecates. Otherwise I feel like
 we're Vizzini cutting the rope just as The Dread Pirate 0mq is about to
 climb the last 10 meters to the top of the cliffs of insanity and battle
 RabbitMQ left handed. I know, inconceivable right?

 We have an external plugin mechanism for devstack. That's a viable
 option here. People will have to own and do that work, instead of
 expecting the small devstack team to do that for them. I believe I left
 enough of a hook in place that it's possible.

 
  So let's do some communication, and ask for the qpid and zmq people to
 step up, and help them move their code into an external plugin, and add
 documentation to help their users find it. The burden should shift, but
 it still rests with devstack until it _does_ shift.

We still need to set a clock, because in the past when we haven't, the
burden never shifts.

 That would also let them control the code relevant to their plugin,
 because there is no way that devstack was going to gate against other
 backends here, so we'd end up breaking them pretty often, and it would take
 a while to fix them in tree.
 
 I love that idea. That is not what the change does though. It deletes
 with nary a word about what users of this code should do until new
 external plugins appear.

Sure, let's get folks engaged now. I'm happy to help people debug this
code to get it working. The burden of the effort does need to be on the
folks with the feature they 

Re: [openstack-dev] [tc] adding magnum-ui to the openstack git namespace

2015-06-17 Thread Adrian Otto
TC,

I authorized the addition of the new repo in the ML thread, and have recorded 
my approval on each of the reviews.

Thanks,

Adrian Otto
—
OpenStack Magnum PTL

On Jun 17, 2015, at 10:57 AM, Steven Dake (stdake) 
std...@cisco.com wrote:

Hey TCers,

In this thread, the Magnum community made a commitment to tackle horizon 
support for our software:
http://lists.openstack.org/pipermail/openstack-dev/2015-June/066701.html

We would like to add magnum-ui to the list of repos in the openstack namespace. 
 The governance review change:
https://review.openstack.org/#/c/192804/

Andreas has requested a governance repo change here for the infrastructure 
change:
https://review.openstack.org/#/c/190998/

Chicken and egg ftw :)  Would appreciate fast action so the magnum-ui-core team 
can get rolling :)

Regards
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Randall Burt
A bit of a tangent, but it seems like the url would be to a public Swift 
system. I am unclear if a source git repo would be relevant but, assuming 
Swift would be optional, perhaps users could host catalog LP's in git or some 
other distribution mechanism and have a method by which solum could import them 
from the catalog into that deployment's object store.

On Jun 17, 2015, at 1:58 PM, Fox, Kevin M kevin@pnnl.gov
 wrote:

 This question may be off on a tangent, or may be related.
 
 As part of the application catalog project, (http://apps.openstack.org/) 
 we're trying to provide globally accessible resources that can be easily 
 consumed in OpenStack Clouds. How would these global Language Packs fit in? 
 Would the url record in the app catalog be required to point to an Internet 
 facing public Swift system then? Or, would it point to the source git repo 
 that Solum would use to generate the LP still?
 
 Thanks,
 Kevin
 
 From: Randall Burt [randall.b...@rackspace.com]
 Sent: Wednesday, June 17, 2015 11:38 AM
 To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Solum] Supporting swift downloads for operator languagepacks
 
 Yes. If an operator wants to make their LP publicly available outside of 
 Solum, I was thinking they could just make GET's on the container public. 
 That being said, I'm unsure if this is realistically do-able if you still 
 have to have an authenticated tenant to access the objects. Scratch that; 
 http://blog.fsquat.net/?p=40 may be helpful.
 
 On Jun 17, 2015, at 1:27 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:
 
 To be clear, Randall is referring to a swift container (directory).
 
 Murali has a good idea of attempting to use swift client first, as it has 
 performance optimizations that can speed up the process more than naive file 
  transfer tools. I did mention to him that wget does have a retry feature, 
 and that we could see about using curl instead to allow for chunked encoding 
 as additional optimizations.
 
 Randall, are you suggesting that we could use swift client for both private 
 and public LP uses? That sounds like a good suggestion to me.
 
 Adrian
 
 On Jun 17, 2015, at 11:10 AM, Randall Burt randall.b...@rackspace.com 
 wrote:
 
 Can't an operator make the target container public therefore removing the 
 need for multiple access strategies?
 
  Original message 
 From: Murali Allada
 Date:06/17/2015 11:41 AM (GMT-06:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Solum] Supporting swift downloads for operator 
 languagepacks
 
 Hello Solum Developers,
 
 When we were designing the operator languagepack feature for Solum, we 
 wanted to make use of public urls to download operator LPs, such as those 
 available for CDN backed swift containers we have at Rackspace, or any 
 publicly accessible url. This would mean that when a user chooses to build 
  applications on top of a languagepack provided by the operator, we use a 
 url to 'wget' the LP image.
 
 Recently, we have started noticing a number of failures because of 
 corrupted docker images downloaded using 'wget'. The docker images work 
 fine when we download them manually with a swift client and use them. The 
  corruption seems to be happening when we try to download a large image using 
 'wget' and there are dropped packets or intermittent network issues.
 
 My thinking is to start using the swift client to download operator LPs by 
 default instead of wget. The swift client already implements retry logic, 
 downloading large images in chunks, etc. This means we would not get the 
 niceties of using publicly accessible urls. However, the feature will be 
 more reliable and robust.
 
 The implementation would be as follows:
 • ​We'll use the existing service tenant configuration available in the 
 solum config file to authenticate and store operator languagepacks using 
 the swift client. We were using a different tenant to build and host LPs, 
  but now that we require the tenant's credentials in the config file, it's 
 best to reuse the existing service tenant creds. Note: If we don't, we'll 
 have 3 separate tenants to maintain.
 • ​Service tenant
 • Operator languagepack tenant
 • Global admin tenant
 • I'll keep the option to download the operator languagepacks from a 
 publicly available url. I'll allow operators to choose which method they 
 want to use by changing a setting in the solum config file.
 FYI: In my tests, I've noticed that downloading an image using the swift 
 client is twice as fast as downloading the same image using 'wget' from a 
 CDN url.
 
 Thanks,
 Murali
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 

[openstack-dev] [Solum] Update on current status

2015-06-17 Thread Devdatta Kulkarni
Hi team,


With the recent application from our team to be included in the big tent
(https://review.openstack.org/190949), I wanted to give a quick update on the
state of our project.

Before I do that, here is a quick overview of the current capabilities of Solum
and a very high-level view of its inner workings.


All along, one of our goals has been to make it possible for application
developers to easily deploy their applications to OpenStack. Essentially,
provide the ability to go from source code to running instance(s) of an
application on OpenStack without having to be greatly familiar with the
OpenStack services that may be involved in deploying their applications.

In the last two cycles we have made considerable progress towards that goal.
From an application developer's point of view, it is now possible to deploy
applications, starting from the source code, to OpenStack clouds using Solum.

At a high-level, this is achieved in three steps:

1) Build a languagepack (LP)
2) Register the application by providing information about the source
   repository, the languagepack to use, and so on
3) Deploy the application

A screencast demonstrating these steps is available at the following link:
https://wiki.openstack.org/wiki/Solum/solum_kilo_demo


Application deployment in Solum happens as follows.

Solum needs a languagepack to build an application. A languagepack is
essentially a Docker container with the specified libraries installed on it.
Starting from the specified languagepack, Solum builds an application-specific
Docker container (called a DU) by adding the application's source code to it
(DU = LP + application source code). Solum then persists this DU to the
configured storage backend. We currently support Swift, Glance, and Docker
registry. Solum then calls Heat to deploy the DU. Depending on the configured
storage backend, the deployment steps differ slightly. For instance, if the
configured storage is Swift, we use tempURLs with Heat user-data to run the DU
on a VM, whereas if the backend is Glance we use the DU's Glance imageId
directly within the Heat template.
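
As a rough illustration of the Swift path, a tempURL for a DU can be generated
along the following lines with python-swiftclient (the account path, key, and
object names are placeholders; this is a sketch, not Solum's actual code):

    from swiftclient.utils import generate_temp_url

    # The temp-url key must already be set on the Swift account
    # (X-Account-Meta-Temp-URL-Key); the path is /v1/<account>/<container>/<object>.
    path = '/v1/AUTH_solum/solum-dus/my-app-du.tar.gz'
    signed_path = generate_temp_url(path, 3600, 'account-temp-url-key', 'GET')
    temp_url = 'http://swift.example.com' + signed_path

    # temp_url can then be embedded in Heat user-data so the VM can fetch the
    # DU without needing any credentials.
    print(temp_url)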

Languagepacks can be pre-installed by the operator in their Solum installation,
or application developers can create and register new languagepacks per their
applications' needs.

Apart from building and deploying an application, Solum supports running
application-specific tests. Solum also integrates with external services
(currently GitHub) and supports webhooks to trigger application deployments. It
is also possible to consume services, such as databases, via the parameter
injection feature.


So that is where we currently stand.


Several other features still remain:

- Non-destructive application updates (application updates without changing the
  application URL).
- Scale up/scale down of application DUs.
- Service add-on framework
- Support for environments (Dev, Test, Staging)


Here is a roadmap with these and other suggested features for Liberty:

https://wiki.openstack.org/wiki/Solum/HighLevelRoadmap


Please feel free to add to that list, or reply here with features that you are 
interested in seeing in Solum.

As a reminder, our weekly IRC meeting is on Tuesdays at 2100 UTC in 
openstack-meeting-alt.

Hope to see you there.


Regards,

Devdatta



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] Call for testing: 2014.1.5 (last Icehouse point release) candidate tarballs

2015-06-17 Thread Alan Pevec
Hi all,

We are scheduled to publish 2014.1.5, last Icehouse point release,
on Thurs June 18th for Ceilometer, Cinder, Glance, Heat, Horizon,
Keystone, Neutron, Nova and Trove.

The list of issues fixed can be seen here:

  https://launchpad.net/ceilometer/+milestone/2014.1.5
  https://launchpad.net/cinder/+milestone/2014.1.5
  https://launchpad.net/glance/+milestone/2014.1.5
  https://launchpad.net/heat/+milestone/2014.1.5
  https://launchpad.net/horizon/+milestone/2014.1.5
  https://launchpad.net/keystone/+milestone/2014.1.5
  https://launchpad.net/neutron/+milestone/2014.1.5
  https://launchpad.net/nova/+milestone/2014.1.5
  https://launchpad.net/trove/+milestone/2014.1.5

We'd appreciate anyone who could test the candidate 2014.1.5 tarballs:

  http://tarballs.openstack.org/ceilometer/ceilometer-stable-icehouse.tar.gz
  http://tarballs.openstack.org/cinder/cinder-stable-icehouse.tar.gz
  http://tarballs.openstack.org/glance/glance-stable-icehouse.tar.gz
  http://tarballs.openstack.org/heat/heat-stable-icehouse.tar.gz
  http://tarballs.openstack.org/horizon/horizon-stable-icehouse.tar.gz
  http://tarballs.openstack.org/keystone/keystone-stable-icehouse.tar.gz
  http://tarballs.openstack.org/neutron/neutron-stable-icehouse.tar.gz
  http://tarballs.openstack.org/nova/nova-stable-icehouse.tar.gz
  http://tarballs.openstack.org/trove/trove-stable-icehouse.tar.gz

Thanks,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Fox, Kevin M
This question may be off on a tangent, or may be related.

As part of the application catalog project, (http://apps.openstack.org/) we're 
trying to provide globally accessible resources that can be easily consumed in 
OpenStack Clouds. How would these global Language Packs fit in? Would the url 
record in the app catalog be required to point to an Internet facing public 
Swift system then? Or, would it point to the source git repo that Solum would 
use to generate the LP still?

Thanks,
Kevin

From: Randall Burt [randall.b...@rackspace.com]
Sent: Wednesday, June 17, 2015 11:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Supporting swift downloads for operator languagepacks

Yes. If an operator wants to make their LP publicly available outside of Solum, 
I was thinking they could just make GET's on the container public. That being 
said, I'm unsure if this is realistically do-able if you still have to have an 
authenticated tenant to access the objects. Scratch that; 
http://blog.fsquat.net/?p=40 may be helpful.

On Jun 17, 2015, at 1:27 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 To be clear, Randall is referring to a swift container (directory).

 Murali has a good idea of attempting to use swift client first, as it has 
 performance optimizations that can speed up the process more than naive file 
 transfer tools. I did mention to him that wget does have a retry feature, 
 and that we could see about using curl instead to allow for chunked encoding 
 as additional optimizations.

 Randall, are you suggesting that we could use swift client for both private 
 and public LP uses? That sounds like a good suggestion to me.

 Adrian

 On Jun 17, 2015, at 11:10 AM, Randall Burt randall.b...@rackspace.com 
 wrote:

 Can't an operator make the target container public therefore removing the 
 need for multiple access strategies?

  Original message 
 From: Murali Allada
 Date:06/17/2015 11:41 AM (GMT-06:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Solum] Supporting swift downloads for operator 
 languagepacks

 Hello Solum Developers,

 When we were designing the operator languagepack feature for Solum, we 
 wanted to make use of public urls to download operator LPs, such as those 
 available for CDN backed swift containers we have at Rackspace, or any 
 publicly accessible url. This would mean that when a user chooses to build 
 applications on top of a languagepack provided by the operator, we use a 
 url to 'wget' the LP image.

 Recently, we have started noticing a number of failures because of corrupted 
 docker images downloaded using 'wget'. The docker images work fine when we 
 download them manually with a swift client and use them. The corruption seems 
 to be happening when we try to download a large image using 'wget' and there 
 are dropped packets or intermittent network issues.

 My thinking is to start using the swift client to download operator LPs by 
 default instead of wget. The swift client already implements retry logic, 
 downloading large images in chunks, etc. This means we would not get the 
 niceties of using publicly accessible urls. However, the feature will be 
 more reliable and robust.

 The implementation would be as follows:
  • ​We'll use the existing service tenant configuration available in the 
 solum config file to authenticate and store operator languagepacks using the 
 swift client. We were using a different tenant to build and host LPs, but 
 now that we require the tenant's credentials in the config file, it's best to 
 reuse the existing service tenant creds. Note: If we don't, we'll have 3 
 separate tenants to maintain.
  • ​Service tenant
  • Operator languagepack tenant
  • Global admin tenant
  • I'll keep the option to download the operator languagepacks from a 
 publicly available url. I'll allow operators to choose which method they 
 want to use by changing a setting in the solum config file.
 FYI: In my tests, I've noticed that downloading an image using the swift 
 client is twice as fast as downloading the same image using 'wget' from a 
 CDN url.

 Thanks,
 Murali

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development 

[openstack-dev] [Solum][Mistral] Help with a patch

2015-06-17 Thread Devdatta Kulkarni
Hi Mistral team,


Solum's devstack gate is running into an issue probably due to the change
in location of mistral's repositories.

Here is the bug:
https://bugs.launchpad.net/mistral/+bug/1466149

There is a patch which tries to fix the issue:
https://review.openstack.org/#/c/192754/1

If we can get your help with reviewing it and moving it forward, it would be
greatly appreciated.


Thanks,

Devdatta
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-17 Thread Dan Smith
 Every change like this makes it harder for newcomers to participate.
 Frankly, it makes it harder for everyone because it means there are
 more moving parts, but in this specific case many of the people
 involved in these messaging drivers are relatively new, so I point
 that out.

I dunno about this. Having devstack migrate away from being an
opinionated tool for getting a test environment up that was eminently
readable to what it is today hasn't really helped anyone, IMHO. Having
some clear plug points such that we _can_ plug in the bits we need for
testing without having every possible option be embedded in the core
seems like goodness to me. I'd like to get back to the days where people
actually knew what was going on in devstack. That helps participation too.

I think having devstack deploy what the 90% (or, being honest, 99%) are
running, with the ability to plug in the 1% bits when necessary is much
more in line with what the goal of the tool is.

 The already difficult task of setting up sufficient
 functional tests has now turned into figure out devstack, too.

Yep, my point exactly. I think having clear points where you can setup
your thing and get it plugged in is much easier.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday June 18th at 17:00 UTC

2015-06-17 Thread David Kranz

Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, June 18th at 17:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones, tomorrow's
meeting will be at:

13:00 EDT
02:00 JST
02:30 ACST
19:00 CEST
12:00 CDT
10:00 PDT

-David Kranz


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Supporting swift downloads for operator languagepacks

2015-06-17 Thread Adrian Otto
To be clear, Randall is referring to a swift container (directory).

Murali has a good idea of attempting to use swift client first, as it has 
performance optimizations that can speed up the process more than naive file 
transfer tools. I did mention to him that wget does have a retry feature, and 
that we could see about using curl instead to allow for chunked encoding as 
additional optimizations.
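
For reference, a rough sketch of the kind of swift client download being
discussed, using python-swiftclient (the credentials, container, and object
names below are placeholders and the retry count is illustrative; this is not
Solum's actual code):

    from swiftclient import client as swift_client

    # The Connection object handles auth and retries failed requests itself.
    conn = swift_client.Connection(
        authurl='http://keystone.example.com:5000/v2.0',
        user='service_user', key='service_password',
        tenant_name='service', auth_version='2', retries=5)

    # resp_chunk_size makes get_object stream the image in chunks rather
    # than holding the whole languagepack in memory.
    headers, body = conn.get_object('operator-lps', 'python-lp.tar.gz',
                                    resp_chunk_size=65536)
    with open('/tmp/python-lp.tar.gz', 'wb') as f:
        for chunk in body:
            f.write(chunk)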

Randall, are you suggesting that we could use swift client for both private and 
public LP uses? That sounds like a good suggestion to me.

Adrian

On Jun 17, 2015, at 11:10 AM, Randall Burt 
randall.b...@rackspace.com wrote:

Can't an operator make the target container public therefore removing the need 
for multiple access strategies?

 Original message 
From: Murali Allada
Date:06/17/2015 11:41 AM (GMT-06:00)
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Solum] Supporting swift downloads for operator 
languagepacks

Hello Solum Developers,

When we were designing the operator languagepack feature for Solum, we wanted 
to make use of public urls to download operator LPs, such as those available 
for CDN backed swift containers we have at Rackspace, or any publicly 
accessible url. This would mean that when a user chooses to build applications 
on top of a languagepack provided by the operator, we use a url to 'wget' the 
LP image.

Recently, we have started noticing a number of failures because of corrupted 
docker images downloaded using 'wget'. The docker images work fine when we 
download them manually with a swift client and use them. The corruption seems to 
be happening when we try to download a large image using 'wget' and there are 
dropped packets or intermittent network issues.

My thinking is to start using the swift client to download operator LPs by 
default instead of wget. The swift client already implements retry logic, 
downloading large images in chunks, etc. This means we would not get the 
niceties of using publicly accessible urls. However, the feature will be more 
reliable and robust.

The implementation would be as follows:

  *   ​We'll use the existing service tenant configuration available in the 
solum config file to authenticate and store operator languagepacks using the 
swift client. We were using a different tenant to build and host LPs, but now 
that we require the tenant's credentials in the config file, it's best to reuse 
the existing service tenant creds. Note: If we don't, we'll have 3 separate 
tenants to maintain.
 *   ​Service tenant
 *   Operator languagepack tenant
 *   Global admin tenant
  *   I'll keep the option to download the operator languagepacks from a 
publicly available url. I'll allow operators to choose which method they want 
to use by changing a setting in the solum config file.

FYI: In my tests, I've noticed that downloading an image using the swift client 
is twice as fast as downloading the same image using 'wget' from a CDN url.

Thanks,
Murali

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-17 Thread Kyle Mestery
On Wed, Jun 17, 2015 at 2:44 PM, Sean Dague s...@dague.net wrote:

 On 06/17/2015 03:08 PM, Doug Hellmann wrote:
  Excerpts from Sean Dague's message of 2015-06-17 14:07:35 -0400:
  On 06/17/2015 01:29 PM, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2015-06-16 10:16:34 -0700:
  On 06/16/2015 12:49 PM, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2015-06-16 06:22:23 -0700:
  FYI,
 
  One of the things that came out of the summit for Devstack plans
 going
  forward is to trim it back to something more opinionated and remove
 a
  bunch of low use optionality in the process.
 
  One of those branches to be trimmed is all the support for things
 beyond
  RabbitMQ in the rpc layer. RabbitMQ is what's used by 95%+ of our
  community, that's what the development environment should focus on.
 
  The patch to remove all of this is here -
  https://review.openstack.org/#/c/192154/. Expect this to merge by
 the
  end of the month. If people are interested in non RabbitMQ external
  plugins, now is the time to start writing them. The oslo.messaging
 team
  already moved their functional test installation for alternative
  platforms off of devstack, so this should impact a very small
 number of
  people.
 
 
  The recent spec we added to define a policy for oslo.messaging
 drivers is
  intended as a way to encourage that 5% who feels a different
 messaging
  layer is critical to participate upstream by adding devstack-gate
 jobs
  and committing developers to keep them stable. This change basically
  slams the door in their face and says good luck, we don't actually
 care
  about accommodating you. This will drive them more into the shadows,
  and push their forks even further away from the core of the project.
 If
  that's your intention, then we need to have a longer conversation
 where
  you explain to me why you feel that's a good thing.
 
  I believe it is not the responsibility of the devstack team to support
  every possible backend one could imagine and carry that technical debt
  in tree, confusing new users in the process that any of these things
  might actually work. I believe that if you feel that your spec assumed
  that was going to be the case, you made a large incorrect
 externalities
  assumption.
 
 
  I agree with you, and support your desire to move things into plugins.
 
  However, your timing is problematic and the lack of coordination with
  the ongoing effort to deprecate untested messaging drivers gracefully
  is really frustrating. We've been asking (on this list) zmq interested
  parties to add devstack-gate jobs and identify themselves as contacts
  to support these drivers. Meanwhile this change and the wording around
  it suggest that they're not welcome in devstack.
 
  So there has clearly been some disconnect here. This patch was
  originally going to come later in the cycle, but some back and forth on
  proton fixes with Flavio made me realize we really needed to get this
  direction out in front of more people (which is why it wasn't just a
  patch, it was also an email heads up). So there wasn't surprise when it
  was merged.
 
  We built the external plugin mechanism in devstack to make it very easy
  to extend out of tree, and make it easy to let people consume your out
  of tree stuff. It's the only way that devstack works in the big tent
  world, because there just is too much stuff for the team to support.
 
  Every change like this makes it harder for newcomers to participate.
  Frankly, it makes it harder for everyone because it means there are
  more moving parts, but in this specific case many of the people
  involved in these messaging drivers are relatively new, so I point
  that out. The already difficult task of setting up sufficient
  functional tests has now turned into figure out devstack, too.
  The long-term Oslo team members can't do all of this work, any more
  than the devstack team can, but things were at least working in
  what we thought was a stable way so we could try to provide guidance.
 
 
  Also, I take issue with the value assigned to dropping it. If that
 95%
  is calculated as orgs_running_on_rabbit/orgs then it's telling a
 really
  lop-sided story. I'd rather see
 compute_nodes_on_rabbit/compute_nodes.
 
  I'd like to propose that we leave all of this in tree to match what
 is
  in oslo.messaging. I think devstack should follow oslo.messaging and
  deprecate the ones that oslo.messaging deprecates. Otherwise I feel
 like
  we're Vizzini cutting the rope just as The Dread Pirate 0mq is about
 to
  climb the last 10 meters to the top of the cliffs of insanity and
 battle
  RabbitMQ left handed. I know, inconceivable right?
 
  We have an external plugin mechanism for devstack. That's a viable
  option here. People will have to own and do that work, instead of
  expecting the small devstack team to do that for them. I believe I
 left
  enough of a hook in place that it's possible.
 
 
  So let's do some communication, and 

Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Murali Allada
Kevin/Keith,

Yes, we would like to use the catalog for globally available artifacts, such as
operator languagepacks. More specifically, the catalog would be a great place
to store metadata about publicly available artifacts to make them searchable
and easy to discover.

The catalog would point to the 'built' artifact, not the 'unbuilt' dockerfile
in github. The point of languagepacks is to reduce the amount of time the solum
CI pipeline spends building the user's application container. We shouldn't
build the languagepack from scratch each time.

-Murali

From: Keith Bray keith.b...@rackspace.com
Sent: Wednesday, June 17, 2015 2:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads 
for operator languagepacks

Hi Kevin,

We absolutely envision languagepack artifacts being made available via
apps.openstack.org (ignoring for a moment that the name may not be a
perfect fit, particularly for things like vanilla glance images ... Is it
an OS or an App? ...  catalog.openstack.org might be more fitting).
Anyway, there are two stages for language packs, unbuilt and built.  If
it's in an unbuilt state, then it's really a Dockerfile + any accessory
files that the Dockerfile references.   If it's in a built state, then
it's a Docker image (same as what is found on Dockerhub, I believe).  I
think there will need to be more discussion to know what users prefer,
built vs. unbuilt, or both options (where unbuilt is often a collection of
files, best managed in a repo like github, vs. built, which are best
provided as direct links to a single source like Dockerhub).

-Keith

On 6/17/15 1:58 PM, Fox, Kevin M kevin@pnnl.gov wrote:

This question may be off on a tangent, or may be related.

As part of the application catalog project, (http://apps.openstack.org/)
we're trying to provide globally accessible resources that can be easily
consumed in OpenStack Clouds. How would these global Language Packs fit
in? Would the url record in the app catalog be required to point to an
Internet facing public Swift system then? Or, would it point to the
source git repo that Solum would use to generate the LP still?

Thanks,
Kevin

From: Randall Burt [randall.b...@rackspace.com]
Sent: Wednesday, June 17, 2015 11:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Supporting swift downloads for operator languagepacks

Yes. If an operator wants to make their LP publicly available outside of
Solum, I was thinking they could just make GET's on the container public.
That being said, I'm unsure if this is realistically do-able if you still
have to have an authenticated tenant to access the objects. Scratch that;
http://blog.fsquat.net/?p=40 may be helpful.

On Jun 17, 2015, at 1:27 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 To be clear, Randall is referring to a swift container (directory).

 Murali has a good idea of attempting to use swift client first, as it
has performance optimizations that can speed up the process more than
naive file transfer tools. I did mention to him that wget does have a
retry feature, and that we could see about using curl instead to allow
for chunked encoding as additional optimizations.

 Randall, are you suggesting that we could use swift client for both
private and public LP uses? That sounds like a good suggestion to me.

 Adrian

 On Jun 17, 2015, at 11:10 AM, Randall Burt
randall.b...@rackspace.com wrote:

 Can't an operator make the target container public therefore removing
the need for multiple access strategies?

  Original message 
 From: Murali Allada
 Date:06/17/2015 11:41 AM (GMT-06:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Solum] Supporting swift downloads for
operator languagepacks

 Hello Solum Developers,

 When we were designing the operator languagepack feature for Solum, we
wanted to make use of public urls to download operator LPs, such as
those available for CDN backed swift containers we have at Rackspace,
or any publicly accessible url. This would mean that when a user
chooses to build applications on top of a languagepack provided by
the operator, we use a url to 'wget' the LP image.

 Recently, we have started noticing a number of failures because of
corrupted docker images downloaded using 'wget'. The docker images work
fine when we download them manually with a swift client and use them.
The corruption seems to be happening when we try to download a large
image using 'wget' and there are dropped packets or intermittent
network issues.

 My thinking is to start using the swift client to download operator
LPs by default instead of wget. The swift client already implements
retry logic, downloading large images in chunks, etc. This means we
would not get the 

[openstack-dev] [tc] adding magnum-ui to the openstack git namespace

2015-06-17 Thread Steven Dake (stdake)
Hey TCers,

In this thread, the Magnum community made a commitment to tackle horizon 
support for our software:
http://lists.openstack.org/pipermail/openstack-dev/2015-June/066701.html

We would like to add magnum-ui to the list of repos in the openstack namespace. 
 The governance review change:
https://review.openstack.org/#/c/192804/

Andreas has requested a governance repo change here for the infrastructure 
change:
https://review.openstack.org/#/c/190998/

Chicken and egg ftw :)  Would appreciate fast action so the magnum-ui-core team 
can get rolling :)

Regards
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-17 Thread Sean Dague
On 06/17/2015 03:08 PM, Doug Hellmann wrote:
 Excerpts from Sean Dague's message of 2015-06-17 14:07:35 -0400:
 On 06/17/2015 01:29 PM, Clint Byrum wrote:
 Excerpts from Sean Dague's message of 2015-06-16 10:16:34 -0700:
 On 06/16/2015 12:49 PM, Clint Byrum wrote:
 Excerpts from Sean Dague's message of 2015-06-16 06:22:23 -0700:
 FYI,

 One of the things that came out of the summit for Devstack plans going
 forward is to trim it back to something more opinionated and remove a
 bunch of low use optionality in the process.

 One of those branches to be trimmed is all the support for things beyond
 RabbitMQ in the rpc layer. RabbitMQ is what's used by 95%+ of our
 community, that's what the development environment should focus on.

 The patch to remove all of this is here -
 https://review.openstack.org/#/c/192154/. Expect this to merge by the
 end of the month. If people are interested in non RabbitMQ external
 plugins, now is the time to start writing them. The oslo.messaging team
 already moved their functional test installation for alternative
 platforms off of devstack, so this should impact a very small number of
 people.


 The recent spec we added to define a policy for oslo.messaging drivers is
 intended as a way to encourage that 5% who feels a different messaging
 layer is critical to participate upstream by adding devstack-gate jobs
 and committing developers to keep them stable. This change basically
 slams the door in their face and says good luck, we don't actually care
 about accommodating you. This will drive them more into the shadows,
 and push their forks even further away from the core of the project. If
 that's your intention, then we need to have a longer conversation where
 you explain to me why you feel that's a good thing.

 I believe it is not the responsibility of the devstack team to support
 every possible backend one could imagine and carry that technical debt
 in tree, confusing new users in the process that any of these things
 might actually work. I believe that if you feel that your spec assumed
 that was going to be the case, you made a large incorrect externalities
 assumption.


 I agree with you, and support your desire to move things into plugins.

 However, your timing is problematic and the lack of coordination with
 the ongoing effort to deprecate untested messaging drivers gracefully
 is really frustrating. We've been asking (on this list) zmq interested
 parties to add devstack-gate jobs and identify themselves as contacts
 to support these drivers. Meanwhile this change and the wording around
 it suggest that they're not welcome in devstack.

 So there has clearly been some disconnect here. This patch was
 originally going to come later in the cycle, but some back and forth on
 proton fixes with Flavio made me realize we really needed to get this
 direction out in front of more people (which is why it wasn't just a
 patch, it was also an email heads up). So there wasn't surprise when it
 was merged.

 We built the external plugin mechanism in devstack to make it very easy
 to extend out of tree, and make it easy to let people consume your out
 of tree stuff. It's the only way that devstack works in the big tent
 world, because there just is too much stuff for the team to support.
 
 Every change like this makes it harder for newcomers to participate.
 Frankly, it makes it harder for everyone because it means there are
 more moving parts, but in this specific case many of the people
 involved in these messaging drivers are relatively new, so I point
 that out. The already difficult task of setting up sufficient
 functional tests has now turned into figure out devstack, too.
 The long-term Oslo team members can't do all of this work, any more
 than the devstack team can, but things were at least working in
 what we thought was a stable way so we could try to provide guidance.
 

 Also, I take issue with the value assigned to dropping it. If that 95%
 is calculated as orgs_running_on_rabbit/orgs then it's telling a really
 lop-sided story. I'd rather see compute_nodes_on_rabbit/compute_nodes.

 I'd like to propose that we leave all of this in tree to match what is
 in oslo.messaging. I think devstack should follow oslo.messaging and
 deprecate the ones that oslo.messaging deprecates. Otherwise I feel like
 we're Vizzini cutting the rope just as The Dread Pirate 0mq is about to
 climb the last 10 meters to the top of the cliffs of insanity and battle
 RabbitMQ left handed. I know, inconceivable right?

 We have an external plugin mechanism for devstack. That's a viable
 option here. People will have to own and do that work, instead of
 expecting the small devstack team to do that for them. I believe I left
 enough of a hook in place that it's possible.


  So let's do some communication, and ask for the qpid and zmq people to
 step up, and help them move their code into an external plugin, and add
 documentation to help their users find it. The burden should shift, but
 

Re: [openstack-dev] [Solum] Supporting swift downloads for operator languagepacks

2015-06-17 Thread Randall Burt
Can't an operator make the target container public therefore removing the need 
for multiple access strategies?

 Original message 
From: Murali Allada
Date:06/17/2015 11:41 AM (GMT-06:00)
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Solum] Supporting swift downloads for operator 
languagepacks

Hello Solum Developers,

When we were designing the operator languagepack feature for Solum, we wanted 
to make use of public urls to download operator LPs, such as those available 
for CDN backed swift containers we have at Rackspace, or any publicly 
accessible url. This would mean that when a user chooses to build applications 
on top of a languagepack provided by the operator, we use a url to 'wget' the 
LP image.

Recently, we have started noticing a number of failures because of corrupted 
docker images downloaded using 'wget'. The docker images work fine when we 
download them manually with a swift client and use them. The corruption seems to 
be happening when we try to download a large image using 'wget' and there are 
dropped packets or intermittent network issues.

My thinking is to start using the swift client to download operator LPs by 
default instead of wget. The swift client already implements retry logic, 
downloading large images in chunks, etc. This means we would not get the 
niceties of using publicly accessible urls. However, the feature will be more 
reliable and robust.

The implementation would be as follows:

  *   We'll use the existing service tenant configuration available in the 
solum config file to authenticate and store operator languagepacks using the 
swift client. We were using a different tenant to build and host LPs, but now 
that we require the tenant's credentials in the config file, it's best to reuse 
the existing service tenant creds. Note: If we don't, we'll have 3 separate 
tenants to maintain.
 *   Service tenant
 *   Operator languagepack tenant
 *   Global admin tenant
  *   I'll keep the option to download the operator languagepacks from a 
publicly available url. I'll allow operators to choose which method they want 
to use by changing a setting in the solum config file.

FYI: In my tests, I've noticed that downloading an image using the swift client 
is twice as fast as downloading the same image using 'wget' from a CDN url.

Thanks,
Murali

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] apache wsgi application support

2015-06-17 Thread Sean Dague
On 06/16/2015 05:25 PM, Chris Dent wrote:
 On Tue, 16 Jun 2015, Sean Dague wrote:
 
 I was just looking at the patches that put Nova under apache wsgi for
 the API, and there are a few things that I think are going in the wrong
 direction. Largely I think because they were copied from the
 lib/keystone code, which we've learned is kind of the wrong direction.
 
 Yes, that's certainly what I've done the few times I've done it.
 devstack is deeply encouraging of cargo culting for reasons that are
 not entirely clear.

Yeh, hence why I decided to put the brakes on a little here and get this
on the list.

 The first is the fact that a big reason for putting {SERVICES} under
 apache wsgi is we aren't running on a ton of weird unregistered ports.
 We're running on 80 and 443 (when appropriate). In order to do this we
 really need to namespace the API urls. Which means that service catalog
 needs to be updated appropriately.
 
 So:
 
 a) I'm very glad to hear of this. I've been bristling about the weird
ports thing for the last year.
 
 b) You make it sound like there's been a plan in place to not use
those ports for quite some time and we'd get to that when we all
had some spare time. Where do I go to keep abreast of such plans?

Unfortunately, this is one of those in the ether kinds of plans. It's
been talked about for so long, but it never really got written down.
Hopefully this can be driven into the service catalog standardization
spec (or tag along somewhere close).

Or if nothing else, we're documenting it now on the mailing list as
permanent storage.
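
To make the namespacing idea concrete, here is the kind of change involved
(the '/compute' path prefix below is purely illustrative, not a settled
convention):

    # Today: each service sits on its own odd registered port.
    catalog_entry_today = {
        'type': 'compute',
        'endpoints': [{'publicURL': 'http://192.0.2.10:8774/v2.1'}],
    }

    # With everything behind apache on 80/443, endpoints get namespaced
    # by path instead of by port.
    catalog_entry_namespaced = {
        'type': 'compute',
        'endpoints': [{'publicURL': 'http://192.0.2.10/compute/v2.1'}],
    }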

 I also think this -
 https://github.com/openstack-dev/devstack/blob/master/lib/nova#L266-L268
 is completely wrong.

 The Apache configs should instead specify access rules such that the
 installed console entry point of nova-api can be used in place as the
 WSGIScript.
 
 I'm not able to parse this paragraph in any actionable way. The lines
 you reference are one of several ways of telling mod wsgi where the
 virtualenv is, which has to happen in some fashion if you are using
 a virtualenv.
 
 This doesn't appear to have anything to do with locating the module
 that contains the WSGI app, so I'm missing the connection. Can you
 explain please?
 
 (Basically I'm keen on getting gnocchi and ceilometer wsgi servers
 in devstack aligned with whatever the end game is, so knowing the plan
 makes it a bit easier.)

Gah, the problem of linking to 'master' with line numbers. The three
lines I cared about were:

# copy proxy vhost and wsgi helper files
sudo cp $NOVA_DIR/nova/wsgi/nova-api.py $NOVA_WSGI_DIR/nova-api
sudo cp $NOVA_DIR/nova/wsgi/nova-ec2-api.py $NOVA_WSGI_DIR/nova-ec2-api

I don't think that we should be copying py files around to other
directories outside of normal pip install process. We should just have
mod_wsgi reference a thing that is installed in /usr/{local}/bin or
/usr/share via the python install process.

 This should also make lines like -
 https://github.com/openstack-dev/devstack/blob/master/lib/nova#L272 and
 L274 unneeded. (The WSGI script will be in a known place.) It will also
 make upgrades much more friendly.
 
 It sounds like maybe you are saying that the api console script and
 the module containing the wsgi 'application' variable ought to be the
 same thing. I don't reckon that's a great idea as the api console
 scripts will want to import a bunch of stuff that the wsgi application
 will not.
 
 Or I may be completely misreading you. It's been a long day, etc.

They don't need to be actually the same thing. They could be different
scripts, but they should be scripts that install via the normal pip
install process to a place, and we reference them by known name.
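
As a minimal sketch of that split (module and path names below are assumptions,
not Nova's actual code): the console script stays a setuptools entry point,
while a separate installed module exposes the WSGI callable that mod_wsgi
points at.

    # e.g. nova/wsgi/nova_api.py (hypothetical name), installed by pip like
    # any other module; WSGIScriptAlias would reference the installed copy.
    def application(environ, start_response):
        # A real deployment would build the app from the service's paste
        # pipeline; this stub only shows the interface mod_wsgi expects.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'nova-api WSGI placeholder\n']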

 I think that we need to get these things sorted before any further
 progression here. Volunteers welcomed to help get us there.
 
 Find me, happy to help. The sooner we can kill wacky port weirdness
 the better.

Agreed.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Supporting swift downloads for operator languagepacks

2015-06-17 Thread Randall Burt
Yes. If an operator wants to make their LP publicly available outside of Solum, 
I was thinking they could just make GET's on the container public. That being 
said, I'm unsure if this is realistically do-able if you still have to have an 
authenticated tenant to access the objects. Scratch that; 
http://blog.fsquat.net/?p=40 may be helpful.
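
For what it's worth, making the container world-readable is a one-liner with
python-swiftclient (container name and credentials below are placeholders);
with that read ACL set, unauthenticated GETs on the objects should work:

    from swiftclient import client as swift_client

    conn = swift_client.Connection(
        authurl='http://keystone.example.com:5000/v2.0',
        user='operator', key='secret',
        tenant_name='service', auth_version='2')

    # '.r:*' grants read access to any referrer, i.e. anonymous GETs;
    # adding ',.rlistings' would also allow anonymous container listings.
    conn.post_container('operator-lps', headers={'X-Container-Read': '.r:*'})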

On Jun 17, 2015, at 1:27 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 To be clear, Randall is referring to a swift container (directory).
 
 Murali has a good idea of attempting to use swift client first, as it has 
 performance optimizations that can speed up the process more than naive file 
  transfer tools. I did mention to him that wget does have a retry feature, 
 and that we could see about using curl instead to allow for chunked encoding 
 as additional optimizations. 
 
 Randall, are you suggesting that we could use swift client for both private 
 and public LP uses? That sounds like a good suggestion to me.
 
 Adrian
 
 On Jun 17, 2015, at 11:10 AM, Randall Burt randall.b...@rackspace.com 
 wrote:
 
 Can't an operator make the target container public therefore removing the 
 need for multiple access strategies? 
 
  Original message 
 From: Murali Allada 
 Date:06/17/2015 11:41 AM (GMT-06:00) 
 To: OpenStack Development Mailing List (not for usage questions) 
 Subject: [openstack-dev] [Solum] Supporting swift downloads for operator 
 languagepacks
 
 Hello Solum Developers, 
  
 When we were designing the operator languagepack feature for Solum, we 
 wanted to make use of public urls to download operator LPs, such as those 
 available for CDN backed swift containers we have at Rackspace, or any 
 publicly accessible url. This would mean that when a user chooses to build 
  applications on top of a languagepack provided by the operator, we use a 
 url to 'wget' the LP image.
 
 Recently, we have started noticing a number of failures because of corrupted 
 docker images downloaded using 'wget'. The docker images work fine when we 
  download them manually with a swift client and use them. The corruption seems 
 to be happening when we try to download a large image using 'wget' and there 
 are dropped packets or intermittent network issues.
 
 My thinking is to start using the swift client to download operator LPs by 
 default instead of wget. The swift client already implements retry logic, 
 downloading large images in chunks, etc. This means we would not get the 
 niceties of using publicly accessible urls. However, the feature will be 
 more reliable and robust.
 
 The implementation would be as follows:
  • ​We'll use the existing service tenant configuration available in the 
 solum config file to authenticate and store operator languagepacks using the 
 swift client. We were using a different tenant to build and host LPs, but 
  now that we require the tenant's credentials in the config file, it's best to 
 reuse the existing service tenant creds. Note: If we don't, we'll have 3 
 separate tenants to maintain. 
  • ​Service tenant 
  • Operator languagepack tenant
  • Global admin tenant 
  • I'll keep the option to download the operator languagepacks from a 
 publicly available url. I'll allow operators to choose which method they 
 want to use by changing a setting in the solum config file.
 FYI: In my tests, I've noticed that downloading an image using the swift 
 client is twice as fast as downloading the same image using 'wget' from a 
 CDN url.
 
 Thanks,
 Murali
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change in openstack/neutron-specs[master]: Neutron API for Service Chaining

2015-06-17 Thread Cathy Zhang
Hi Nicolas,

Thanks for your suggestion. Yes, we can add Application ID to the parameter of 
the flow classifier/filter. The next updated version will reflect this. 
Actually in its existing design, the parameter field of the flow classifier can 
be extended in the future to include more flow descriptors for more granular 
differentiation of flows.

Per earlier suggestion from Isaku etc., we can also add a “context” field to 
the service chain API. The context field will include information such as “the 
encapsulation mechanism” used by the service functions in the chain, which can 
be NSH, VLAN, none etc. so that the Service Function Forwarder (the vSwcitch) 
knows whether it should act as a SFC proxy or not and if acting as a Proxy, 
what is the chain correlation mechanism between the Service Function Forwarder 
and the Service Function.

Any comments/questions/suggestions?

Thanks,
Cathy

From: Nicolas BOUTHORS [mailto:nicolas.bouth...@qosmos.com]
Sent: Wednesday, June 17, 2015 12:03 AM
To: Armando Migliaccio; Henry Fourie
Cc: Isaku Yamahata; Gal Sagie; vishwanath jayaraman; Swaminathan Vasudevan; Ila 
Palanisamy; Adolfo Duarte; Ritesh Anand; Lynn Li; Bob Melander; Berezovsky 
Irena; Subrahmanyam Ongole; Cathy Zhang; Moshe Levi; Joe D'Andrea; Ryan 
Tidwell; Vikram Choudhary; Ruijing; Yatin Kumbhare; Miguel Angel Ajo; Numan 
Siddique; Yuriy Babenko; YujiAzama
Subject: RE: Change in openstack/neutron-specs[master]: Neutron API for Service 
Chaining


The IETF SFC draft draft-penno-sfc-appid-00 proposes a notion of ApplicationId, a 
generic attribute that can be included in NSH metadata. This is also reflected in 
ODL SFC, which has introduced the Application Id as a parameter that can be used 
by the Classifier to steer traffic into a chain.

I suggest we include this parameter in the Flow Filter resource, so that 
application-aware service chaining can be done.

ApplicationId is typically encoded in a 32-bit field.

   Application Identification Data Format

The following table displays the Selector ID default length for the different 
Classification Engine IDs.

Classification       Selector ID default
Engine ID Name       length (in bytes)
-------------------  -------------------
IANA-L3              1
PANA-L3              1
IANA-L4              2
PANA-L4              2
USER-Defined         3
PANA-L2              5
PANA-L7              3
ETHERTYPE            2
LLC                  1
PANA-L7-PEN          3 (*)

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |Class. Eng. ID |  zero-valued upper bits ...       Selector ID |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
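
As a quick illustration of the encoding above (a sketch, not code from any 
project): the top byte carries the Classification Engine ID and the remaining, 
zero-padded bits carry the Selector ID.

    def pack_application_id(class_engine_id, selector_id):
        # 8-bit engine id in the top byte, selector id in the low 24 bits.
        assert 0 <= class_engine_id <= 0xFF and 0 <= selector_id < (1 << 24)
        return (class_engine_id << 24) | selector_id

    def unpack_application_id(app_id):
        return (app_id >> 24) & 0xFF, app_id & 0xFFFFFF

    # e.g. engine id 13 with selector 80 (values chosen only for illustration):
    assert unpack_application_id(pack_application_id(13, 80)) == (13, 80)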





Nicolas



-Original Message-
From: Jenkins (Code Review) [mailto:rev...@openstack.org]
Sent: mercredi 17 juin 2015 08:46
To: Armando Migliaccio; Louis Fourie
Cc: Isaku Yamahata; Gal Sagie; vishwanath jayaraman; Swaminathan Vasudevan; Ila 
Palanisamy; Adolfo Duarte; Ritesh Anand; Lynn Li; Bob Melander; Berezovsky 
Irena; Subrahmanyam Ongole; cathy; Moshe Levi; Joe D'Andrea; Ryan Tidwell; 
vikram.choudhary; Ruijing; Yatin Kumbhare; Miguel Angel Ajo; Numan Siddique; 
Yuriy Babenko; YujiAzama
Subject: Change in openstack/neutron-specs[master]: Neutron API for Service 
Chaining



Jenkins has posted comments on this change.



Change subject: Neutron API for Service Chaining 
..





Patch Set 8: Verified+1



Build succeeded (check pipeline).



- gate-neutron-specs-docs 
http://docs-draft.openstack.org/46/177946/8/check/gate-neutron-specs-docs/6955f62/doc/build/html/
 : SUCCESS in 3m 51s

- gate-neutron-specs-python27 
http://logs.openstack.org/46/177946/8/check/gate-neutron-specs-python27/271ef19/
 : SUCCESS in 2m 31s



--

To view, visit https://review.openstack.org/177946

To unsubscribe, visit https://review.openstack.org/settings



Gerrit-MessageType: comment

Gerrit-Change-Id: Ic0df6070fefd9ead6589fa2da6c49824d7ae3941

Gerrit-PatchSet: 8

Gerrit-Project: openstack/neutron-specs

Gerrit-Branch: master

Gerrit-Owner: Louis Fourie louis.fou...@huawei.com

Gerrit-Reviewer: Adolfo Duarte adolfo.dua...@hp.com

Gerrit-Reviewer: Armando Migliaccio arma...@gmail.com

Gerrit-Reviewer: Berezovsky Irena irenab@gmail.com

Gerrit-Reviewer: Bob Melander bob.melan...@gmail.com

Gerrit-Reviewer: Gal Sagie 

Re: [openstack-dev] [all] setup.py executable bit

2015-06-17 Thread Ian Cordasco


On 6/17/15, 13:53, Jeremy Stanley fu...@yuggoth.org wrote:

On 2015-06-17 14:47:48 -0400 (-0400), Doug Hellmann wrote:
 +1 both to using -x and to removing the shebang.

Agreed. We don't want anyone directly invoking this file as an
executable script in PBR-based packages, so I'm strongly in favor of
anything we can do to actively discourage that.
-- 
Jeremy Stanley

Agreed as well. Most places advocate using python setup.py [command],
so using -x and removing the shebang makes perfect sense to me.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] [Nova] How to pass additional information from Flavor to an Ironic driver

2015-06-17 Thread sinval

Hi everyone,

we are developing an Ironic driver for OneView [1], an Infrastructure 
Management System (IMS) by HP, and, in order to deploy the node 
correctly, the driver needs to know some specific information about the 
configuration of the physical hardware.


In OneView, there is the concept of a Server Profile, which contains 
information about the hardware configuration, like boot order, bios 
settings, network connections, firmware version, storage information 
etc. There are 150+ parameters. A Server Profile has to be assigned to 
the physical hardware before it is powered on. In this way we can make 
sure that a node is provisioned, connected to the correct network and 
using the correct storage, for example.


Therefore, to make a deployment, the driver needs to know which Server 
Profile to use as different Server Profiles can be applied to the same 
hardware type.


Our first thought was to abstract these options (the possible Server 
Profiles) into different flavors, since they contain configurations that 
can be used even to change the amount of disk a server has (using SAN 
volumes), or the power configuration it will use. Nova flavors do have a 
'capabilities' namespace on the extra_specs field which can be used to 
pass additional information to be used by the driver, but the data in 
this namespace is mandatorily used for node matching by the Nova 
scheduler when the capabilities filter is used (which is the default and 
important).


In our case, the driver requires additional information regarding the 
Server Profile that is contained on the flavor, but that should *not* be 
used in scheduling (since a single node could accept a number of 
different profiles).


We suggested using a 'passthrough' namespace (note that no changes are 
required in Nova to do that) in the flavor extra_specs and updating 
Ironic's patcher (here we have a simple change in the Ironic virt driver 
only) to include this information in the instance_info field for the 
node, but, according to some Nova team members, this is directly in 
conflict with their goal of abstracting the compute resources that Nova 
provides.
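
A small, hypothetical sketch of what we mean (the 'passthrough' namespace is 
the proposal in this thread, not an existing Nova convention):

    # Keys under capabilities:* are matched against the node by the Nova
    # scheduler's capabilities filter; keys under passthrough:* would be
    # ignored by scheduling and simply handed down to the Ironic driver.
    flavor_extra_specs = {
        'capabilities:boot_mode': 'uefi',             # used for host matching
        'passthrough:server_profile': 'web-tier-sp',  # consumed by the driver only
    }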


Another option, suggested by dansmith, is to solve this problem in 
Ironic using a reservation concept (created in Ironic to abstract such 
information) and pass the type of reservation on nova boot, avoiding 
changes that could hurt the flavor concept. We think the footprint of 
such a change, not only on Ironic, but also on Nova, python-novaclient and 
Horizon, would be huge.


Nisha sent a spec about modifying the nova-ironic-virt-driver to accept 
JSON lists and dictionaries as valid values for the Ironic 
node.properties['capabilities'] [2]. But in this case the idea is not a 
solution for our scenario, because we need to pass information that 
should not be matched during scheduling.


Do you see another way to approach this problem? We hope that, with your 
help, we'll find a solution that solves our problem without 
contradicting any principle the Nova team has set.


Thank you in advance!
Sinval Vieira

[1] 
http://www8.hp.com/br/pt/business-solutions/converged-systems/oneview.html

[2] https://review.openstack.org/#/c/182572


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] setup.py executable bit

2015-06-17 Thread Doug Hellmann
Excerpts from Robert Collins's message of 2015-06-18 06:40:33 +1200:
 An unintended side effect of the requirements refactoring was that we
 changed from preserving the 'x' bit on setup.py, to discarding it.
 This happened when we started writing the file atomically rather than
 in-place - a good robustness improvement.
 
 Previously the requirements sync, which enforces setup.py contents, had
 made no statement about the file mode. Now it unintentionally does.
 
 We could do several things:
  - preserve the file mode (stat the old, use its mode in open on the temp 
 file)
  - force the mode to be +x
  - force the mode to be -x [the current behaviour]
 
 After a brief IRC discussion in #openstack-oslo we're proposing that
 forcing the mode to be -x is appropriate.
 
 Our reasoning is as follows:
  - './setup.py XYZ' is often a bug - unless the shebang in the
 setup.py is tolerant of virtualenvs (not all are), it will do the
 wrong thing in a virtual env. Similarly with PATH.
  - we don't require or suggest users of our requirements synchronised
 packages run setup.py at all:
 - sdists and releases are made in the CI infrastructure
 - installation is exclusively via pip
 
 So it seems like a slight safety improvement to remove the x bit - and
 possibly (we haven't thought it all the way through yet) also remove
 the shebang entirely, so that the contract becomes explicitly
 'setup.py is not executable'.
 
 Please raise concerns or objections here; if there are none I'll
 likely put up a patch to remove the shebang early next week, or
 whenever I get reminded of this.
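 
 For concreteness, a minimal sketch (assuming nothing about the actual
 requirements-sync code) of writing the file atomically while explicitly
 forcing the non-executable mode:
 
     import os
     import tempfile
 
     def write_setup_py(path, contents):
         # Write to a temp file in the same directory, then rename over the
         # target so readers never see a partially written file.
         dir_name = os.path.dirname(os.path.abspath(path))
         fd, tmp_path = tempfile.mkstemp(dir=dir_name)
         try:
             with os.fdopen(fd, 'w') as f:
                 f.write(contents)
             os.chmod(tmp_path, 0o644)   # force -x regardless of the old mode
             os.rename(tmp_path, path)
         except Exception:
             os.unlink(tmp_path)
             raise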

+1 both to using -x and to removing the shebang.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Keith Bray
Hi Kevin,

We absolutely envision languagepack artifacts being made available via
apps.openstack.org (ignoring for a moment that the name may not be a
perfect fit, particularly for things like vanilla glance images ... Is it
an OS or an App? ...  catalog.openstack.org might be more fitting).
Anyway, there are two stages for language packs, unbuilt, and built.  If
it's in an unbuilt state, then it's really a Dockerfile + any accessory
files that the Dockerfile references.   If it's in a built state, then
it's a Docker image (same as what is found on Dockerhub I believe).  I
think there will need to be more discussion to know what users prefer,
built vs. unbuilt, or both options (where unbuilt is often a collection of
files, best managed in a repo like github, vs. built, which is best
provided as direct links to a single source like Dockerhub).

-Keith

On 6/17/15 1:58 PM, Fox, Kevin M kevin@pnnl.gov wrote:

This question may be off on a tangent, or may be related.

As part of the application catalog project, (http://apps.openstack.org/)
we're trying to provide globally accessible resources that can be easily
consumed in OpenStack Clouds. How would these global Language Packs fit
in? Would the url record in the app catalog be required to point to an
Internet facing public Swift system then? Or, would it point to the
source git repo that Solum would use to generate the LP still?

Thanks,
Kevin

From: Randall Burt [randall.b...@rackspace.com]
Sent: Wednesday, June 17, 2015 11:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Supporting swift   downloads
for operatorlanguagepacks

Yes. If an operator wants to make their LP publicly available outside of
Solum, I was thinking they could just make GET's on the container public.
That being said, I'm unsure if this is realistically do-able if you still
have to have an authenticated tenant to access the objects. Scratch that;
http://blog.fsquat.net/?p=40 may be helpful.

On Jun 17, 2015, at 1:27 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 To be clear, Randall is referring to a swift container (directory).

 Murali has a good idea of attempting to use swift client first, as it
has performance optimizations that can speed up the process more than
naive file transfer tools. I did mention to him that wget does have a
retry feature, and that we could see about using curl instead to allow
for chunked encoding as additional optimizations.

 Randall, are you suggesting that we could use swift client for both
private and public LP uses? That sounds like a good suggestion to me.

 Adrian

 On Jun 17, 2015, at 11:10 AM, Randall Burt
randall.b...@rackspace.com wrote:

 Can't an operator make the target container public therefore removing
the need for multiple access strategies?

  Original message 
 From: Murali Allada
 Date:06/17/2015 11:41 AM (GMT-06:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Solum] Supporting swift downloads for
operator languagepacks

 Hello Solum Developers,

 When we were designing the operator languagepack feature for Solum, we
wanted to make use of public urls to download operator LPs, such as
those available for CDN backed swift containers we have at Rackspace,
or any publicly accessible url. This would mean that when a user
chooses to build applications on top of a languagepack provided by
the operator, we use a url to 'wget' the LP image.

 Recently, we have started noticing a number of failures because of
corrupted docker images downloaded using 'wget'. The docker images work
fine when we download them manually with a swift client and use them.
The corruption seem to be happening when we try to download a large
image using 'wget' and there are dropped packets or intermittent
network issues.

 My thinking is to start using the swift client to download operator
LPs by default instead of wget. The swift client already implements
retry logic, downloading large images in chunks, etc. This means we
would not get the niceties of using publicly accessible urls. However,
the feature will be more reliable and robust.

 The implementation would be as follows:
  • ​We'll use the existing service tenant configuration available
in the solum config file to authenticate and store operator
languagepacks using the swift client. We were using a different tenant
to build and host LPs, but now that we require the tenants credentials
in the config file, it's best to reuse the existing service tenant
creds. Note: If we don't, we'll have 3 separate tenants to maintain.
  • ​Service tenant
  • Operator languagepack tenant
  • Global admin tenant
  • I'll keep the option to download the operator languagepacks
from a publicly available url. I'll allow operators to choose which
method they want to use by changing a setting in the solum config 

Re: [openstack-dev] [all] setup.py executable bit

2015-06-17 Thread Jeremy Stanley
On 2015-06-17 14:47:48 -0400 (-0400), Doug Hellmann wrote:
 +1 both to using -x and to removing the shebang.

Agreed. We don't want anyone directly invoking this file as an
executable script in PBR-based packages, so I'm strongly in favor of
anything we can do to actively discourage that.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Call for testing: 2014.1.5 (last Icehouse point release) candidate tarballs

2015-06-17 Thread Alan Pevec
   http://tarballs.openstack.org/glance/heat-stable-icehouse.tar.gz

copy-paste fail, this should be:

http://tarballs.openstack.org/glance/glance-stable-icehouse.tar.gz

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]

2015-06-17 Thread Matt Riedemann



On 6/17/2015 3:53 PM, Sourabh Patwardhan wrote:

Hello,

I'm working on a new vif driver [1].
As part of the review comments, it was mentioned that a generic VIF
driver will be introduced in Liberty, which may render custom VIF
drivers obsolete.

Can anyone point me to blueprints / specs for the generic driver work?


I think that's being proposed here:

https://review.openstack.org/#/c/162468/


Alternatively, any guidance on how to proceed on my patch is most welcome.

Thanks,
Sourabh

[1] https://review.openstack.org/#/c/157616/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Re: duplicate keystone endpoints

2015-06-17 Thread Mike Dorman
We’ve had this same problem, too, and I’d agree it should fail the Puppet run 
rather than just passing.  Would you mind writing up a bug report for this at 
https://launchpad.net/puppet-openstacklib ?

I have this on my list of stuff to fix when we go to Kilo (soon), so if 
somebody else doesn’t fix it, then I will.

Thanks!


From: Black, Matthew
Reply-To: 
puppet-openst...@puppetlabs.commailto:puppet-openst...@puppetlabs.com
Date: Wednesday, June 17, 2015 at 12:54 PM
To: puppet-openst...@puppetlabs.commailto:puppet-openst...@puppetlabs.com
Subject: duplicate keystone endpoints

I was digging around in the icehouse puppet code and I found what I believe is 
the cause of a duplicate endpoint creation during a short network disruption. 
In my environments the keystone servers do not reside in the same network as 
the regions. It looks like the puppet code fails the first request, sleeps 10 
seconds, tries again, and if that fails it then returns nil. The code 
then returns an empty array to the provider, which is then assumed to mean that 
the endpoint does not exist. If the network blip is over by that point, it will 
attempt to create the endpoint, thus creating a duplicate endpoint in the catalog.

https://github.com/openstack/puppet-keystone/blob/stable/icehouse/lib/puppet/provider/keystone.rb#L139

https://github.com/openstack/puppet-keystone/blob/stable/icehouse/lib/puppet/provider/keystone.rb#L83-L88


Looking at the Juno code, which uses openstacklib, the issue still 
exists but in a slightly different fashion.

https://github.com/openstack/puppet-openstacklib/blob/master/lib/puppet/provider/openstack.rb#L55-L66

I believe this should be changed so that instead of breaking out of the loop it 
throws an exception.
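
To make the suggestion concrete, here is a language-neutral sketch (written in 
Python, although the provider itself is Ruby) of the behaviour I mean: keep the 
retry, but surface a failure instead of returning an empty result that gets 
read as "no endpoint exists".

    import time

    def request_with_retry(call, retries=2, delay=10):
        last_error = None
        for _ in range(retries):
            try:
                return call()
            except Exception as exc:   # e.g. a transient network blip
                last_error = exc
                time.sleep(delay)
        # Don't fall through with an empty list the provider would misread;
        # fail the Puppet run instead.
        raise RuntimeError('openstack request failed after %d attempts: %s'
                           % (retries, last_error))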

--


To unsubscribe from this group and stop receiving emails from it, send an email 
to 
puppet-openstack+unsubscr...@puppetlabs.commailto:puppet-openstack+unsubscr...@puppetlabs.com.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Fox, Kevin M
Would each docker host then try to redownload the prebuilt container 
externally? If you build from source, does it build it once and then all the 
docker hosts use that one local copy? Maybe Solum needs a mechanism to pull in 
a prebuilt LP?

Thanks,
Kevin

From: Murali Allada [murali.all...@rackspace.com]
Sent: Wednesday, June 17, 2015 12:53 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads 
for operator languagepacks

Kevin\Keith,

Yes, we would like to use the catalog for globally available artifacts, such as 
operator languagepacks. More specifically the catalog would be a great place to 
store metadata about publicly available artifacts to make them searchable and 
easy to discover.

The catalog would point to the 'built' artifact, not the 'unbuilt' dockerfile 
in github.
The point of languagepacks is to reduce the amount of time the solum CI pipeline
spends building the user's application container. We shouldn't build the 
languagepack from scratch each time.

-Murali








From: Keith Bray keith.b...@rackspace.com
Sent: Wednesday, June 17, 2015 2:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads 
for operator languagepacks

Hi Kevin,

We absolute envision languagepack artifacts being made available via
apps.openstack.org (ignoring for a moment that the name may not be a
perfect fit, particularly for things like vanilla glance images ... Is it
an OS or an App? ...  catalog.openstack.org might be more fitting).
Anyway, there are two stages for language packs, unbuilt, and built.  If
it's in an unbuilt state, then it's really a Dockerfile + any accessory
files that the Dockerfile references.   If it's in a built state, then
it's a Docker image (same as what is found on Dockerhub I believe).  I
think there will need to be more discussion to know what users prefer,
built vs. unbuilt, or both options (where unbuilt is often a collection of
files, best managed in a repo like github vs. built which are best
provided as direct links so a single source like Dockerhub).

-Keith

On 6/17/15 1:58 PM, Fox, Kevin M kevin@pnnl.gov wrote:

This question may be off on a tangent, or may be related.

As part of the application catalog project, (http://apps.openstack.org/)
we're trying to provide globally accessible resources that can be easily
consumed in OpenStack Clouds. How would these global Language Packs fit
in? Would the url record in the app catalog be required to point to an
Internet facing public Swift system then? Or, would it point to the
source git repo that Solum would use to generate the LP still?

Thanks,
Kevin

From: Randall Burt [randall.b...@rackspace.com]
Sent: Wednesday, June 17, 2015 11:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Supporting swift   downloads
for operatorlanguagepacks

Yes. If an operator wants to make their LP publicly available outside of
Solum, I was thinking they could just make GET's on the container public.
That being said, I'm unsure if this is realistically do-able if you still
have to have an authenticated tenant to access the objects. Scratch that;
http://blog.fsquat.net/?p=40 may be helpful.

On Jun 17, 2015, at 1:27 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 To be clear, Randall is referring to a swift container (directory).

 Murali has a good idea of attempting to use swift client first, as it
has performance optimizations that can speed up the process more than
naive file transfer tools. I did mention to him that wget does have a
retiree feature, and that we could see about using curl instead to allow
for chunked encoding as additional optimizations.

 Randall, are you suggesting that we could use swift client for both
private and public LP uses? That sounds like a good suggestion to me.

 Adrian

 On Jun 17, 2015, at 11:10 AM, Randall Burt
randall.b...@rackspace.com wrote:

 Can't an operator make the target container public therefore removing
the need for multiple access strategies?

  Original message 
 From: Murali Allada
 Date:06/17/2015 11:41 AM (GMT-06:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Solum] Supporting swift downloads for
operator languagepacks

 Hello Solum Developers,

 When we were designing the operator languagepack feature for Solum, we
wanted to make use of public urls to download operator LPs, such as
those available for CDN backed swift containers we have at Rackspace,
or any publicly accessible url. This would mean that when a user
chooses to build applications on top of a languagepack provided by
the operator, we use a url to 'wget' the LP image.

 Recently, we 

[openstack-dev] [neutron] [networking-sfc] Project repo setup and ready to roll

2015-06-17 Thread Armando M.
Hi,

The infrastructure jobs are completed. The project repository [1] has been
provisioned, and it is ready to go. Spec [2] is being moved to the new
repo, with patch [3]. Any documentation/specification effort that pertains,
and/or is solely focused on SFC, should target the new repo from now on.

I imagine we'll talk more on next steps during the weekly call on SFC.

HTH,
Armando

[1] https://github.com/openstack/networking-sfc
[2] https://review.openstack.org/#/c/177946/
[3] https://review.openstack.org/#/c/192933/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-17 Thread Clint Byrum
Excerpts from Kyle Mestery's message of 2015-06-17 13:54:06 -0700:
 On Wed, Jun 17, 2015 at 3:48 PM, Doug Hellmann d...@doughellmann.com
 wrote:
 
  Excerpts from Dan Smith's message of 2015-06-17 13:16:46 -0700:
Every change like this makes it harder for newcomers to participate.
Frankly, it makes it harder for everyone because it means there are
more moving parts, but in this specific case many of the people
involved in these messaging drivers are relatively new, so I point
that out.
  
   I dunno about this. Having devstack migrate away from being an
   opinionated tool for getting a test environment up that was eminently
   readable to what it is today hasn't really helped anyone, IMHO. Having
   some clear plug points such that we _can_ plug in the bits we need for
   testing without having every possible option be embedded in the core
   seems like goodness to me. I'd like to get back to the days where people
   actually knew what was going on in devstack. That helps participation
  too.
  
   I think having devstack deploy what the 90% (or, being honest, 99%) are
   running, with the ability to plug in the 1% bits when necessary is much
   more in line with what the goal of the tool is.
  
The already difficult task of setting up sufficient
functional tests has now turned into figure out devstack, too.
  
   Yep, my point exactly. I think having clear points where you can setup
   your thing and get it plugged in is much easier.
 
  I'm not questioning the goal, or even the approach. But we spent
  the last cycle building up the teams working on these drivers in
  Oslo, and at the summit several groups were (re)motivated to be
  working on the code. Now the devstack team is yanking the rug out
  from under all of that work with this patch.
 
  I'm asking that we not set a tight deadline on doing this right
  away, to give everyone who wasn't involved in those discussions
  about the changes in devstack to understand what's actually involved
  in recovering from being kicked out of tree.
 
 
 I think people are overreacting here. Adding pluggable devstack support is
 actually quite easy, and will honestly make the life of these new messaging
 developers much easier. It's worth the time to go down this path from the
 start for both sides. I don't see it as kicking them out, but enabling them.
 

Kyle, the point is that the relationship is delicate, and this patch _IS_
deleting the code that those contributors would use to interface with
our testing system. The reaction isn't to how hard this is or whether
or not it is a good idea. It is a reaction to the heavy handed approach
which gives no consideration to the amount of time it will take for
those contributors to establish their own external plugin on top of the
already extremely daunting task of setting up gate/check jobs.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Sam Morrison

 On 17 Jun 2015, at 8:35 pm, Neil Jerram neil.jer...@metaswitch.com wrote:
 
 Hi Sam,
 
 On 17/06/15 01:31, Sam Morrison wrote:
 We at NeCTAR are starting the transition to neutron from nova-net and 
 neutron almost does what we want.
 
 We have 10 “public” networks and 10 “service” networks, and depending on 
 which compute node you land on you get attached to one of them.
 
 In neutron speak we have multiple shared externally routed provider 
 networks. We don’t have any tenant networks or any other fancy stuff yet.
 How I’ve currently got this set up is by creating 10 networks and subsequent 
 subnets eg. public-1, public-2, public-3 … and service-1, service-2, 
 service-3 and so on.
 
 In nova we have made a slight change in allocate for instance [1] whereby 
 the compute node has a designated hardcoded network_ids for the public and 
 service network it is physically attached to.
 We have also made changes in the nova API so users can’t select a network 
 and the neutron endpoint is not registered in keystone.
 
 That all works fine but ideally I want a user to be able to choose if they 
 want a public and or service network. We can’t let them as we have 10 public 
 networks, we almost need something in neutron like a “network group” or 
 something that allows a user to select “public” and it allocates them a port 
 in one of the underlying public networks.
 
 This begs the question: why have you defined 10 public-N networks, instead of 
 just one public network?

I think this has all been answered but just in case.
There are multiple reasons. We don’t have a single IPv4 range big enough for 
our cloud, we don’t want the broadcast domain to be massive, the compute nodes 
are in different data centres, etc.
Basically it’s not how our underlying physical network is set up, and we can’t 
change that.

Sam


 
 I tried going down the route of having 1 public and 1 service network in 
 neutron then creating 10 subnets under each. That works until you get to 
 things like dhcp-agent and metadata agent although this looks like it could 
 work with a few minor changes. Basically I need a dhcp-agent to be spun up 
 per subnet and ensure they are spun up in the right place.
 
 Why the 10 subnets?  Is it to do with where you actually have real L2 
 segments, in your deployment?
 
 Thanks,
   Neil
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Adrian Otto
Kevin,

 On Jun 17, 2015, at 4:03 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 
 Would then each docker host try and redownload the the prebuilt container 
 externally? If you build from source, does it build it once and then all the 
 docker hosts use that one local copy? Maybe Solum needs a mechanism to pull 
 in a prebuilt LP?

On each docker server Solum downloads built LPs from swift before the 
containers are created, so Docker has no reason to contact the public image 
repository to fetch the LP images because it has a local copy.

Adrian

 
 Thanks,
 Kevin
 
 From: Murali Allada [murali.all...@rackspace.com]
 Sent: Wednesday, June 17, 2015 12:53 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks
 
 Kevin\Keith,
 
 Yes, we would like to use the catalog for globally available artifacts, such 
 as operator languagepacks. More specifically the catalog would be a great 
 place to store metadata about publicly available artifacts to make them 
 searchable and easy to discover.
 
 The catalog would point to the 'built' artifact, not the 'unbuilt' dockerfile 
 in github.
 The point of languagepacks is to reduce the amount of time the solum CI 
 pipeline
 spends building the users application container. We shouldn't build the 
 languagepack from scratch each time.
 
 -Murali
 
 
 
 
 
 
 
 
 From: Keith Bray keith.b...@rackspace.com
 Sent: Wednesday, June 17, 2015 2:10 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks
 
 Hi Kevin,
 
 We absolute envision languagepack artifacts being made available via
 apps.openstack.org (ignoring for a moment that the name may not be a
 perfect fit, particularly for things like vanilla glance images ... Is it
 an OS or an App? ...  catalog.openstack.org might be more fitting).
 Anyway, there are two stages for language packs, unbuilt, and built.  If
 it's in an unbuilt state, then it's really a Dockerfile + any accessory
 files that the Dockerfile references.   If it's in a built state, then
 it's a Docker image (same as what is found on Dockerhub I believe).  I
 think there will need to be more discussion to know what users prefer,
 built vs. unbuilt, or both options (where unbuilt is often a collection of
 files, best managed in a repo like github vs. built which are best
 provided as direct links so a single source like Dockerhub).
 
 -Keith
 
 On 6/17/15 1:58 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 
 This question may be off on a tangent, or may be related.
 
 As part of the application catalog project, (http://apps.openstack.org/)
 we're trying to provide globally accessible resources that can be easily
 consumed in OpenStack Clouds. How would these global Language Packs fit
 in? Would the url record in the app catalog be required to point to an
 Internet facing public Swift system then? Or, would it point to the
 source git repo that Solum would use to generate the LP still?
 
 Thanks,
 Kevin
 
 From: Randall Burt [randall.b...@rackspace.com]
 Sent: Wednesday, June 17, 2015 11:38 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum] Supporting swift   downloads
 for operatorlanguagepacks
 
 Yes. If an operator wants to make their LP publicly available outside of
 Solum, I was thinking they could just make GET's on the container public.
 That being said, I'm unsure if this is realistically do-able if you still
 have to have an authenticated tenant to access the objects. Scratch that;
 http://blog.fsquat.net/?p=40 may be helpful.
 
 On Jun 17, 2015, at 1:27 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:
 
 To be clear, Randall is referring to a swift container (directory).
 
 Murali has a good idea of attempting to use swift client first, as it
 has performance optimizations that can speed up the process more than
 naive file transfer tools. I did mention to him that wget does have a
 retiree feature, and that we could see about using curl instead to allow
 for chunked encoding as additional optimizations.
 
 Randall, are you suggesting that we could use swift client for both
 private and public LP uses? That sounds like a good suggestion to me.
 
 Adrian
 
 On Jun 17, 2015, at 11:10 AM, Randall Burt
 randall.b...@rackspace.com wrote:
 
 Can't an operator make the target container public therefore removing
 the need for multiple access strategies?
 
  Original message 
 From: Murali Allada
 Date:06/17/2015 11:41 AM (GMT-06:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Solum] Supporting swift downloads for
 operator languagepacks
 
 Hello Solum Developers,
 
 

[openstack-dev] [puppet] preparing Kilo release (6.0.0)

2015-06-17 Thread Emilien Macchi
As we decided at the Summit, we are in the process of preparing a Kilo
release.

All the work can be tracked here:
https://docs.google.com/spreadsheets/d/1XVrmEiLrJSdxDo-S_vFB7ljxTdYg-pe8hiMUryRor5A/edit#gid=0
https://etherpad.openstack.org/p/puppet-kilo-release

Please raise any outstanding patch in the etherpad.
Also, please review the blocking patches so we can move forward.

Thanks to the recent improvements we made in our organization (CI,
process, more people, etc.), we should be able (in the future) to release
the modules closer to OpenStack releases than before.

Best regards,
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Sam Morrison

 On 18 Jun 2015, at 2:59 am, Neil Jerram neil.jer...@metaswitch.com wrote:
 
 
 
 On 17/06/15 16:17, Kris G. Lindgren wrote:
 See inline.
 
 
 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.
 
 
 
 On 6/17/15, 5:12 AM, Neil Jerram neil.jer...@metaswitch.com wrote:
 
 Hi Kris,
 
 Apologies in advance for questions that are probably really dumb - but
 there are several points here that I don't understand.
 
 On 17/06/15 03:44, Kris G. Lindgren wrote:
 We are doing pretty much the same thing - but in a slightly different
 way.
   We extended the nova scheduler to help choose networks (IE. don't put
 vm's on a network/host that doesn't have any available IP address).
 
 Why would a particular network/host not have any available IP address?
 
  If a created network has 1024 ip's on it (/22) and we provision 1020 vms,
  anything deployed after that will not have an additional ip address
 because
  the network doesn't have any available ip addresses (loose some ip's to
  the network).
 
 OK, thanks, that certainly explains the particular network possibility.
 
 So I guess this applies where your preference would be for network A, but it 
 would be OK to fall back to network B, and so on.  That sounds like it could 
 be a useful general enhancement.
 
 (But, if a new VM absolutely _has_ to be on, say, the 'production' network, 
 and the 'production' network is already fully used, you're fundamentally 
 stuck, aren't you?)
 
 What about the /host part?  Is it possible in your system for a network to 
 have IP addresses available, but for them not to be usable on a particular 
 host?
 
 Then,
 we add into the host-aggregate that each HV is attached to a network
 metadata item which maps to the names of the neutron networks that host
 supports.  This basically creates the mapping of which host supports
 what
 networks, so we can correctly filter hosts out during scheduling. We do
 allow people to choose a network if they wish and we do have the neutron
 end-point exposed. However, by default if they do not supply a boot
 command with a network, we will filter the networks down and choose one
 for them.  That way they never hit [1].  This also works well for us,
 because the default UI that we provide our end-users is not horizon.
 
 Why do you define multiple networks - as opposed to just one - and why
 would one of your users want to choose a particular one of those?
 
 (Do you mean multiple as in public-1, public-2, ...; or multiple as in
 public, service, ...?)
 
  This is answered in the other email and original email as well.  But
 basically
  we have multiple L2 segments that only exists on certain switches and
 thus are
  only tied to certain hosts.  With the way neutron is currently structured
 we
  need to create a network for each L2. So that¹s why we define multiple
 networks.
 
 Thanks!  Ok, just to check that I really understand this:
 
 - You have real L2 segments connecting some of your compute hosts together - 
 and also I guess to a ToR that does L3 to the rest of the data center.
 
 - You presumably then just bridge all the TAP interfaces, on each host, to 
 the host's outwards-facing interface.
 
   + VM
   |
   +- Host + VM
   |   |
   |   + VM
   |
   |   + VM
   |   |
   +- Host + VM
   |   |
 ToR ---+   + VM
   |
   |   + VM
   |   |
   |- Host + VM
   |
   + VM
 
 - You specify each such setup as a network in the Neutron API - and hence you 
 have multiple similar networks, for your data center as a whole.
 
 Out of interest, do you do this just because it's the Right Thing according 
 to the current Neutron API - i.e. because a Neutron network is L2 - or also 
 because it's needed in order to get the Neutron implementation components 
 that you use to work correctly?  For example, so that you have a DHCP agent 
 for each L2 network (if you use the Neutron DHCP agent).
 
  For our end users - they only care about getting a vm with a single ip
 address
  in a network which is really a zone like prod or dev or test.
 They stop
  caring after that point.  So in the scheduler filter that we created we
 do
  exactly that.  We will filter down from all the hosts and networks down
 to a
  combo that intersects at a host that has space, with a network that has
 space,
  And the network that was chosen is actually available to that host.
 
 Thanks, makes perfect sense now.
 
 So I think there are two possible representations, overall, of what you are 
 looking for.
 
 1. A 'network group' of similar L2 networks.  When a VM is launched, tenant 
 specifies the network group instead of a particular L2 network, and 
 Nova/Neutron select a host and network with available 

Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Fox, Kevin M
So, to not beat up on the public-facing server, the user would have to copy the 
container from the public server to the cloud's swift storage, and then the docker 
hosts could pull from there?

Thanks,
Kevin

From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Wednesday, June 17, 2015 4:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads 
for operator languagepacks

Kevin,

 On Jun 17, 2015, at 4:03 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 Would then each docker host try and redownload the the prebuilt container 
 externally? If you build from source, does it build it once and then all the 
 docker hosts use that one local copy? Maybe Solum needs a mechanism to pull 
 in a prebuilt LP?

On each docker server Solum downloads built LP’s from swift before the 
containers are created, so Docker has no reason to contact the public image 
repository for fetching the LP images because is has a local copy.

Adrian


 Thanks,
 Kevin
 
 From: Murali Allada [murali.all...@rackspace.com]
 Sent: Wednesday, June 17, 2015 12:53 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks

 Kevin\Keith,

 Yes, we would like to use the catalog for globally available artifacts, such 
 as operator languagepacks. More specifically the catalog would be a great 
 place to store metadata about publicly available artifacts to make them 
 searchable and easy to discover.

 The catalog would point to the 'built' artifact, not the 'unbuilt' dockerfile 
 in github.
 The point of languagepacks is to reduce the amount of time the solum CI 
 pipeline
 spends building the users application container. We shouldn't build the 
 languagepack from scratch each time.

 -Murali







 
 From: Keith Bray keith.b...@rackspace.com
 Sent: Wednesday, June 17, 2015 2:10 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks

 Hi Kevin,

 We absolute envision languagepack artifacts being made available via
 apps.openstack.org (ignoring for a moment that the name may not be a
 perfect fit, particularly for things like vanilla glance images ... Is it
 an OS or an App? ...  catalog.openstack.org might be more fitting).
 Anyway, there are two stages for language packs, unbuilt, and built.  If
 it's in an unbuilt state, then it's really a Dockerfile + any accessory
 files that the Dockerfile references.   If it's in a built state, then
 it's a Docker image (same as what is found on Dockerhub I believe).  I
 think there will need to be more discussion to know what users prefer,
 built vs. unbuilt, or both options (where unbuilt is often a collection of
 files, best managed in a repo like github vs. built which are best
 provided as direct links so a single source like Dockerhub).

 -Keith

 On 6/17/15 1:58 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 This question may be off on a tangent, or may be related.

 As part of the application catalog project, (http://apps.openstack.org/)
 we're trying to provide globally accessible resources that can be easily
 consumed in OpenStack Clouds. How would these global Language Packs fit
 in? Would the url record in the app catalog be required to point to an
 Internet facing public Swift system then? Or, would it point to the
 source git repo that Solum would use to generate the LP still?

 Thanks,
 Kevin
 
 From: Randall Burt [randall.b...@rackspace.com]
 Sent: Wednesday, June 17, 2015 11:38 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum] Supporting swift   downloads
 for operatorlanguagepacks

 Yes. If an operator wants to make their LP publicly available outside of
 Solum, I was thinking they could just make GET's on the container public.
 That being said, I'm unsure if this is realistically do-able if you still
 have to have an authenticated tenant to access the objects. Scratch that;
 http://blog.fsquat.net/?p=40 may be helpful.

 On Jun 17, 2015, at 1:27 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 To be clear, Randall is referring to a swift container (directory).

 Murali has a good idea of attempting to use swift client first, as it
 has performance optimizations that can speed up the process more than
 naive file transfer tools. I did mention to him that wget does have a
 retiree feature, and that we could see about using curl instead to allow
 for chunked encoding as additional optimizations.

 Randall, are you suggesting that we could use swift client for both
 private and public LP uses? That sounds like a good suggestion to me.

 Adrian

 

Re: [openstack-dev] Proposing Brian Haley to Neutron L3 Core Reviewer Team

2015-06-17 Thread Carl Baldwin
It has been a week and feedback has been positive and supportive of
Brian's nomination.  Welcome to the L3 core reviewer team, Brian.

Carl

On Wed, Jun 10, 2015 at 1:11 PM, Carl Baldwin c...@ecbaldwin.net wrote:
 Folks,

 As the Neutron L3 Lieutenant [1] under the PTL, Kyle, I'd like to
 propose Brian Haley as a member of the Neutron L3 core reviewer team.
 Brian has been a long time contributor in Neutron showing expertise
 particularly in IPv6, iptables, and Linux kernel matters.  His
 knowledge and involvement will be very important especially in this
 area.  Brian has become a trusted member of our community.  His review
 stats [2][3][4] place him comfortably with other Neutron core
 reviewers.  He regularly runs proposed patches himself and gives
 insightful feedback.  He has shown a lot of interest in the success of
 Neutron.

 Existing Neutron core reviewers from the L3 area of focus, please vote
 +1/-1 for the addition of Brian to the core reviewer team.
 Specifically, I'm looking for votes from Henry, Assaf, and Mark.

 Thanks!
 Carl

 [1] 
 http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#adding-or-removing-core-reviewers
 [2] 
 https://review.openstack.org/#/q/reviewer:%22Brian+Haley+%253Cbrian.haley%2540hp.com%253E%22,n,z
 [3] http://stackalytics.com/report/contribution/neutron-group/90
 [4] http://stackalytics.com/?user_id=brian-haleymetric=marks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [taskflow] Returning information from reverted flow

2015-06-17 Thread Joshua Harlow

Ok, so https://review.openstack.org/#/c/192942/ is a WIP of this.

Seems to mostly work, just need to tweak a few more engine unit tests...

-Josh
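
For anyone following along, a minimal sketch of the user-facing idea, assuming 
the behaviour proposed in that review (today TaskFlow discards whatever 
revert() returns):

    from taskflow import task

    class ScheduleVolume(task.Task):
        def execute(self, request_spec):
            # ... try to place the volume somewhere ...
            return {'host': 'cinder-host-1'}

        def revert(self, request_spec, **kwargs):
            # Under the proposal, this value would be kept in the engine's
            # storage after the flow is reverted, so the caller can tell a
            # reschedule apart from a hard failure.
            return {'rescheduled': True}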

Dulko, Michal wrote:



-Original Message-
From: Joshua Harlow [mailto:harlo...@outlook.com]
Sent: Tuesday, June 16, 2015 4:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [taskflow] Returning information from reverted
flow

Dulko, Michal wrote:

-Original Message-
From: Joshua Harlow [mailto:harlo...@outlook.com]
Sent: Friday, June 12, 2015 5:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [taskflow] Returning information from
reverted flow

Dulko, Michal wrote:

Hi,

In Cinder we had merged a complicated piece of code[1] to be able to
return something from flow that was reverted. Basically outside we
needed an information if volume was rescheduled or not. Right now
this is done by injecting information needed into exception thrown
from the flow. Another idea was to use notifications mechanism of

TaskFlow.

Both ways are rather workarounds than real solutions.

Unsure about notifications being a workaround (basically u are
notifying to some other entities that rescheduling happened, which
seems like exactly what it was made for) but I get the point ;)

Please take a look at this review -

https://review.openstack.org/#/c/185545/. Notifications cannot help if some
further revert decision needs to be based on something that happened
earlier.

That sounds like conditional reverting, which seems like it should be handled
differently anyway, or am I misunderstanding something?


Current version of the patch takes another approach which I think handles it 
correctly. So you were probably right. :)


I wonder if TaskFlow couldn't provide a mechanism to mark stored
element to not be removed when revert occurs. Or maybe another way
of returning something from reverted flow?

Any thoughts/ideas?

I have a couple, I'll make some paste(s) and see what people think,

How would this look (as pseudo-code or other) to you, what would be
your ideal, and maybe we can work from there (maybe u could do some
paste(s) to and we can prototype it), just storing information that
is returned from revert() somewhere? Or something else? There has
been talk about task 'local storage' (or something like that/along
those lines) that could also be used for this similar purpose.

I think that the easiest idea from the perspective of an end user would be

to save items returned from revert into flow engine's storage *and* do not
remove it from storage when whole flow gets reverted. This is completely
backward compatible, because currently revert doesn't return anything. And
if revert has to record some information for further processing - this will also
work.
Ok, let me see what this looks like and maybe I can have a POC in the next
few days, I don't think its impossible to do (obviously) and hopefully will be
useful for this.


Great!

[1] https://review.openstack.org/#/c/154920/



__



OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-

requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-
requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__


 OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-
requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Kris G. Lindgren

On 6/17/15, 10:59 AM, Neil Jerram neil.jer...@metaswitch.com wrote:



On 17/06/15 16:17, Kris G. Lindgren wrote:
 See inline.
 

 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.



 On 6/17/15, 5:12 AM, Neil Jerram neil.jer...@metaswitch.com wrote:

 Hi Kris,

 Apologies in advance for questions that are probably really dumb - but
 there are several points here that I don't understand.

 On 17/06/15 03:44, Kris G. Lindgren wrote:
 We are doing pretty much the same thing - but in a slightly different
 way.
We extended the nova scheduler to help choose networks (IE. don't
put
 vm's on a network/host that doesn't have any available IP address).

 Why would a particular network/host not have any available IP address?

   If a created network has 1024 ip's on it (/22) and we provision 1020
vms,
   anything deployed after that will not have an additional ip address
 because
   the network doesn't have any available ip addresses (loose some ip's
to
   the network).

OK, thanks, that certainly explains the particular network possibility.

So I guess this applies where your preference would be for network A,
but it would be OK to fall back to network B, and so on.  That sounds
like it could be a useful general enhancement.

(But, if a new VM absolutely _has_ to be on, say, the 'production'
network, and the 'production' network is already fully used, you're
fundamentally stuck, aren't you?)

Yes - this would be a scheduling failure - and I am ok with that.  It does
no good to have a vm on a network that doesn't work.


What about the /host part?  Is it possible in your system for a
network to have IP addresses available, but for them not to be usable on
a particular host?

Yes this is also a possibility.  That the network allocated to a set of
hosts has IP's available but no compute capacity to spin up vms on it.
Again - I am ok with this.


 Then,
 we add into the host-aggregate that each HV is attached to a network
 metadata item which maps to the names of the neutron networks that
host
 supports.  This basically creates the mapping of which host supports
 what
 networks, so we can correctly filter hosts out during scheduling. We
do
 allow people to choose a network if they wish and we do have the
neutron
 end-point exposed. However, by default if they do not supply a boot
 command with a network, we will filter the networks down and choose
one
 for them.  That way they never hit [1].  This also works well for us,
 because the default UI that we provide our end-users is not horizon.

 Why do you define multiple networks - as opposed to just one - and why
 would one of your users want to choose a particular one of those?

 (Do you mean multiple as in public-1, public-2, ...; or multiple as in
 public, service, ...?)

   This is answered in the other email and original email as well.  But
 basically
   we have multiple L2 segments that only exists on certain switches and
 thus are
   only tied to certain hosts.  With the way neutron is currently
structured
 we
   need to create a network for each L2. So that¹s why we define multiple
 networks.

Thanks!  Ok, just to check that I really understand this:

- You have real L2 segments connecting some of your compute hosts
together - and also I guess to a ToR that does L3 to the rest of the
data center.

Correct.



- You presumably then just bridge all the TAP interfaces, on each host,
to the host's outwards-facing interface.

   + VM
   |
   +- Host + VM
   |   |
   |   + VM
   |
   |   + VM
   |   |
   +- Host + VM
   |   |
 ToR ---+   + VM
   |
   |   + VM
   |   |
   |- Host + VM
   |
   + VM

Also correct, we are using flat provider networks (shared=true) -
however provider vlan networks would work as well.


- You specify each such setup as a network in the Neutron API - and
hence you have multiple similar networks, for your data center as a whole.

Out of interest, do you do this just because it's the Right Thing
according to the current Neutron API - i.e. because a Neutron network is
L2 - or also because it's needed in order to get the Neutron
implementation components that you use to work correctly?  For example,
so that you have a DHCP agent for each L2 network (if you use the
Neutron DHCP agent).

Somewhat both.  It was a question of how to get neutron to handle this without
making drastic changes to the base-level neutron concepts.  We currently
do have dhcp-agents and nova-metadata agent running in each L2 and we
specifically assign them to hosts in that L2 space.  We are currently
working on ways to remove this requirement.


   For our end users - they only care about getting a vm with a single ip
 address
   

Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Adrian Otto
Kevin,

Magnum has a plan for dealing with that. Solum will likely have a Magnum 
integration that will leverage it:

https://blueprints.launchpad.net/magnum/+spec/registryv2-in-master

With that said, yes, you could also optimize the performance of the upstream by 
caching it locally in swift. You’d want an async process to keep it continually 
updated though.

Adrian

 On Jun 17, 2015, at 4:30 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 
 so, to not beat up on the public facing server, the user would have to copy 
 the container from the public server to the cloud's swift storage, then the 
 docker hosts could pull from there?
 
 Thanks,
 Kevin
 
 From: Adrian Otto [adrian.o...@rackspace.com]
 Sent: Wednesday, June 17, 2015 4:21 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks
 
 Kevin,
 
 On Jun 17, 2015, at 4:03 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 
 Would then each docker host try and redownload the prebuilt container 
 externally? If you build from source, does it build it once and then all the 
 docker hosts use that one local copy? Maybe Solum needs a mechanism to pull 
 in a prebuilt LP?
 
 On each docker server Solum downloads built LP’s from swift before the 
 containers are created, so Docker has no reason to contact the public image 
 repository for fetching the LP images because it has a local copy.
 
 Adrian
 
 
 Thanks,
 Kevin
 
 From: Murali Allada [murali.all...@rackspace.com]
 Sent: Wednesday, June 17, 2015 12:53 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks
 
 Kevin\Keith,
 
 Yes, we would like to use the catalog for globally available artifacts, such 
 as operator languagepacks. More specifically the catalog would be a great 
 place to store metadata about publicly available artifacts to make them 
 searchable and easy to discover.
 
 The catalog would point to the 'built' artifact, not the 'unbuilt' 
 dockerfile in github.
 The point of languagepacks is to reduce the amount of time the solum CI 
 pipeline
 spends building the users application container. We shouldn't build the 
 languagepack from scratch each time.
 
 -Murali
 
 
 
 
 
 
 
 
 From: Keith Bray keith.b...@rackspace.com
 Sent: Wednesday, June 17, 2015 2:10 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks
 
 Hi Kevin,
 
 We absolutely envision languagepack artifacts being made available via
 apps.openstack.org (ignoring for a moment that the name may not be a
 perfect fit, particularly for things like vanilla glance images ... Is it
 an OS or an App? ...  catalog.openstack.org might be more fitting).
 Anyway, there are two stages for language packs, unbuilt, and built.  If
 it's in an unbuilt state, then it's really a Dockerfile + any accessory
 files that the Dockerfile references.   If it's in a built state, then
 it's a Docker image (same as what is found on Dockerhub I believe).  I
 think there will need to be more discussion to know what users prefer,
 built vs. unbuilt, or both options (where unbuilt is often a collection of
 files, best managed in a repo like github vs. built which are best
 provided as direct links so a single source like Dockerhub).
 
 -Keith
 
 On 6/17/15 1:58 PM, Fox, Kevin M kevin@pnnl.gov wrote:
 
 This question may be off on a tangent, or may be related.
 
 As part of the application catalog project, (http://apps.openstack.org/)
 we're trying to provide globally accessible resources that can be easily
 consumed in OpenStack Clouds. How would these global Language Packs fit
 in? Would the url record in the app catalog be required to point to an
 Internet facing public Swift system then? Or, would it point to the
 source git repo that Solum would use to generate the LP still?
 
 Thanks,
 Kevin
 
 From: Randall Burt [randall.b...@rackspace.com]
 Sent: Wednesday, June 17, 2015 11:38 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum] Supporting swift   downloads
 for operatorlanguagepacks
 
 Yes. If an operator wants to make their LP publicly available outside of
 Solum, I was thinking they could just make GET's on the container public.
 That being said, I'm unsure if this is realistically do-able if you still
 have to have an authenticated tenant to access the objects. Scratch that;
 http://blog.fsquat.net/?p=40 may be helpful.
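(For reference, one way to do that with the swift CLI is a referrer ACL on
the container - the container name below is just an example:

    # allow anonymous GETs (and listings) on the 'languagepacks' container
    swift post -r '.r:*,.rlistings' languagepacks

after which objects can be fetched unauthenticated via the usual
/v1/AUTH_<account>/languagepacks/<object> URL.)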
 
 On Jun 17, 2015, at 1:27 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:
 
 To be clear, Randall is referring to a swift container (directory).
 
 

Re: [openstack-dev] [Nova]

2015-06-17 Thread Sourabh Patwardhan
Thanks for the pointer, Matt.

On Wed, Jun 17, 2015 at 2:40 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 6/17/2015 3:53 PM, Sourabh Patwardhan wrote:

 Hello,

 I'm working on a new vif driver [1].
 As part of the review comments, it was mentioned that a generic VIF
 driver will be introduced in Liberty, which may render custom VIF
 drivers obsolete.

 Can anyone point me to blueprints / specs for the generic driver work?


 I think that's being proposed here:

 https://review.openstack.org/#/c/162468/

  Alternatively, any guidance on how to proceed on my patch is most welcome.

 Thanks,
 Sourabh

 [1] https://review.openstack.org/#/c/157616/


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --

 Thanks,

 Matt Riedemann


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] service chaining feature development meeting at 10am pacific time June 18

2015-06-17 Thread Cathy Zhang
Hello everyone,

Our next weekly IRC meeting for the OpenStack service chain feature development 
is 10am pacific time June 18 (UTC 1700). Following is the meeting info:

Weekly on Thursday at 1700 UTC 
(http://www.timeanddate.com/worldclock/fixedtime.html?hour=18&min=00&sec=0) 
in #openstack-meeting-4

You can also find the meeting info at 
http://eavesdrop.openstack.org/#Neutron_Service_Chaining_meeting

Agenda:

1.  Update on repository creation

2.  Finalize the SFC Feature project scope: functional module breakdown and 
ownership as well as the types of service functions that will be chained in 
this feature development. Here is a summary of functional module ownership 
sign-up based on last IRC meeting and email confirmation.



* Integration with Neutron/devstack, CLI, Horizon, Heat -- Mohankumar 
and Ramanjaneya

* Neutron Service chain API Extension -- Cathy, LouisF

* Flow Classifier API -- LouisF, Vikram, Yuji, nbouthors

* Service chain Plugin: API handling and Data Base -- LouisF, 
Cathy, Swami

* Service Chain Driver manager -- Cathy, Brian

* OVS Driver -- LouisF, Brian

* Flow Classifier on the data Path -- nbouthors_, Yuji

* OVS with NSH encapsulation for verification of the OpenStack Service 
Chain API & Plugin functionality -- LouisF, Swami



3.  Service chain API and spec discussion and try to finalize the service 
chain API design so that we can start implementing it.

4.  Deep dive into technical questions

5.  Tentative time line for development of each module.


Anyone who would like to contribute to this feature development is welcome to 
join the meeting. Hope the time is good for most people.





Thanks,

Cathy



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] setup.py executable bit

2015-06-17 Thread Robert Collins
An unintended side effect of the requirements refactoring was that we
changed from preserving the 'x' bit on setup.py, to discarding it.
This happened when we started writing the file atomically rather than
in-place - a good robustness improvement.

Previously the requirements sync, which enforces setup.py contents, had
made no statement about the file mode. Now it unintentionally does.

We could do several things:
 - preserve the file mode (stat the old, use its mode in open on the temp file)
 - force the mode to be +x
 - force the mode to be -x [the current behaviour]

After a brief IRC discussion in #openstack-oslo we're proposing that
forcing the mode to be -x is appropriate.

Our reasoning is as follows:
 - './setup.py XYZ' is often a bug - unless the shebang in the
setup.py is tolerant of virtualenvs (not all are), it will do the
wrong thing in a virtual env. Similarly with PATH.
 - we don't require or suggest users of our requirements synchronised
packages run setup.py at all:
- sdists and releases are made in the CI infrastructure
- installation is exclusively via pip

So it seems like a slight safety improvement to remove the x bit - and
possibly (we haven't thought it all the way through yet) also remove
the shebang entirely, so that the contract becomes explicitly
'setup.py is not executable'.
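For clarity, a rough sketch of what the atomic write looks like with the
mode handled explicitly (file handling simplified; the 0644 choice and the
function name are illustrative only, not the actual sync code):

    import os
    import tempfile

    def write_setup_py(path, contents, mode=0o644):
        # Write to a temp file in the same directory so the final rename
        # is an atomic replace on the same filesystem.
        dirname = os.path.dirname(os.path.abspath(path))
        fd, tmp_path = tempfile.mkstemp(dir=dirname)
        try:
            with os.fdopen(fd, 'w') as f:
                f.write(contents)
            # 'preserve' would instead be: mode = os.stat(path).st_mode & 0o777
            os.chmod(tmp_path, mode)       # force -x: plain 0644
            os.rename(tmp_path, path)
        except Exception:
            os.unlink(tmp_path)
            raise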

Please raise concerns or objections here; if there are none I'll
likely put up a patch to remove the shebang early next week, or
whenever I get reminded of this.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-17 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2015-06-17 14:07:35 -0400:
 On 06/17/2015 01:29 PM, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2015-06-16 10:16:34 -0700:
  On 06/16/2015 12:49 PM, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2015-06-16 06:22:23 -0700:
  FYI,
 
  One of the things that came out of the summit for Devstack plans going
  forward is to trim it back to something more opinionated and remove a
  bunch of low use optionality in the process.
 
  One of those branches to be trimmed is all the support for things beyond
  RabbitMQ in the rpc layer. RabbitMQ is what's used by 95%+ of our
  community, that's what the development environment should focus on.
 
  The patch to remove all of this is here -
  https://review.openstack.org/#/c/192154/. Expect this to merge by the
  end of the month. If people are interested in non RabbitMQ external
  plugins, now is the time to start writing them. The oslo.messaging team
  already moved their functional test installation for alternative
  platforms off of devstack, so this should impact a very small number of
  people.
 
 
  The recent spec we added to define a policy for oslo.messaging drivers is
  intended as a way to encourage that 5% who feels a different messaging
  layer is critical to participate upstream by adding devstack-gate jobs
  and committing developers to keep them stable. This change basically
  slams the door in their face and says good luck, we don't actually care
  about accomodating you. This will drive them more into the shadows,
  and push their forks even further away from the core of the project. If
  that's your intention, then we need to have a longer conversation where
  you explain to me why you feel that's a good thing.
 
  I believe it is not the responsibility of the devstack team to support
  every possible backend one could imagine and carry that technical debt
  in tree, confusing new users in the process that any of these things
  might actually work. I believe that if you feel that your spec assumed
  that was going to be the case, you made a large incorrect externalities
  assumption.
 
  
  I agree with you, and support your desire to move things into plugins.
  
  However, your timing is problematic and the lack of coordination with
  the ongoing effort to deprecate untested messaging drivers gracefully
  is really frustrating. We've been asking (on this list) zmq interested
  parties to add devstack-gate jobs and identify themselves as contacts
  to support these drivers. Meanwhile this change and the wording around
  it suggest that they're not welcome in devstack.
 
 So there has clearly been some disconnect here. This patch was
 originally going to come later in the cycle, but some back and forth on
 proton fixes with Flavio made me realize we really needed to get this
 direction out in front of more people (which is why it wasn't just a
 patch, it was also an email heads up). So there wasn't surprise when it
 was merged.
 
 We built the external plugin mechanism in devstack to make it very easy
 to extend out of tree, and make it easy to let people consume your out
 of tree stuff. It's the only way that devstack works in the big tent
 world, because there just is too much stuff for the team to support.

Every change like this makes it harder for newcomers to participate.
Frankly, it makes it harder for everyone because it means there are
more moving parts, but in this specific case many of the people
involved in these messaging drivers are relatively new, so I point
that out. The already difficult task of setting up sufficient
functional tests has now turned into "figure out devstack, too".
The long-term Oslo team members can't do all of this work, any more
than the devstack team can, but things were at least working in
what we thought was a stable way so we could try to provide guidance.

 
  Also, I take issue with the value assigned to dropping it. If that 95%
  is calculated as orgs_running_on_rabbit/orgs then it's telling a really
  lop-sided story. I'd rather see compute_nodes_on_rabbit/compute_nodes.
 
  I'd like to propose that we leave all of this in tree to match what is
  in oslo.messaging. I think devstack should follow oslo.messaging and
  deprecate the ones that oslo.messaging deprecates. Otherwise I feel like
  we're Vizzini cutting the rope just as The Dread Pirate 0mq is about to
  climb the last 10 meters to the top of the cliffs of insanity and battle
  RabbitMQ left handed. I know, inconceivable right?
 
  We have an external plugin mechanism for devstack. That's a viable
  option here. People will have to own and do that work, instead of
  expecting the small devstack team to do that for them. I believe I left
  enough of a hook in place that it's possible.
 
  
  So lets do some communication, and ask for the qpid and zmq people to
  step up, and help them move their code into an external plugin, and add
  documentation to help their users find it. The burden 

[openstack-dev] [neutron][api] Neutron micro-versioning update

2015-06-17 Thread Salvatore Orlando
As you are probably aware, an api-wg guideline for microversioning is under
review [1].
Needless to say, neutron developers interested in this work should have a
look at [1] - if nothing else because we need to ensure we are aligned -
and influence the guideline where appropriate.

Experimental APIs are one item where Neutron is not already aligned with
the proposed guideline - and with the project already implementing
microversioning.
While it is known that nova chose to adopt experimental APIs only as a
temporary mechanism [2], the idea of experimental APIs got pretty much
slammed down unanimously in an Ironic meeting (in [3] it sounds like the
word 'experimental' really tickles the Ironic development team).
Therefore, Neutron needs to rethink the proposed API evolution strategy
without experimental APIs. Every new API introduced will be versioned.
While versioning still allows us to evolve the API as we wish, the drawback
is that we'll have to expect several backward incompatible changes while
new APIs stabilise after being introduced.

On the practical side, I am soon going to add a list of todo items
to spec [4] (which we'll probably amend anyway to reflect the outcome of the
discussion on [1]). If you're interested in cooperating in this effort,
please pick one item. If we achieve a decent number of volunteers we'll
try and set up a weekly meeting.

One aspect where general feedback would be welcome is whether the
microversioning work should be based on master or piggyback on the pecan
switch effort - therefore implementing versioning directly in the new
framework. The pecan switch is being implemented in a feature branch [5]

Thanks for your attention,
Salvatore

[1] https://review.openstack.org/#/c/187112/
[2]
http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.htm
[3]
http://eavesdrop.openstack.org/meetings/ironic/2015/ironic.2015-06-15-17.02.log.html
[4]
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/microversioning.html
[5]
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:feature/pecan,n,z
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Murali Allada
Thanks Christopher. Will do for sure.

-Murali



From: Christopher Aedo ca...@mirantis.com
Sent: Wednesday, June 17, 2015 3:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads 
for operator languagepacks

On Wed, Jun 17, 2015 at 12:53 PM, Murali Allada
murali.all...@rackspace.com wrote:
 Kevin\Keith,

 Yes, we would like to use the catalog for globally available artifacts, such 
 as operator languagepacks. More specifically the catalog would be a great 
 place to store metadata about publicly available artifacts to make them 
 searchable and easy to discover.

 The catalog would point to the 'built' artifact, not the 'unbuilt' dockerfile 
 in github.
 The point of languagepacks is to reduce the amount of time the solum CI 
 pipeline
 spends building the users application container. We shouldn't build the 
 languagepack from scratch each time.

Murali, this is great to hear.  It fits perfectly with where I'd like
to see the app catalog head - which is to more easily host anything
that could be run on OpenStack.  Hopefully you can join us in the
weeks to come (IRC or mailing list) and we can start sketching out the
changes we'll need to make to allow the catalog to expand in this
direction.  I'm looking forward to seeing Solum assets in there!

-Christopher


 -Murali







 
 From: Keith Bray keith.b...@rackspace.com
 Sent: Wednesday, June 17, 2015 2:10 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks

 Hi Kevin,

 We absolute envision languagepack artifacts being made available via
 apps.openstack.org (ignoring for a moment that the name may not be a
 perfect fit, particularly for things like vanilla glance images ... Is it
 an OS or an App? ...  catalog.openstack.org might be more fitting).
 Anyway, there are two stages for language packs, unbuilt, and built.  If
 it's in an unbuilt state, then it's really a Dockerfile + any accessory
 files that the Dockerfile references.   If it's in a built state, then
 it's a Docker image (same as what is found on Dockerhub I believe).  I
 think there will need to be more discussion to know what users prefer,
 built vs. unbuilt, or both options (where unbuilt is often a collection of
 files, best managed in a repo like github vs. built which are best
 provided as direct links so a single source like Dockerhub).

 -Keith

 On 6/17/15 1:58 PM, Fox, Kevin M kevin@pnnl.gov wrote:

This question may be off on a tangent, or may be related.

As part of the application catalog project, (http://apps.openstack.org/)
we're trying to provide globally accessible resources that can be easily
consumed in OpenStack Clouds. How would these global Language Packs fit
in? Would the url record in the app catalog be required to point to an
Internet facing public Swift system then? Or, would it point to the
source git repo that Solum would use to generate the LP still?

Thanks,
Kevin

From: Randall Burt [randall.b...@rackspace.com]
Sent: Wednesday, June 17, 2015 11:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Supporting swift   downloads
for operatorlanguagepacks

Yes. If an operator wants to make their LP publicly available outside of
Solum, I was thinking they could just make GET's on the container public.
That being said, I'm unsure if this is realistically do-able if you still
have to have an authenticated tenant to access the objects. Scratch that;
http://blog.fsquat.net/?p=40 may be helpful.

On Jun 17, 2015, at 1:27 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 To be clear, Randall is referring to a swift container (directory).

 Murali has a good idea of attempting to use swift client first, as it
has performance optimizations that can speed up the process more than
naive file transfer tools. I did mention to him that wget does have a
retry feature, and that we could see about using curl instead to allow
for chunked encoding as additional optimizations.

 Randall, are you suggesting that we could use swift client for both
private and public LP uses? That sounds like a good suggestion to me.

 Adrian

 On Jun 17, 2015, at 11:10 AM, Randall Burt
randall.b...@rackspace.com wrote:

 Can't an operator make the target container public therefore removing
the need for multiple access strategies?

  Original message 
 From: Murali Allada
 Date:06/17/2015 11:41 AM (GMT-06:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Solum] Supporting swift downloads for
operator languagepacks

 Hello Solum Developers,

 When we were designing the operator languagepack feature for Solum, we
wanted to make use of 

Re: [openstack-dev] [all] setup.py executable bit

2015-06-17 Thread Davanum Srinivas
+1 both to using -x and to removing the shebang.

On Wed, Jun 17, 2015 at 2:47 PM, Doug Hellmann d...@doughellmann.com wrote:
 Excerpts from Robert Collins's message of 2015-06-18 06:40:33 +1200:
 An unintended side effect of the requirements refactoring was that we
 changed from preserving the 'x' bit on setup.py, to discarding it.
 This happened when we started writing the file atomically rather than
 in-place - a good robustness improvement.

 Previously the requirements sync, which enforces setup.py contents had
 made no statement about the file mode. Now it unintentionally is.

 We could do several things:
  - preserve the file mode (stat the old, use its mode in open on the temp 
 file)
  - force the mode to be +x
  - force the mode to be -x [the current behaviour]

 After a brief IRC discussion in #openstack-olso we're proposing that
 forcing the mode to be -x is appropriate.

 Our reasoning is as follows:
  - './setup.py XYZ' is often a bug - unless the shebang in the
 setup.py is tolerant of virtualenvs (not all are), it will do the
 wrong thing in a virtual env. Similarly with PATH.
  - we don't require or suggest users of our requirements syncronised
 packages run setup.py at all:
 - sdists and releases are made in the CI infrastructure
 - installation is exclusively via pip

 So it seems like a slight safety improvement to remove the x bit - and
 possibly (we haven't thought it all the way through yet) also remove
 the shebang entirely, so that the contract becomes explicitly
 'setup.py is not executable'.

 Please raise concerns or objections here; if there are none I'll
 likely put up a patch to remove the shebang early next week, or
 whenever I get reminded of this.

 +1 both to using -x and to removing the shebang.

 Doug

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] New Juno (5.1.0) release

2015-06-17 Thread Emilien Macchi


On 06/17/2015 09:57 AM, Emilien Macchi wrote:
(...)
 
 All modules having stable/juno will be released 5.1.0 both in OpenStack
 and Puppet forge.

This is done.

Puppet modules have now 5.1.0 release in both OpenStack repositories and
Puppetforge.

Special kudos to our team for their help to make it today.

Thanks,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Christopher Aedo
On Wed, Jun 17, 2015 at 12:53 PM, Murali Allada
murali.all...@rackspace.com wrote:
 Kevin\Keith,

 Yes, we would like to use the catalog for globally available artifacts, such 
 as operator languagepacks. More specifically the catalog would be a great 
 place to store metadata about publicly available artifacts to make them 
 searchable and easy to discover.

 The catalog would point to the 'built' artifact, not the 'unbuilt' dockerfile 
 in github.
 The point of languagepacks is to reduce the amount of time the solum CI 
 pipeline
 spends building the users application container. We shouldn't build the 
 languagepack from scratch each time.

Murali, this is great to hear.  It fits perfectly with where I'd like
to see the app catalog head - which is to more easily host anything
that could be run on OpenStack.  Hopefully you can join us in the
weeks to come (IRC or mailing list) and we can start sketching out the
changes we'll need to make to allow the catalog to expand in this
direction.  I'm looking forward to seeing Solum assets in there!

-Christopher


 -Murali







 
 From: Keith Bray keith.b...@rackspace.com
 Sent: Wednesday, June 17, 2015 2:10 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks

 Hi Kevin,

 We absolute envision languagepack artifacts being made available via
 apps.openstack.org (ignoring for a moment that the name may not be a
 perfect fit, particularly for things like vanilla glance images ... Is it
 an OS or an App? ...  catalog.openstack.org might be more fitting).
 Anyway, there are two stages for language packs, unbuilt, and built.  If
 it's in an unbuilt state, then it's really a Dockerfile + any accessory
 files that the Dockerfile references.   If it's in a built state, then
 it's a Docker image (same as what is found on Dockerhub I believe).  I
 think there will need to be more discussion to know what users prefer,
 built vs. unbuilt, or both options (where unbuilt is often a collection of
 files, best managed in a repo like github vs. built which are best
 provided as direct links so a single source like Dockerhub).

 -Keith

 On 6/17/15 1:58 PM, Fox, Kevin M kevin@pnnl.gov wrote:

This question may be off on a tangent, or may be related.

As part of the application catalog project, (http://apps.openstack.org/)
we're trying to provide globally accessible resources that can be easily
consumed in OpenStack Clouds. How would these global Language Packs fit
in? Would the url record in the app catalog be required to point to an
Internet facing public Swift system then? Or, would it point to the
source git repo that Solum would use to generate the LP still?

Thanks,
Kevin

From: Randall Burt [randall.b...@rackspace.com]
Sent: Wednesday, June 17, 2015 11:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Supporting swift   downloads
for operatorlanguagepacks

Yes. If an operator wants to make their LP publicly available outside of
Solum, I was thinking they could just make GET's on the container public.
That being said, I'm unsure if this is realistically do-able if you still
have to have an authenticated tenant to access the objects. Scratch that;
http://blog.fsquat.net/?p=40 may be helpful.

On Jun 17, 2015, at 1:27 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 To be clear, Randall is referring to a swift container (directory).

 Murali has a good idea of attempting to use swift client first, as it
has performance optimizations that can speed up the process more than
naive file transfer tools. I did mention to him that wget does have a
retry feature, and that we could see about using curl instead to allow
for chunked encoding as additional optimizations.

 Randall, are you suggesting that we could use swift client for both
private and public LP uses? That sounds like a good suggestion to me.

 Adrian

 On Jun 17, 2015, at 11:10 AM, Randall Burt
randall.b...@rackspace.com wrote:

 Can't an operator make the target container public therefore removing
the need for multiple access strategies?

  Original message 
 From: Murali Allada
 Date:06/17/2015 11:41 AM (GMT-06:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Solum] Supporting swift downloads for
operator languagepacks

 Hello Solum Developers,

 When we were designing the operator languagepack feature for Solum, we
wanted to make use of public urls to download operator LPs, such as
those available for CDN backed swift containers we have at Rackspace,
or any publicly accessible url. This would mean that when a user
chooses to build applications on top of a languagepack provided by
the operator, we use a url to 'wget' the LP image.

 Recently, we have started noticing a number of 

Re: [openstack-dev] [Glance] [all] Proposal for Weekly Glance Drivers meeting.

2015-06-17 Thread Nikhil Komawar
Erno, thanks for raising the concerns.

tl;dr;
* The meeting allows people to focus on spec reviews.
* It also encourages people to think about some of the blockers on a weekly 
basis and drive the project forward.
* It does NOT intend to discourage open discussion on gerrit or ML.

The specs can be, and are encouraged to be, raised during the weekly meeting, but our 
meetings do not have enough time to discuss all the specs and some of the specs 
can sit around for a long time. (The meeting can help with such triage.) Also, specs 
are most welcome to be discussed at any of the platforms supported and/or used 
by the OpenStack team. However, the speed at which the specs are getting reviewed 
is concerning. It is also an issue to get multiple drivers online at the same 
time.

You are correct that this meeting encourages sync between the 
drivers. It also helps people show up at the meeting on a regular basis 
along with fellow Glancers and reserve a dedicated time slot for themselves in which 
they will do spec reviews. We can also put a freeze on a merge (after an 
agreement has been reached) and send email to the ML asking for input before the 
date mentioned, depending on further feedback at one of the meetings -- this 
will enable those not in the timezone to provide input. (Channels are logged to 
get more context.)

This is a Drivers' meeting after all, and preference for the time was given as 
per the current list of Drivers' timezones. Let's see: if more people start 
showing up at the meeting, or are interested but not able to attend due to TZ 
issues, we may think about an alternating timeslot. In the interest of providing 
regular feedback on the specs and continuing to keep the review list small, we 
need this meeting.

So, your assumption was indeed different than my intent. Hope that helps.

P.S. The timeslot isn't that bad as many of the current openstack projects', 
drivers', wg meetings are held in nearby slots. It also ensures meeting isn't 
very very early for those on the western side of the planet, where we have 
decent number of active community members.

Thanks,
-Nikhil


From: Kuvaja, Erno kuv...@hp.com
Sent: Wednesday, June 17, 2015 10:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] [all] Proposal for Weekly Glance Drivers 
meeting.

As this Fri Jun 19th did not seem to have weight, I just express my opinion to 
the mailing list for the records.

Personally I think this is a bad idea, but not being a Glance Driver I can't say 
how much need there is for such a meeting. The specs should be raised during our 
weekly meeting and/or discussed on the mailing list and gerrit. Having another 
IRC meeting just for these discussions (especially at this time) gives a quite 
clear signal that the input from Eastern EMEA & APJ is not needed nor desired.  
Based on the description from Nikhil and the weekly nature of this meeting I 
would assume that the intention was not just to have a quick sync between the 
drivers, which I could have understood.

I'd be happy to be told to be wrong on the assumptions above ;)

- Erno

 -Original Message-
 From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
 Sent: 16 June 2015 18:23
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Glance] [all] Proposal for Weekly Glance
 Drivers meeting.

 FYI, We will be closing the vote on Friday, June 19 at 1700 UTC.

 On 6/15/15 7:41 PM, Nikhil Komawar wrote:
  Hi,
 
  As per the discussion during the last weekly Glance meeting
  (14:51:42at
  http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-06-
 11-
  14.00.log.html ), we will begin a short drivers' meeting where anyone
  can come and get more feedback.
 
  The purpose is to enable those who need multiple drivers in the same
  place; easily co-ordinate, schedule  collaborate on the specs, get
  core-reviewers assigned to their specs etc. This will also enable more
  synchronous style feedback, help with more collaboration as well as
  with dedicated time for giving quality input on the specs. All are
  welcome to attend and attendance from drivers is not mandatory but
 encouraged.
  Initially it would be a 30 min meeting and if need persists we will
  extend the period.
 
  Please vote on the proposed time and date:
  https://review.openstack.org/#/c/192008/ (Note: Run the tests for your
  vote to ensure we are considering feasible  non-conflicting times.)
  We will start the meeting next week unless there are strong conflicts.
 

 --

 Thanks,
 Nikhil


 __
 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-
 requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not 

Re: [openstack-dev] [nova][ceilometer] proposal to send bulk hypervisor stats data in periodic notifications

2015-06-17 Thread gordon chung
not familiar/smart enough to comment on design within Nova but i'm very 
much in favour if this is possible. the polling option in Ceilometer was 
always a means to get information not readily available via notifications.


that said, i don't think we can completely do away with polling. i'm 
aware of some cases where in addition to polling for measurements, some 
users leverage the polling agent as a means of performing health checks 
as well but the less load our polls generate the better.


On 17/06/2015 11:52 AM, Matt Riedemann wrote:
Without getting into the details from the etherpad [1], a few of us in 
IRC today were talking about how the ceilometer compute-agent polls 
libvirt directly for guest VM statistics and how ceilometer should 
really be getting this information from nova via notifications sent 
from a periodic task in the nova compute manager.


Nova already has the get_instance_diagnostics virt driver API which is 
nice in that it has structured versioned instance diagnostic 
information regardless of virt driver (unlike the v2 
os-server-diagnostics API which is a free-form bag of goodies 
depending on which virt driver is used, which makes it mostly 
untestable and not portable).  The problem is the 
get_instance_diagnostics virt driver API is per-instance, so it's not 
efficient in the case that you want bulk instance data for a given 
compute host.


So the idea is to add a new virt driver API to get the bulk data and 
emit that via a structured versioned payload similar to 
get_instance_diagnostics but for all instances.


Eventually the goal is for nova to send what ceilometer is collecting 
today [2] and then ceilometer can just consume that notification 
rather than doing the direct hypervisor polling it has today.


Anyway, this is the high level idea, the details/notes are in the 
etherpad along with next steps.
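(Roughly, the shape being described - everything below is hypothetical, in
particular the bulk virt driver call and the event type; the notifier usage
itself is ordinary oslo.messaging:

    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(transport, driver='messaging',
                                       publisher_id='compute.host1')

    def emit_bulk_instance_stats(context, driver):
        # hypothetical bulk call: one hypervisor query instead of one per guest
        diags = driver.get_all_instance_diagnostics()
        payload = {'version': '1.0',
                   'instances': [d.to_dict() for d in diags]}
        notifier.info(context, 'compute.instance.diagnostics.bulk', payload)

A periodic task in the compute manager would call something like this on a
configurable interval.)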


Feel free to chime in now with reasons why this is crazy and will 
never work and we shouldn't waste our time on it.


[1] https://etherpad.openstack.org/p/nova-hypervisor-bulk-stats-notify
[2] 
http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-compute-meters.html




--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] When do we import aodh?

2015-06-17 Thread gordon chung

i've no idea how to add comments to github so i'll ask here.


On 16/06/2015 11:12 AM, Julien Danjou wrote:

On Tue, Jun 16 2015, Chris Dent wrote:


5. anything in tempest to worry about?

Yes, we need to adapt and reenable tempest after.


6. what's that stuff in the ceilometer dir?
6.1. Looks like migration artifacts, what about migration in
 general?

That's a rest of one of the many rebases I've made during these last
weeks, I just fixed it.

I removed all the migration as we should start fresh on Alembic.


don't we need an initial migration still? to create all the base tables?
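(For reference, starting fresh on Alembic would just mean a hand-written
initial revision that creates the base tables - placeholder sketch below,
not the real aodh schema:

    """create base tables"""
    from alembic import op
    import sqlalchemy as sa

    # revision identifiers used by Alembic
    revision = '000_initial'
    down_revision = None

    def upgrade():
        op.create_table(
            'alarm',
            sa.Column('alarm_id', sa.String(128), primary_key=True),
            sa.Column('name', sa.String(255)),
            sa.Column('enabled', sa.Boolean),
        )

    def downgrade():
        op.drop_table('alarm')

generated with 'alembic revision -m "create base tables"' and applied with
'alembic upgrade head'.)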




7. removing all the rest of the cruft (whatever it might be)

In Ceilometer you mean?


any thought to using ceilometer repo as a base rather than duplicating 
code?  ie 
https://github.com/jd/aodh/blob/master/aodh/storage/sqlalchemy/utils.py


or does that cause weird cyclic madness?





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposing Brian Haley to Neutron L3 Core Reviewer Team

2015-06-17 Thread Edgar Magana
Congratulations Brian!  Welcome to the team!

Edgar




On 6/17/15, 3:59 PM, Carl Baldwin c...@ecbaldwin.net wrote:

It has been a week and feedback has been positive and supportive of
Brian's nomination.  Welcome to the L3 core reviewer team, Brian.

Carl

On Wed, Jun 10, 2015 at 1:11 PM, Carl Baldwin c...@ecbaldwin.net wrote:
 Folks,

 As the Neutron L3 Lieutenant [1] under the PTL, Kyle, I'd like to
 propose Brian Haley as a member of the Neutron L3 core reviewer team.
 Brian has been a long time contributor in Neutron showing expertise
 particularly in IPv6, iptables, and Linux kernel matters.  His
 knowledge and involvement will be very important especially in this
 area.  Brian has become a trusted member of our community.  His review
 stats [2][3][4] place him comfortably with other Neutron core
 reviewers.  He regularly runs proposed patches himself and gives
 insightful feedback.  He has shown a lot of interest in the success of
 Neutron.

 Existing Neutron core reviewers from the L3 area of focus, please vote
 +1/-1 for the addition of Brian to the core reviewer team.
 Specifically, I'm looking for votes from Henry, Assaf, and Mark.

 Thanks!
 Carl

 [1] 
 http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#adding-or-removing-core-reviewers
 [2] 
 https://review.openstack.org/#/q/reviewer:%22Brian+Haley+%253Cbrian.haley%2540hp.com%253E%22,n,z
 [3] http://stackalytics.com/report/contribution/neutron-group/90
 [4] http://stackalytics.com/?user_id=brian-haleymetric=marks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Network path between admin network and shares

2015-06-17 Thread Ben Swartzlander

On 06/11/2015 04:52 PM, Rodrigo Barbieri wrote:

Hello all,

There has been a lot of discussion around Share Migration lately. This 
feature has two main code paths:


- Driver Migration: optimized migration of shares from backend A to 
backend B where both backends belong to the same driver vendor. The 
driver is responsible for migrating and just returns a model update 
dictionary with necessary changes to DB entry.


- Generic Migration: This is the universal fallback for migrating a 
share from backend A to backend B, from any vendor to any vendor. In 
order to do this we have the approach where a machine in the admin 
network mounts both shares (source and destination) and copy the 
files. The problem is that it has been unusual so far in Manila design 
for a machine in the admin network to access shares which are served 
inside the cloud, a network path must exist for this to happen.


I was able to code this change for the generic driver in the Share 
Migration prototype (https://review.openstack.org/#/c/179791/).


We are not sure if all driver vendors are able to accomplish this. We 
would like to ask you to reply to this email if you are not able (or 
even not sure) to create a network path from your backend to the admin 
network, so we can better think about the feasibility of this feature.


I don't think that there will be any issue for drivers that don't handle 
share servers -- those driver will have static network configurations 
and accessibility between the node responsible for data copying and the 
backend can be an exercise for the administrator. The same is true for 
drivers that do handle share servers if a flat-network plugin is being 
used. Connectivity between the tenant networks and the flat network used 
for shares is left to the admin.


The real problem is for driver that handle share servers and create 
segmented network interfaces. Those interfaces will usually not be 
reachable from the backend network where the data copying node will 
usually live. I'm *not* in favor using VMs to bridge this gap. VMs are 
not something we can assume to exist in every Manila deployment, and I 
would be disappointed if share migration ended up depending on Nova when 
the rest of Manila's features don't.


I think we can solve this problem by allowing drivers that handle share 
servers to create an additional admin network interface for the 
purpose of migrations, and providing additional export locations on that 
admin network interface. This would require us to create a way to flag 
each export location as tenant facing, or admin facing, or both. Also, 
drivers would need a second network plugin to supply IP addresses for 
this admin network. Fortunately the network plugin could be the same for 
all backends because there should only be 1 admin network, so we'd only 
need a single new config flag in manila.conf.
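(A sketch of what that could look like in manila.conf - the option name
below is made up purely for illustration, not an existing option:

    [DEFAULT]
    # network plugin used to allocate ports/IPs on the single admin network
    # for the extra admin-facing export locations used by migration
    migration_admin_network_plugin = <network plugin class>

with the same value shared by all backends, since there is only one admin
network.)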


The only downside I can think of with this approach is that it consumes 
more network resources on the backends and could negatively affect 
scalability. Given the high value of migration though, and the lack of a 
workable alternative, I'd like to pursue this approach.


-Ben



More information in blueprint: 
https://blueprints.launchpad.net/manila/+spec/share-migration



Regards,
--
Rodrigo Barbieri
Computer Scientist
Federal University of São Carlos
+55 (11) 96889 3412


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Changing DB regarding IDs for future migration/replication/AZ support

2015-06-17 Thread Ben Swartzlander



On 06/03/2015 09:35 AM, Rodrigo Barbieri wrote:

Hello guys,

I would like to bring everyone up to speed on this topic, since we 
have a weekly meeting tomorrow and I would like to further discuss 
this, either here or tomorrow at the meeting, since this is something 
that is a pre-requisite for future features planned for liberty.


We had a discussion on IRC last week about possible improvements to 
Share Migration concerning the IDs and additional temporary DB row. So 
far, our conclusion has been that we prefer to have the additional DB 
row, but we must deal with the fact that current architecture does not 
expect a Share to have two separate IDs, the API ID and the Driver 
ID. We have came up with several ways to improve this, and we would 
like to continue the discussion and decide how we can better improve 
it thinking about the future features such as Replication and AZ.


Current scenario (as of prototype):
- Migration creates a totally new share in the destination backend, copies 
data, copies new DB values (such as destination export location) to the 
original DB entry, and then deletes the new DB entry and the source 
physical share. The result is the original DB entry with the new DB 
values (such as destination export location). In this prototype, the 
export location is being used as Driver ID, because it is derived 
from the API ID. After migration, the migrated Share has API ID X 
and export location Y, because Y was derived from the temporary DB row 
created for the destination share.


Proposal 1: Use Private Driver Storage to store Driver ID. This will 
require all drivers to follow the guideline as implemented in the 
generic driver, which manages the volume ID (Driver ID for this 
driver) separate from the API ID.


Proposal 2: Use additional DB column so we have separate IDs in each 
column. This will require less effort from drivers, because this 
column value can be transferred from the temporary DB row to the 
original DB entry, similar to what is done with the export location 
column in the prototype. Drivers can manage the value in this column 
if they want, but if they do not, we can derive from the API ID if we 
continue to use the approach currently implemented for Share 
Migration, and keep in mind that for replication or other features, we 
need to fill this field with a value as if we are creating a new 
share. This approach also has the disadvantage of being confusing for 
debugging and require more changes in Manila Core code, but at least 
this is handled by Manila Core code instead of Driver code.


Additionally, proposal 1 can be mixed with proposal 2, if the Manila 
Core code attempts to store the Driver ID value in Private Share 
Data instead of a column, but we argued that Manila Core should not 
touch Private Share Data; we have not come to a conclusion on this.


Proposal 3: Create new table instances that will be linked to the 
API ID, so a share can have several instances, which have their own 
ID, and only one is considered Active. This approach sounds very 
interesting for future features, the admin can find the ID for which 
instances are in the backend through a manila share-instances-show 
share_id command. There has been a lot of discussion regarding how 
we use the Instance ID, if we provide them directly to drivers as if 
it was the API ID, or include in a field in the Share object so the 
driver can continue to use the API ID and reads the Instance ID if it 
wants (which makes it similar to proposal 1). It was stated that for 
replication, drivers need to see the instance IDs, so providing the 
Instance ID as if it was the API ID would not make much sense here. 
This approach will also require a lot of changes on Manila Core code, 
and depending on what we decide to do with the Instance ID regarding 
drivers, may require no changes or minimal changes to drivers.
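(A very rough sketch of what Proposal 3's schema could look like - purely
illustrative SQLAlchemy, not the model that would actually be implemented:

    import uuid
    from sqlalchemy import Boolean, Column, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Share(Base):
        __tablename__ = 'shares'
        id = Column(String(36), primary_key=True)        # the "API ID"
        name = Column(String(255))

    class ShareInstance(Base):
        __tablename__ = 'share_instances'
        id = Column(String(36), primary_key=True,
                    default=lambda: str(uuid.uuid4()))   # per-instance ID
        share_id = Column(String(36), ForeignKey('shares.id'))
        host = Column(String(255))
        export_location = Column(String(255))
        is_active = Column(Boolean, default=False)       # only one active copy

so a share keeps one stable API ID while each backend copy gets its own
instance row and ID.)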


Proposal 4: Allow drivers to change the API ID. The 
advantages/disadvantages of this proposal are not very clear to me. It 
would fix part of the Share Migration problem (not sure if replication 
would need to do the same), but I see it as breaking the concept that 
we are migrating a share; it becomes cloning a share and 
erasing the original. We do not know how it would impact users, and it 
certainly would be much less transparent.


I think that from here we can proceed on expressing our concerns, 
disadvantages or advantages of each approach, for other features as 
well (Unfortunately I am familiar with migration only), develop each 
proposal further with diagrams if that's the case, so we can decide on 
which one is best for our future features.


I see 2 possible paths forward on the share ID problem for migrations. 
It's not clear that the 2 options I see neatly map to your proposal 
numbers so I'll call them A and B to avoid confusion.


Option A: Shares continue to have 1 ID, but we allow a share's ID to 
change in special cases.
I believe this option is most similar to proposals 1 and 

Re: [openstack-dev] [opensatck-dev][trove]redis replication

2015-06-17 Thread 李田清
Thanks a lot Mariam
 
 
-- Original --
From:  Mariam Johnmari...@us.ibm.com;
Date:  Wed, Jun 17, 2015 08:41 PM
To:  OpenStack Development Mailing List (not for usage 
questions)openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [opensatck-dev][trove]redis replication

 
 
Have you checked the blueprint for this at: 
https://review.openstack.org/#/c/189445/.
 
 Hope that helps.
 
 Regards,
 Mariam.
 
 
 李田清 ---06/17/2015 02:06:39 AM---Hello, Right now we can create one 
replication once, but it is not suitable for redis. What we w
 
 From:  李田清 tianq...@unitedstack.com
 To:openstack-dev openstack-dev@lists.openstack.org
 Date:  06/17/2015 02:06 AM
 Subject:   [openstack-dev] [opensatck-dev][trove]redis replication
 


 
 
 Hello,
 Right now we can create only one replica at a time, but that is not suitable for 
redis. What will we do about this?
 And if time permits, can the assignee of redis replication describe the 
process for redis replication? Thanks a lot.
__________________________________________________________________________
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Difference between Sahara and CloudBrak

2015-06-17 Thread Chris Buccella
I tried (or tried to try) Cloudbreak recently, as I need to deploy a newer
version of HDP than Sahara supports.

The interface is slick, but lacks the ability to make some choices about
your OpenStack installation. The heat template the software generated
wouldn't work with my deployment, and there wasn't a way to fix it. I think
they are primarily targeting public cloud providers.

I see Sahara vs. CloudBreak like this:

Sahara - Hadoop distro agnostic deployment for OpenStack
CloudBreak - Cloud agnostic deployment of HDP


-Chris

On Mon, Jun 15, 2015 at 12:36 PM, Andrew Lazarev alaza...@mirantis.com
wrote:

 Hi Jay,

 Cloudbreak is a Hadoop installation tool driven by Hortonworks. The main
 difference with Sahara is the point of control. In the Hortonworks world you have
 Ambari and different platforms (AWS, OpenStack, etc.) to run Hadoop. From Sahara's
 point of view, you have an OpenStack cluster and want to control everything
 from horizon (Hadoop of any vendor, Murano apps, etc.).

 So,
 If you tied with Hortonworks, spend most working time in Ambari and run
 Hadoop on different types of clouds - choose CloudBreak.
 If you have OpenStack infrastructure and want to run Hadoop on top of it -
 choose Sahara.

 Thanks,
 Andrew.

 On Mon, Jun 15, 2015 at 9:03 AM, Jay Lau jay.lau@gmail.com wrote:

 Hi Sahara Team,

 Just notice that the CloudBreak (https://github.com/sequenceiq/cloudbreak)
 also support running on top of OpenStack, can anyone show me some
 difference between Sahara and CloudBreak when both of them using OpenStack
 as Infrastructure Manager?

 --
 Thanks,

 Jay Lau (Guangya Liu)

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Midcycle meetup dates

2015-06-17 Thread Ben Swartzlander
The Manila midcycle meetup will be July 29-30 at NetApp's office in 
Durham, North Carolina. For those who can't attend in person there will 
be video conference (subject to limited slots) and audio conference.


We will work on the agenda for the meetup in coming weeks.

-Ben Swartzlander


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Fox, Kevin M
That would work, but would it be a per-tenant thing? So if you had lots of tenants 
using the same image, it would be re-downloaded lots of times. Are there any plans 
for glance integration so the cloud deployer could cache it in the image 
catalog? I seem to remember a version of docker that could use glance directly?

Thanks,
Kevin


From: Adrian Otto
Sent: Wednesday, June 17, 2015 5:31:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads 
for operator languagepacks

Kevin,

Magnum has a plan for dealing with that. Solum will likely have a Magnum 
integration that will leverage it:

https://blueprints.launchpad.net/magnum/+spec/registryv2-in-master

With that said, yes, you could also optimize the performance of the upstream by 
caching it locally in swift. You’d want an async process to keep it continually 
updated though.

Adrian

 On Jun 17, 2015, at 4:30 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 so, to not beat up on the public facing server, the user would have to copy 
 the container from the public server to the cloud's swift stoage, then the 
 docker hosts could pull from there?

 Thanks,
 Kevin
 
 From: Adrian Otto [adrian.o...@rackspace.com]
 Sent: Wednesday, June 17, 2015 4:21 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks

 Kevin,

 On Jun 17, 2015, at 4:03 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 Would each docker host then try to redownload the prebuilt container 
 externally? If you build from source, does it build it once and then all the 
 docker hosts use that one local copy? Maybe Solum needs a mechanism to pull 
 in a prebuilt LP?

 On each docker server Solum downloads built LP's from swift before the 
 containers are created, so Docker has no reason to contact the public image 
 repository for fetching the LP images because it has a local copy.
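
 For anyone curious what that step looks like in practice, here is a rough
 sketch using python-swiftclient (the container, object and path names are
 hypothetical, not Solum's actual ones):

 from swiftclient import client as swift_client

 conn = swift_client.Connection(authurl='http://keystone:5000/v2.0',
                                user='solum', key='secret',
                                tenant_name='service', auth_version='2')
 # Fetch the built languagepack that was previously cached in swift.
 headers, lp_image = conn.get_object('solum-languagepacks', 'python-lp.tar.gz')
 with open('/var/lib/solum/lp/python-lp.tar.gz', 'wb') as f:
     f.write(lp_image)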

 Adrian


 Thanks,
 Kevin
 
 From: Murali Allada [murali.all...@rackspace.com]
 Sent: Wednesday, June 17, 2015 12:53 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks

 Kevin\Keith,

 Yes, we would like to use the catalog for globally available artifacts, such 
 as operator languagepacks. More specifically the catalog would be a great 
 place to store metadata about publicly available artifacts to make them 
 searchable and easy to discover.

 The catalog would point to the 'built' artifact, not the 'unbuilt' 
 dockerfile in github.
 The point of languagepacks is to reduce the amount of time the solum CI 
 pipeline
 spends building the user's application container. We shouldn't build the 
 languagepack from scratch each time.

 -Murali







 
 From: Keith Bray keith.b...@rackspace.com
 Sent: Wednesday, June 17, 2015 2:10 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks

 Hi Kevin,

 We absolutely envision languagepack artifacts being made available via
 apps.openstack.org (ignoring for a moment that the name may not be a
 perfect fit, particularly for things like vanilla glance images ... Is it
 an OS or an App? ...  catalog.openstack.org might be more fitting).
 Anyway, there are two stages for language packs, unbuilt, and built.  If
 it's in an unbuilt state, then it's really a Dockerfile + any accessory
 files that the Dockerfile references.   If it's in a built state, then
 it's a Docker image (same as what is found on Dockerhub I believe).  I
 think there will need to be more discussion to know what users prefer,
 built vs. unbuilt, or both options (where unbuilt is often a collection of
 files, best managed in a repo like github, vs. built, which are best
 provided as direct links to a single source like Dockerhub).

 -Keith

 On 6/17/15 1:58 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 This question may be off on a tangent, or may be related.

 As part of the application catalog project, (http://apps.openstack.org/)
 we're trying to provide globally accessible resources that can be easily
 consumed in OpenStack Clouds. How would these global Language Packs fit
 in? Would the url record in the app catalog be required to point to an
 Internet facing public Swift system then? Or, would it point to the
 source git repo that Solum would use to generate the LP still?

 Thanks,
 Kevin
 
 From: Randall Burt [randall.b...@rackspace.com]
 Sent: Wednesday, June 17, 2015 11:38 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] 

Re: [openstack-dev] [Ironic][Horizon][Tuskar-ui] Making a dashboard for Ironic‏

2015-06-17 Thread niuzhenguo
Hi Krotscheck,

Sorry for not attending the last meeting due to TZ.


Yes, Horizon is moving towards an Angular application, but for now no Angular 
Dashboard has landed. I think it's high time we set a standard for other 
projects that want to be Horizon compatible, on whether they should be based on 
the Angular Dashboard or the current Horizon framework. This is important for 
the new Magnum and Ironic UIs; personally, I'd prefer to use the current 
framework and move to the Angular Dashboard when it's mature.



And after a quick look at your JS project, I think it's a completely standalone 
UI not based on the Horizon Angular Dashboard (correct me if I missed 
something), and it seems there hasn't been any update in over a month. Are you 
planning to push your repo to stackforge or openstack?

Anyway, it’s clear that we should make an Ironic dashboard, it’s a good start.


Regards
-zhenguo

From: Michael Krotscheck [mailto:krotsch...@gmail.com]
Sent: Wednesday, June 17, 2015 11:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic][Horizon][Tuskar-ui] Making a dashboard 
for Ironic‏

Hey there!

Yes, we are duplicating effort. I've spent quite a bit of effort over the past 
few months landing features inside openstack that will make it possible for a 
JavaScript client to be imported to horizon as a dependency. This includes 
CORS, configuration, caching, infra tooling, etc, with the end goal being a 
maximum amount of code reusability between the standalone UI and Horizon. While 
it may not appear that way, I _am_ actively working on this project, though I'm 
currently focused more on javascript infrastructure tooling and oslo middleware 
than on the ironic webclient itself.

With Horizon also moving towards an angular application, I feel it makes far 
more sense to build components for the new Horizon than the old one.

Michael

On Tue, Jun 16, 2015 at 9:02 PM NiuZhenguo 
niuzhenguo...@hotmail.commailto:niuzhenguo...@hotmail.com wrote:
hi folks,

I'm planning to propose a new horizon plugin ironic-dashboard to fill the gap 
that ironic doesn't have horizon support. I know there's a nodes panel on 
infrastructure dashboard handled by tuskar-ui, but it's specifically geared 
towards TripleO. Ironic needs a separate dashboard to present an interface for 
querying and managing ironic's resources (Drivers, Nodes, and Ports).

After discussion with the ironic community, I pushed an ironic-dashboard 
project to stackforge [1].

Also there's an existing JS UI for ironic in developing now [2], we may try to 
resolve the same goals, but as an integrated openstack project, there's clear 
needs to have horizon support.

I'd like to get what's your suggestion, thanks in advance.


[1] https://review.openstack.org/#/c/191131/
[2] https://github.com/krotscheck/ironic-webclient


Regards
-zhenguo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Adrian Otto
Kevin,

Yes, the solution to slow performance on public registry servers is to cache 
content locally.

Arranging that is not difficult, but we are not at that point yet. Basically 
you set up the bay models to use heat templates that configure Docker 
Distribution to use a cloud-local upstream. That upstream would use the public 
registry as its upstream. We can make this really easy.
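
To make that concrete, a pull-through cache backed by the cloud's swift could
use a Docker Distribution (registry v2) config roughly like this sketch
(hostnames, credentials and the container name are placeholders, and the exact
heat template wiring is still to be worked out):

version: 0.1
storage:
  swift:
    authurl: https://keystone.example.com/v2.0
    username: registry
    password: secret
    container: docker-registry
proxy:
  remoteurl: https://registry-1.docker.io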

In addition to the Magnum registry feature I referenced, we would also need a 
feature in Magnum to allow a BayModel to have an override parameter for 
setting the bay nodes' distribution upstream server, and a configuration 
directive to set a default value for that setting in the main Magnum 
configuration file.

I see no need to conflate Glance images and Docker container images when there 
is already a protocol implementation that will support swift cloud storage on a 
per tenant basis. We plan to use the prevailing open source implementation of 
registry v2 as a first iteration, and decide if that needs to be further 
refined later.

Adrian


 Original message 
From: Fox, Kevin M kevin@pnnl.gov
Date: 06/17/2015 6:35 PM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads 
for operator languagepacks

That would work, but would it be a per-tenant thing? So if you had lots of tenants 
using the same image, it would be redownloaded lots of times. Are there any plans 
for Glance integration so the cloud deployer could cache it in the image 
catalog? I seem to remember a version of Docker that could use Glance directly?

Thanks,
Kevin


From: Adrian Otto
Sent: Wednesday, June 17, 2015 5:31:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift downloads 
for operator languagepacks

Kevin,

Magnum has a plan for dealing with that. Solum will likely have a Magnum 
integration that will leverage it:

https://blueprints.launchpad.net/magnum/+spec/registryv2-in-master

With that said, yes, you could also optimize the performance of the upstream by 
caching it locally in swift. You'd want an async process to keep it continually 
updated, though.

Adrian

 On Jun 17, 2015, at 4:30 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 so, to not beat up on the public-facing server, the user would have to copy 
 the container from the public server to the cloud's swift storage, then the 
 docker hosts could pull from there?

 Thanks,
 Kevin
 
 From: Adrian Otto [adrian.o...@rackspace.com]
 Sent: Wednesday, June 17, 2015 4:21 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks

 Kevin,

 On Jun 17, 2015, at 4:03 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 Would each docker host then try to redownload the prebuilt container 
 externally? If you build from source, does it build it once and then all the 
 docker hosts use that one local copy? Maybe Solum needs a mechanism to pull 
 in a prebuilt LP?

 On each docker server Solum downloads built LP's from swift before the 
 containers are created, so Docker has no reason to contact the public image 
 repository for fetching the LP images because it has a local copy.

 Adrian


 Thanks,
 Kevin
 
 From: Murali Allada [murali.all...@rackspace.com]
 Sent: Wednesday, June 17, 2015 12:53 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks

 Kevin\Keith,

 Yes, we would like to use the catalog for globally available artifacts, such 
 as operator languagepacks. More specifically the catalog would be a great 
 place to store metadata about publicly available artifacts to make them 
 searchable and easy to discover.

 The catalog would point to the 'built' artifact, not the 'unbuilt' 
 dockerfile in github.
 The point of languagepacks is to reduce the amount of time the solum CI 
 pipeline
 spends building the user's application container. We shouldn't build the 
 languagepack from scratch each time.

 -Murali







 
 From: Keith Bray keith.b...@rackspace.com
 Sent: Wednesday, June 17, 2015 2:10 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum][app-catalog] [ Supporting swift 
 downloads for operator languagepacks

 Hi Kevin,

 We absolutely envision languagepack artifacts being made available via
 apps.openstack.org (ignoring for a moment that the name may not be a
 perfect fit, particularly for things like vanilla glance images ... Is it
 an OS or an App? ...  catalog.openstack.org might be more fitting).
 Anyway, 

Re: [openstack-dev] [Congress] Mid-cycle sprint

2015-06-17 Thread Adam Young
How many people do you think you will have?  We have a midcycle at 
Boston University, July 15-17, and you are welcome to join in.  I am 
pretty sure we will have more than enough capacity, considering the size 
of the Congress team.


Hotels might be a bit of an issue, as we are getting close and they are 
starting to book up, but the BU admin has let us know that we can get 
dorm space if people so desire.



On 06/16/2015 05:13 PM, Tim Hinrichs wrote:

Hi all,

In the last couple of IRCs we've been talking about running a 
mid-cycle sprint focused on enabling our message bus to span multiple 
processes and multiple hosts.  The message bus is what allows the 
Congress policy engine to communicate with the Congress wrappers 
around external services like Nova, Neutron.  This cross-process, 
cross-host message bus is the platform we'll use to build version 2.0 
of our distributed architecture.


If you're interested in participating, drop me a note. Once we know 
who's interested we'll work out date/time/location details.


Thanks!
Tim



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Gnocchi] question on integration with time-series databases

2015-06-17 Thread gordon chung



On 17/06/2015 12:57 PM, Chris Dent wrote:

On Tue, 16 Jun 2015, Simon Pasquier wrote:


I'm still struggling to see how these optimizations would be implemented
since the current Gnocchi design has separate backends for indexing and
storage which means that datapoints (id + timestamp + value) and metric
metadata (tenant_id, instance_id, server group, ...) are stored in
different places. I'd be interested to hear from the Gnocchi team how this
is going to be tackled. For instance, does it imply modifications or
extensions to the existing Gnocchi API?


I think there are three things to keep in mind:

a) The plan is to figure it out and make it work well, production
   ready even. That will require some iteration. At the moment the
   overlap between InfluxDB python driver maturity and someone-to-do-the-
   work is not great. When it is I'm sure the full variety of
   optimizations will be explored, with actual working code and test
   cases.


just curious but what bugs are we waiting on for the influxdb driver? 
i'm hoping Paul Dix has prioritised them?




b) Gnocchi has separate _interfaces_ for indexing and storage. This
   is not the same as having separate _backends_[1]. If it turns out
   that the right way to get InfluxDB working is for it to be the
   same backend to the two separate interfaces then that will be
   okay.


i'll straddle the middle line here and say i think we need to wait for a 
viable driver before we can start making the appropriate adjustments. 
having said that, once we have the gaps resolved, i think we should 
make every effort to conform to the rules of the db (whether it is 
influxdb, kairosdb, opentsdb). we faced a similar issue with the 
previous data storage design where we generically applied a design for 
one driver across all drivers and that led to terribly inefficient 
design everywhere.
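
to illustrate the interface/backend distinction in (b), here's a minimal 
sketch (the class names are made up, not gnocchi's real ones) of a single 
driver object satisfying both interfaces:

class IndexerDriver(object):
    def create_metric(self, metric_id, resource_metadata):
        raise NotImplementedError()


class StorageDriver(object):
    def add_measures(self, metric_id, measures):
        raise NotImplementedError()


class InfluxDBDriver(IndexerDriver, StorageDriver):
    """One backend object implementing both interfaces."""

    def create_metric(self, metric_id, resource_metadata):
        pass  # e.g. store the metadata as InfluxDB tags

    def add_measures(self, metric_id, measures):
        pass  # e.g. write the datapoints to the same InfluxDB series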




c) The future is unknown and the present is not made of stone. There
   could be modifications and extensions to the existing stuff. We
   don't know. Yet.

[1] Yes the existing implementations use SQL for the indexer and
various subclasses of the carbonara abstraction as two backends
for the two interfaces. That's an accident of history not a design
requirement.


--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Ask for help on supportting the 3-rd party CI for HDFS driver

2015-06-17 Thread Ben Swartzlander

On 06/11/2015 10:34 AM, Jeremy Stanley wrote:

On 2015-06-11 07:51:55 +0200 (+0200), Philipp Marek wrote:
[...]

I still stand by my opinion (as voiced in Vancouver) that for such
one-off things (that contributors are not likely to repeat over
and over again) it might make sense to have -infra simply *do*
them[3].

[...]

To reiterate my response from the summit, it's a neat idea but not
one the Infra team has the bandwidth to entertain at the moment. As
you've noticed we're understaffed and while we're continually trying
to grow the team it takes many months to a year or more of full-time
exposure to our current systems to bring new people up to speed to
help us run it. Also we don't actually have a holistic view of the
underlying tests being run by the jobs... for that you need to
elicit assistance from the QA team who maintain DevStack/Tempest and
did the plugin design for things like out-of-tree driver testing,
and also the project teams for the software at which these drivers
and backends are targeted.

So while I and others are happy to have our CI run jobs to test
OpenStack drivers for other free software backends, don't expect the
actual work and learning curve to necessarily be any less than
building your own CI system from scratch (just different).


It doesn't make sense to require people to learn about things they
will never use again - and the amount of time spent answering the
questions, diagnosing problems and so on is quite a bit higher
than doing it simply right the first time.

This is, I think, also a common misconception. The people who write
these jobs to run in our CI need to stick around or induct
successors to help maintain them and avoid bitrot as our systems
constantly change and evolve. I know the same goes for the drivers
themselves... if people don't keep them current with the OpenStack
software into which they're integrating, support will quickly be
dropped due to quality control concerns.


I strongly agree here. I think that the cinder community has shown that 
one of the main values of universal vendor CI is that it keeps 
driver maintainers engaged with the community and aware of ongoing 
development. OpenStack projects are not a static target which you can 
write a driver for once and be done. We are adding new features and 
making enhancements every release, and some of those changes require 
drivers to evolve too. At the very least CI systems allow us to validate 
that the introduction of a new feature didn't break any existing drivers.


For a pure-software based storage backend like HDFS, we can leverage the 
compute resources of openstack-infra, but the development resources 
still need to come from the Manila team -- the same group of people 
responsible for maintaining the driver and fixing bugs should have some 
understanding of the automated test system, because it will be finding 
bugs and we'll have to reproduce failures and debug them. If nobody is 
willing to do this on an ongoing basis for a backend like HDFS, then 
eventually we won't be able to support it anymore. The CI requirement 
just makes this fact more explicit and forces us to either commit the 
resources or remove the driver rather than waiting until the driver is 
horribly broken in a few years.




And if it's *that* often needed, why not write a small script
that, given a name, does the needed changes, so that only a commit and
review is needed?

[...]

Definitely something that people who have experience writing these
could collaborate on contributing. As I mentioned, the Infra team
doesn't have the complete picture, but the people who have sweated
and bled to get their drivers tested and integrated do, at least to
a greater extent than we do.

This is all to say I understand the frustration, but I don't have a
simple solution for it unfortunately.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tacker][Telco][NFV]: Reminder - weekly IRC meeting tomorrow Jun 18th

2015-06-17 Thread Sridhar Ramaswamy
Meeting on #openstack-meeting @ 1600UTC (9am PDT)

Agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/Tacker#Meeting_June_18.2C_2015

Feel free to update the agenda if needed.

thanks,
Sridhar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][murano] Developing puppet module for Murano

2015-06-17 Thread Matt Fischer
I am planning on looking into Murano later this year so I'd be interested
in helping review this code. I'm puppet-core so feel free to add me to
reviews and I can look as time permits.

On Wed, Jun 17, 2015 at 9:49 AM, Serg Melikyan smelik...@mirantis.com
wrote:

 Emilien,

 Thank you for your proposal, I completely agree with it:

  * Move the module under the big tent
 Proposed change in infra [1] and corresponding change in governance [2]

  * Adding Puppet OpenStack group part of core permissions
 I've added puppet-manager-core to puppet-murano-core

  * Keep Puppet Murano folks part of core permissions for now
 I've added Denis Egorenko to puppet-murano-core (group was empty
 previously).

  * Do not merge a patch without at least one review from both groups
  * Collaborate to make the module compliant
 Denis will be responsible for the initial review from the Murano side, in order
 to not overburden the OpenStack Puppet folks with helping with Puppet basics. We
 will make sure not to merge anything without a +2 from someone from
 puppet-manager-core. We will start with the basic repository structure and
 will move the existing module over bit by bit.

  * When the module is compliant, we only keep Puppet OpenStack
group managing the module, like it's done for other modules.
 Sure!

 Once again thank you for your help and concerns!

 References:
 [1] https://review.openstack.org/192730
 [2] https://review.openstack.org/192727

 On Wed, Jun 17, 2015 at 5:03 PM, Emilien Macchi emil...@redhat.com
 wrote:
 
 
  On 06/17/2015 09:50 AM, Serg Melikyan wrote:
  Thank you for sharing the link to the list of things that a new module should
  satisfy! It will be really helpful even if the list changes over
  time. At least we have pointers on how to start making our module
  compliant.
 
  Regarding figuring out permissions - I don't mind if we set
  puppet-core as the group responsible for the repository. I believe that
  through contributing, the Murano module authors will gain enough
  credibility to be included in puppet-core. This will help to
  ensure that the module is developed according to all the rules of the Puppet
  OpenStack community and that nothing is merged that does not satisfy the
  adopted way of doing things. Emilien, if you agree with this approach
  I will send the appropriate change to review.
 
 
  I like Monty's proposal.
 
  I propose:
  * Move the module under the big tent
  * Adding Puppet OpenStack group part of core permissions
  * Keep Puppet Murano folks part of core permissions for now
  * Do not merge a patch without at least one review from both groups
  * Collaborate to make the module compliant
  * When the module is compliant, we only keep Puppet OpenStack group
  managing the module, like it's done for other modules.
 
 
 
 
 
  On Wed, Jun 17, 2015 at 4:08 PM, Monty Taylor mord...@inaugust.com
 wrote:
  On 06/17/2015 08:53 AM, Emilien Macchi wrote:
  Hi Serg,
 
  On 06/17/2015 05:35 AM, Serg Melikyan wrote:
  Hi Emilien,
 
  I would like to answer your question regarding
  stackforge/puppet-murano repository asked in different thread:
 
   Someone from the Fuel team first created the module in Fuel 6
   months ago [1], and 3 months later someone from the Fuel team
   created an empty repository on Stackforge [2]. By the way, the
   Puppet OpenStack community does not have core permissions on
   this module and it's owned by the Murano team.
 
   Murano was included in Fuel around 2 years ago; our first
   official release as part of Fuel was Icehouse - yes, we have had a
   puppet module for Murano for a long time now. But until recently
   we didn't have a Big Tent in place, and that is why we never
   thought that we were able to upstream our module.
 
   Once the policy regarding upstream puppet modules in Fuel changed and
   the Big Tent model was adopted, we decided to upstream the module for
   Murano. I am really sorry that I didn't contact you for more
   information on how to do that properly and just created the
   corresponding repository.
 
  Well, in fact, I'm sorry for you; you could not benefit from the Puppet
  OpenStack community. Let's fix that.
 
  I didn't give permissions to the Puppet OpenStack community for this
  repository because it would be strange, given that I didn't even
  contact you. We thought that we would upstream what we have now
  and then make sure that this repo would be integrated with the Puppet
  OpenStack ecosystem.
 
  We still have a strong desire to upstream our puppet module. Fuel is
  not the only user of this module; there are other projects that would
  like to use Murano as part of their solution and use the puppet module
  from Fuel for deployment.
 
  Can you advise how we should proceed further?
 
  The most recent patch to add a module in OpenStack is zaqar:
  https://review.openstack.org/#/c/191942/
 
  Two things we need to solve if you move your module to
  the big tent: * bring the module to compliance (I'm working on a
  blueprint to explain what that is, but you can already read what we
  said at the Summit:
 
 

Re: [openstack-dev] Developers in the Bay Area

2015-06-17 Thread Fabrizio Soppelsa

Lisette, what is this about?
http://it.linkedin.com/in/fsoppelsa


On 06/17/2015 07:10 AM, Lisette Sheehan wrote:


Hi!

I work for an events market research company and am in need of 
recruiting young developers in the San Francisco Bay Area to answer a 
short survey to see if they qualify for an in-depth interview 
regarding an upcoming developer event in San Francisco. Is this 
something I can post to your page or send out to your mailing list?


*Lisette*

*Lisette Sheehan, *Senior Research Manager

*Exhibit Surveys**, Inc.*

7 Hendrickson Ave., Red Bank, NJ 07701

T: 732.704.1329   |  Mobile: 732.598.9313   |  F: 732.741.5704

E: _lise...@exhibitsurveys.com mailto:lise...@exhibitsurveys.com_

*Be a Knowbody*: www.exhibitsurveys.com

*Follow us on *Facebook 
http://www.facebook.com/pages/Exhibit-Surveys-Inc/77619067606*and 
*Twitter http://twitter.com/exhibitsurveys




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][db] online schema upgrades

2015-06-17 Thread Mike Bayer



On 6/17/15 12:40 PM, Ihar Hrachyshka wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 06/17/2015 11:27 AM, Anna Kamyshnikova wrote:

Ihar, thanks for bringing this up!

This is very interesting and I think it worth trying. I'm +1 on
that and want to participate in this work.


Awesome.


 In fact a lot of *not strict* migrations were removed with
 juno_initial, so I hope it won't be so hard for now to apply
 stricter rules for migrations. But what is the plan for those
 migrations that are *not strict* now?

Still, we have several live data migrations from Juno to Kilo:

- - 14be42f3d0a5_default_sec_group_table.py: populates db with default
security groups;

- - 16cdf118d31d_extra_dhcp_options_ipv6_support.py: populates
extradhcpopts with default ip_version = 4;

- - 2d2a8a565438_hierarchical_binding.py: populates db with
port_binding_levels objects, then drops old tables;

- - 35a0f3365720_add_port_security_in_ml2.py: port security field is
populated with True for ports and networks;

- - 034883111f_remove_subnetpool_allow_overlap.py: drops allow_overlap
column from subnetpools: probably unused so we can be ok with it?..

In any case, the plan for existing migration rules is: don't touch
them. Their presence in N release just indicates that we cannot get
online db migration in N+1. That's why we should adopt strict rules
the earlier the better, so that opportunity does not slip to N+X where
X is too far.

The patches currently in review that look suspicious in this regard are:
- - I4ff7db0f5fa12b0fd1c6b10318e9177fde0210d7: moves data from one table
into another;
- - Iecb3e168a805fc5f3d59d894c3f0d9298505e872: fills new columns with
default server values (why is it even needed?..);
- - Icde55742aa78ed995bac0896c01c80c9d28aa0cf: alter_column(). Not sure
we are ok with it;
- - I66b3ee8c2f9fa6f04b9e89dc49d1a3d277d63191: probably not an issue
though since it touches existing live data impact rule?


I made a list of migrations from juno to kilo that are non-expansive or 
do data migrations:


*these contain drop column:*

034883111f_remove_subnetpool_allow_overlap.py
2d2a8a565438_hierarchical_binding.py

*these contain drop table:*

28c0ffb8ebbd_remove_mlnx_plugin.py
2b801560a332_remove_hypervneutronplugin_tables.py
408cfbf6923c_remove_ryu_plugin.py
57086602ca0a_scrap_nsx_adv_svcs_models.py

*these contain data migrations:*

14be42f3d0a5_default_sec_group_table.py
16cdf118d31d_extra_dhcp_options_ipv6_support.py
2b801560a332_remove_hypervneutronplugin_tables.py
2d2a8a565438_hierarchical_binding.py
35a0f3365720_add_port_security_in_ml2.py

*Example of failure:*

neutron/db/migration/alembic_migrations/versions/2d2a8a565438_hierarchical_binding.py 
- drops the following columns:


op.drop_constraint(fk_name_dvr[0], 'ml2_dvr_port_bindings', 'foreignkey')
op.drop_column('ml2_dvr_port_bindings', 'cap_port_filter')
op.drop_column('ml2_dvr_port_bindings', 'segment')
op.drop_column('ml2_dvr_port_bindings', 'driver')

op.drop_constraint(fk_name[0], 'ml2_port_bindings', 'foreignkey')
op.drop_column('ml2_port_bindings', 'driver')
op.drop_column('ml2_port_bindings', 'segment')
which then causes a failure in Juno:
OperationalError: (OperationalError) (1054, Unknown column
'ml2_port_bindings_1.driver' in 'field list')






(the list can be incomplete)
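
For contrast, a migration that satisfies the stricter rules being discussed
would be limited to purely additive ("expand") changes, roughly like this
sketch (the column name is made up):

from alembic import op
import sqlalchemy as sa


def upgrade():
    # Expand-only: add a nullable column. Old code keeps working against
    # the schema while new code starts populating the new column.
    op.add_column('ml2_port_bindings',
                  sa.Column('new_attr', sa.String(length=64), nullable=True))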


 I think that we should try to use Alembic as much as we can, as Mike
 is going to support us in that and we have time to make some changes
 in Alembic directly.

Yes, sure, I'm looking forward to seeing Mike's proposal in public.


 We should undoubtedly plan this work for the M release because there
 will be some issues that will appear in the process.


Sure.

Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJVgaL4AAoJEC5aWaUY1u57oZgH/34pgV7AqOiq4XnWOOmQ9HA9
jL+8E9jv8pUSW3X4v0Rm5mDuWJyWscrgy61Om+sWsmqBFAmm/gmLWm+NNADbYM5e
6hsoaO5WmuvRc03MwIwsa0NEgfPc8EhT5JiZmYRjOBc85ZCs6+UOKUHBAI2EVTg8
t8YKdTdzxlrZQEOng1lbsUQYkHnNUZTbsREnpangfaHXBk3xmilH/ebGsz3CRUCe
OBrpp6q8N7mgZgK/UQKb04eS5bCna7eVmv6q7PvIO0SlYhhDbrL3+dv/SZpqQWZ/
Hek2Oig0IYyPygVrGc4BpT9MIaKisGoxXMn1rRB2g8us8jM58VyzqgXwEH2H4Aw=
=TqHb
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-17 Thread Clint Byrum
Excerpts from Sean Dague's message of 2015-06-16 10:16:34 -0700:
 On 06/16/2015 12:49 PM, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2015-06-16 06:22:23 -0700:
  FYI,
 
  One of the things that came out of the summit for Devstack plans going
  forward is to trim it back to something more opinionated and remove a
  bunch of low use optionality in the process.
 
  One of those branches to be trimmed is all the support for things beyond
  RabbitMQ in the rpc layer. RabbitMQ is what's used by 95%+ of our
  community, that's what the development environment should focus on.
 
  The patch to remove all of this is here -
  https://review.openstack.org/#/c/192154/. Expect this to merge by the
  end of the month. If people are interested in non RabbitMQ external
  plugins, now is the time to start writing them. The oslo.messaging team
  already moved their functional test installation for alternative
  platforms off of devstack, so this should impact a very small number of
  people.
 
  
  The recent spec we added to define a policy for oslo.messaging drivers is
  intended as a way to encourage that 5% who feels a different messaging
  layer is critical to participate upstream by adding devstack-gate jobs
  and committing developers to keep them stable. This change basically
  slams the door in their face and says good luck, we don't actually care
  about accomodating you. This will drive them more into the shadows,
  and push their forks even further away from the core of the project. If
  that's your intention, then we need to have a longer conversation where
  you explain to me why you feel that's a good thing.
 
 I believe it is not the responsibility of the devstack team to support
 every possible backend one could imagine and carry that technical debt
 in tree, confusing new users in the process that any of these things
 might actually work. I believe that if you feel that your spec assumed
 that was going to be the case, you made a large incorrect externalities
 assumption.
 

I agree with you, and support your desire to move things into plugins.

However, your timing is problematic and the lack of coordination with
the ongoing effort to deprecate untested messaging drivers gracefully
is really frustrating. We've been asking (on this list) zmq interested
parties to add devstack-gate jobs and identify themselves as contacts
to support these drivers. Meanwhile this change and the wording around
it suggest that they're not welcome in devstack.

  Also, I take issue with the value assigned to dropping it. If that 95%
  is calculated as orgs_running_on_rabbit/orgs then it's telling a really
  lop-sided story. I'd rather see compute_nodes_on_rabbit/compute_nodes.
  
  I'd like to propose that we leave all of this in tree to match what is
  in oslo.messaging. I think devstack should follow oslo.messaging and
  deprecate the ones that oslo.messaging deprecates. Otherwise I feel like
  we're Vizzini cutting the rope just as The Dread Pirate 0mq is about to
  climb the last 10 meters to the top of the cliffs of insanity and battle
  RabbitMQ left handed. I know, inconceivable right?
 
 We have an external plugin mechanism for devstack. That's a viable
 option here. People will have to own and do that work, instead of
 expecting the small devstack team to do that for them. I believe I left
 enough of a hook in place that it's possible.
 

So let's do some communication, and ask for the qpid and zmq people to
step up, and help them move their code into an external plugin, and add
documentation to help their users find it. The burden should shift, but
it still rests with devstack until it _does_ shift.

 That would also let them control the code relevant to their plugin,
 because there is no way that devstack was going to gate against other
 backends here, so we'd end up breaking them pretty often, and it taking
 a while to fix them in tree.

I love that idea. That is not what the change does though. It deletes
with nary a word about what users of this code should do until new
external plugins appear.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Fox, Kevin M
Awesome! Thanks. :)

Kevin

From: Ihar Hrachyshka [ihrac...@redhat.com]
Sent: Wednesday, June 17, 2015 9:41 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do 
your end users use networking?

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 06/17/2015 05:55 PM, Fox, Kevin M wrote:
 The biggest issue we have run into with multiple public networks is
 restricting which users can use which networks. We have the same
 issue, where we may have an internal public network for the
 datacenter, but also, say, a DMZ network we want to put some vm's
 on, but can't currently extend that network easily there because
 too many tenants will be able to launch vm's attached to the DMZ
 that don't have authorization. Quotas or ACLs or something on
 public networks are really needed.


...and that (acls for networks) is to be handled in Liberty with:
https://blueprints.launchpad.net/neutron/+spec/rbac-networks

Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJVgaNBAAoJEC5aWaUY1u57CN0IANEZh3t7da/7zpqNvgYMr10O
mHwxx0HVTrchi4+xOuQm5Ibx+CtRdRS2rgGIKdjZVyuanYsbdFPDrJ32dFRU7EIJ
5oFtZvs5iev1jGpOs4jzMwAdfLN4XFmci1Vm+eNy0uatiiTOjt93RdArRMNQGQlI
cJfLmzS88oG0nVoEthHd6YD4Lk8+mf2e64hHNAW8yz7ZTofHff2xRU4QdnMAOEDk
aXU4R1L32zsI9lmC6ANutTzXkA+LWk14PqrA4zHzdwurDJKQQ0oq/M4jjsdG4JyX
k83FvBk47ht40rV1s0kEnOQWVPR7NcAt6zDwu/l+a0uf5jTIJbl2SKE0GLwTAtY=
=uzgm
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-17 Thread Daniel P. Berrange
On Wed, Jun 17, 2015 at 11:37:04AM -0500, Matt Riedemann wrote:
 
 
 On 6/17/2015 4:46 AM, Daniel P. Berrange wrote:
 On Tue, Jun 16, 2015 at 04:21:16PM -0500, Matt Riedemann wrote:
 The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all very
 similar.
 
 I want to extract a common base class that abstracts some of the common code
 and then let the sub-classes provide overrides where necessary.
 
 As part of this, I'm wondering if we could just have a single
 'mount_point_base' config option rather than one per backend like we have
 today:
 
 nfs_mount_point_base
 glusterfs_mount_point_base
 smbfs_mount_point_base
 quobyte_mount_point_base
 
 With libvirt you can only have one of these drivers configured per compute
 host right?  So it seems to make sense that we could have one option used
 for all 4 different driver implementations and reduce some of the config
 option noise.
 
 Doesn't cinder support multiple different backends to be used ? I was always
 under the belief that it did, and thus Nova had to be capable of using any
 of its volume drivers concurrently.
 
 Yeah, I forgot about this and it was pointed out elsewhere in this thread so
 I'm going to drop the common mount_point_base option idea.
 
 
 Are there any concerns with this?
 
 Not a concern, but since we removed the 'volume_drivers' config parameter,
 we're now free to re-arrange the code too. I'd like us to create a subdir
 nova/virt/libvirt/volume and create one file in that subdir per driver
 that we have.
 
 Sure, I'll do that as part of this work, the remotefs and quobyte modules
 can probably also live in there.  We could also arguably move the
 nova.virt.libvirt.lvm and nova.virt.libvirt.dmcrypt modules into
 nova/virt/libvirt/volume as well.

I'd actually prefer to keep the volume driver impls separate
from these storage management modules, as the latter can be used by both
the volume and image management code.

So perhaps use nova/virt/libvirt/storage  for the things like lvm.py
and dmcrypt.py
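
For anyone following along, the consolidation being discussed would look
roughly like the sketch below (class and option names are illustrative, not
the actual patch):

from oslo_config import cfg

CONF = cfg.CONF  # *_mount_point_base options are registered elsewhere in nova


class LibvirtBaseFileSystemVolumeDriver(object):
    """Shared logic for the mounted-filesystem drivers
    (NFS, GlusterFS, SMBFS, Quobyte)."""

    def _get_mount_point_base(self):
        # Each backend keeps its own config option, per the thread above.
        raise NotImplementedError()

    def _get_device_path(self, connection_info):
        # Common mount/path handling lives here instead of being
        # copy/pasted into every driver.
        pass


class LibvirtNFSVolumeDriver(LibvirtBaseFileSystemVolumeDriver):
    def _get_mount_point_base(self):
        return CONF.libvirt.nfs_mount_point_base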

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Docker Native Networking

2015-06-17 Thread Adrian Otto
Team,

This blueprint needs an assignee:
https://blueprints.launchpad.net/magnum/+spec/native-docker-network

I have moved the whiteboard discussion to the etherpad:
https://etherpad.openstack.org/p/magnum-native-docker-network

Please take a moment to put your input on the ether pad so we can draft an 
actionable plan for getting this feature going.

Thanks,

Adrian

On Jun 12, 2015, at 11:05 AM, Adrian Otto 
adrian.o...@rackspace.commailto:adrian.o...@rackspace.com wrote:

Team,

OpenStack Networking support for Magnum Bays was an important topic for us in 
Vancouver at the design summit. Here is one blueprint that requires discussion 
that’s beyond the scope of what we can easily fit in the BP whiteboard:

https://blueprints.launchpad.net/magnum/+spec/native-docker-network

Before we dive into implementation planning, I'll offer these as guardrails to 
use as a starting point:

1) Users of the Swarm bay type have the ability to create containers. Those 
containers may reside on different hosts (Nova instances). We want those 
containers to be able to communicate with each other over a network similar to 
the way that they can over the Flannel network used with Kubernetes.

2) We should leverage community work as much as possible, combining the best of 
Docker and OpenStack to produce an integrated solution that is easy to use, and 
exhibits performance that's suitable for common use cases.

3) Recognize that our Docker community is working on libnetwork [1] which will 
allow for the creation of logical networks similar to links that allow 
containers to communicate with each other across host boundaries. The 
implementation is pluggable, and we decided in Vancouver that working on a 
Neutron plugin for libnetwork could potentially make the user experience  
consistent whether you are using Docker within Magnum or not.

4) We would like to plug in Neutron to Flannel as a modular option for 
Kubernetes Bays, so both solutions leverage OpenStack networking, and users can 
use familiar, native tools.

References:
[1] https://github.com/docker/libnetwork

Please let me know what you think of this approach. I’d like to re-state the 
Blueprint description, clear the whiteboard, and put up a spec that will 
accommodate in-line comments so we can work on the implementation specifics 
better in context.

Adrian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-17 Thread Doug Hellmann
Excerpts from Dan Smith's message of 2015-06-17 13:16:46 -0700:
  Every change like this makes it harder for newcomers to participate.
  Frankly, it makes it harder for everyone because it means there are
  more moving parts, but in this specific case many of the people
  involved in these messaging drivers are relatively new, so I point
  that out.
 
 I dunno about this. Having devstack migrate away from being an
 opinionated tool for getting a test environment up that was eminently
 readable to what it is today hasn't really helped anyone, IMHO. Having
 some clear plug points such that we _can_ plug in the bits we need for
 testing without having every possible option be embedded in the core
 seems like goodness to me. I'd like to get back to the days where people
 actually knew what was going on in devstack. That helps participation too.
 
 I think having devstack deploy what the 90% (or, being honest, 99%) are
 running, with the ability to plug in the 1% bits when necessary is much
 more in line with what the goal of the tool is.
 
  The already difficult task of setting up sufficient
  functional tests has now turned into figure out devstack, too.
 
 Yep, my point exactly. I think having clear points where you can setup
 your thing and get it plugged in is much easier.

I'm not questioning the goal, or even the approach. But we spent
the last cycle building up the teams working on these drivers in
Oslo, and at the summit several groups were (re)motivated to be
working on the code. Now the devstack team is yanking the rug out
from under all of that work with this patch.

I'm asking that we not set a tight deadline on doing this right
away, to give everyone who wasn't involved in those discussions
about the changes in devstack a chance to understand what's actually involved
in recovering from being kicked out of tree.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] FYI - dropping non RabbitMQ support in devstack

2015-06-17 Thread Kyle Mestery
On Wed, Jun 17, 2015 at 3:48 PM, Doug Hellmann d...@doughellmann.com
wrote:

 Excerpts from Dan Smith's message of 2015-06-17 13:16:46 -0700:
   Every change like this makes it harder for newcomers to participate.
   Frankly, it makes it harder for everyone because it means there are
   more moving parts, but in this specific case many of the people
   involved in these messaging drivers are relatively new, so I point
   that out.
 
  I dunno about this. Having devstack migrate away from being an
  opinionated tool for getting a test environment up that was eminently
  readable to what it is today hasn't really helped anyone, IMHO. Having
  some clear plug points such that we _can_ plug in the bits we need for
  testing without having every possible option be embedded in the core
  seems like goodness to me. I'd like to get back to the days where people
  actually knew what was going on in devstack. That helps participation
 too.
 
  I think having devstack deploy what the 90% (or, being honest, 99%) are
  running, with the ability to plug in the 1% bits when necessary is much
  more in line with what the goal of the tool is.
 
   The already difficult task of setting up sufficient
   functional tests has now turned into figure out devstack, too.
 
  Yep, my point exactly. I think having clear points where you can setup
  your thing and get it plugged in is much easier.

 I'm not questioning the goal, or even the approach. But we spent
 the last cycle building up the teams working on these drivers in
 Oslo, and at the summit several groups were (re)motivated to be
 working on the code. Now the devstack team is yanking the rug out
 from under all of that work with this patch.

 I'm asking that we not set a tight deadline on doing this right
 away, to give everyone who wasn't involved in those discussions
 about the changes in devstack a chance to understand what's actually involved
 in recovering from being kicked out of tree.


I think people are overreacting here. Adding pluggable devstack support is
actually quite easy, and will honestly make the life of these new messaging
developers much easier. It's worth the time to go down this path from the
start for both sides. I don't see it as kicking them out, but enabling them.
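
For reference, once such a plugin repo exists, consuming it is a one-liner in
local.conf (the repo name below is purely hypothetical):

[[local|localrc]]
enable_plugin devstack-plugin-zmq https://git.openstack.org/stackforge/devstack-plugin-zmq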

Thanks,
Kyle


 Doug

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova]

2015-06-17 Thread Sourabh Patwardhan
Hello,

I'm working on a new vif driver [1].
As part of the review comments, it was mentioned that a generic VIF driver
will be introduced in Liberty, which may render custom VIF drivers obsolete.

Can anyone point me to blueprints / specs for the generic driver work?
Alternatively, any guidance on how to proceed on my patch is most welcome.

Thanks,
Sourabh

[1] https://review.openstack.org/#/c/157616/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack] [Ceilometer] Ceilometer Python API Documentation

2015-06-17 Thread Hassaan Ali
Hi,

I am looking for OpenStack Ceilometer Python API documentation that
includes example code and guidelines for using it. So far I have only been able
to find the following link, but it is not that helpful:

http://docs.openstack.org/developer/python-heatclient/
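
For example, what I am after is something along these lines (a rough sketch;
the credentials and meter name are just placeholders):

from ceilometerclient import client

cclient = client.get_client(
    2,
    os_username='admin',
    os_password='secret',
    os_tenant_name='admin',
    os_auth_url='http://controller:5000/v2.0',
)

# List the available meters, then pull recent samples for one of them.
for meter in cclient.meters.list():
    print(meter.name)

samples = cclient.samples.list(meter_name='cpu_util', limit=10)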

Your help will be highly appreciated.

Thanks.

-- 
Regards,

*Hassaan *
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-17 Thread Andrey Kurilin
Why does an alternative implementation need to implement all 50 versions?
As far as I understand, the API side does not have to support all versions; that is
why the version info returns min and max versions:
https://github.com/openstack/nova/blob/master/doc/api_samples/versions/versions-get-resp.json#L25-L26
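
As a concrete illustration of what a client sends (a sketch with a made-up
endpoint and token; the header name is exactly the one being debated in this
thread):

import requests

resp = requests.get(
    'http://controller:8774/v2.1/servers',
    headers={
        'X-Auth-Token': 'TOKEN',
        # Nova's merged code currently expects the project-named header:
        'X-OpenStack-Nova-API-Version': '2.6',
    },
)
print(resp.status_code)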

On Tue, Jun 16, 2015 at 11:36 AM, Alex Xu sou...@gmail.com wrote:



 2015-06-16 5:24 GMT+08:00 Clint Byrum cl...@fewbar.com:

 Excerpts from Sean Dague's message of 2015-06-15 14:00:43 -0700:
  On 06/15/2015 04:50 PM, Jim Rollenhagen wrote:
   On Mon, Jun 15, 2015 at 01:07:39PM -0400, Jay Pipes wrote:
   It has come to my attention in [1] that the microversion spec for
 Nova [2]
   and Ironic [3] have used the project name -- i.e. Nova and Ironic --
 instead
   of the name of the API -- i.e. OpenStack Compute and OpenStack
 Bare
   Metal -- in the HTTP header that a client passes to indicate a
 preference
   for or knowledge of a particular API microversion.
  
   The original spec said that the HTTP header should contain the name
 of the
   service type returned by the Keystone service catalog (which is also
 the
   official name of the REST API). I don't understand why the spec was
 changed
   retroactively and why Nova has been changed to return
   X-OpenStack-Nova-API-Version instead of
 X-OpenStack-Compute-API-Version HTTP
   headers [4].
  
   To be blunt, Nova is the *implementation* of the OpenStack Compute
 API.
   Ironic is the *implementation* of the OpenStack BareMetal API.
  
   While I tend to agree in principle, do we reasonably expect that other
   implementations of these APIs will implement every one of these
   versions? Can we even reasonably expect another implementation of
 these
   APIs?
  
   // jim
 
  Yeh, honestly, I'm not really convinced that thinking we are doing this
  for alternative implementations is really the right approach (or even
  desirable).
  implementations harder because there isn't a big frozen API for a long
  period of time.
 

 Actually that makes an alternative implementation more valuable. Without
 microversions those alternative implementations would have to wait a long
 time to implement fixes to the API, but now can implement and publish
 the fix as soon as the microversion lands. This means that alternative
 implementations will lag _less_ behind the primary.


 So if our min_version is 2.1 and the max_version is 2.50, that means
 alternative implementations need to implement all 50 versions of the api... that
 sounds painful...



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Neil Jerram

Hi Kris,

Apologies in advance for questions that are probably really dumb - but 
there are several points here that I don't understand.


On 17/06/15 03:44, Kris G. Lindgren wrote:

We are doing pretty much the same thing - but in a slightly different way.
  We extended the nova scheduler to help choose networks (IE. don't put
vm's on a network/host that doesn't have any available IP address).


Why would a particular network/host not have any available IP address?


Then,
we add into the host-aggregate that each HV is attached to a network
metadata item which maps to the names of the neutron networks that host
supports.  This basically creates the mapping of which host supports what
networks, so we can correctly filter hosts out during scheduling. We do
allow people to choose a network if they wish and we do have the neutron
end-point exposed. However, by default if they do not supply a boot
command with a network, we will filter the networks down and choose one
for them.  That way they never hit [1].  This also works well for us,
because the default UI that we provide our end-users is not horizon.


Why do you define multiple networks - as opposed to just one - and why 
would one of your users want to choose a particular one of those?


(Do you mean multiple as in public-1, public-2, ...; or multiple as in 
public, service, ...?)



We currently only support one network per HV via this configuration, but
we would like to be able to expose a network type or group via neutron
in the future.

I believe what you described below is also another way of phrasing the ask
that we had in [2].  That you want to define multiple top level networks
in neutron: 'public' and 'service'.  That is made up by multiple desperate


desperate? :-)  I assume you probably meant separate here.


L2 networks: 'public-1', 'public2,' ect which are independently
constrained to a specific set of hosts/switches/datacenter.


If I'm understanding correctly, this is one of those places where I get 
confused about the difference between Neutron-as-an-API and 
Neutron-as-a-software-implementation.  I guess what you mean here is 
that your deployment hardware is really providing those L2 segments 
directly, and hence you aren't using Neutron's software-based simulation 
of L2 segments.  Is that right?



We have talked about working around this under our configuration one of
two ways.  First, is to use availability zones to provide the separation
between: 'public' and 'service', or in our case: 'prod', 'pki','internal',
ect, ect.


Why are availability zones involved here?  Assuming you had 'prod', 
'pki','internal' etc. networks set up and represented as such in 
Neutron, why wouldn't you just say which of those networks each instance 
should connect to, when creating each instance?


Regards,
Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Neil Jerram
Couple more dumb comments here - sorry that I'm processing this thread 
backwards!


On 16/06/15 15:20, Jay Pipes wrote:

Adding -dev because of the reference to the Neutron Get me a network
spec. Also adding [nova] and [neutron] subject markers.

Comments inline, Kris.

On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:

During the Openstack summit this week I got to talk to a number of other
operators of large Openstack deployments about how they do networking.
  I was happy, surprised even, to find that a number of us are using a
similar type of networking strategy.  That we have similar challenges
around networking and are solving it in our own but very similar way.
  It is always nice to see that other people are doing the same things
as you or see the same issues as you are and that you are not crazy.
So in that vein, I wanted to reach out to the rest of the Ops Community
and ask one pretty simple question.

Would it be accurate to say that most of your end users want almost
nothing to do with the network?


That was my experience at ATT, yes. The vast majority of end users
could not care less about networking, as long as the connectivity was
reliable, performed well, and they could connect to the Internet (and
have others connect from the Internet to their VMs) when needed.


In my experience, what the majority of them (both internal and external)
want is to consume from Openstack a compute resource, a property of
which is that the resource has an IP address.  They, at most, care about
which network they are on.  Where a network is usually an arbitrary
definition around a set of real networks, constrained to a
location, to which the company has attached some sort of policy.  For
example, I want to be in the production network vs. the xyz lab
network, vs. the backup network, vs. the corp network.  I would say
for Godaddy, 99% of our use cases would be defined as: I want a compute
resource in the production network zone, or I want a compute resource in
this other network zone.


Kris - this looks like the answer to my question why you define multiple 
networks.  If that's right, no need to answer that question there.



 The end user only cares that the IP the VM
receives works in that zone; outside of that they don't care about any other
property of that IP.  They do not care what subnet it is in, what VLAN
it is on, what switch it is attached to, what router it's attached to, or
how data flows in/out of that network.  It just needs to work.


Agreed.  I'm not a deployer, but my team is in contact with many 
deployers who say similar things.


Regards,
Neil



Re: [openstack-dev] [QA] [openstack-qa] [tempest] UUIDs and names in tempest.conf file

2015-06-17 Thread Tikkanen, Viktor (Nokia - FI/Espoo)
 -Original Message-
 From: ext Matthew Treinish [mailto:mtrein...@kortar.org]
 Sent: Tuesday, June 16, 2015 4:49 PM
 To: openstack-dev@lists.openstack.org; All Things QA.
 Subject: Re: [openstack-qa] [QA] [tempest] UUIDs and names in tempest.conf
 file
 
 So I need to point out that the openstack-qa list isn't used anymore. We only
 keep it around so we have a place to send for periodic test results. In the
 future you should just send things to the openstack-dev ML with a [QA] tag
 in the subject.
 
 On Tue, Jun 16, 2015 at 05:25:30AM +, Tikkanen, Viktor (Nokia -
 FI/Espoo) wrote:
  Hi!
 
  I have a question regarding usage of UUIDs and names in the tempest.conf
 file. Are there some common ideas/reasons (except unambiguousness and
 making test cases simpler) why some parameters (e.g. public_network_id,
 flavor_ref, image_ref, ...) are designed so that they require entity UUIDs
 but others (e.g. fixed_network_name, floating_network_name, ...) require
 entity names?
 
 So this is mostly a historical artifact from before I even started working on
 the project; my guess is this was done because not all resources require
 unique names, but that's just my guess. Config options to tell tempest which
 resources to use, which were added more recently, use a name because it's
 hard for people to deal with uuids. That being said, there is a spec still
 under review to rationalize how we specify resources in tempest to make
 things a bit simpler and more consistent: https://review.openstack.org/173334
 Once the details are ironed out in the spec review and implementation begins
 we'll deprecate most of the existing options in favor of the new format for
 specifying resources.

There has been no activity on this spec since the middle of April, but anyway,
thank you for the clarification.

Currently there seem to be a number of functions like get_network_from_name
(/opt/tempest/tempest/common/fixed_network.py) available for UUID/name
conversion, but as you said there is a need for more consistent test resource
management...
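
For reference, the name-to-UUID conversion such helpers do is essentially the
following (a sketch, not the actual Tempest code; it assumes a client whose
list_networks() returns the usual {'networks': [{'id': ..., 'name': ...}]}
shape):

    def get_network_id(client, name):
        networks = client.list_networks()['networks']
        matches = [n for n in networks if n['name'] == name]
        if not matches:
            raise ValueError('no network named %s' % name)
        if len(matches) > 1:
            # names are not guaranteed to be unique, which is one reason some
            # options historically asked for UUIDs instead of names
            raise ValueError('network name %s is ambiguous' % name)
        return matches[0]['id']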

-VT



Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-17 Thread Duncan Thomas
On 17 June 2015 at 00:21, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all very
 similar.

 I want to extract a common base class that abstracts some of the common
 code and then let the sub-classes provide overrides where necessary.

 As part of this, I'm wondering if we could just have a single
 'mount_point_base' config option rather than one per backend like we have
 today:

 nfs_mount_point_base
 glusterfs_mount_point_base
 smbfs_mount_point_base
 quobyte_mount_point_base

 With libvirt you can only have one of these drivers configured per compute
 host right?  So it seems to make sense that we could have one option used
 for all 4 different driver implementations and reduce some of the config
 option noise.


I can't claim to have tried it, but from a cinder PoV there is nothing
stopping you having both e.g. an NFS and a gluster backend at the same
time, and I'd expect nova to work with it. If it doesn't, I'd consider it a
bug.
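
For what it's worth, the consolidation being discussed could look roughly like
the sketch below (class and option names are invented for illustration, not
the eventual Nova code):

    import os

    class FileSystemVolumeDriver(object):
        """Common behaviour shared by NFS/GlusterFS/SMBFS/Quobyte drivers."""
        fstype = None  # each subclass sets its filesystem type

        def __init__(self, mount_point_base):
            # a single mount_point_base option instead of one per backend
            self.mount_point_base = mount_point_base

        def mount_path(self, export):
            # e.g. <mount_point_base>/<mangled export name>
            return os.path.join(self.mount_point_base,
                                export.replace('/', '_'))

        def connect_volume(self, export):
            path = self.mount_path(export)
            # real code would run: mount -t <fstype> <export> <path>
            return path

    class NfsDriver(FileSystemVolumeDriver):
        fstype = 'nfs'

    class GlusterfsDriver(FileSystemVolumeDriver):
        fstype = 'glusterfs'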


Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-17 Thread Alex Xu
2015-06-17 19:46 GMT+08:00 Andrey Kurilin akuri...@mirantis.com:

 Why does alternative implementation need to implement all 50 versions?
 As far as I understand, API side should not support all versions, that is
 why version info returns min and max versions
 https://github.com/openstack/nova/blob/master/doc/api_samples/versions/versions-get-resp.json#L25-L26


If we raised min_version randomly... that may mean we have 50
backwards-incompatible APIs in the world. So min_version will be raised rarely,
to keep backwards compatibility.
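
For illustration, client-side handling of such a [min_version, max_version]
range amounts to something like the sketch below (the header name is the one
mentioned in this thread; everything else is invented):

    def _as_tuple(v):
        return tuple(int(p) for p in v.split('.'))

    def negotiate(server_min, server_max, client_known='2.10'):
        # pick the highest microversion both the client and server understand
        if _as_tuple(client_known) < _as_tuple(server_min):
            raise RuntimeError('server no longer supports anything we speak')
        return min(client_known, server_max, key=_as_tuple)

    version = negotiate('2.1', '2.50')  # a 2.1..2.50 server -> '2.10'
    headers = {'X-OpenStack-Nova-API-Version': version}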




 On Tue, Jun 16, 2015 at 11:36 AM, Alex Xu sou...@gmail.com wrote:



 2015-06-16 5:24 GMT+08:00 Clint Byrum cl...@fewbar.com:

 Excerpts from Sean Dague's message of 2015-06-15 14:00:43 -0700:
  On 06/15/2015 04:50 PM, Jim Rollenhagen wrote:
   On Mon, Jun 15, 2015 at 01:07:39PM -0400, Jay Pipes wrote:
   It has come to my attention in [1] that the microversion spec for
 Nova [2]
   and Ironic [3] have used the project name -- i.e. Nova and Ironic
 -- instead
   of the name of the API -- i.e. OpenStack Compute and OpenStack
 Bare
   Metal -- in the HTTP header that a client passes to indicate a
 preference
   for or knowledge of a particular API microversion.
  
   The original spec said that the HTTP header should contain the name
 of the
   service type returned by the Keystone service catalog (which is
 also the
   official name of the REST API). I don't understand why the spec was
 changed
   retroactively and why Nova has been changed to return
   X-OpenStack-Nova-API-Version instead of
 X-OpenStack-Compute-API-Version HTTP
   headers [4].
  
   To be blunt, Nova is the *implementation* of the OpenStack Compute
 API.
   Ironic is the *implementation* of the OpenStack BareMetal API.
  
   While I tend to agree in principle, do we reasonably expect that
 other
   implementations of these APIs will implement every one of these
   versions? Can we even reasonably expect another implementation of
 these
   APIs?
  
   // jim
 
  Yeh, honestly, I'm not really convinced that thinking we are doing this
  for alternative implementations is really the right approach (or even
  desireable). Honestly, the transition to microversions makes
 alternative
  implementations harder because there isn't a big frozen API for a long
  period of time.
 

 Actually that makes an alternative implementation more valuable. Without
 microversions those alternative implementations would have to wait a long
 time to implement fixes to the API, but now can implement and publish
 the fix as soon as the microversion lands. This means that alternative
 implementations will lag _less_ behind the primary.


 So if our min_version is 2.1 and the max_version is 2.50, that means
 alternative implementations need to implement all 50 API versions... that
 sounds painful...








 --
 Best regards,
 Andrey Kurilin.



Re: [openstack-dev] [QA] [Ironic] [Inspector] Where should integration tests for non-core projects live now? (Was: Toward 2.0.0 release)

2015-06-17 Thread Dmitry Tantsur

On 06/17/2015 06:54 AM, Ken'ichi Ohmichi wrote:

2015-06-17 12:38 GMT+09:00 Yuiko Takada yuikotakada0...@gmail.com:


Then, as you and Matt and Dimitry talked about this on IRC few days ago,
We can add Ironic/Ironic-inspector tests into Tempest still, right?
So that I've started to implement a test in Tempest,
but I'm facing another issue.
As you know, Ironic API has microversions, and Ironic-inspector can run
with microversion  1.6.
But currently there is no feature testing specific Ironic API microversions
on Tempest, right?

So that, we have to think about some solutions.

(1) Make testing specific Ironic API microversions on Tempest possible
adam_g is posting this patch set.
https://review.openstack.org/166386

(2)Using tempest_lib instead of adding tests into Tempest
Is tempest_lib available already?
Or do we need to wait for something will be merged?


I guess the above question mixes multiple factors.
You want to test ironic-inspector behaviors by
   * using ironic-inspector REST APIs directly, without Ironic
   * using Ironic REST APIs which need a newer microversion
right?


Hi, thanks for clarifying, let me jump in :)

The former is more or less covered by functional testing, so I'd like us 
to concentrate on the latter, and run it voting on inspector repo and 
non-voting on Ironic for the time being.




For the first test, you can implement without considering microversion.
The test just calls ironic-inspector REST APIs directly and checks its behavior.
You can implement the test on Tempest/ironic-inspector repository.
Current tempest-lib seems enough to implement tests in
ironic-inspector repository as features, but it is better to wait for
Tempest's external interface spec[1] approval.
It is trying to define directory structure of Tempest-like tests on
each project repository and Tempest will discover tests based on the
directory structure and run them.
So if implementing tests on ironic-inspector repository before the
spec approval, you will need to change the directory structure again
in the future.


This wait part bothers me to some extent, because the absence of a gate has
been hurting us for some time, but fine. Thanks for the heads up anyway.




For the second test, microversion support is necessary on the Tempest
side, and adam_g's patch seems good for implementing it.
My main concern about microversion tests is how to run multiple
microversions on the gate.
We discussed that in the Nova design session at the Vancouver summit and
the conclusion was
  * Minimum microversion
  * Maximum microversion
  * Interesting microversions
as the gate test.


Facepalm. That's what I was talking about (and what we actually ended up 
in Ironic with): we're introducing a ton of non-tested (and thus 
presumably broken) microversions, because it's cool to do. Ok, that's 
another thread :)



IMO the interesting microversions would be the last microversion of
each release (Kilo, Liberty, ...).


With Ironic's intermediate releases it will be more; I estimate
5-6 per year, but of course I can't tell for sure.



I have a qa-spec[2] for testing microversions on the gate, but that is
not complete yet.
That will affect how we specify/run microversion tests in Tempest.
So I'm not sure yet that the way microversions are specified in adam_g's
current patch is the best.

So my recommendation/hope is that we concentrate on Tempest's external
interface spec[1] and make it better together, then we can implement
Tempest-like tests in each repository after that.
As the next step, we will test microversions in the same way across
projects, based on the conclusion of the spec[2].


What I'd prefer us to start with is a gate test, which just sets up
devstack with our plugin and runs a shell script testing a couple of
basic things. This will be a HUGE leap forward for inspector, compared
to the limited functional testing that is all we have now.
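
Such a "couple of basic things" check could be as small as the sketch below
(my own illustration; the endpoint and port are assumptions about a default
devstack setup, and authentication is ignored for brevity):

    import requests

    INSPECTOR_URL = 'http://127.0.0.1:5050'  # assumed inspector endpoint

    def check_api_responds():
        # the service should answer at all; the exact payload doesn't matter
        resp = requests.get(INSPECTOR_URL)
        assert resp.status_code < 500

    def check_unknown_node_is_an_error():
        resp = requests.get(INSPECTOR_URL + '/v1/introspection/not-a-real-uuid')
        assert resp.status_code != 200  # expect 400/401/404, not success

    if __name__ == '__main__':
        check_api_responds()
        check_unknown_node_is_an_error()
        print('inspector smoke test passed')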


So maybe we should start with it, and keep an eye on the tempest-lib 
stuff, wdyt?





(3)Make Ironic-inspector available even if microversion  1.6
Dmitry is posting this patch set.
https://review.openstack.org/192196
# I don't mean asking you to review this, don't worry :p


I've reviewed it already :)

Thanks
Ken Ohmichi

---
[1]: https://review.openstack.org/#/c/184992/
[2]: https://review.openstack.org/#/c/169126/



Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-17 Thread Dmitry Tantsur

On 06/17/2015 03:35 AM, Ken'ichi Ohmichi wrote:

2015-06-16 21:16 GMT+09:00 Jay Pipes jaypi...@gmail.com:

On 06/16/2015 08:00 AM, Dmitry Tantsur wrote:



16 июня 2015 г. 13:52 пользователь Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com написал:
  
   On 06/16/2015 04:36 AM, Alex Xu wrote:
  
   So if our min_version is 2.1 and the max_version is 2.50. That means
   alternative implementations need implement all the 50 versions
   api...that sounds pain...
  
  
   Yes, it's pain, but it's no different than someone who is following
the Amazon EC2 API, which cuts releases at a regular (sometimes every
2-3 weeks) clip.
  
   In Amazon-land, the releases are date-based, instead of
microversion/incrementing version-based, but the idea is essentially the
same.
  
   There is GREAT value to having an API mean ONE thing and ONE thing
only. It means that developers can code against something that isn't
like quicksand -- constantly changing meanings.

Being one of such developers, I only see this value for breaking
changes.



Sorry, Dmitry, I'm not quite following you. Could you elaborate on what you
mean by above?


I guess maybe he is thinking that the value of microversions is just for
backwards-incompatible changes, and that backwards-compatible changes do
not need to be managed by microversions, because he is proposing that
as an Ironic patch.


Exactly. That's not only my thinking, that's my experience from Kilo as
both an Ironic developer and a developer *for* Ironic (i.e. the very person
you're trying to make happy).




Thanks
Ken Ohmichi



[openstack-dev] [Security] the need about implementing a MAC security hook framework for OpenStack

2015-06-17 Thread Yang Luo
Hi list,

  I'd like to know the need about implementing a MAC (Mandatory Access
Control) security hook framework for OpenStack, just like the Linux
Security Module to Linux. It can be used to help construct a security
module that mediates the communications between OpenStack nodes and
controls distribution of resources (i.e., images, network, shared disks).
This security hook framework should be cluster-wide, dynamic policy
updating supported, non-intrusive implemented and with low performance
overhead. SELinux, the best-known LSM module, could also be plugged into this
security hook framework. In my view, as OpenStack has become a leading
cloud operating system, it needs some kind of security architecture, just
as a standard OS does.
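
To make the idea a bit more concrete, the hooks could look roughly like the
sketch below (entirely illustrative; the class, labels and call sites are
invented and this is not an existing OpenStack API):

    class SecurityHookRegistry(object):
        def __init__(self):
            self._hooks = []

        def register(self, hook):
            """hook: callable(context, action, resource) -> bool"""
            self._hooks.append(hook)

        def check(self, context, action, resource):
            # MAC semantics: every registered policy module must allow it
            return all(h(context, action, resource) for h in self._hooks)

    hooks = SecurityHookRegistry()

    def label_policy(context, action, resource):
        # e.g. only distribute images labelled for the caller's domain
        return resource.get('label') in context.get('allowed_labels', ())

    hooks.register(label_policy)

    if hooks.check({'allowed_labels': {'prod'}}, 'image:download',
                   {'label': 'prod'}):
        pass  # proceed with distributing the image to the node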

I am a Ph.D student who has been following OpenStack security closely for
nearly 1 year. This is just my initial idea and I know this project won't
be small, so before I actually work on it, I'd like to hear your
suggestions or objections about it. Thanks!

Best,
Yang


Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Armando M.
On 16 June 2015 at 22:36, Sam Morrison sorri...@gmail.com wrote:


 On 17 Jun 2015, at 10:56 am, Armando M. arma...@gmail.com wrote:



 On 16 June 2015 at 17:31, Sam Morrison sorri...@gmail.com wrote:

 We at NeCTAR are starting the transition to neutron from nova-net and
 neutron almost does what we want.

 We have 10 “public” networks and 10 “service” networks and depending on
 which compute node you land on you get attached to one of them.

 In neutron speak we have multiple shared externally routed provider
 networks. We don’t have any tenant networks or any other fancy stuff yet.
 How I’ve currently got this set up is by creating 10 networks and
 subsequent subnets eg. public-1, public-2, public-3 … and service-1,
 service-2, service-3 and so on.

 In nova we have made a slight change in allocate for instance [1] whereby
 the compute node has a designated hardcoded network_ids for the public and
 service network it is physically attached to.
 We have also made changes in the nova API so users can’t select a network
 and the neutron endpoint is not registered in keystone.

 That all works fine but ideally I want a user to be able to choose if
 they want a public and/or service network. We can’t let them, as we have 10
 public networks; we almost need something in neutron like a “network group”
 or something that allows a user to select “public” and it allocates them a
 port in one of the underlying public networks.
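
For illustration, the kind of helper being asked for here is roughly the
following (the function and the network/capacity bookkeeping are invented;
this is not an existing neutron feature):

    def pick_network_from_group(networks, group):
        # networks: [{'name': 'public-1', 'free_ips': 30}, ...] -- shape assumed
        candidates = [n for n in networks
                      if n['name'].startswith(group + '-') and n['free_ips'] > 0]
        if not candidates:
            raise RuntimeError('no capacity left in group %r' % group)
        return max(candidates, key=lambda n: n['free_ips'])

    nets = [{'name': 'public-1', 'free_ips': 0},
            {'name': 'public-2', 'free_ips': 45},
            {'name': 'service-1', 'free_ips': 200}]
    print(pick_network_from_group(nets, 'public')['name'])  # -> public-2

The user would just ask for "public" and an underlying public-N network with
room would be chosen for them.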

 I tried going down the route of having 1 public and 1 service network in
 neutron then creating 10 subnets under each. That works until you get to
 things like dhcp-agent and metadata agent although this looks like it could
 work with a few minor changes. Basically I need a dhcp-agent to be spun up
 per subnet and ensure they are spun up in the right place.

 I’m not sure what the correct way of doing this. What are other people
 doing in the interim until this kind of use case can be done in Neutron?


 Would something like [1] be adequate to address your use case? If not, I'd
 suggest you to file an RFE bug (more details in [2]), so that we can keep
 the discussion focused on this specific case.

 HTH
 Armando

 [1] https://blueprints.launchpad.net/neutron/+spec/rbac-networks


 That’s not applicable in this case. We don’t care about which tenants are
 involved in this case.

 [2]
 https://github.com/openstack/neutron/blob/master/doc/source/policies/blueprints.rst#neutron-request-for-feature-enhancements


 The bug Kris mentioned outlines all I want too I think.


I don't know what you're referring to.



 Sam






 Cheers,
 Sam

 [1]
 https://github.com/NeCTAR-RC/nova/commit/1bc2396edc684f83ce471dd9dc9219c4635afb12



  On 17 Jun 2015, at 12:20 am, Jay Pipes jaypi...@gmail.com wrote:
 
  Adding -dev because of the reference to the Neutron Get me a network
 spec. Also adding [nova] and [neutron] subject markers.
 
  Comments inline, Kris.
 
  On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
  During the Openstack summit this week I got to talk to a number of
 other
  operators of large Openstack deployments about how they do networking.
   I was happy, surprised even, to find that a number of us are using a
  similar type of networking strategy.  That we have similar challenges
  around networking and are solving it in our own but very similar way.
   It is always nice to see that other people are doing the same things
  as you or see the same issues as you are and that you are not crazy.
  So in that vein, I wanted to reach out to the rest of the Ops Community
  and ask one pretty simple question.
 
  Would it be accurate to say that most of your end users want almost
  nothing to do with the network?
 
  That was my experience at ATT, yes. The vast majority of end users
 could not care less about networking, as long as the connectivity was
 reliable, performed well, and they could connect to the Internet (and have
 others connect from the Internet to their VMs) when needed.
 
  In my experience what the majority of them (both internal and external)
  want is to consume from Openstack a compute resource, a property of
  which is it that resource has an IP address.  They, at most, care about
  which network they are on.  Where a network is usually an arbitrary
  definition around a set of real networks, that are constrained to a
  location, in which the company has attached some sort of policy.  For
  example, I want to be in the production network vs's the xyz lab
  network, vs's the backup network, vs's the corp network.  I would say
  for Godaddy, 99% of our use cases would be defined as: I want a compute
  resource in the production network zone, or I want a compute resource
 in
  this other network zone.  The end user only cares that the IP the vm
  receives works in that zone, outside of that they don't care any other
  property of that IP.  They do not care what subnet it is in, what vlan
  it is on, what switch it is attached to, what router its attached to,
 or
  how data flows in/out 

Re: [openstack-dev] [Ironic] ironic-lib library

2015-06-17 Thread Ramakrishnan G
Seems to me like we could keep the ironic-lib git repository as a git submodule
of the ironic and ironic-python-agent repositories.  Any commit in Ironic
or ironic-python-agent could then change ironic-lib independently.  Also, it
looks like our CI system supports this by automatically pushing commits to the
subscribed projects [1].  That sounds better than making a new release of
ironic-lib and waiting for it to be published before making changes in Ironic
or ironic-python-agent.

[1] https://review.openstack.org/Documentation/user-submodules.html


On Tue, Jun 16, 2015 at 9:24 PM, Lucas Alvares Gomes lucasago...@gmail.com
wrote:

 Hi,

  I haven't paid any attention to ironic-lib; I just knew that we wanted to
  have a library of common code so that we didn't cut/paste. I just took a
  look[1] and there are files there from 2 months ago. So far, everything
 is
  under ironic_lib (ie, no subdirectories to group things). Going forward,
 are
  there guidelines as to where/what goes into this library?

 I don't think we have guidelines for the structure of the project; we
 should of course try to organize it well.

 About what goes into this library, AFAICT, this is the place where code
 which is used in more than one project under the Ironic umbrella
 should go. For example, both Ironic and IPA (ironic-python-agent)
 deal with disk partitioning, so we should create a module for disk
 partitioning in the ironic-lib repository which both Ironic and IPA
 will import and use.
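
 A toy illustration of that kind of shared module (module and function names
 are invented here, not the actual ironic-lib layout):

    # ironic_lib/disk_partitioner.py (hypothetical)
    def make_partitions(device, root_gb, swap_gb=0):
        """Return the partition layout both projects would apply to device."""
        layout = [{'device': device, 'size_gb': root_gb, 'type': 'root'}]
        if swap_gb:
            layout.append({'device': device, 'size_gb': swap_gb, 'type': 'swap'})
        return layout

    # then, in Ironic and in IPA alike:
    #   from ironic_lib import disk_partitioner
    #   disk_partitioner.make_partitions('/dev/sda', root_gb=10, swap_gb=1)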


  I think it would be good to note down the process wrt using this library.
  I'm guessing that having this library will most certainly delay things
 wrt
  development. Changes will need to be made to the library first, then
 need to
  wait until a new version is released, then possibly update the min
 version
  in global-requirements, then use (and profit) in ironic-related projects.
 
 
  With the code in ironic, we were able to do things like change the
 arguments
  to methods etc. With the library -- do we need to worry about backwards
  compatibility?

 I would say so, those are things that we have to take into account when
 creating a shared library. But it also brings benefits:

 1. Code sharing
 2. Bugs are fixed in one place only
 3. Flexibility; I believe that more projects using the same code will
 require it to be more flexible

  How frequently were we thinking of releasing a new version? (Depends on
  whether anything was changed there that is needed really soon?)

 Yes, just like the python-ironicclient a release can be cut when needed.

 Thanks for starting this thread; it would be good for the community to
 evaluate whether we should go forward with ironic-lib or not.

 Cheers,
 Lucas



Re: [openstack-dev] [kolla] Proposal for changing 1600UTC meeting to 1700 UTC

2015-06-17 Thread Sam Yaple
1600 is late for me as it is. The earlier the better is my vote. I would
make 1630 work, 1700 is too late.

Sam Yaple
864-901-0012

On Tue, Jun 16, 2015 at 2:09 PM, Harm Weites h...@weites.com wrote:

 I'm ok with moving to 16:30 UTC instead of staying at 16:00.

 I actually prefer it in my evening schedule :) Moving to 16:30 would
 already be a great improvement to the current schedule and should at least
 allow me to not miss everything.

 - harmw

 Op 12-06-15 om 15:44 schreef Steven Dake (stdake):

 Even though 7am is not ideal for the west coast, I’d be willing to go back
 that far.  That would put the meeting at the morning school rush for the
 west coast folks though (although we are in summer break in the US and we
 could renegotiate a time in 3 months when school starts up again if it’s a
 problem) - so creating a different set of problems for a different set of
 people :)

 This would be a 1400 UTC meeting.

 While I wake up prior to 7am, (usually around 5:30) I am not going to put
 people through the torture of a 6am meeting in any timezone if I can help
 it so 1400 is the earliest we can go :)

 Regards
 -steve


 On 6/12/15, 4:37 AM, Paul Bourke paul.bou...@oracle.com wrote:

  I'm fairly easy on this but, if the issue is that the meeting is running
 into people's evening schedules (in EMEA), would it not make sense to
 push it back an hour or two into office hours, rather than forward?

 On 10/06/15 18:20, Ryan Hallisey wrote:

 After some upstream discussion, moving the meeting from 1600 to 1700
 UTC does not seem very popular.
 It was brought up that changing the time to 16:30 UTC could accommodate
 more people.

 For the people that attend the 1600 UTC meeting time slot can you post
 further feedback to address this?

 Thanks,
 Ryan

 - Original Message -
 From: Jeff Peeler jpee...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Sent: Tuesday, June 9, 2015 2:19:00 PM
 Subject: Re: [openstack-dev] [kolla] Proposal for changing 1600UTC
 meeting to 1700 UTC

 On Mon, Jun 08, 2015 at 05:15:54PM +, Steven Dake (stdake) wrote:

 Folks,

 Several people have messaged me from EMEA timezones that 1600UTC fits
 right into the middle of their family life (ferrying kids from school
 and what-not) and 1700UTC while not perfect, would be a better fit
 time-wise.

 For all people that intend to attend the 1600 UTC, could I get your
 feedback on this thread if a change of the 1600UTC timeslot to 1700UTC
 would be acceptable?  If it wouldn’t be acceptable, please chime in as
 well.

 Both 1600 and 1700 UTC are fine for me.

 Jeff





Re: [openstack-dev] [taskflow] Returning information from reverted flow

2015-06-17 Thread Dulko, Michal


 -Original Message-
 From: Joshua Harlow [mailto:harlo...@outlook.com]
 Sent: Tuesday, June 16, 2015 4:52 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [taskflow] Returning information from reverted
 flow
 
 Dulko, Michal wrote:
  -Original Message-
  From: Joshua Harlow [mailto:harlo...@outlook.com]
  Sent: Friday, June 12, 2015 5:49 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [taskflow] Returning information from
  reverted flow
 
  Dulko, Michal wrote:
  Hi,
 
   In Cinder we merged a complicated piece of code[1] to be able to
   return something from a flow that was reverted. Basically, outside the
   flow we needed to know whether the volume was rescheduled or not. Right
   now this is done by injecting the needed information into the exception
   thrown from the flow. Another idea was to use the notifications mechanism
   of TaskFlow.
   Both ways are workarounds rather than real solutions.
   Unsure about notifications being a workaround (basically you are
   notifying some other entities that rescheduling happened, which
   seems like exactly what it was made for) but I get the point ;)
 
  Please take a look at this review -
 https://review.openstack.org/#/c/185545/. Notifications cannot help if some
 further revert decision needs to be based on something that happened
 earlier.
 
 That sounds like conditional reverting, which seems like it should be handled
 differently anyway, or am I misunderstanding something?

Current version of the patch takes another approach which I think handles it 
correctly. So you were probably right. :)

 
  I wonder if TaskFlow couldn't provide a mechanism to mark stored
  element to not be removed when revert occurs. Or maybe another way
  of returning something from reverted flow?
 
  Any thoughts/ideas?
  I have a couple, I'll make some paste(s) and see what people think,
 
  How would this look (as pseudo-code or other) to you, what would be
  your ideal, and maybe we can work from there (maybe u could do some
  paste(s) to and we can prototype it), just storing information that
  is returned from revert() somewhere? Or something else? There has
  been talk about task 'local storage' (or something like that/along
  those lines) that could also be used for this similar purpose.
 
  I think that the easiest idea from the perspective of an end user would be
 to save items returned from revert into flow engine's storage *and* do not
 remove it from storage when whole flow gets reverted. This is completely
 backward compatible, because currently revert doesn't return anything. And
 if revert has to record some information for further processing - this will 
 also
 work.
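
  As a rough illustration of the proposal (this is not how TaskFlow behaves
  today; the task below is invented), a task could then do:

    from taskflow import task

    class ScheduleVolume(task.Task):
        default_provides = 'host'

        def execute(self):
            return 'host-1'  # pretend we picked a host

        def revert(self, *args, **kwargs):
            # Under the proposed behaviour this dict would stay in the
            # engine's storage even after the whole flow is reverted, so the
            # caller could see that a reschedule was requested.
            return {'rescheduled': True}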
 
 
 Ok, let me see what this looks like and maybe I can have a POC in the next
 few days, I don't think its impossible to do (obviously) and hopefully will be
 useful for this.

Great!
 
  [1] https://review.openstack.org/#/c/154920/
 
 
 


Re: [openstack-dev] [Neutron] Quota enforcement

2015-06-17 Thread Kevin Benton
Ok. So if I understand it correctly, every update operation we do could
result in a deadlock then? Or is it just the ones with WHERE criteria that
became invalid?

On Tue, Jun 16, 2015 at 8:58 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 On Tue, Jun 16, 2015 at 5:17 PM, Kevin Benton blak...@gmail.com wrote:
  There seems to be confusion on what causes deadlocks. Can one of you
 explain
  to me how an optimistic locking strategy (a.k.a. compare-and-swap)
 results
  in deadlocks?
 
  Take the following example where two workers want to update a record:
 
  Worker1: UPDATE items set value=newvalue1 where value=oldvalue
  Worker2: UPDATE items set value=newvalue2 where value=oldvalue
 
  Then each worker checks the count of rows affected by the query. The one
  that modified 1 gets to proceed, the one that modified 0 must retry.

 Here's my understanding:  In a Galera cluster, if the two are run in
 parallel on different masters, then the second one gets a write
 certification failure after believing that it had succeeded *and*
 reading that 1 row was modified.  The transaction -- when it was all
 prepared for commit -- is aborted because the server finds out from
 the other masters that it doesn't really work.  This failure is
 manifested as a deadlock error from the server that lost.  The code
 must catch this deadlock error and retry the entire thing.
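
 In code, the compare-and-swap plus retry-on-deadlock pattern being discussed
 looks roughly like the sketch below (a standalone illustration, not Neutron
 code; it assumes a SQLAlchemy engine and an 'items' table, and it matches the
 deadlock error only loosely):

    import time
    import sqlalchemy as sa
    from sqlalchemy import exc

    def swap_value(engine, item_id, old_value, new_value, max_retries=5):
        stmt = sa.text(
            "UPDATE items SET value = :new WHERE id = :id AND value = :old")
        for attempt in range(max_retries):
            try:
                with engine.begin() as conn:
                    result = conn.execute(
                        stmt, {"new": new_value, "id": item_id, "old": old_value})
                    updated = result.rowcount
                return updated == 1  # False means someone else won the swap
            except exc.OperationalError:
                # On Galera the write-certification failure surfaces as a
                # deadlock error at commit; back off and retry the whole thing.
                time.sleep(0.1 * (attempt + 1))
        raise RuntimeError("could not update item %s after retries" % item_id)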

 I just learned about Mike Bayer's DBFacade from this thread which will
 apparently make the db behave as an active/passive for writes which
 should clear this up.  This is new information to me.

 I hope my understanding is sound and that it makes sense.

 Carl





-- 
Kevin Benton


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-17 Thread Thomas Goirand
On 06/15/2015 10:43 PM, Paul Belanger wrote:
 On 06/15/2015 03:03 PM, Allison Randal wrote:
 On 06/15/2015 11:48 AM, Thomas Goirand wrote:
 On 06/15/2015 04:55 PM, James Page wrote:
 The problem of managing delta and allowing a good level of
 distribution independence is still going to continue to exist and will
 be more difficult to manage due to the tighter coupling of development
 process and teams than we have today.

 On this basis, we're -1 on taking this proposal forward.

 That said, we do appreciate that the Ubuntu packaging for OpenStack is
 not as accessible as it might be using Bazaar as a VCS. In order to
 provide a more familiar experience to developers and operators looking
 to contribute to the wider Openstack ecosystem we will be moving our
 OpenStack packaging branches over to the new Git support in Launchpad
 in the next few weeks.
 [...]
 During our discussions at the Summit, you seemed to be enthusiastic
 about pushing our packaging to Stackforge. Then others told me to push
 it to the /openstack namespace to make it more big tent-ish, which
 made me very excited about the idea.

 So far, I've been very happy about the reboot of our collaboration, and
 felt like it was just an awesome new atmosphere. So I have to admit I'm a
 bit disappointed to read the above, even though I do understand the
 reasoning.

 James is right. This discussion thread put a lot of faith in the
 possibility that moving packaging efforts under the OpenStack umbrella
 would magically solve our key blocking issues. (I'm guilty of it as much
 as anyone else.) But really, we the collaborators are the ones who have
 to solve those blocking issues, and we'll have to do it together, no
 matter what banner we do it under.

 Anyway, does this mean that you don't want to push packaging to
 /stackforge either, which was the idea we shared at the summit?

 I'm a bit lost on what I should do now, as what was exciting was
 enabling operation people to contribute. I'll think about it and see
 what to do next.

 It doesn't really matter where the repos are located, we can still
 collaborate. Just moving Ubuntu's openstack repos to git and the Debian
 Python Modules Team repos to git will be a massive step forward.

 While I agree those points are valid, and going to be helpful, moving
 under OpenStack (even Stackforge) does also offer the chance to get more
 test integration upstream (not saying this was the original scope).
 However, this could also be achieved by 3rd party integration too.
 
 I'm still driving forward with some -infra specific packaging for Debian
 / Fedora ATM (zuul packaging). Mostly because of -infra needs for
 packages. Not saying that is a reason to reconsider, but there is the
 need for -infra to consume packages from upstream.
 
 Thomas, where does this leave you (Debian)? Are you still considering the
 move to upstream?

Hi Paul,

FYI, I tried packaging Zuul and Nodepool for Debian, and saw the work
which has been done packaging them for -infra. I do plan to contribute to
that soon (as I already have some changes done). So I will at least help
with this.

As for moving packages to stackforge, I hope to start this effort, yes,
but probably not for the packages we share with Ubuntu, as this would be
problematic. So I have to start thinking about how to do it without
destroying the ongoing collaboration with James Page team. I also need
to have meetings with the Mirantis MOS packaging team, which will happen
in Moscow at the end of the month.

So, maybe, the best way to start would be with Zuul and Nodepool, as you
need that. Your contribution is very valuable. I'm not sure how I can
help you to help, and how to start... Let's agree to catch up on IRC and
discuss that, ok?

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-17 Thread Thomas Goirand
On 06/16/2015 06:41 PM, Allison Randal wrote:
 On 06/15/2015 01:43 PM, Paul Belanger wrote:
 While I agree those points are valid, and going to be helpful, moving
 under OpenStack (even Stackforge) does also offer the chance to get more
 test integration upstream (not saying this was the original scope).
 However, this could also be achieved by 3rd party integration too.
 
 Nod, 3rd party integration is worth exploring.
 
 I'm still driving forward with some -infra specific packaging for Debian
 / Fedora ATM (zuul packaging). Mostly because of -infra needs for
 packages. Not saying that is a reason to reconsider, but there is the
 need for -infra to consume packages from upstream.
 
 I suspect that, at least initially, the needs of -infra specific
 packaging will be quite different than the needs of general-purpose
 packaging in Debian/Fedora distros. Trying to tightly couple the two
 will just bog you down in trying to solve far too many problems for far
 too many people. But, I also suspect that -infra packaging will be quite
 minimal and intended for the services to be configured by puppet, so
 there's a very good chance that if you sprint ahead and just do it, your
 style of packaging will end up feeding back into future packaging in the
 distros.
 
 Allison

As I wrote, I intend to contribute to that, and get the resulting
packages uploaded to Debian. Currently, there's a few issues about
missing dependencies in Debian, which I'm trying to fix first (I don't
maintain these packages, and as we have strong package ownership in
Debian, I have to get in touch with the maintainer first... and that can
take some time!).

Cheers,

Thomas Goirand (zigo)




[openstack-dev] [puppet][murano] Developing puppet module for Murano

2015-06-17 Thread Serg Melikyan
Hi Emilien,

I would like to answer your question regarding
stackforge/puppet-murano repository asked in different thread:

 Someone from the Fuel team first created the module in Fuel, 6 months ago
 [1], and 3 months later someone from the Fuel team created an empty
 repository in Stackforge [2]. By the way, the Puppet OpenStack community
 does not have core permissions on this module and it's owned by the Murano team.

Murano was included in Fuel around 2 years ago; our first official
release as part of Fuel was Icehouse - so yes, we have had a puppet module
for Murano for a long time now. But until recently we didn't have the Big
Tent in place, and that is why we never thought that we would be able to
upstream our module.

Once the policy regarding upstreaming puppet modules in Fuel changed and the
Big Tent model was adopted, we decided to upstream the module for Murano. I am
really sorry that I didn't contact you for more information on how to do
that properly and just created the corresponding repository.

I didn't give permissions to the Puppet OpenStack community for this
repository because it would have been strange, given that I didn't even
contact you. We thought that we would upstream what we have now and then
make sure that this repo would be integrated with the Puppet OpenStack
ecosystem.

We still have a strong desire to upstream our puppet module. Fuel is not
the only user of this module; there are other projects that would like to
use Murano as part of their solution and use the puppet module from Fuel
for deployment.

Can you advise how we should proceed further?

References:
[1] 
https://github.com/stackforge/fuel-library/tree/master/deployment/puppet/murano/
[2] https://review.openstack.org/155688

-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com



Re: [openstack-dev] [fuel] Fuel API settings reference

2015-06-17 Thread Oleg Gelbukh
As this topic is getting some traction, I will register corresponding
blueprint in Fuel and try to decompose the work based on what Andrew
proposed.

--
Best regards,
Oleg Gelbukh

On Tue, Jun 16, 2015 at 3:54 PM, Oleg Gelbukh ogelb...@mirantis.com wrote:

 Andrew,

 I've also noticed that incompatible changes are being introduced in the JSON
 schemas for different objects in almost every release. I hope that an explicit
 reference that lists and explains all parameters will discourage such
 modifications, or at least increase their visibility and make it possible to
 understand the justifications for them.

 --
 Best regards,
 Oleg Gelbukh

 On Mon, Jun 15, 2015 at 4:21 PM, Andrew Woodward awoodw...@mirantis.com
 wrote:

 I think there is some desire to see more documentation around here as
 there are some odd interactions with parts of the data payload, and perhaps
 documenting these may improve some of them.

 I think the gaps, in order of most used, are:
 * node object create / update
 * environment networks (the fact that metadata can't be updated kills me)
 * environment settings (the separate API for hidden and non-hidden kills me)
 * release update
 * role add/update

 After these are updated I think we can move on to common but less used
 * node interface assignment
 * node disk assignment



 On Mon, Jun 15, 2015 at 8:09 AM Oleg Gelbukh ogelb...@mirantis.com
 wrote:

 Good day, fellow fuelers

 The Fuel API is a powerful tool that allows for very fine tuning of
 deployment settings and parameters, and we all know that the UI exposes only a
 fraction of the full range of attributes a client can pass to the Fuel installer.

 However, there is very little documentation that explains what settings
 are accepted by Fuel objects, what their meanings are and what their
 syntax is. There is a main reference document for the API [1], but it gives
 almost no insight into the payload of parameters that each entity accepts.
 What they are and what they are for seems to be mostly scattered tribal
 knowledge.

 I would like to understand whether there is a need for such a document among
 developers and deployers who consume the Fuel API. Or maybe there is already
 such a document, or an effort to create one going on?

 --
 Best regards,
 Oleg Gelbukh


 --
 --
 Andrew Woodward
 Mirantis
 Fuel Community Ambassador
 Ceph Community



Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-17 Thread Lucas Alvares Gomes
Hi,

I don't want to have to diverge much from the topic of this thread,
I've done this already as pointed out by Sean. But I feel like
replying to this.

 Sorry, I might be missing something. I don't think one thing justifies
 the other; plus, the problem seems to be the source of truth. I thought
 that the idea of the big tent in OpenStack was to not have the TC pick
 winners. E.g., if someone wants to have an alternative implementation
 of the Baremetal service, will they always have to follow Ironic's API?
 That's unfair, because they will always be behind and most likely
 won't have much weight in decisions about the API.


 I agree and at the same I disagree with this statement.

 A competing project in the Baremetal (or networking, or pop-corn as a
 service) areas, can move into two directions:
 1) Providing a different implementation for the same API that the
 incumbent (Ironic in this case) provides.
 2) Supply different paradigms, including a different user API, thus
 presenting itself as a new way of doing Baremetal (and this is exactly
 what Quantum did to nova-network).

 Both cases are valid, I believe.
 In the first case, the advantage is that operators could switch between the
 various implementations without affecting their users (this does not mean
 that the switch won't be painful for them of course). Also, users shouldn't
 have to worry about what's implementing the service, as they always interact
 with the same API.
 However, it creates a problem regarding control of said API... the team from
 the incumbent project, the new team, both teams, the API-WG, or no-one?
 The second case is super-painful for both operators and users (do you need a
 refresh on the nova-network vs neutron saga? We're at the 5th series now,
 and the end is not even in sight). However, it completely avoids the
 governance problem arising from having APIs which are implemented by
 multiple projects.


Right, I wasn't considering 2) because I thought it was off the
table for this discussion.

 As I mentioned in the other reply, I find it difficult to talk about
 alternative implementations while we do not decouple the API
 definition level from the implementation level. If we want alternative
 implementations to be a real competitor we need to have some sort of
 program in OpenStack that will be responsible for delivering a
 reference API for each type of project (Baremetal, Compute, Identity,
 and so on...).


 Indeed. If I understood what you wrote correctly, this is in-line with what
 I stated in the previous paragraph.
 Nevertheless, since afaict we do not have any competing APIs at the moment
 (the nova-network API is part of the Nova API so we might be talking about
 overlap there rather than competition), how crazy does it sound if we say
 that for OpenStack Nova is the compute API and Ironic the Bare Metal API and
 so on? Would that be an unacceptable power grab?

It's not that it's unacceptable, but I think that things weren't
projected that way. Jay started this thread with this sentence:

To be blunt, Nova is the *implementation* of the OpenStack Compute
API. Ironic is the *implementation* of the OpenStack BareMetal API.

Which I don't think is totally correct, at least for Ironic. The
Ironic API has evolved and been shaped as we implemented Ironic; I think
that some decisions we made in the API make that clear, e.g.:

* Resources have JSON attributes. If you look at some attributes of
the resources you will see that they are just a JSON blob. That's by
design, because we didn't know exactly how the API should look, and
having these JSON fields allows us to easily extend a
resource without changing its structure [1] (see driver_info,
instance_info, extra).

* We have a vendor endpoint. This endpoint allows vendors to extend our
API to expose new hardware capabilities that aren't present in the
core API. Once multiple vendors start implementing the same feature
on this endpoint, we then decide whether to promote it to the core API.

* There's a reservation attribute in the Node resource [1] whose
value is the hostname of the conductor that is currently holding an
exclusive lock to act upon this node. This is because internally we
use a distributed hashing algorithm to be able to route requests
from the API service to a conductor service that is able to manage
that Node. And having this field in the API

I don't think that any of those decisions were bad, by the way; they
have helped us a lot to understand what a service for managing Bare Metal
machines should look like, and we have made wrong decisions too (you
can get the same information by GET'ing different endpoints in the
API, the Chassis resource currently has no usage apart from
logically grouping nodes, etc...)
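
To make those three points concrete, a node resource looks roughly like the
made-up example below (only the fields discussed above are shown, and all
values are invented):

    node = {
        'uuid': '1be26c0b-03f2-4d2e-ae87-c02d7f33c123',
        'driver': 'agent_ipmitool',
        'driver_info': {'ipmi_address': '10.0.0.5'},       # free-form JSON blob
        'instance_info': {'image_source': 'glance-image'}, # free-form JSON blob
        'extra': {'rack': 'r12'},                          # free-form JSON blob
        'reservation': 'conductor-2.example.com',          # lock holder's hostname
    }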

So back to the topic: if we are removing the project name from the
header to make it easier for another project to implement these types of
APIs, I don't think it will help much. Perhaps the API-WG should
say that for new APIs the 

Re: [openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class

2015-06-17 Thread Daniel P. Berrange
On Tue, Jun 16, 2015 at 04:21:16PM -0500, Matt Riedemann wrote:
 The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all very
 similar.
 
 I want to extract a common base class that abstracts some of the common code
 and then let the sub-classes provide overrides where necessary.
 
 As part of this, I'm wondering if we could just have a single
 'mount_point_base' config option rather than one per backend like we have
 today:
 
 nfs_mount_point_base
 glusterfs_mount_point_base
 smbfs_mount_point_base
 quobyte_mount_point_base
 
 With libvirt you can only have one of these drivers configured per compute
 host right?  So it seems to make sense that we could have one option used
 for all 4 different driver implementations and reduce some of the config
 option noise.

Doesn't cinder support multiple different backends to be used ? I was always
under the belief that it did, and thus Nova had to be capable of using any
of its volume drivers concurrently.

 Are there any concerns with this?

Not a concern, but since we removed the 'volume_drivers' config parameter,
we're now free to re-arrange the code too. I'd like use to create a subdir
nova/virt/libvirt/volume and create one file in that subdir per driver
that we have.

 Is a blueprint needed for this refactor?

Not from my POV. We've just done a huge libvirt driver refactor by adding
the Guest.py module without any blueprint.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Security] the need about implementing a MAC security hook framework for OpenStack

2015-06-17 Thread Clark, Robert Graham
Hi Yang,

This is an interesting idea. Most operators running production OpenStack
deployments will be using OS-level Mandatory Access Controls already (likely
AppArmor or SELinux).

I can see where there might be some application on a per-service basis,
introducing more security for Swift, Nova etc., but I’m not sure what you could
do that would be OpenStack-wide.

Interested to hear where you think work on this might go.

-Rob


From: Yang Luo [mailto:hslu...@gmail.com]
Sent: 17 June 2015 07:47
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Security] the need about implementing a MAC security 
hook framework for OpenStack

Hi list,

  I'd like to know the need about implementing a MAC (Mandatory Access Control) 
security hook framework for OpenStack, just like the Linux Security Module to 
Linux. It can be used to help construct a security module that mediates the 
communications between OpenStack nodes and controls distribution of resources 
(i.e., images, network, shared disks). This security hook framework should be 
cluster-wide, dynamic policy updating supported, non-intrusive implemented and 
with low performance overhead. SELinux, the best-known LSM module, could also
be plugged into this security hook framework. In my view, as OpenStack has
become a leading cloud operating system, it needs some kind of security
architecture, just as a standard OS does.

I am a Ph.D student who has been following OpenStack security closely for 
nearly 1 year. This is just my initial idea and I know this project won't be 
small, so before I actually work on it, I'd like to hear your suggestions or 
objections about it. Thanks!

Best,
Yang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-17 Thread Clark, Robert Graham
I think this is an interesting, if somewhat difficult to follow, thread.

It’s worth keeping in mind that there are more ways to handle certificates in 
OpenStack than just Barbican, though there are often good reasons to use it.

Is there a blueprint or scheduled IRC meeting to discuss the options? If useful,
I'd be happy to arrange for some folks from the Security Project to take a look;
we spend a lot of time collectively dealing with TLS issues and might be able to
help with the path-finding for TLS in Magnum.

-Rob

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: 17 June 2015 06:12
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Clint,

Hi! It’s good to hear from you!

On Jun 16, 2015, at 8:58 PM, Clint Byrum cl...@fewbar.com wrote:

I don't understand at all what you said there.

If my kubernetes minions are attached to a gateway which has a direct
route to Magnum, let's say they're at, 192.0.2.{100,101,102}, and
Magnum is at 198.51.100.1, then as long as the minions' gateway knows
how to find 198.51.100.0/24, and Magnum's gateway knows how to route to
192.0.2.0/24, then you can have two-way communication and no floating
ips or NAT. This seems orthogonal to how external users find the minions.

That’s correct. Keep in mind that large clouds use layer 3 routing protocols to 
get packets around, especially for north/south traffic where public IP 
addresses are typically used. Injecting new routes into the network fabric each 
time we create a bay might cause reluctance from network administrators to 
allow the adoption of Magnum. Pre-allocating tons of RFC-1918 addresses to 
Magnum may also be impractical on networks that use those addresses 
extensively. Steve’s explanation of using routable addresses as floating IP 
addresses is one approach to leverage the prevailing SDN in the cloud’s network 
to address this concern.

Let’s not get too far off topic on this thread. We are discussing the 
implementation of TLS as a mechanism of access control for API services that 
run on networks that are reachable by the public. We got a good suggestion to 
use an approach that can work regardless of network connectivity between the 
Magnum control plane and the Nova instances (Magnum Nodes) and the containers 
that run on them. I’d like to see if we could use cloud-init to get the keys 
into the bay nodes (docker hosts). That way we can avoid the requirement for 
end-to-end network connectivity between bay nodes and the Magnum control plane.
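
For illustration only, a minimal sketch of how cloud-init user-data could carry the
TLS material onto a bay node at boot. The file paths, helper name and the idea of
passing this as user_data are assumptions here, not Magnum's actual implementation:

    # Hypothetical helper: build '#cloud-config' user-data that writes the TLS
    # files locally, so no inbound connection from Magnum is ever required.
    import base64
    import yaml

    def build_user_data(ca_cert, server_cert, server_key):
        # All three arguments are PEM contents as bytes.
        cloud_config = {
            'write_files': [
                {'path': '/etc/docker/ca.pem', 'encoding': 'b64',
                 'permissions': '0644',
                 'content': base64.b64encode(ca_cert).decode()},
                {'path': '/etc/docker/server.pem', 'encoding': 'b64',
                 'permissions': '0644',
                 'content': base64.b64encode(server_cert).decode()},
                {'path': '/etc/docker/server-key.pem', 'encoding': 'b64',
                 'permissions': '0600',
                 'content': base64.b64encode(server_key).decode()},
            ]
        }
        return '#cloud-config\n' + yaml.safe_dump(cloud_config)

The resulting string would be handed to the instances as user data at boot time, and
cloud-init writes the files on first boot without any connection back to the Magnum
control plane.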

Thanks,

Adrian

Excerpts from Steven Dake (stdake)'s message of 2015-06-16 19:40:25 -0700:

Clint,

Answering Clint’s question, yes there is a reason all nodes must expose a 
floating IP address.

In a Kubernetes cluster, each minion has a port address space.  When an 
external service contacts the floating IP’s port, the request is routed over 
the internal network to the correct container using a proxy mechanism.  The 
problem then is, how do you know which minion to connect to with your external 
service?  The answer is you can connect to any of them.  Kubernetes only has 
one port address space, so Kubernetes suffers from a single namespace problem 
(which Magnum solves with Bays).

Longer term it may make sense to put the minion external addresses on an RFC 1918
network, and put a floating VIF with a load balancer in front to connect to them.
Then there would be no need for a floating address per node.  We are blocked on
Kubernetes implementing proper support for load balancing in OpenStack before we
can even consider this work.

Regards
-steve

From: Fox, Kevin M kevin@pnnl.gov
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, June 16, 2015 at 6:36 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Out of the box, VMs can usually contact the controllers through the router's NAT,
but not vice versa. So it's preferable for guest agents to make the connection,
rather than having the controller connect to the guest agents. No floating IPs,
security group rules or special networks are needed then.

Thanks,
Kevin


From: Clint Byrum
Sent: Monday, June 15, 2015 6:10:27 PM
To: openstack-dev
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Excerpts from Fox, Kevin M's message of 2015-06-15 15:59:18 -0700:

No, I was confused by your statement: "When we create a bay, we have an ssh
keypair that we use to inject the ssh public key onto the nova instances we
create."

It sounded like you were using that keypair to inject a public key. I just
misunderstood.

It does raise the 

Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Neil Jerram

Hi Sam,

On 17/06/15 01:31, Sam Morrison wrote:

We at NeCTAR are starting the transition to neutron from nova-net and neutron 
almost does what we want.

We have 10 “public” networks and 10 “service” networks, and depending on which
compute node you land on you get attached to one of them.

In neutron speak we have multiple shared externally routed provider networks. 
We don’t have any tenant networks or any other fancy stuff yet.
How I’ve currently got this set up is by creating 10 networks and corresponding
subnets, e.g. public-1, public-2, public-3 … and service-1, service-2, service-3
and so on.

In nova we have made a slight change in allocate for instance [1] whereby the
compute node has designated, hardcoded network_ids for the public and service
networks it is physically attached to.
We have also made changes in the nova API so users can’t select a network and 
the neutron endpoint is not registered in keystone.

That all works fine, but ideally I want a user to be able to choose whether they
want a public and/or service network. We can’t let them, as we have 10 public
networks; we almost need something in neutron like a “network group” that allows
a user to select “public” and allocates them a port in one of the underlying
public networks.


This begs the question: why have you defined 10 public-N networks, 
instead of just one public network?



I tried going down the route of having 1 public and 1 service network in
neutron, then creating 10 subnets under each. That works until you get to things
like the dhcp-agent and metadata agent, although it looks like it could work with
a few minor changes. Basically I need a dhcp-agent to be spun up per subnet, and
to ensure they are spun up in the right place.


Why the 10 subnets?  Is it to do with where you actually have real L2 
segments, in your deployment?


Thanks,
Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Neil Jerram
[Sorry - unintentionally dropped -operators below; adding it back in 
this copy.]


On 17/06/15 11:35, Neil Jerram wrote:

Hi Sam,

On 17/06/15 01:31, Sam Morrison wrote:

We at NeCTAR are starting the transition to neutron from nova-net and
neutron almost does what we want.

We have 10 “public” networks and 10 “service” networks, and depending
on which compute node you land on you get attached to one of them.

In neutron speak we have multiple shared externally routed provider
networks. We don’t have any tenant networks or any other fancy stuff yet.
How I’ve currently got this set up is by creating 10 networks and
subsequent subnets eg. public-1, public-2, public-3 … and service-1,
service-2, service-3 and so on.

In nova we have made a slight change in allocate for instance [1]
whereby the compute node has a designated hardcoded network_ids for
the public and service network it is physically attached to.
We have also made changes in the nova API so users can’t select a
network and the neutron endpoint is not registered in keystone.

That all works fine, but ideally I want a user to be able to choose whether
they want a public and/or service network. We can’t let them, as we
have 10 public networks; we almost need something in neutron like a
“network group” that allows a user to select “public” and
allocates them a port in one of the underlying public networks.


This begs the question: why have you defined 10 public-N networks,
instead of just one public network?


I tried going down the route of having 1 public and 1 service network
in neutron then creating 10 subnets under each. That works until you
get to things like dhcp-agent and metadata agent although this looks
like it could work with a few minor changes. Basically I need a
dhcp-agent to be spun up per subnet and ensure they are spun up in the
right place.


Why the 10 subnets?  Is it to do with where you actually have real L2
segments, in your deployment?

Thanks,
 Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [neutron][db] online schema upgrades

2015-06-17 Thread Anna Kamyshnikova
Ihar, thanks for bringing this up!

This is very interesting and I think it is worth trying. I'm +1 on that and
want to participate in this work.

In fact, a lot of *not strict* migrations were removed with juno_initial, so I
hope it won't be too hard now to apply stricter rules for migrations. But
what is the plan for those migrations that are still *not strict*?

I think that we should try to use Alembic as much as we can, since Mike is going
to support us in that and we have time to make some changes in Alembic
directly.

We should undoubtedly plan this work for the M release, because some issues will
inevitably appear in the process.

On Tue, Jun 16, 2015 at 6:58 PM, Mike Bayer mba...@redhat.com wrote:



 On 6/16/15 11:41 AM, Ihar Hrachyshka wrote:


 - instead of migrating data with alembic rules, migrate it at runtime.
 There should be an abstraction layer that will make sure that data is
 migrated into new schema fields and objects, while preserving data
 originally stored in 'old' schema elements.

 That would allow old neutron-server code to run against the new schema (it
 will just ignore new additions), and new neutron-server code to gradually
 migrate data into new columns/fields/tables while serving users.
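
[Illustrative sketch only, not Neutron code: the runtime migration shim described
above could look roughly like the following; the object and column names are made
up for the example.]

    # Hypothetical object-layer shim: reads fall back to the old column,
    # writes always populate the new one, so data migrates gradually at runtime.
    class PortBindingShim(object):
        def __init__(self, db_row):
            self._row = db_row  # dict-like DB row

        @property
        def host(self):
            # Prefer the new schema element, fall back to the legacy one.
            return self._row.get('binding_host') or self._row.get('host')

        def save(self):
            # Saving populates the new column; the old column is left intact
            # so old neutron-server code keeps working against it.
            self._row['binding_host'] = self.host
            return self._row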

 Hi Ihar -

 I was in the middle of writing a spec for neutron online schema
 migrations, which maintains the expand/contract workflow but also keeps
 Alembic migration scripts.  As I've stated many times in the past, there
 is no reason to abandon migration scripts, whereas there are many issues
 with abandoning the notion of the database being in a specific versioned
 state, as well as with losing the ability to script any migrations
 whatsoever.  The spec amends Nova's approach and includes upstream changes
 to Alembic such that both approaches can be supported using the same
 codebase.
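
[For illustration, an expand-phase migration under this model could be a normal
Alembic script like the sketch below; the revision ids, table and column names are
made up, and the matching contract script (dropping the old column) would only run
once no live code still reads it.]

    # Hypothetical expand script: purely additive, safe while old code runs.
    from alembic import op
    import sqlalchemy as sa

    revision = '1a2b3c4d5e6f'
    down_revision = 'abcdef012345'

    def upgrade():
        op.add_column('ports',
                      sa.Column('binding_host', sa.String(255), nullable=True))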

 - mike








-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Re: How do your end users use networking?

2015-06-17 Thread Kyle Mestery
On Wed, Jun 17, 2015 at 1:59 AM, Armando M. arma...@gmail.com wrote:



 On 16 June 2015 at 22:36, Sam Morrison sorri...@gmail.com wrote:


 On 17 Jun 2015, at 10:56 am, Armando M. arma...@gmail.com wrote:



 On 16 June 2015 at 17:31, Sam Morrison sorri...@gmail.com wrote:

 We at NeCTAR are starting the transition to neutron from nova-net and
 neutron almost does what we want.

 We have 10 “public” networks and 10 “service” networks, and depending on
 which compute node you land on you get attached to one of them.

 In neutron speak we have multiple shared externally routed provider
 networks. We don’t have any tenant networks or any other fancy stuff yet.
 How I’ve currently got this set up is by creating 10 networks and
 subsequent subnets eg. public-1, public-2, public-3 … and service-1,
 service-2, service-3 and so on.

 In nova we have made a slight change in allocate for instance [1]
 whereby the compute node has a designated hardcoded network_ids for the
 public and service network it is physically attached to.
 We have also made changes in the nova API so users can’t select a
 network and the neutron endpoint is not registered in keystone.

 That all works fine, but ideally I want a user to be able to choose whether
 they want a public and/or service network. We can’t let them, as we have 10
 public networks; we almost need something in neutron like a “network group”
 that allows a user to select “public” and it allocates them a
 port in one of the underlying public networks.

 I tried going down the route of having 1 public and 1 service network in
 neutron then creating 10 subnets under each. That works until you get to
 things like dhcp-agent and metadata agent although this looks like it could
 work with a few minor changes. Basically I need a dhcp-agent to be spun up
 per subnet and ensure they are spun up in the right place.

 I’m not sure what the correct way of doing this. What are other people
 doing in the interim until this kind of use case can be done in Neutron?


 Would something like [1] be adequate to address your use case? If not,
 I'd suggest you file an RFE bug (more details in [2]), so that we can
 keep the discussion focused on this specific case.

 HTH
 Armando

 [1] https://blueprints.launchpad.net/neutron/+spec/rbac-networks


 That’s not applicable in this case. We don’t care about which tenants are
 involved in this case.

 [2]
 https://github.com/openstack/neutron/blob/master/doc/source/policies/blueprints.rst#neutron-request-for-feature-enhancements


 The bug Kris mentioned outlines all I want too I think.


 I don't know what you're referring to.



Armando, I think this is the bug he's referring to:

https://bugs.launchpad.net/neutron/+bug/1458890

This is something I'd like to look at next week during the mid-cycle,
especially since Carl is there and his spec for routed networks [2] covers
a lot of these use cases.

[2] https://review.openstack.org/#/c/172244/



 Sam






 Cheers,
 Sam

 [1]
 https://github.com/NeCTAR-RC/nova/commit/1bc2396edc684f83ce471dd9dc9219c4635afb12



  On 17 Jun 2015, at 12:20 am, Jay Pipes jaypi...@gmail.com wrote:
 
  Adding -dev because of the reference to the Neutron Get me a network
 spec. Also adding [nova] and [neutron] subject markers.
 
  Comments inline, Kris.
 
  On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
  During the Openstack summit this week I got to talk to a number of other
  operators of large Openstack deployments about how they do networking.
  I was happy, surprised even, to find that a number of us are using a
  similar type of networking strategy: we have similar challenges
  around networking and are solving them in our own but very similar ways.
  It is always nice to see that other people are doing the same things
  as you, or see the same issues as you do, and that you are not crazy.
  So in that vein, I wanted to reach out to the rest of the Ops Community
  and ask one pretty simple question.
 
  Would it be accurate to say that most of your end users want almost
  nothing to do with the network?
 
  That was my experience at AT&T, yes. The vast majority of end users
 could not care less about networking, as long as the connectivity was
 reliable, performed well, and they could connect to the Internet (and have
 others connect from the Internet to their VMs) when needed.
 
  In my experience what the majority of them (both internal and external)
  want is to consume from Openstack a compute resource, a property of
  which is that the resource has an IP address.  They, at most, care about
  which “network” they are on, where a “network” is usually an arbitrary
  definition around a set of real networks that are constrained to a
  location and to which the company has attached some sort of policy.  For
  example, I want to be in the “production” network vs. the “xyz lab”
  network, vs. the “backup” network, vs. the “corp” network.  I would say
  for Godaddy, 99% of our use cases would be defined as: I want a
 
