[openstack-dev] [nova-docker] Looking for volunteers to take care of nova-docker

2015-05-27 Thread Davanum Srinivas
Hi all,

So the feedback during the Vancouver summit from some of the nova
cores was that we needed volunteers to take care of the nova-docker
driver before it can be considered for merging into the Nova tree.

As an exercise in responsibility, we need people who can reinstate the
nova-docker non-voting job (essentially revert [1]) and keep an eye on
the output of the job every day to make sure that when the CI jobs run
against the nova reviews, they stay green.

I've cc'ed some folks who expressed interest in the past, please reply
back to this thread if you wish to join this effort and specifically
if you can volunteer for watching and fixing the CI as issues arise
(keeping up with Nova trunk and requirements etc).

If there are no volunteers here, nova-docker will stay in stackforge.
So folks who are using it, please step up.

Thanks,
dims

[1] https://review.openstack.org/#/c/150887/

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Redis as Messaging System

2015-05-27 Thread Clint Byrum

Excerpts from Ryan Brown's message of 2015-05-26 05:48:14 -0700:
 Zaqar provides an option to use Redis as a backend, and Zaqar provides
 pubsub messaging.
 

Please please please do not mistake under-the-cloud messaging, which
oslo.messaging is intended to facilitate, for user-facing messaging,
which Zaqar is intended to facilitate.

Under the cloud, you have one tenant, and simply communicating directly
with Redis will suffice.
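To make the distinction concrete: under the cloud you can talk to Redis
directly with a plain client library. A minimal sketch with redis-py (the
host, port and channel name are illustrative assumptions, not anything
prescribed by oslo.messaging or Zaqar):

import redis

# Talk to Redis directly -- no broker or messaging abstraction in between.
conn = redis.StrictRedis(host='localhost', port=6379)

# Subscriber side: listen on a channel.
pubsub = conn.pubsub()
pubsub.subscribe('undercloud-events')

# Publisher side: fire-and-forget notification.
conn.publish('undercloud-events', 'compute-node-1: service restarted')

# Read messages as they arrive ('subscribe' confirmations are skipped).
for message in pubsub.listen():
    if message['type'] == 'message':
        print(message['data'])
        break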



Re: [openstack-dev] [Ironic][oslo] Stepping down from oslo-ironic liaison

2015-05-27 Thread Davanum Srinivas
Tan,

Awesome, please update the page here:
https://wiki.openstack.org/wiki/CrossProjectLiaisons

thanks,
dims

On Tue, May 26, 2015 at 9:04 PM, Tan, Lin lin@intel.com wrote:
 Hi Doug and guys,

 I would like to work as the oslo-ironic liaison to sync Ironic with Oslo.
 I will attend the regular Oslo meeting for sure. My IRC name is lintan, and 
 Launchpad id is tan-lin-good

 Thanks

 Tan

 -Original Message-
 From: Doug Hellmann [mailto:d...@doughellmann.com]
 Sent: Tuesday, May 26, 2015 9:17 PM
 To: openstack-dev
 Subject: Re: [openstack-dev] [Ironic][oslo] Stepping down from oslo-ironic 
 liaison

 Excerpts from Ghe Rivero's message of 2015-05-25 09:45:47 -0700:
 My focus on the Ironic project has been decreasing in the last cycles,
 so it's about time to relinquish my position as an oslo-ironic liaison
 so new contributors can take over it and help ironic to be the vibrant
 project it is.

 So long, and thanks for all the fish,

 Ghe Rivero

 Thanks for your help as liaison, Ghe, the Oslo team appreciates your effort!

 Doug




-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [Fuel] Setting cluster status when provisioning a node

2015-05-27 Thread Roman Prykhodchenko
Oleg,

Thanks for the feedback. I have the following as a response:

1. This spec is just an excerpt for scoping the proposed improvement into the
7.0 release plan. If it gets scoped, the full specification will go through a
standard review process, so it will be possible to discuss names along with
the rest of the details then.

2. It's already noted in the spec that the status is generated using an
aggregate query like you described, so I don't propose to store it. Storing
that data would require sophisticated algorithms to work with it and would
also lead to more locks or race conditions in the database. So yes, it's
going to be a method.
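A minimal sketch of what such a method could look like (the model, session
wiring and status mapping below are illustrative assumptions, not the actual
Nailgun code):

from sqlalchemy import Column, Integer, String, create_engine, func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Node(Base):
    """Stand-in for the real Nailgun 'nodes' model (an assumption)."""
    __tablename__ = 'nodes'
    id = Column(Integer, primary_key=True)
    cluster_id = Column(Integer)
    status = Column(String)


def cluster_status(session, cluster_id):
    """Derive the cluster status on the fly; nothing is stored or locked."""
    # SELECT status, count(status) FROM nodes WHERE cluster_id=? GROUP BY status;
    counts = dict(session.query(Node.status, func.count(Node.status))
                  .filter(Node.cluster_id == cluster_id)
                  .group_by(Node.status)
                  .all())
    if not counts:
        return 'new'
    if 'error' in counts:
        return 'error'
    if set(counts) == set(['ready']):
        return 'operational'
    return 'deployment'


if __name__ == '__main__':
    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add_all([Node(cluster_id=1, status='ready'),
                     Node(cluster_id=1, status='provisioning')])
    session.commit()
    print(cluster_status(session, 1))  # -> 'deployment'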


- romcheg


 On 27 May 2015 at 08:19, Oleg Gelbukh ogelb...@mirantis.com wrote:
 
 Roman,
 
 This looks like a great solution to me, and I like your proposal very much. 
 The status of cluster derived directly from statuses of nodes is exactly what 
 I was thinking about.
 
 I have two notes on the proposal, and I can copy them to the etherpad if you
 think they deserve it:
 
 1) The status name 'operational' seems a bit unclear to me, as it sounds more
 like something Monitoring should report: it implies that the actual OpenStack
 environment is operational, which might or might not be the case, and Fuel has
 no way to tell. I would really prefer it if that status name was 'Deployed' or
 something along those lines.
 
 2) I'm not sure if we need to keep the complex status of the cluster 
 explicitly in 'cluster' table in the format you suggest. This information can 
 be taken directly from 'nodes' table in Nailgun DB. For example, getting it 
 in the second form you propose is as simple as:
 
 nailgun= SELECT status,count(status) FROM nodes GROUP BY status;
 discover|1
 ready|5
 
 What do you think about making it a method rather than an element of the data
 model? Or is that exactly the complexity you want to get rid of?
 
 --
 Best regards,
 Oleg Gelbukh
 
 
 On Tue, May 26, 2015 at 4:16 PM, Roman Prykhodchenko m...@romcheg.me wrote:
 Oleg,
 
 Aleksander also proposed a nice solution [1], which is to have a complex
 status for the cluster. That, however, looks like a BP, so I've created an
 excerpt [2] for it and we will try to discuss it and scope it for 7.0, if
 there is a consensus.
 
 
 References:
 
 1. http://lists.openstack.org/pipermail/openstack-dev/2015-May/064670.html
 2. https://etherpad.openstack.org/p/fuel-cluster-complex-status
 
 
 - romcheg
 
 On 22 May 2015 at 22:32, Oleg Gelbukh ogelb...@mirantis.com wrote:
 
 Roman,
 
 I'm totally for fixing Nailgun. However, the status of an environment is not
 simply a function of the statuses of the nodes in it. Ideally, it should
 depend on whether an appropriate number of nodes of certain roles are in
 'ready' status. For the meantime, it would be enough if the environment was
 set to 'operational' when all nodes in it become 'ready', no matter how they
 were deployed (i.e. via Web UI or CLI).
 
 --
 Best regards,
 Oleg Gelbukh
 
 On Fri, May 22, 2015 at 5:33 PM, Roman Prykhodchenko m...@romcheg.me wrote:
 Hi folks!
 
 Recently I encountered an issue [1] where the Deploy Changes button in the
 web UI is still active when provisioning of a single node is started using
 the command-line client.
 The background for that issue is that the provisioning task does not seem to
 update the cluster status correctly, and Nailgun's API returns it as NEW even
 while some of the nodes are being provisioned.
 
 The reason for raising this thread on the mailing list is that provisioning
 a node is a feature for developers, and basically end users should not do
 that. What is the best solution here: fix Nailgun to set the correct
 status, or make this provisioning feature available only to developers?
 
 1. https://bugs.launchpad.net/fuel/7.0.x/+bug/1449086
 
 
 - romcheg
 
 

Re: [openstack-dev] [Ironic][oslo] Stepping down from oslo-ironic liaison

2015-05-27 Thread Davanum Srinivas
Victor,

Nice, yes, Joe has been the liaison for Nova so far. Please go ahead
and add your name in the wiki for Nova, as I believe Joe is winding
down the oslo liaison role as well.
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Oslo

thanks,
dims

On Wed, May 27, 2015 at 5:12 AM, Victor Stinner vstin...@redhat.com wrote:
 Hi,

 By the way, who is the oslo liaison for nova? If there is nobody, I would
 like to take this position.

 Victor

 On 25/05/2015 18:45, Ghe Rivero wrote:

 My focus on the Ironic project has been decreasing in the last cycles,
 so it's about time to relinquish my position as an oslo-ironic liaison so
 new contributors can take over it and help ironic to be the vibrant
 project it is.

 So long, and thanks for all the fish,

 Ghe Rivero
 --
 Pinky: Gee, Brain, what do you want to do tonight?
 The Brain: The same thing we do every night, Pinky—try to take over the
 world!

   .''`.  Pienso, Luego Incordio
 : :' :
 `. `'
`- www.debian.org www.openstack.com

 GPG Key: 26F020F7
 GPG fingerprint: 4986 39DA D152 050B 4699  9A71 66DB 5A36 26F0 20F7





-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [Fuel] [Plugins] modification disk configuration

2015-05-27 Thread Evgeniy L
Hi Samuel,

Currently it's not possible to change the partitioning schema of Fuel roles,
but you can change the partitioning in the post_deployment tasks of your plugin.

Thanks,

On Wed, May 27, 2015 at 9:38 AM, Samuel Bartel samuel.bartel@gmail.com
wrote:

 Hi folks

 In some plugins, such as the NFS one for Glance or Nova and the NetApp one
 for Cinder, we have replaced the LVM by an NFS mount point. However, we still
 have the partition setup for Glance, Cinder, and Nova, which is not used
 anymore. We can still, in the disk configuration, allocate the minimum space
 to these partitions, but is it possible in the plugin to change the disk
 configuration in order to reallocate these partitions to a different use?

 regards

 Samuel





[openstack-dev] [nova] Availability of device names for operations with volumes and BDM and other features.

2015-05-27 Thread Alexandre Levine

Hi all,

I'd like to bring up this matter again, although it was at some extent 
discussed during the recent summit.


The problem arises from the fact that the functionality exposing device 
names for usage through public APIs is deteriorating in nova. It's being 
deliberately removed because, as I understand, it doesn't work universally 
and consistently across all of the backends. This has been happening since 
Icehouse and the introduction of BDM v2. The following very recent review 
is one of the ongoing efforts in this direction:

https://review.openstack.org/#/c/185438/

The reason for my concern is that the EC2 API has some important cases 
relying on this information (some of them have no workarounds). Namely:

1. Change of parameters set by image for instance booting.
2. Showing instance's devices information by euca2ools.
3. Providing additional volumes for instance booting
4. Attaching volume
etc...

Related to device names, additional features we have trouble with now:

1. All device name related features
2. Modification of deleteOnTermination flag
3. Modification of parameters for instance booting
4. deleteOnTermination and size of volume aren't stored into instance 
snapshots now.


Discussions during the summit on the matter were complicated because 
nobody present really understood in detail why and what is happening 
with this functionality in nova. It was decided, though, that the overall 
direction would be to add the necessary features or restore them unless 
there is something really showstopping:

https://etherpad.openstack.org/p/YVR-nova-contributor-meetup

As I understand, Nikola Đipanov, who has been working on the matter for 
some time, is obviously the best person who can help to resolve the 
situation. Nikola, if possible, could you help with it and clarify the 
issue?


My suggestion, based on my limited knowledge at the moment, is still to 
restore or add all of the necessary APIs and provide tickets or known 
issues for the cases where the functionality suffers from backend 
limitations.


Please let me know what you think.

Best regards,
  Alex Levine







Re: [openstack-dev] [Ironic][oslo] Stepping down from oslo-ironic liaison

2015-05-27 Thread Victor Stinner

Hi,

By the way, who is the oslo liaison for nova? If there is nobody, I 
would like to take this position.


Victor

On 25/05/2015 18:45, Ghe Rivero wrote:

My focus on the Ironic project has been decreasing in the last cycles,
so it's about time to relinquish my position as an oslo-ironic liaison so
new contributors can take over it and help ironic to be the vibrant
project it is.

So long, and thanks for all the fish,

Ghe Rivero
--
Pinky: Gee, Brain, what do you want to do tonight?
The Brain: The same thing we do every night, Pinky—try to take over the
world!

  .''`.  Pienso, Luego Incordio
: :' :
`. `'
   `- www.debian.org www.openstack.com

GPG Key: 26F020F7
GPG fingerprint: 4986 39DA D152 050B 4699  9A71 66DB 5A36 26F0 20F7




[openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-27 Thread Thomas Goirand

Hi all,

tl;dr:
- We'd like to push distribution packaging of OpenStack on upstream
gerrit with reviews.
- The intention is to better share the workload, and improve the overall
QA for packaging *and* upstream.
- The goal is *not* to publish packages upstream.
- There's an ongoing discussion about using stackforge or openstack.
This isn't, IMO, that important; what's important is to get started.
- There's an ongoing discussion about using a distribution-specific
namespace; my own opinion here is that using /openstack-pkg-{deb,rpm} or
/stackforge-pkg-{deb,rpm} would be the most convenient because of a
number of technical reasons like the number of Git repositories.
- Finally, let's not discuss for too long and let's do it!!! :)

Longer version:

Before I start: some stuff below is just my own opinion, others are just
given facts. I'm sure the reader is smart enough to guess which is what,
and we welcome anyone involved in the project to voice an opinion if
he/she differs.

During the Vancouver summit, operations, Canonical, Fedora and Debian
people gathered and collectively expressed the will to maintain
packaging artifacts within the upstream OpenStack Gerrit infrastructure. We
haven't decided all the details of the implementation, but spent the
Friday morning together with members of the infra team (hi Paul!) trying
to figure out what and how.

A number of topics have been raised, which need to be shared.

First, we've been told that such a topic deserved a message to the dev
list, in order to inform groups who were not present at the summit. Yes,
there was a consensus among distributions that this should happen, but
still, it's always nice to let everyone know.

So here it is. Suse people (and other distributions), you're welcome to
join the effort.

- Why do this

It's been clear to both the Canonical/Ubuntu teams and Debian (ie: myself)
that we'd be way more effective if we worked better together, in a
collaborative fashion using a review process like on upstream Gerrit.
But also, we'd like to welcome anyone, and especially the operations
folks, to contribute and give feedback. Using Gerrit is the obvious way
to give everyone a say on what we're implementing.

As OpenStack is welcoming more and more projects every day, it makes
even more sense to spread the workload.

This is becoming easier for the Ubuntu guys as Launchpad now understands not
only BZR, but also Git.

We'd start by merging all of our packages that aren't core packages
(like all the non-OpenStack maintained dependencies, then the Oslo libs,
then the clients). Then we'll see how we can try merging core packages.

Another reason is that we believe working with the infra of OpenStack
upstream will improve the overall quality of the packages. We want to be
able to run a set of tests at build time, which we already do on each
distribution, but now we want this on every proposed patch. Later on,
when we have everything implemented and working, we may explore doing a
package based CI on every upstream patch (though, we're far from doing
this, so let's not discuss this right now please, this is a very long
term goal only, and we will have a huge improvement already *before*
this is implemented).

- What it will *not* be
===
We do not have the intention (yet?) to publish the resulting packages
built on upstream infra. Yes, we will share the same Git repositories,
and yes, the infra will need to keep a copy of all builds (for example,
because core packages will need oslo.db to build and run unit tests).
But we will still upload to each distribution's own repositories.
So publishing packages from the infra isn't currently being discussed. We
could get to this topic once everything is implemented, which may be nice
(because we'd have packages following trunk), though please refrain from
engaging in this topic right now: having the implementation done is more
important for the moment. Let's try to stay on track and be constructive.

- Let's keep efficiency in mind
===
Over the last few years, I've been able to maintain all of OpenStack in
Debian with little to no external contribution. Let's hope that the
Gerrit workflow will not slow down the packaging work too much, even if
there's an unavoidable overhead. Hopefully, we can implement some
liberal ACL policies for the core reviewers so that the Gerrit workflow
doesn't slow anyone down too much. For example, we may be able to create
new repositories very fast, and it may be possible to self-approve some
of the most trivial patches (for things like a typo in a package
description, adding new debconf translations, and such obvious fixes, we
shouldn't waste our time).

There's a middle ground between the current system (ie: only write-access
ACLs for git.debian.org with no other check whatsoever) and a
too-restrictive, fully protected gerrit workflow that may slow everyone
down too much.

Re: [openstack-dev] [Zaqar][all] Zaqar will stay... Lots of work ahead

2015-05-27 Thread Victoria Martínez de la Cruz
Hi,

Thanks for writing down the summit notes Flavio. I'm really glad that there
are so many great features to work on in this cycle. I'll be there to help
as much as possible with every one of them.

I would also like to add that we will be doing a lot of work on the client
side, updating it to cover the latest features and working on improving the
CLI with the new Outreachy intern, Doraly.

Looking forward to the next meeting,

Victoria

On Tue, May 26, 2015 at 9:40, Flavio Percoco (fla...@redhat.com) wrote:

 On 26/05/15 08:23 -0400, Ryan Brown wrote:
 On 05/26/2015 04:28 AM, Flavio Percoco wrote:

 [snip]

  As a first step, we should restore our meetings and get to work right
  away. To favor our contributors in NZ, next week's meeting will be at
  21:00 UTC and we'll keep it at that time for 2 weeks.
 
 For those who didn't know what day the Zaqar meetings are normally, they
 are on Mondays according to the OpenStack calendar (next meeting on June
 1).

 Damn, I always forget something. Yes, meetings are on Mondays with
 alternate times (15:00 and 21:00 UTC). We'll do 21:00 UTC for 2 weeks
 to have enough time to sync with folks from NZ.

 Cheers,
 Flavio

 --
 @flaper87
 Flavio Percoco


Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-05-27 Thread ozamiatin

Hi,

I'll try to address the question about Proxy process.

AFAIK there is no way yet in zmq to bind more than once to a specific 
port (e.g. tcp://*:9501).


Apparently we can:

socket1.bind('tcp://node1:9501')
socket2.bind('tcp://node2:9501')

but we can not:

socket1.bind('tcp://*:9501')
socket2.bind('tcp://*:9501')

So if we would like to have a definite port assigned to the driver, we 
need to use a proxy which receives on a single socket and redirects to a 
number of sockets.


It is a normal practice in zmq to do so. There are even some helpers 
implemented in the library so-called 'devices'.
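A minimal sketch of that proxy/'device' pattern with pyzmq (the socket types,
endpoints and io_threads value here are illustrative assumptions, not the
proposed driver code):

import zmq

# io_threads=1 follows the guide heuristic quoted below (~1 thread per Gbit/s).
ctx = zmq.Context(io_threads=1)

# One well-known TCP port the whole node listens on...
frontend = ctx.socket(zmq.ROUTER)
frontend.bind('tcp://*:9501')

# ...fanned out locally over IPC to the services running on this node.
backend = ctx.socket(zmq.DEALER)
backend.bind('ipc:///var/run/openstack-zmq-proxy')

# zmq.proxy() is the built-in 'device' helper: it shovels messages
# between the two sockets until the context is terminated.
zmq.proxy(frontend, backend)

A service on the node would then connect() to the IPC endpoint instead of
binding a TCP port of its own.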


Here the performance question is relevant. According to the ZeroMQ 
documentation [1], the basic heuristic is to allocate 1 I/O thread in the 
context for every gigabit per second of data that will be sent and 
received (aggregated).


The other way is to 'bind_to_random_port', but here we need some 
mechanism to notify the client about the port we are listening to. So it 
is a more complicated solution.


Why run it in a separate process? For the zmq API it doesn't matter whether 
we communicate between threads (INPROC), between processes (IPC) or between 
nodes (TCP, PGM and others). Because we need to run the proxy once per node, 
it's easier to do it in a separate process. How would we track whether the 
proxy is already running if we put it in a thread of some service?


In spite of having a broker-like instance locally, we still stay 
brokerless because we have no central broker node with a queue we need 
to replicate and keep alive. Each node is actually a peer. The broker is 
not a standalone node, so we cannot say that it is a 'single point of 
failure'. We can consider the local broker as a part of a server. It is 
worth noting that IPC communication is much more reliable than real 
network communication. One more benefit is that the proxy is stateless, 
so we don't have to bother about managing state (syncing it or 
having enough memory to keep it).


I'll cite the zmq-guide about broker/brokerless (4.14. Brokerless 
Reliability p.221):


It might seem ironic to focus so much on broker-based reliability, when 
we often explain ØMQ as brokerless messaging. However, in messaging, 
as in real life, the middleman is both a burden and a benefit. In 
practice, *_most messaging architectures benefit from a mix of 
distributed and brokered messaging_*. 



Thanks,
Oleksii


1 - http://zeromq.org/area:faq#toc7


On 5/26/15 18:57, Davanum Srinivas wrote:

Alec,

Here are the slides:
http://www.slideshare.net/davanum/oslomessaging-new-0mq-driver-proposal

All the 0mq patches to date should be either already merged in trunk
or waiting for review on trunk.

Oleksii, Li Ma,
Can you please address the other questions?

thanks,
Dims

On Tue, May 26, 2015 at 11:43 AM, Alec Hothan (ahothan)
ahot...@cisco.com wrote:

Looking at what is the next step following the design summit meeting on
0MQ as the etherpad does not provide too much information.
Few questions:
- would it be possible to have the slides presented (showing the proposed
changes in the 0MQ driver design) to be available somewhere?
- is there a particular branch in the oslo messaging repo that contains
0MQ-related patches? I'm particularly interested in James Page's
patch to pool the 0MQ connections, but there might be others
- question for Li Ma: are you deploying with the straight upstream 0MQ
driver or with some additional patches?

The per node proxy process (which is itself some form of broker) needs to
be removed completely if the new solution is to be made really
broker-less. This will also eliminate the only single point of failure in
the path and reduce the number of 0MQ sockets (and hops per message) by
half.

I think it was proposed that we go on with the first draft of the new
driver (which still keeps the proxy server but reduces the number of
sockets) before eventually tackling the removal of the proxy server?



Thanks

   Alec





Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-27 Thread Neil.Jerram
Great initiative, IMO. I favour going directly to openstack-, rather than 
stackforge-, for the migration reason that you mention.

  Original Message  
From: Thomas Goirand
Sent: Wednesday, 27 May 2015 09:17
To: OpenStack Development Mailing List (not for usage questions)
Reply To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [packaging] Adding packaging as an OpenStack project


Hi all,

tl;dr:
- - We'd like to push distribution packaging of OpenStack on upstream
gerrit with reviews.
- - The intention is to better share the workload, and improve the overall
QA for packaging *and* upstream.
- - The goal is *not* to publish packages upstream
- - There's an ongoing discussion about using stackforge or openstack.
This isn't, IMO, that important, what's important is to get started.
- - There's an ongoing discussion about using a distribution specific
namespace, my own opinion here is that using /openstack-pkg-{deb,rpm} or
/stackforge-pkg-{deb,rpm} would be the most convenient because of a
number of technical reasons like the amount of Git repository.
- - Finally, let's not discuss for too long and let's do it!!! :)

Longer version:

Before I start: some stuff below is just my own opinion, others are just
given facts. I'm sure the reader is smart enough to guess which is what,
and we welcome anyone involved in the project to voice an opinion if
he/she differs.

During the Vancouver summit, operation, Canonical, Fedora and Debian
people gathered and collectively expressed the will to maintain
packaging artifacts within upstream OpenStack Gerrit infrastructure. We
haven't decided all the details of the implementation, but spent the
Friday morning together with members of the infra team (hi Paul!) trying
to figure out what and how.

A number of topics have been raised, which needs to be shared.

First, we've been told that such a topic deserved a message to the dev
list, in order to let groups who were not present at the summit. Yes,
there was a consensus among distributions that this should happen, but
still, it's always nice to let everyone know.

So here it is. Suse people (and other distributions), you're welcome to
join the effort.

- - Why doing this

It's been clear to both Canonical/Ubuntu teams, and Debian (ie: myself)
that we'd be a way more effective if we worked better together, on a
collaborative fashion using a review process like on upstream Gerrit.
But also, we'd like to welcome anyone, and especially the operation
folks, to contribute and give feedback. Using Gerrit is the obvious way
to give everyone a say on what we're implementing.

As OpenStack is welcoming every day more and more projects, it's making
even more sense to spread the workload.

This is becoming easier for Ubuntu guys as Launchpad now understand not
only BZR, but also Git.

We'd start by merging all of our packages that aren't core packages
(like all the non-OpenStack maintained dependencies, then the Oslo libs,
then the clients). Then we'll see how we can try merging core packages.

Another reason is that we believe working with the infra of OpenStack
upstream will improve the overall quality of the packages. We want to be
able to run a set of tests at build time, which we already do on each
distribution, but now we want this on every proposed patch. Later on,
when we have everything implemented and working, we may explore doing a
package based CI on every upstream patch (though, we're far from doing
this, so let's not discuss this right now please, this is a very long
term goal only, and we will have a huge improvement already *before*
this is implemented).

- - What it will *not* be
===
We do not have the intention (yet?) to publish the resulting packages
built on upstream infra. Yes, we will share the same Git repositories,
and yes, the infra will need to keep a copy of all builds (for example,
because core packages will need oslo.db to build and run unit tests).
But we will still upload on each distributions on separate repositories.
So published packages by the infra isn't currently discussed. We could
get to this topic once everything is implemented, which may be nice
(because we'd have packages following trunk), though please, refrain to
engage in this topic right now: having the implementation done is more
important for the moment. Let's try to stay on tracks and be constructive.

- - Let's keep efficiency in mind
===
Over the last few years, I've been able to maintain all of OpenStack in
Debian with little to no external contribution. Let's hope that the
Gerrit workflow will not slow down too much the packaging work, even if
there's an unavoidable overhead. Hopefully, we can implement some
liberal ACL policies for the core reviewers so that the Gerrit workflow
don't slow down anyone too much. For example we may be able to create
new repositories very fast, and it may 

Re: [openstack-dev] [Neutron]: DVR Presentation slides from the Vancouver summit

2015-05-27 Thread Somanchi Trinath
Thanks for the share.. :)

From: Vasudevan, Swaminathan (PNB Roseville) 
[mailto:swaminathan.vasude...@hp.com]
Sent: Tuesday, May 26, 2015 11:20 PM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [Neutron]: DVR Presentation slides from the Vancouver 
summit

Hi Folks,
Unfortunately our presentation video is missing from the OpenStack Vancouver 
summit website.
But there were a lot of requests for the slides that we presented at the summit.

Here is the link to the slides that we presented in the OpenStack Vancouver 
summit.
https://drive.google.com/file/d/0B4kh-7VVPWlPYXNaQWxXd1NDdm8/view?usp=sharing

Please let me know if you have any questions.

thanks

Swaminathan Vasudevan
Systems Software Engineer (TC)


HP Networking
Hewlett-Packard
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: 916.785.1815
email: swaminathan.vasude...@hp.com




Re: [openstack-dev] Redis as Messaging System

2015-05-27 Thread Ryan Brown
On 05/27/2015 04:12 AM, Clint Byrum wrote:
 
 Excerpts from Ryan Brown's message of 2015-05-26 05:48:14 -0700:
 Zaqar provides an option to use Redis as a backend, and Zaqar provides
 pubsub messaging.

 
 Please please please do not mistake under-the-cloud messaging, which
 oslo.messaging is intended to facilitate, for user-facing messaging,
 which Zaqar is intended to facilitate.
 
 Under the cloud, you have one tenant, and simply communicating directly
 with Redis will suffice.

Ah, I see what I missed. I read "messaging for OpenStack" as "messaging
service for OpenStack tenants", not "messaging for OpenStack internally".

Good catch Clint,
Ryan

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [Fuel] Setting cluster status when provisioning a node

2015-05-27 Thread Aleksey Kasatkin
Thank you Roman for driving this!

Full list of nodes statuses is:

NODE_STATUSES = Enum(
'ready',
'discover',
'provisioning',
'provisioned',
'deploying',
'error',
'removing',
)

We could maybe combine 'provisioning', 'provisioned', and 'deploying' into one,
as the cluster has only the 'deployment' status for all of that now. It seems
to be enough for cluster management.

CLUSTER_STATUSES = Enum(
'new',
'deployment',
'stopped',
'operational',
'error',
'remove',
'update',
'update_error'
)

[1]
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/consts.py
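For illustration, one way the suggested collapsing could look in code (a
sketch only; the exact mapping is what is being discussed in this thread):

# Node statuses that would all surface as a single 'deployment' cluster status.
IN_PROGRESS = frozenset(['provisioning', 'provisioned', 'deploying'])


def cluster_status_from_nodes(node_statuses):
    """Collapse a list of NODE_STATUSES values into one CLUSTER_STATUSES value."""
    statuses = set(node_statuses)
    if not statuses:
        return 'new'
    if 'error' in statuses:
        return 'error'
    if statuses & IN_PROGRESS:
        return 'deployment'
    if statuses == set(['ready']):
        return 'operational'
    # e.g. nodes still in 'discover' leave the cluster looking 'new'.
    return 'new'


print(cluster_status_from_nodes(['ready', 'provisioning']))  # deployment
print(cluster_status_from_nodes(['ready', 'ready']))         # operational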



Aleksey Kasatkin


On Wed, May 27, 2015 at 4:00 PM, Oleg Gelbukh ogelb...@mirantis.com wrote:

 Excellent, nice to know that we're on the same page about this.

 Thank you!

 --
 Best regards,
 Oleg Gelbukh

 On Wed, May 27, 2015 at 12:22 PM, Roman Prykhodchenko m...@romcheg.me
 wrote:

 Oleg,

 Thanks for the feedback. I have the following as a response:

 1. This spec is just an excerpt for scoping the proposed improvement into
 the 7.0 release plan. If it gets scoped, the full specification will go
 through a standard review process, so it will be possible to discuss names
 along with the rest of the details then.

 2. It's already noted in the spec that the status is generated using an
 aggregate query like you described, so I don't propose to store it. Storing
 that data would require sophisticated algorithms to work with it and would
 also lead to more locks or race conditions in the database. So yes, it's
 going to be a method.


 - romcheg


 On 27 May 2015 at 08:19, Oleg Gelbukh ogelb...@mirantis.com wrote:

 Roman,

 This looks like a great solution to me, and I like your proposal very
 much. The status of cluster derived directly from statuses of nodes is
 exactly what I was thinking about.

 I have two notes on the proposal, and I can copy them to the etherpad if you
 think they deserve it:

 1) status name 'operational' seem a bit unclear to me, as it sounds more
 like something Monitoring should report: it implies that the actual
 OpenStack environment is operational, which might or might not be a case,
 and Fuel has no way to tell. I would really prefer if that status name was
 'Deployed' or something along those lines.

 2) I'm not sure if we need to keep the complex status of the cluster
 explicitly in 'cluster' table in the format you suggest. This information
 can be taken directly from 'nodes' table in Nailgun DB. For example,
 getting it in the second form you propose is as simple as:

 nailgun= SELECT status,count(status) FROM nodes GROUP BY status;
 discover|1
 ready|5

 What do you think about making it a method rather then an element of data
 model? Or that's exactly the complexity you want to get rid of?

 --
 Best regards,
 Oleg Gelbukh


 On Tue, May 26, 2015 at 4:16 PM, Roman Prykhodchenko m...@romcheg.me
 wrote:

 Oleg,

 Aleksander also proposed a nice solution [1], which is to have a complex
 status for the cluster. That, however, looks like a BP, so I've created an
 excerpt [2] for it and we will try to discuss it and scope it for 7.0, if
 there is a consensus.


 References:

 1.
 http://lists.openstack.org/pipermail/openstack-dev/2015-May/064670.html
 2. https://etherpad.openstack.org/p/fuel-cluster-complex-status


 - romcheg

 On 22 May 2015 at 22:32, Oleg Gelbukh ogelb...@mirantis.com wrote:

 Roman,

 I'm totally for fixing Nailgun. However, the status of environment is
 not simply function of statuses of nodes in it. Ideally, it should depend
 on whether appropriate number of nodes of certain roles are in 'ready'
 status. For the meantime, it would be enough if environment was set to
 'operational' when all nodes in it become 'ready', no matter how they were
 deployed (i.e. via Web UI or CLI).

 --
 Best regards,
 Oleg Gelbukh

 On Fri, May 22, 2015 at 5:33 PM, Roman Prykhodchenko m...@romcheg.me
 wrote:

 Hi folks!

 Recently I encountered an issue [1] where the Deploy Changes button in
 the web UI is still active when provisioning of a single node is started
 using the command-line client.
 The background for that issue is that the provisioning task does not seem
 to update the cluster status correctly, and Nailgun's API returns it as
 NEW even while some of the nodes are being provisioned.

 The reason for raising this thread in the mailing list is that
 provisioning a node is a feature for developers and basically end-users
 should not do that. What is the best solution for that: fix Nailgun to set
 the correct status, or make this provisioning feature available only for
 developers?

 1. https://bugs.launchpad.net/fuel/7.0.x/+bug/1449086


 - romcheg




Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-27 Thread Tom Fifield
Many thanks to Thomas and the other packagers for a great discussion at
the summit and this fast follow-up, explained well. Looking forward to
seeing what can be achieved!

On 27/05/15 16:14, Thomas Goirand wrote:
 Hi all,
 
 tl;dr:
 - We'd like to push distribution packaging of OpenStack on upstream
 gerrit with reviews.
 - The intention is to better share the workload, and improve the overall
 QA for packaging *and* upstream.
 - The goal is *not* to publish packages upstream
 - There's an ongoing discussion about using stackforge or openstack.
 This isn't, IMO, that important, what's important is to get started.
 - There's an ongoing discussion about using a distribution specific
 namespace, my own opinion here is that using /openstack-pkg-{deb,rpm} or
 /stackforge-pkg-{deb,rpm} would be the most convenient because of a
 number of technical reasons like the amount of Git repository.
 - Finally, let's not discuss for too long and let's do it!!! :)
 
 Longer version:
 
 Before I start: some stuff below is just my own opinion, others are just
 given facts. I'm sure the reader is smart enough to guess which is what,
 and we welcome anyone involved in the project to voice an opinion if
 he/she differs.
 
 During the Vancouver summit, operation, Canonical, Fedora and Debian
 people gathered and collectively expressed the will to maintain
 packaging artifacts within upstream OpenStack Gerrit infrastructure. We
 haven't decided all the details of the implementation, but spent the
 Friday morning together with members of the infra team (hi Paul!) trying
 to figure out what and how.
 
 A number of topics have been raised, which needs to be shared.
 
 First, we've been told that such a topic deserved a message to the dev
 list, in order to let groups who were not present at the summit. Yes,
 there was a consensus among distributions that this should happen, but
 still, it's always nice to let everyone know.
 
 So here it is. Suse people (and other distributions), you're welcome to
 join the effort.
 
 - Why doing this
 
 It's been clear to both Canonical/Ubuntu teams, and Debian (ie: myself)
 that we'd be a way more effective if we worked better together, on a
 collaborative fashion using a review process like on upstream Gerrit.
 But also, we'd like to welcome anyone, and especially the operation
 folks, to contribute and give feedback. Using Gerrit is the obvious way
 to give everyone a say on what we're implementing.
 
 As OpenStack is welcoming every day more and more projects, it's making
 even more sense to spread the workload.
 
 This is becoming easier for Ubuntu guys as Launchpad now understand not
 only BZR, but also Git.
 
 We'd start by merging all of our packages that aren't core packages
 (like all the non-OpenStack maintained dependencies, then the Oslo libs,
 then the clients). Then we'll see how we can try merging core packages.
 
 Another reason is that we believe working with the infra of OpenStack
 upstream will improve the overall quality of the packages. We want to be
 able to run a set of tests at build time, which we already do on each
 distribution, but now we want this on every proposed patch. Later on,
 when we have everything implemented and working, we may explore doing a
 package based CI on every upstream patch (though, we're far from doing
 this, so let's not discuss this right now please, this is a very long
 term goal only, and we will have a huge improvement already *before*
 this is implemented).
 
 - What it will *not* be
 ===
 We do not have the intention (yet?) to publish the resulting packages
 built on upstream infra. Yes, we will share the same Git repositories,
 and yes, the infra will need to keep a copy of all builds (for example,
 because core packages will need oslo.db to build and run unit tests).
 But we will still upload on each distributions on separate repositories.
 So published packages by the infra isn't currently discussed. We could
 get to this topic once everything is implemented, which may be nice
 (because we'd have packages following trunk), though please, refrain to
 engage in this topic right now: having the implementation done is more
 important for the moment. Let's try to stay on tracks and be constructive.
 
 - Let's keep efficiency in mind
 ===
 Over the last few years, I've been able to maintain all of OpenStack in
 Debian with little to no external contribution. Let's hope that the
 Gerrit workflow will not slow down too much the packaging work, even if
 there's an unavoidable overhead. Hopefully, we can implement some
 liberal ACL policies for the core reviewers so that the Gerrit workflow
 don't slow down anyone too much. For example we may be able to create
 new repositories very fast, and it may be possible to self-approve some
 of the most trivial patches (for things like typo in a package
 description, adding new debconf translations, and such obvious fixes, we
 shouldn't 

[openstack-dev] [oslo] Updates from the Summit

2015-05-27 Thread Davanum Srinivas
Hi Team,

Here are the etherpads from the summit[1].
Some highlights are as follows:
Oslo.messaging: Reviewed the status of the existing zmq driver and proposed a
new driver in parallel to it. Also looked at the possibility of using Pika
with RabbitMQ. Folks from Pivotal promised to help with our scenarios as well.
Oslo.rootwrap: Debated the daemon vs a new privileged service. The Nova
change to add rootwrap as a daemon is on hold pending progress on the
privsep proposal/activity.
Oslo.versionedobjects: We had a nice presentation from Dan about what
o.vo can do and a deep dive into what we could do in the next release.
Taskflow: Josh and team came up with several new features and ways to
improve usability.

We will also have several new libraries in Liberty (oslo.cache,
oslo.service, oslo.reports, futurist, automaton, etc.). We talked about
our release processes, functional testing, and deprecation strategies, and
debated a bit about how best to move to async models as well. Please
see the etherpads for detailed information.

thanks,
dims

[1] https://wiki.openstack.org/wiki/Design_Summit/Liberty/Etherpads#Oslo

-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [Glance] Liberty Priorities

2015-05-27 Thread Flavio Percoco

On 26/05/15 17:14 +, Jesse Cook wrote:

I created an etherpad with the priorities the RAX team I work on will be
focusing on, based on our talks at the summit:
https://etherpad.openstack.org/p/liberty-priorities-rax. Input, guidance,
feedback, and collaboration are not just welcome, they are encouraged and
appreciated.


May I ask what the "Image conversions" follow-up is meant to do?

I'll be working on a follow up as well and I want to make sure we
don't overlap.

Thanks,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [neutron] - dnsmasq 'dhcp-authoritative' option broke multiple DHCP servers

2015-05-27 Thread Ihar Hrachyshka

On 05/26/2015 11:53 PM, Kevin Benton wrote:
 Actually, that approach was initially taken for bug 1345947, but
 then the patch was abandoned to be replaced with a simpler
 --dhcp-authoritative approach that ended up with unexpected NAKs
 for multi-agent setups.
 
 See: https://review.openstack.org/#/c/108272/12
 
 So I had seen that patch, but it's quite different from the approach
 I was taking. That one requires a new script in the 'bin'
 directory, a new entry in setup.cfg, and a new dhcp filter entry for
 rootwrap. Is that something you would be comfortable back-porting
 all of the way back to Icehouse?

I agree that a patch without a new script and, more importantly,
without a rootwrap filter modification, is a lot better than the one I
cited above.

 
 The approach I was using was to generate the script at runtime in
 the data directory for each instance to just return the addresses
 directly. That way there are no setup changes or new entries in
 bin. Personally, I felt it was easier to understand since it simply
 generated a big echo statement, but I might be biased because I
 wrote it. :)
 

Looking at [1], I don't see that it generates a script at all. What it
does is prepopulate a lease file for dnsmasq consumption (so there
is no external shell involved). Are we talking about the same patch?
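Roughly, the prepopulation idea can be pictured like this (a sketch, not the
actual patch; the assumed lease-file columns -- expiry, MAC, IP, hostname,
client-id -- should be checked against the dnsmasq documentation):

import time


def write_leases(path, ports, lease_seconds=86400):
    """Prepopulate a dnsmasq lease file from already-known port data.

    `ports` is assumed to be an iterable of dicts with 'mac', 'ip' and
    'hostname' keys -- purely illustrative, not Neutron's real data model.
    """
    expiry = int(time.time()) + lease_seconds
    lines = ['%d %s %s %s *' % (expiry, p['mac'], p['ip'], p['hostname'])
             for p in ports]
    with open(path, 'w') as f:
        f.write('\n'.join(lines) + '\n')


write_leases('/tmp/leases-example', [
    {'mac': 'fa:16:3e:00:00:01', 'ip': '10.0.0.3', 'hostname': 'host-10-0-0-3'},
])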

I've checked [1], and it seems like the best approach, both from code
complexity and backportability perspective. I've left some comments
there.

[1]: https://review.openstack.org/#/c/185486/

Ihar



Re: [openstack-dev] [Glance] [all] Liberty summit: Updates in Glance

2015-05-27 Thread Kuvaja, Erno
 -Original Message-
 From: Flavio Percoco [mailto:fla...@redhat.com]
 Sent: 27 May 2015 00:58
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Glance] [all] Liberty summit: Updates in Glance
 
 Jesse, you beat me on this one :)
 
 On 26/05/15 13:54 -0400, Nikhil Komawar wrote:
 Thank you Jesse for your valuable input (here and at the summit) as
 well as intent to clarify the discussion.
 
 Just trying to ensure people are aware of the EXPERIMENTAL nature of
 the v3 API and the reasons behind it. Please find my responses in-line.
 However, I do want to assure you all that we will strive hard to move
 away from the EXPERIMENTAL nature and go with a rock-solid
 implementation as and when interest grows in the code base (that helps
 stabilize it).
 
 On 5/26/15 12:57 PM, Jesse Cook wrote:
 
 
 On 5/22/15, 4:28 PM, Nikhil Komawar nik.koma...@gmail.com
 wrote:
 
 
 Hi all,
 
 tl;dr; Artifacts IS staying in Glance.
 
  1. We had a nice discussion at the contributors' meet-up at the
 Vancouver summit this morning. After weighing in many 
  possibilities
 and evolution of the Glance program, we have decided to go ahead
 with the Artifacts implementation within Glance program under the
 EXPERIMENTAL v3 API.
 
 Want to clarify a bit here. My understanding is: s/Artifacts/v3 API/g. 
  That
 is to say, Artifacts is the technical implementation of the v3 API. This
 also means the v3 API is an objects API vs just an images API.
 
 
 Generic data assets' API would be a nice term along the lines of the
 mission statement. Artifacts seemed fitting as that was the focus of
 discussion at various sessions.
 
 Regardless of how we call it, I do appreciate the clarity on the fact that
 Artifacts - data assests - is just the technical implementation of what will 
 be
 Glance's API v3. It's an important distinction to avoid sending the wrong
 message on what it's going to be done there.
 
 
 We also had some hallway talk about putting the v1 and v2 APIs on top of
 the v3 API. This forces faster adoption, verifies supportability via v1 
  and
 v2 tests, increases supportability of v1 and v2 APIs, and pushes out the
 need to kill v1 API.
 
 Let's discuss more as time and development progresses on that
 possibility. v3 API should stay EXPERIMENTAL for now as that would help
 us understand use-cases across programs as it gets adopted by various
 code-bases. Putting v1/v2 on top of v3 would be tricky for now as we
 may have breaking changes with code being relatively-less stable due to
 narrow review domain.
 
 I actually think we'd benefit more from having V2 on top of V3 than not doing
 it. I'd probably advocate to make this M material rather than L but I think 
 it'd
 be good.

We perhaps would, but that would realistically push v2 adoption across the 
projects to somewhere around the O release, just looking at how long it took 
the v2 code base to mature enough that we're seriously talking about moving 
to use it in production.
 
 I think regardless of what we do, I'd like to kill v1 as it has a sharing 
 model
 that is not secure.

The above would postpone this one to somewhere around Q-R (which is, btw, not 
so far from U anymore).

The more I think about this, the more convinced I am about focusing on the 
move to v2 for our consumers and deprecating v1; after that we can start 
talking about moving v2 on top of the v3 codebase if possible, not the other 
way around in the hope that it would speed up v3 adoption.

- Erno
 
 Flavio
 
  1.
  2. The effort would primarily be conducted as a sub-team-like
 structure within the program and the co-coordinators and drivers 
  of
 the necessary Artifacts features would be given core-reviewer
 status temporarily with an informal agreement to merge code that 
  is
 only related to Artifacts.
  3. The entire Glance team would give reviews as time and priorities
 permit. The approval (+A/+WorkFlow) of any code within the
 program
 would need to come from core-reviewers who are not temporarily
 authorized. The list of such individuals and updated time-line
 would be documented in phases during the course of Liberty cycle.
  4. We will continue to evaluate & update the governance, maturity of
 the code and future plans for the v1, v2 and v3 Glance APIs as time
 progresses. However, for now we are aiming to integrate all of
 Glance (specifically Images) as Artifacts in the v3 API.
 Glance (specifically Images) as Artifacts in the v3 API.
 
 
 As I understand it, that is to say that v3 requests in the first
 “micro-version” that specify the object type as image would get a not
 implemented or similar error. The next next “micro-version” would likely
 contain the support for images along with possibly implementing the v1
 and
 v2 

Re: [openstack-dev] [Fuel] Setting cluster status when provisioning a node

2015-05-27 Thread Oleg Gelbukh
Excellent, nice to know that we're on the same page about this.

Thank you!

--
Best regards,
Oleg Gelbukh

On Wed, May 27, 2015 at 12:22 PM, Roman Prykhodchenko m...@romcheg.me wrote:

 Oleg,

 Thanks for the feedback. I have the following as a response:

 1. This spec is just an excerpt for scoping the proposed improvement into
 the 7.0 release plan. If it gets scoped, the full specification will go
 through a standard review process, so it will be possible to discuss names
 along with the rest of the details then.

 2. It's already noted in the spec that the status is generated using an
 aggregate query like you described, so I don't propose to store it. Storing
 that data would require sophisticated algorithms to work with it and would
 also lead to more locks or race conditions in the database. So yes, it's
 going to be a method.


 - romcheg


 On 27 May 2015 at 08:19, Oleg Gelbukh ogelb...@mirantis.com wrote:

 Roman,

 This looks like a great solution to me, and I like your proposal very
 much. The status of cluster derived directly from statuses of nodes is
 exactly what I was thinking about.

 I have two notes on the proposal, and I can copy them to the etherpad if you
 think they deserve it:

 1) status name 'operational' seem a bit unclear to me, as it sounds more
 like something Monitoring should report: it implies that the actual
 OpenStack environment is operational, which might or might not be a case,
 and Fuel has no way to tell. I would really prefer if that status name was
 'Deployed' or something along those lines.

 2) I'm not sure if we need to keep the complex status of the cluster
 explicitly in 'cluster' table in the format you suggest. This information
 can be taken directly from 'nodes' table in Nailgun DB. For example,
 getting it in the second form you propose is as simple as:

 nailgun= SELECT status,count(status) FROM nodes GROUP BY status;
 discover|1
 ready|5

 What do you think about making it a method rather then an element of data
 model? Or that's exactly the complexity you want to get rid of?

 --
 Best regards,
 Oleg Gelbukh


 On Tue, May 26, 2015 at 4:16 PM, Roman Prykhodchenko m...@romcheg.me
 wrote:

 Oleg,

 Aleksander also proposed a nice solution [1], which is to have a complex
 status for the cluster. That, however, looks like a BP, so I've created an
 excerpt [2] for it and we will try to discuss it and scope it for 7.0, if
 there is a consensus.


 References:

 1.
 http://lists.openstack.org/pipermail/openstack-dev/2015-May/064670.html
 2. https://etherpad.openstack.org/p/fuel-cluster-complex-status


 - romcheg

 On 22 May 2015 at 22:32, Oleg Gelbukh ogelb...@mirantis.com wrote:

 Roman,

 I'm totally for fixing Nailgun. However, the status of environment is not
 simply function of statuses of nodes in it. Ideally, it should depend on
 whether appropriate number of nodes of certain roles are in 'ready' status.
 For the meantime, it would be enough if environment was set to
 'operational' when all nodes in it become 'ready', no matter how they were
 deployed (i.e. via Web UI or CLI).

 --
 Best regards,
 Oleg Gelbukh

 On Fri, May 22, 2015 at 5:33 PM, Roman Prykhodchenko m...@romcheg.me
 wrote:

 Hi folks!

 Recently I encountered an issue [1]: the Deploy Changes button in
 the web UI is still active when provisioning of a single node is started
 using the command line client.
 The background for that issue is that the provisioning task does not
 seem to update the cluster status correctly, and Nailgun’s API returns it as
 NEW even while some of the nodes are being provisioned.

 The reason for raising this thread in the mailing list is that
 provisioning a node is a feature for developers and basically end-users
 should not do that. What is the best solution for that: fix Nailgun to set
 the correct status, or make this provisioning feature available only for
 developers?

 1. https://bugs.launchpad.net/fuel/7.0.x/+bug/1449086


 - romcheg



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org
 ?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 

[openstack-dev] [Manila] Expected Manila behavior for creation of share from snapshot

2015-05-27 Thread Valeriy Ponomaryov
Hi everyone,

At the last IRC meeting
http://eavesdrop.openstack.org/meetings/manila/2015/manila.2015-05-14-15.00.log.html
the following question was raised:

Should Manila allow us to create shares from snapshots with different
share networks, or not?

What do users/admins expect in that case?

For the moment, Manila restricts creation of a share from a snapshot to the
share network of the parent share.

From the user's point of view, they may want to copy a share and use the copy
in a different network, which is a valid case.

From the developer's point of view, they will be forced to rework the share
server creation logic for the driver they maintain.

Also, how many back-ends are able to support such a feature?

Regards,
Valeriy Ponomaryov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstackclient] Use and practices of --format shell for defining individual variables

2015-05-27 Thread Ronald Bradford
Thanks Doug,

I knew there would be a more definitive and simple use.
eval is that way!

Ronald


Ronald Bradford

Web Site: http://ronaldbradford.com
LinkedIn:  http://www.linkedin.com/in/ronaldbradford
Twitter:@RonaldBradford http://twitter.com/ronaldbradford
Skype: RonaldBradford
GTalk: Ronald.Bradford
IRC: rbradfor


On Tue, May 26, 2015 at 1:19 PM, Doug Hellmann d...@doughellmann.com
wrote:

 Excerpts from Ronald Bradford's message of 2015-05-26 11:18:09 -0400:
  Hi list,
 
  I came across the following neutron client specific syntax and decided to
  see if I could reproduce using the openstack client and it's flexible
  formatting capabilities
 
 
  $ NIC_ID1=$(neutron net-show public | awk '/ id /{print $4}')
 
  $  echo $NIC_ID1
  210d976e-16a3-42dc-ac31-f01810dbd297
 
  I can get similar syntax (unfortunately lowercase variable name only)
 with:
  NOTE: It may be nice to be able to pass an option to UPPERCASE all shell
  variables names.
 
  $ openstack network show public -c id --format shell --prefix nic_
  nic_id=210d976e-16a3-42dc-ac31-f01810dbd297
 
  However to use this I effectively have to place in a file and source that
  file to expose this variable to my current running shell.
  Reproducing the same syntax does not work obviously.

 It's actually meant to be used with an eval statement:

  $ eval $(openstack network show public -c id --format shell --prefix nic_)
  $ echo $nic_id

 Doug

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] correct API for getting image metadata for an instance ?

2015-05-27 Thread Daniel P. Berrange
As part of the work to object-ify the image metadata dicts, I'm looking
at the current way the libvirt driver fetches image metadata for an
instance, in cases where the compute manager hasn't already passed it
into the virt driver API. I see 2 methods that libvirt uses to get the
image metadata

 - nova.utils.get_image_from_system_metadata(instance.system_metadata)

 It takes the system metadata stored against the instance
 and turns it into image  metadata.


- nova.compute.utils.get_image_metadata(context,
 image_api,
 instance.image_ref,
 instance)

 This tries to get metadata from the image api and turns
 this into system metadata

 It then gets system metadata from the instance and merges
 it from the data from the image

 It then calls nova.utils.get_image_from_system_metadata()

 IIUC, any changes against the image will override what
 is stored against the instance



IIUC, when an instance is booted, the image metadata should be
saved against the instance. So I'm wondering why we need to have
code in compute.utils that merges back in the image metadata each
time ?

Is this intentional so that we pull in latest changes from the
image, to override what's previously saved on the instance ? If
so, then it seems that we should have been consistent in using
the compute_utils get_image_metadata() API everywhere.

It seems wrong though to pull in the latest metadata from the
image. The libvirt driver makes various decisions at boot time
about how to configure the guest based on the metadata. When we
later do changes to that guest (snapshot, hotplug, etc, etc)
we *must* use exactly the same image metadata we had at boot
time, otherwise decisions we make will be inconsistent with how
the guest is currently configured.

eg if you set  hw_disk_bus=virtio at boot time, and then later
change the image to use hw_disk_bus=scsi, and then try to hotplug
a new drive on the guest, we *must* operate wrt hw_disk_bus=virtio
because the guest will not have any scsi bus present.

This says to me we should /never/ use the compute_utils
get_image_metadata() API once the guest is running, and so we
should convert libvirt to use nova.utils.get_image_from_system_metadata()
exclusively.
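
To make that concrete, here is roughly what the lookup would reduce to in
the driver (a minimal sketch using only the helper named above, not the
actual libvirt driver code):

    from nova import utils

    def image_meta_for_guest(instance):
        # Always derive image metadata from what was captured on the
        # instance at boot time, rather than re-fetching (possibly
        # changed) properties from the image service.
        return utils.get_image_from_system_metadata(instance.system_metadata)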

It also makes me wonder how nova/compute/manager.py is obtaining image
meta in cases where it passes it into the API and whether that needs
changing at all.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: ERROR (BadRequest): Block Device Mapping is Invalid: failed to get volume

2015-05-27 Thread Nathan Stratton
Upgraded to Juno and now can't nova boot:
[root@openstack nova(keystone)]# nova boot --flavor 3 --boot-volume
a9f8f997-bc87-4f39-9f9f-0b41169a0256 radius --nic
net-id=b2324538-26ee-4d06-bccd-0ef2f6778f6b,v4-fixed-ip=10.71.0.161
ERROR (BadRequest): Block Device Mapping is Invalid: failed to get volume
a9f8f997-bc87-4f39-9f9f-0b41169a0256. (HTTP 400) (Request-ID:
req-cdcef902-c033-46c1-a211-2df8599b7583)


I don't see anything in cinder logs, but nova-api.log:
 2015-05-21 14:47:19.904 3132 DEBUG keystoneclient.session [-] REQ: curl -i
-X GET http://127.0.0.1:35357/v2.0/tokens/898f85bc891c48f296da114288c023db
-H User-Agent: python-keystoneclient -H Accept: application/json -H
X-Auth-Token: TOKEN_REDACTED _http_log_request
/usr/lib/python2.7/site-packages/keystoneclient/session.py:155
2015-05-21 14:47:19.917 3132 DEBUG keystoneclient.session [-] RESP: [200]
{'date': 'Thu, 21 May 2015 18:47:19 GMT', 'content-type':
'application/json', 'content-length': '3080', 'vary': 'X-Auth-Token'}
RESP BODY: {access: {token: {issued_at: 2015-05-21T18:47:19.889187,
expires: 2015-05-21T19:47:19Z, id:
898f85bc891c48f296da114288c023db, tenant: {enabled: true, id:
0ebcdedac0a3480ca81050bfedd97cf1,
 name: BroadSoft Labs, description: BroadSoft Labs Systems},
audit_ids: [myz_O6fxQqOlLXFgwZ4SZg]}, serviceCatalog:
[{endpoints_links: [], endpoints: [{adminURL: 
http://10.71.0.218:8774/v2/0ebcdedac0a34
80ca81050bfedd97cf1, region: RegionOne, publicURL: 
http://10.71.0.218:8774/v2/0ebcdedac0a3480ca81050bfedd97cf1;, id:
965990191c7d4fcdb95575dd2e504233, internalURL: 
http://10.71.0.218:8774/v2/0ebcdedac0a3480c
a81050bfedd97cf1}], type: compute, name: nova},
{endpoints_links: [], endpoints: [{adminURL: 
http://10.71.0.218:8776/v2/0ebcdedac0a3480ca81050bfedd97cf1;, region:
RegionOne, publicURL: http://10.71.0
.218:8776/v2/0ebcdedac0a3480ca81050bfedd97cf1, id:
06e9d088a7d2402b88f2c517dbb817db, internalURL: 
http://10.71.0.218:8776/v2/0ebcdedac0a3480ca81050bfedd97cf1}], type:
volumev2, name: cinder_v2}, {endpoint
s_links: [], endpoints: [{adminURL: http://10.71.0.218:8774/v3;,
region: RegionOne, publicURL: http://10.71.0.218:8774/v3;, id:
09fe287f2f8a481489902d46c1d53118, internalURL: 
http://10.71.0.218:8774/v3;
}], type: computev3, name: novav3}, {endpoints_links: [],
endpoints: [{adminURL: http://10.71.0.218:9292;, region:
RegionOne, publicURL: http://10.71.0.218:9292;, id:
1dff3e40f9bf430f9b79654b8e59c1
f1, internalURL: http://10.71.0.218:9292}], type: image, name:
glance}, {endpoints_links: [], endpoints: [{adminURL: 
http://10.71.0.218:8777;, region: RegionOne, publicURL: 
http://10.71.0.218:877
7, id: 98ff5aa8b3ec4df0a6cc2c876dd0d7dc, internalURL: 
http://10.71.0.218:8777}], type: metering, name: ceilometer},
{endpoints_links: [], endpoints: [{adminURL: 
http://10.71.0.218:8776/v1/0ebcdedac0
a3480ca81050bfedd97cf1, region: RegionOne, publicURL: 
http://10.71.0.218:8776/v1/0ebcdedac0a3480ca81050bfedd97cf1;, id:
1dfd84fe0de14c3c81efdfd7ff8a1244, internalURL: 
http://10.71.0.218:8776/v1/0ebcdedac0a34
80ca81050bfedd97cf1}], type: volume, name: cinder},
{endpoints_links: [], endpoints: [{adminURL: 
http://10.71.0.218:8773/services/Admin;, region: RegionOne,
publicURL: http://10.71.0.218:8773/service
s/Cloud, id: 498579a152e54ad58976a779e0ba5724, internalURL: 
http://10.71.0.218:8773/services/Cloud}], type: ec2, name:
nova_ec2}, {endpoints_links: [], endpoints: [{adminURL: 
http://10.71.0.218:3535
7/v2.0, region: RegionOne, publicURL: http://10.71.0.218:5000/v2.0;,
id: 0ef37dfe1c78489ba1fb0599b0956d87, internalURL: 
http://10.71.0.218:5000/v2.0}], type: identity, name: keystone}],
user: {us
ername: nathan, roles_links: [], id:
b4397deb6a884a8c8e70fbc255ce6d80, roles: [{name: admin}], name:
nathan}, metadata: {is_admin: 0, roles:
[d7ed50bf853340e0980d1cbcae8c14ca]}}}
 _http_log_response
/usr/lib/python2.7/site-packages/keystoneclient/session.py:182
2015-05-21 14:47:19.919 3132 DEBUG nova.api.openstack.wsgi
[req-05325a86-aefc-4729-b7bc-ae7caa0c76bd None] Calling method 'bound
method Controller.show of nova.api.openstack.compute.flavors.Controller
object at 0x3e20c90
' _process_stack
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:937
2015-05-21 14:47:19.938 3132 INFO nova.osapi_compute.wsgi.server
[req-05325a86-aefc-4729-b7bc-ae7caa0c76bd None] 10.71.0.218 GET
/v2/0ebcdedac0a3480ca81050bfedd97cf1/flavors/3 HTTP/1.1 status: 200 len:
597 time: 0.034395
9
2015-05-21 14:47:19.943 3187 DEBUG keystoneclient.session [-] REQ: curl -i
-X GET http://127.0.0.1:35357/v2.0/tokens/898f85bc891c48f296da114288c023db
-H User-Agent: python-keystoneclient -H Accept: application/json -H
X-Auth-Token: TOKEN_REDACTED _http_log_request
/usr/lib/python2.7/site-packages/keystoneclient/session.py:155
2015-05-21 14:47:19.955 3187 DEBUG keystoneclient.session [-] RESP: [200]
{'date': 'Thu, 21 May 2015 18:47:19 GMT', 'content-type':
'application/json', 'content-length': '3080', 'vary': 'X-Auth-Token'}
RESP BODY: {access: {token: {issued_at: 

Re: [openstack-dev] [ceilometer][all] Scalable metering

2015-05-27 Thread Joe Gordon
On Tue, May 26, 2015 at 6:03 PM, gordon chung g...@live.ca wrote:

 hi Tim,

 we're still doing some investigation but we're tracking/discussing part of
 the polling load issue here: https://review.openstack.org/#/c/185084/

 we're open to any ideas -- especially from nova api et al experts.


So I agree doing lots of naive polling can lead to issues on even the
fastest of APIs, but are there any bugs about this that were opened against
nova? At the very least nova should investigate why the specific calls are
so slow and see what we can do to make them at least a little faster
and lighter weight.





 cheers,
 gord


 
  From: tim.b...@cern.ch
  To: openstack-dev@lists.openstack.org
  Date: Tue, 26 May 2015 17:45:37 +
  Subject: [openstack-dev] [ceilometer][all] Scalable metering
 
 
 
 
  We had a good discussion at the summit regarding ceilometer scaling.
  Julien has written up some of the items discussed in
 
 https://julien.danjou.info/blog/2015/openstack-summit-liberty-vancouver-ceilometer-gnocchi
  and there is work ongoing in the storage area for scalable storage of
  ceilometer data using gnocchi.
 
 
 
  I’d like community input on the other scalability concern raised during
  the event, namely the load on other services when ceilometer is
  enabled. From the blog, “Ceilometer hits various endpoints in OpenStack
  that are poorly designed, and hitting those endpoints of Nova or other
  components triggers a lot of load on the platform.”.
 
 
 
  I would welcome suggestions on how to identify the potential changes in
  the OpenStack projects and improve the operator experience when
  deploying metering.
 
 
 
  Tim
 
 
 
 
 
 
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Using depends-on for patches which require an approved spec

2015-05-27 Thread Devananda van der Veen
I think this will help because it separates the judgement of "is this code
good enough to land" from the project- and release-coordination question of
"should this code land now".

I've been floating the idea of separating +2 and +A powers for the same
purpose: free up many of the technical reviewers from having to *also*
think about release schedules and project priorities, since larger projects
are tending towards having a smaller group of project-drivers who handle
the latter question.

It looks like Depends-On is one way to address that, and it's fairly
lightweight; since we can edit the review message, core reviewers can add that
line if they feel it's needed, without needing to -1 the code in question.
I'm open to trying it in the Ironic project.

-Devananda

On Tue, May 26, 2015 at 8:45 AM Daniel P. Berrange berra...@redhat.com
wrote:

 On Fri, May 22, 2015 at 02:57:23PM -0700, Michael Still wrote:
  Hey,
 
  it would be cool if devs posting changes for nova which depend on us
  approving their spec could use Depends-On to make sure their code
  doesn't land until the spec does.

 Does it actually bring any benefit ?  Any change for which there is
 a spec is already supposed to be tagged with 'Blueprint: foo-bar-wiz'
 and nova core devs are supposed to check the blueprint is approved
 before +A'ing it.  So also adding a Depends-on just feels redundant
 to me, and so is one more hurdle for contributors to remember to
 add. If we're concerned people forget the Blueprint tag, or forget
 to check blueprint approval, then we'll just have same problem with
 depends-on - people will forget to add it, and cores will forget
 to check the dependant change. So this just feels like extra rules
 for no gain and extra pain.

 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
 :|
 |: http://libvirt.org  -o- http://virt-manager.org
 :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
 :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
 :|

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Using depends-on for patches which require an approved spec

2015-05-27 Thread Joe Gordon
On Tue, May 26, 2015 at 8:45 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 On Fri, May 22, 2015 at 02:57:23PM -0700, Michael Still wrote:
  Hey,
 
  it would be cool if devs posting changes for nova which depend on us
  approving their spec could use Depends-On to make sure their code
  doesn't land until the spec does.

 Does it actually bring any benefit ?  Any change for which there is
 a spec is already supposed to be tagged with 'Blueprint: foo-bar-wiz'
 and nova core devs are supposed to check the blueprint is approved
 before +A'ing it.  So also adding a Depends-on just feels redundant
 to me, and so is one more hurdle for contributors to remember to
 add. If we're concerned people forget the Blueprint tag, or forget
 to check blueprint approval, then we'll just have same problem with
 depends-on - people will forget to add it, and cores will forget
 to check the dependant change. So this just feels like extra rules
 for no gain and extra pain.


I think it does have a benefit. Giving a spec implementation patch a -2
commonly signals to reviewers not to review that patch at all (a -2 looks
scary). If there were a Depends-On instead, no scary -2 would be needed, and we
also wouldn't need to hunt down the -2er and ask them to remove it (which can
mean a delay due to timezones). Anything that reduces the number of procedural
-2s we need is a good thing IMHO. But that doesn't mean we should require folks
to do this; we can try it out on a few patches and see how it goes.



 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
 :|
 |: http://libvirt.org  -o- http://virt-manager.org
 :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
 :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
 :|

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar][all] Zaqar will stay... Lots of work ahead

2015-05-27 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2015-05-27 10:40:48 -0700:
 On 27/05/15 12:42, Clint Byrum wrote:
 
  == Crazy idea section ==
 
  One thing I never had a chance to discuss with any of the Zaqar devs that
  I would find interesting is an email-only backend for Zaqar. Basically
  make Zaqar an HTTP-to-email gateway. There are quite a few hyper-scale
  options for SMTP and IMAP, and they're inherently multi-tenant, so I'd
  find it interesting to see if the full Zaqar API could be mapped onto
  that. This would probably be more comfortable to scale for some deployers
  than Redis or MongoDB, and might have the nice side-effect that a deployer
  could expose IMAP IDLE for efficient end-user subscription,
 
 Can you guarantee delivery end-to-end (and still get the scaling 
 benefits)? Because AIUI SMTP is only best effort, and that makes this 
 idea a non-starter IMHO.
 

So yes and no. If you are going to set your Zaqar backend up to forward
messages across the internet, then no, of course you cannot guarantee
delivery. That is not what I am suggesting as the default configuration.

However, in a closed system like a Zaqar backend SMTP would be, you
have control over things like retries and bandwidth, so you can at
least guarantee that if the intended destination is available, the
message will make it there. If you have 100 SMTP servers enqueing for
1000 destination queue servers, all with 1Gbit network between them,
you'd simply set your retry interval much lower than for the internet,
and allow infinite or very very high retries before giving up on messages.

The scaling comes from the asynchronous queueing. One can have
guaranteed delivery with asynchronous queueing.
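
Purely to illustrate the closed-system retry idea (not a proposed
implementation), the delivery side could be as dumb as:

    import smtplib
    import time

    def deliver(msg, host, retry_interval=1.0):
        # msg is an email.message.EmailMessage. Keep retrying until the
        # destination queue server accepts it, with a retry interval far
        # shorter than typical internet MTA defaults.
        while True:
            try:
                with smtplib.SMTP(host) as conn:
                    conn.send_message(msg)
                    return
            except (smtplib.SMTPException, OSError):
                time.sleep(retry_interval)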

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Question to driver maintainers

2015-05-27 Thread Sturdevant, Mark
Hi Igor,

The 3PAR can extend a share without loss of connectivity.

Regards,
markstur



From: yang, xing [mailto:xing.y...@emc.com]
Sent: Friday, May 22, 2015 3:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Manila] Question to driver maintainers

Hi Igor,

We can support extending a share without loss of connectivity, but we don’t 
support shrinking.

Thanks,
Xing


From: Jason Bishop [mailto:jason.bis...@gmail.com]
Sent: Thursday, May 21, 2015 8:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Manila] Question to driver maintainers

Hi Igor, the Hitachi SOP driver can support extending a share online without
disruption.

cheers
jason


On Mon, May 18, 2015 at 1:15 AM, Igor Malinovskiy 
imalinovs...@mirantis.commailto:imalinovs...@mirantis.com wrote:

Hello, everyone!

My letter is mainly addressed to driver maintainers, but could be interesting 
to everybody.


As you probably know, at the Kilo midcycle meetup we discussed share resize
functionality (extend and shrink), and I have already implemented the 'extend' API
in the Generic driver (https://review.openstack.org/182383/). After implementation 
review we

noticed that some backends are able to resize a share without causing 
disruptions, but others might only be able to do it disruptively (Generic 
driver case).


So I want to ask driver maintainers here:

Will your driver be able to do share extending without loss of connectivity?


Depending on your answers, we will handle this situation differently.


Best regards,

Igor Malinovskiy (IRC: u_glide)
Manila Core Team

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.vmware release 0.13.0 (liberty)

2015-05-27 Thread Davanum Srinivas
Joe,

Given that the code once lived in nova, and that the team has spent
quite a bit of time turning it into a library which at last count was
adopted by at least 6 projects, I'd like to give the team some credit.

openstack/ceilometer/test-requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
openstack/cinder/requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
openstack/congress/requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
openstack/glance/requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
openstack/glance_store/test-requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
openstack/nova/test-requirements.txt:oslo.vmware>=0.11.1,!=0.13.0  # Apache-2.0

Shit happens! The team that works on oslo.vmware overlaps with nova and
others too. There were several solutions that came up quickly as well. We
can't just say that nothing should ever break, or that we should never use
0.x.x; then we can never make progress. This is not going to get any better
with the big tent coming up, either. All that matters is how quickly we
can recover and move on with our collective sanity intact. Let's work
on that in addition as well. I'd also like to give some more say in
the specific discussion to the folks who are actually contributing to and
working on the code.

Anyway, with the global-requirements block of 0.13.0, nova should
unclog and we'll try to get something out soon in 0.13.1 to keep
@haypo's python34 effort going as well.

+1000 to release fewer unexpectedly incompatible libraries and
continue working on improving how we handle dependencies in general.
i'd like to hear specific things we can do that we are not doing both
for libraries under our collective care as well as things we use from
the general python community.

thanks,
dims

On Wed, May 27, 2015 at 1:52 PM, Joe Gordon joe.gord...@gmail.com wrote:


 On Wed, May 27, 2015 at 12:54 AM, Gary Kotton gkot...@vmware.com wrote:

 Hi,
 I prefer the patched posted by Sabari. The patch has two changes:

 It fixes unit tests
 In the event that an instance spawn fails, it catches an exception to
 warn the admin that the guestId may be invalid. The only degradation may be
 that the warning will no longer be there. I think that the admin can get
 this information from the logged exception too.



 So this breakage takes us into some strange waters.

 oslo.vmware is at version 0.x.x which according to semver [0] means Major
 version zero (0.y.z) is for initial development. Anything may change at any
 time. The public API should not be considered stable. If that is accurate,
 then nova should not be using oslo.vmware, since we shouldn't use an
 unstable library in production. If we are treating the API as stable then
 semver says we need to rev the major version (MAJOR version when you make
 incompatible API changes).

 What I am trying to say is, I don't know how you can say the nova unit tests
 are 'wrong.' either nova using oslo.vmware is 'wrong' or oslo.vmware
 breaking the API is 'wrong'.

 With OpenStack being so large and having so many dependencies (many of them
 openstack owned), we should focus on making sure we release fewer
 unexpectedly incompatible libraries and continue working on improving how we
 handle dependencies in general (lifeless has a big arch he is working on
 here AFAIK). So I am not in favor of the nova unit test change as a fix
 here.


 [0] http://semver.org/


 Thanks
 Gary

 From: Sabari Murugesan sabari.b...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Wednesday, May 27, 2015 at 6:20 AM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] oslo.vmware release 0.13.0 (liberty)

 Matt

 I posted a patch https://review.openstack.org/#/c/185830/1 to fix the nova
 tests and make it compatible with the oslo.vmware 0.13.0 release. I am fine
 with the revert and g-r blacklist as oslo.vmware broke the semver but we can
 also consider this patch as an option.

 Thanks
 Sabari



 On Tue, May 26, 2015 at 2:53 PM, Davanum Srinivas dava...@gmail.com
 wrote:

 Vipin, Gary,

 Can you please accept the revert or figure out the best way to handle
 this?

 thanks,
 dims

 On Tue, May 26, 2015 at 5:41 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
 
 
  On 5/26/2015 4:19 PM, Matt Riedemann wrote:
 
 
 
  On 5/26/2015 9:53 AM, Davanum Srinivas wrote:
 
  We are gleeful to announce the release of:
 
  oslo.vmware 0.13.0: Oslo VMware library
 
  With source available at:
 
   http://git.openstack.org/cgit/openstack/oslo.vmware
 
  For more details, please see the git log history below and:
 
   http://launchpad.net/oslo.vmware/+milestone/0.13.0
 
  Please report issues through launchpad:
 
   http://bugs.launchpad.net/oslo.vmware
 
  Changes in oslo.vmware 0.12.0..0.13.0
  -
 
  5df9daa Add ToolsUnavailable exception
  286cb9e Add support for dynamicProperty
  7758123 Remove support for Python 3.3
  11e7d71 Updated from global 

Re: [openstack-dev] oslo.vmware release 0.13.0 (liberty)

2015-05-27 Thread Joe Gordon
On Wed, May 27, 2015 at 11:23 AM, Davanum Srinivas dava...@gmail.com
wrote:

 Joe,

 Given that the code once lived in nova and the team across has spent
 quite a bit of time to turn it into a library which at last count was
 adopted by 6 projects at least. i'd like to give the team some credit.


Agreed, they have done a great job. I was just pointing out a lot of
OpenStack libs don't use semver's 0.x.x clause much.



 openstack/ceilometer/test-requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
 openstack/cinder/requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
 openstack/congress/requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
 openstack/glance/requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
 openstack/glance_store/test-requirements.txt:oslo.vmware>=0.11.1  # Apache-2.0
 openstack/nova/test-requirements.txt:oslo.vmware>=0.11.1,!=0.13.0  # Apache-2.0

 Shit happens! the team that works on oslo.vmware overlaps nova too and
 others. There were several solutions that came up quickly as well. we
 can't just say nothing should ever break or we should not use 0.x.x
 then we can never make progress. This is not going to get any better
 either with the big tent coming up. All that matters is how quickly we
 can recover and move on with our collective sanity intact. Let's work
 on that in addition as well. I'd also want to give some more say in
 the actual folks who are contributing and working on the code as well
 in the specific discussion.

 Anyway, with the global-requirements block of 0.13.0, nova should
 unclog and we'll try to get something out soon in 0.13.1 to keep
 @haypo's python34 effort going as well.


Thanks! I think it would be good to move to 1.x.x soon to show that the API
is stable. But then again, we have a lot of other libraries that are still
at 0.x.x, so maybe we should look at that more holistically.



 +1000 to release fewer unexpectedly incompatible libraries and
 continue working on improving how we handle dependencies in general.
 i'd like to hear specific things we can do that we are not doing both
 for libraries under our collective care as well as things we use from
 the general python community.


For openstack libraries that have a fairly limited number of consumers, we
can test the source of the lib against target unit test suites, in addition to
a devstack run. So oslo.vmware would have a job running the source of
oslo.vmware against nova's py27 unit tests.

As for the general case, lifeless is cooking up a plan.



 thanks,
 dims

 On Wed, May 27, 2015 at 1:52 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
  On Wed, May 27, 2015 at 12:54 AM, Gary Kotton gkot...@vmware.com
 wrote:
 
  Hi,
  I prefer the patched posted by Sabari. The patch has two changes:
 
  It fixes unit tests
  In the event that an instance spawn fails, it catches an exception to
  warn the admin that the guestId may be invalid. The only degradation
 may be
  that the warning will no longer be there. I think that the admin can get
  this information from the logged exception too.
 
 
 
  So this breakage takes us into some strange waters.
 
  oslo.vmware is at version 0.x.x which according to semver [0] means
 Major
  version zero (0.y.z) is for initial development. Anything may change at
 any
  time. The public API should not be considered stable. If that is
 accurate,
  then nova should not be using oslo.vmware, since we shouldn't use an
  unstable library in production. If we are treating the API as stable then
  semver says we need to rev the major version (MAJOR version when you
 make
  incompatible API changes).
 
  What I am trying to say is, I don't know how you can say the nova unit
 tests
  are 'wrong.' either nova using oslo.vmware is 'wrong' or oslo.vmware
  breaking the API is 'wrong'.
 
  With OpenStack being so large and having so many dependencies (many of
 them
  openstack owned), we should focus on making sure we release fewer
  unexpectedly incompatible libraries and continue working on improving
 how we
  handle dependencies in general (lifeless has a big arch he is working on
  here AFAIK). So I am not in favor of the nova unit test change as a fix
  here.
 
 
  [0] http://semver.org/
 
 
  Thanks
  Gary
 
  From: Sabari Murugesan sabari.b...@gmail.com
  Reply-To: OpenStack List openstack-dev@lists.openstack.org
  Date: Wednesday, May 27, 2015 at 6:20 AM
  To: OpenStack List openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] oslo.vmware release 0.13.0 (liberty)
 
  Matt
 
  I posted a patch https://review.openstack.org/#/c/185830/1 to fix the
 nova
  tests and make it compatible with the oslo.vmware 0.13.0 release. I am
 fine
  with the revert and g-r blacklist as oslo.vmware broke the semver but
 we can
  also consider this patch as an option.
 
  Thanks
  Sabari
 
 
 
  On Tue, May 26, 2015 at 2:53 PM, Davanum Srinivas dava...@gmail.com
  wrote:
 
  Vipin, Gary,
 
  Can you please accept the revert or figure out the best way to handle
  this?
 
  thanks,

[openstack-dev] [release][oslo][stable] too 0.12.1 (juno)

2015-05-27 Thread Doug Hellmann
We are eager to announce the release of:

tooz 0.12.1: Coordination library for distributed systems.

This release is part of the juno stable release series.

With source available at:

http://git.openstack.org/cgit/openstack/tooz

For more details, please see the git log history below and:

http://launchpad.net/python-tooz/+milestone/0.12.1

Please report issues through launchpad:

http://bugs.launchpad.net/python-tooz/

Changes in tooz 0.12..0.12.1


bd86d31 Cap kazoo and zake from stable/juno global-requirements

Diffstat (except docs and test files)
-

.gitreview   | 1 +
requirements-py3.txt | 4 ++--
requirements.txt | 4 ++--
3 files changed, 5 insertions(+), 4 deletions(-)


Requirements updates


diff --git a/requirements-py3.txt b/requirements-py3.txt
index 5b14e3c..bee3396 100644
--- a/requirements-py3.txt
+++ b/requirements-py3.txt
@@ -6 +6 @@ iso8601
-kazoo>=1.3.1
+kazoo>=1.3.1,<=2.0
@@ -8 +8 @@ pymemcache>=1.2
-zake>=0.1.6
+zake>=0.1,<=0.1.7 # Apache-2.0
diff --git a/requirements.txt b/requirements.txt
index 5f7325e..37176df 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6 +6 @@ iso8601>=0.1.9
-kazoo>=1.3.1
+kazoo>=1.3.1,<=2.0
@@ -8 +8 @@ pymemcache>=1.2
-zake>=0.1
+zake>=0.1,<=0.1.7 # Apache-2.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][all] Scalable metering

2015-05-27 Thread gordon chung

 
 So I agree doing lots of naive polling can lead to issues on even the 
 fastest of APIs, but are there any bugs about this that were opened 
 against nova? At the very least nova should investigate why the 
 specific calls are so slow and see if we what we can do to make them at 
 least a little faster and lighter weight. 
 
 

good question. Tim, i was also wondering, is the big issue the load on the nova 
api or the hypervisor? i'm just trying out a solution that might help regarding 
the former.

cheers,   
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][oslo] Stepping down from oslo-ironic liaison

2015-05-27 Thread Joe Gordon
On Wed, May 27, 2015 at 3:20 AM, Davanum Srinivas dava...@gmail.com wrote:

 Victor,

 Nice, yes, Joe was the liaison with Nova so far. Yes, please go ahead
 and add your name in the wiki for Nova as i believe Joe is winding
 down the oslo liaison as well.
 https://wiki.openstack.org/wiki/CrossProjectLiaisons#Oslo



Yup, thank you Victor!




 thanks,
 dims

 On Wed, May 27, 2015 at 5:12 AM, Victor Stinner vstin...@redhat.com
 wrote:
  Hi,
 
  By the way, who is the oslo liaison for nova? If there is nobody, I
 would
  like to take this position.
 
  Victor
 
  Le 25/05/2015 18:45, Ghe Rivero a écrit :
 
  My focus on the Ironic project has been decreasing in the last cycles,
  so it's about time to relinquish my position as a oslo-ironic liaison so
  new contributors can take over it and help ironic to be the vibrant
  project it is.
 
  So long, and thanks for all the fish,
 
  Ghe Rivero
  --
  Pinky: Gee, Brain, what do you want to do tonight?
  The Brain: The same thing we do every night, Pinky—try to take over the
  world!
 
.''`.  Pienso, Luego Incordio
  : :' :
  `. `'
 `- www.debian.org http://www.debian.org www.openstack.com
  http://www.openstack.com
 
  GPG Key: 26F020F7
  GPG fingerprint: 4986 39DA D152 050B 4699  9A71 66DB 5A36 26F0 20F7
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Davanum Srinivas :: https://twitter.com/dims

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: Return request-id to caller

2015-05-27 Thread Joe Gordon
On Wed, May 27, 2015 at 10:31 AM, Mike Bayer mba...@redhat.com wrote:



 On 5/27/15 3:06 AM, Kekane, Abhishek wrote:

  Hi Devs,



 Each OpenStack service sends a request ID header with HTTP responses. This
 request ID can be useful for tracking down problems in the logs. However,
 when operation crosses service boundaries, this tracking can become
 difficult, as each service has its own request ID. Request ID is not
 returned to the caller, so it is not easy to track the request. This
 becomes especially problematic when requests are coming in parallel. For
 example, glance will call cinder for creating image, but that cinder
 instance may be handling several other requests at the same time. By using
 same request ID in the log, user can easily find the cinder request ID that
 is same as glance request ID in the g-api log. It will help
 operators/developers to analyse logs effectively.



 To address this issue we have come up with following solutions:



 Solution 1: Return tuple containing headers and body from respective
 clients (also favoured by Joe Gordon)

 Reference:
 https://review.openstack.org/#/c/156508/6/specs/log-request-id-mappings.rst


 I like solution 1 as well as solution 3 at the same time, in fact.
 There's usefulness to being able to easily identify a set of requests as
 all part of the same operation as well as being able to identify a call's
 location in the hierarchy.

 In fact, does solution #1 make the hierarchy apparent? I'd want it to do
 that, e.g. if call A calls B, which calls C and D, I'd want to know that
 the dependency tree is A -> B -> (C, D), and not just a bucket of (A, B, C,
 D).


#1 should make the hierarchy apparent. That IMHO is the biggest pro for #1.
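
To make it concrete, a rough sketch of how a caller could use Solution 1
(hypothetical helper; the real return type and header handling would be
defined by the spec):

    import logging

    LOG = logging.getLogger(__name__)

    def call_and_log(caller_request_id, client_call, *args, **kwargs):
        # Solution 1 assumes the client call returns (headers, body); log
        # the caller's request id next to the called service's request id
        # so the two can be correlated across service logs.
        headers, body = client_call(*args, **kwargs)
        LOG.info("request-id mapping: %s -> %s",
                 caller_request_id, headers.get('x-openstack-request-id'))
        return body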




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: Return request-id to caller

2015-05-27 Thread Joe Gordon
On Wed, May 27, 2015 at 12:06 AM, Kekane, Abhishek 
abhishek.kek...@nttdata.com wrote:

  Hi Devs,



 Each OpenStack service sends a request ID header with HTTP responses. This
 request ID can be useful for tracking down problems in the logs. However,
 when operation crosses service boundaries, this tracking can become
 difficult, as each service has its own request ID. Request ID is not
 returned to the caller, so it is not easy to track the request. This
 becomes especially problematic when requests are coming in parallel. For
 example, glance will call cinder for creating image, but that cinder
 instance may be handling several other requests at the same time. By using
 same request ID in the log, user can easily find the cinder request ID that
 is same as glance request ID in the g-api log. It will help
 operators/developers to analyse logs effectively.


Thank you for writing this up.




 To address this issue we have come up with following solutions:



 Solution 1: Return tuple containing headers and body from respective
 clients (also favoured by Joe Gordon)

 Reference:
 https://review.openstack.org/#/c/156508/6/specs/log-request-id-mappings.rst



 Pros:

 1. Maintains backward compatibility

 2. Effective debugging/analysing of the problem as both calling service
 request-id and called service request-id are logged in same log message

 3. Build a full call graph

 4. End user will able to know the request-id of the request and can
 approach service provider to know the cause of failure of particular
 request.



 Cons:

 1. The changes need to be done first in cross-projects before making
 changes in clients

 2. Applications which are using python-*clients need to make the required
 changes (check the return type of the response)


Additional cons:

3. Cannot simply search all logs (ala logstash) using the request-id
returned to the user without any post processing of the logs.






 Solution 2:  Use thread local storage to store 'x-openstack-request-id'
 returned from headers (suggested by Doug Hellmann)

 Reference:
 https://review.openstack.org/#/c/156508/9/specs/log-request-id-mappings.rst



 Add new method ‘get_openstack_request_id’ to return this request-id to the
 caller.



 Pros:

 1. Doesn’t break compatibility

 2. Minimal changes are required in client

 3. Build a full call graph



 Cons:

 1. Malicious user can send long request-id to fill up the disk-space,
 resulting in potential DoS

 2. Changes need to be done in all python-*clients

 3. Last request id should be flushed out in a subsequent call otherwise it
 will return wrong request id to the caller





 Solution 3: Unique request-id across OpenStack Services (suggested by
 Jamie Lennox)

 Reference:
 https://review.openstack.org/#/c/156508/10/specs/log-request-id-mappings.rst



 Get 'x-openstack-request-id' from the auth plugin and add it to the request
 headers. If the 'x-openstack-request-id' key is present in the request header,
 it will be reused; otherwise a new one will be generated.



 Dependencies:

 https://review.openstack.org/#/c/164582/ - Include request-id in auth
 plugin and add it to request headers

 https://review.openstack.org/#/c/166063/ - Add session-object for glance
 client

 Add 'UserAuthPlugin' and '_ContextAuthPlugin' same as nova in cinder and
 neutron





 Pros:

 1. Using same request id for the request crossing multiple service
 boundaries will help operators/developers identify the problem quickly

 2. Required changes only in keystonemiddleware and oslo_middleware
 libraries. No changes are required in the python client bindings or
 OpenStack core services



 Cons:

 1. As 'x-openstack-request-id' in the request header will be visible to
 the user, it is possible to send same request id for multiple requests
 which in turn could create more problems in case of troubleshooting cause
 of the failure as request_id middleware will not check for its uniqueness
 in the scope of the running OpenStack service.

 2. Having the same request ID for all services for a single user API call
 means you cannot generate a full call graph. For example if a single user's
 nova API call produces 2 calls to glance you want to be able to
 differentiate the two different calls.





 During the Liberty design summit, I had a chance of discussing these
 designs with some of the core members like Doug, Joe Gordon, Jamie Lennox
 etc. But not able to came to any conclusion on the final design and know
 the communities direction by which way they want to use this request-id
 effectively.



 However IMO, solution 1 sounds more useful, as the debugger will be able to
 build the full call graph, which can be helpful for analysing gate failures
 effectively, and the end user will be able to know their request-id and can
 track their request.



 I request all community members to go through these solutions and let us
 know which is the appropriate way to improve the logs by logging request-id.





 Thanks  Regards,



 

Re: [openstack-dev] [ceilometer][all] Scalable metering

2015-05-27 Thread Tim Bell

We've noticed the load on the Nova and Keystone APIs. It may well be that there 
is an increased load on the hypervisor but this is distributed and thus we 
might not have seen it. The API impact was significant: a factor of 14x
increase in API call rate 
(http://openstack-in-production.blogspot.de/2014/03/cern-cloud-architecture-update-for.html)

Tim

 -Original Message-
 From: gordon chung [mailto:g...@live.ca]
 Sent: 27 May 2015 20:43
 To: OpenStack Development Mailing List not for usage questions
 Subject: Re: [openstack-dev] [ceilometer][all] Scalable metering
 
 
 
  So I agree doing lots of naive polling can lead to issues on even the
  fastest of APIs, but are there any bugs about this that were opened
  against nova? At the very least nova should investigate why the
  specific calls are so slow and see if we what we can do to make them
  at least a little faster and lighter weight.
 
 
 
 good question. Tim, i was also wondering, is the big issue the load on the 
 nova
 api or the hypervisor? i'm just trying out a solution that might help 
 regarding the
 former.
 
 cheers,
 _
 _
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][all] Scalable metering

2015-05-27 Thread gordon chung
cool cool. we'll work against that. i want to point out we added a caching 
mechanism in Juno which i hope helps with the Nova issue -- basically we cache 
some of the secondary calls we make when gathering information.

https://github.com/openstack/ceilometer/commit/4863fc95bcbbb6f7dbe7e48ef13f370456611738
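
roughly the idea, in illustrative form only (this is not the actual
ceilometer code, just a stand-in for "cache a secondary lookup"):

    import functools

    @functools.lru_cache(maxsize=1024)
    def lookup_instance_info(instance_id):
        # Stand-in for a secondary call made while polling; in ceilometer
        # this would hit the nova API. With the cache, repeated polling
        # cycles reuse the earlier answer instead of re-querying.
        return {"id": instance_id}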

cheers,

gord



 From: tim.b...@cern.ch
 To: openstack-dev@lists.openstack.org
 Date: Wed, 27 May 2015 18:53:44 +
 Subject: Re: [openstack-dev] [ceilometer][all] Scalable metering


 We've noticed the load on the Nova and Keystone APIs. It may well be that 
 there is an increased load on the hypervisor but this is distributed and thus 
 we might not have seen it. The API impact was significant a factor of 14x 
 increase in API call rate 
 (http://openstack-in-production.blogspot.de/2014/03/cern-cloud-architecture-update-for.html)

 Tim

 -Original Message-
 From: gordon chung [mailto:g...@live.ca]
 Sent: 27 May 2015 20:43
 To: OpenStack Development Mailing List not for usage questions
 Subject: Re: [openstack-dev] [ceilometer][all] Scalable metering



 So I agree doing lots of naive polling can lead to issues on even the
 fastest of APIs, but are there any bugs about this that were opened
 against nova? At the very least nova should investigate why the
 specific calls are so slow and see if we what we can do to make them
 at least a little faster and lighter weight.



 good question. Tim, i was also wondering, is the big issue the load on the 
 nova
 api or the hypervisor? i'm just trying out a solution that might help 
 regarding the
 former.

 cheers,
 _
 _
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.vmware release 0.13.0 (liberty)

2015-05-27 Thread Joe Gordon
On Wed, May 27, 2015 at 12:54 AM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 I prefer the patched posted by Sabari. The patch has two changes:

1. It fixes unit tests
2. In the event that an instance spawn fails, it catches an
exception to warn the admin that the guestId may be invalid. The only
degradation may be that the warning will no longer be there. I think that
the admin can get this information from the logged exception too.



So this breakage takes us into some strange waters.

oslo.vmware is at version 0.x.x, which according to semver [0] means "Major
version zero (0.y.z) is for initial development. Anything may change at any
time. The public API should not be considered stable." If that is accurate,
then nova should not be using oslo.vmware, since we shouldn't use an
unstable library in production. If we are treating the API as stable, then
semver says we need to rev the major version ("MAJOR version when you make
incompatible API changes").

What I am trying to say is, I don't know how you can say the nova unit
tests are 'wrong.' either nova using oslo.vmware is 'wrong' or oslo.vmware
breaking the API is 'wrong'.

With OpenStack being so large and having so many dependencies (many of them
openstack owned), we should focus on making sure we release fewer
unexpectedly incompatible libraries and continue working on improving how
we handle dependencies in general (lifeless has a big arch he is working on
here AFAIK). So I am not in favor of the nova unit test change as a fix
here.


[0] http://semver.org/


 Thanks
 Gary

   From: Sabari Murugesan sabari.b...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Wednesday, May 27, 2015 at 6:20 AM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] oslo.vmware release 0.13.0 (liberty)

   Matt

  I posted a patch https://review.openstack.org/#/c/185830/1 to fix the
 nova tests and make it compatible with the oslo.vmware 0.13.0 release. I am
 fine with the revert and g-r blacklist as oslo.vmware broke the semver but
 we can also consider this patch as an option.

  Thanks
 Sabari



 On Tue, May 26, 2015 at 2:53 PM, Davanum Srinivas dava...@gmail.com
 wrote:

 Vipin, Gary,

 Can you please accept the revert or figure out the best way to handle
 this?

 thanks,
 dims

 On Tue, May 26, 2015 at 5:41 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
 
 
  On 5/26/2015 4:19 PM, Matt Riedemann wrote:
 
 
 
  On 5/26/2015 9:53 AM, Davanum Srinivas wrote:
 
  We are gleeful to announce the release of:
 
  oslo.vmware 0.13.0: Oslo VMware library
 
  With source available at:
 
   http://git.openstack.org/cgit/openstack/oslo.vmware
 
  For more details, please see the git log history below and:
 
   http://launchpad.net/oslo.vmware/+milestone/0.13.0
 
  Please report issues through launchpad:
 
   http://bugs.launchpad.net/oslo.vmware
 
  Changes in oslo.vmware 0.12.0..0.13.0
  -
 
  5df9daa Add ToolsUnavailable exception
  286cb9e Add support for dynamicProperty
  7758123 Remove support for Python 3.3
  11e7d71 Updated from global requirements
  883c441 Remove run_cross_tests.sh
  1986196 Use suds-jurko on Python 2
  84ab8c4 Updated from global requirements
  6cbde19 Imported Translations from Transifex
  8d4695e Updated from global requirements
  1668fef Raise VimFaultException for unknown faults
  15dbfb2 Imported Translations from Transifex
  c338f19 Add NoDiskSpaceException
  25ec49d Add utility function to get profiles by IDs
  32c61ee Add bandit to tox for security static analysis
  f140b7e Add SPBM WSDL for vSphere 6.0
 
  Diffstat (except docs and test files)
  -
 
  bandit.yaml|  130 +++
  openstack-common.conf  |2 -
  .../locale/fr/LC_MESSAGES/oslo.vmware-log-error.po |9 -
  .../locale/fr/LC_MESSAGES/oslo.vmware-log-info.po  |3 -
  .../fr/LC_MESSAGES/oslo.vmware-log-warning.po  |   10 -
  oslo.vmware/locale/fr/LC_MESSAGES/oslo.vmware.po   |   86 +-
  oslo.vmware/locale/oslo.vmware.pot |   48 +-
  oslo_vmware/api.py |   10 +-
  oslo_vmware/exceptions.py  |   13 +-
  oslo_vmware/objects/datastore.py   |6 +-
  oslo_vmware/pbm.py |   18 +
  oslo_vmware/service.py |2 +-
  oslo_vmware/wsdl/6.0/core-types.xsd|  237 +
  oslo_vmware/wsdl/6.0/pbm-messagetypes.xsd  |  186 
  oslo_vmware/wsdl/6.0/pbm-types.xsd |  806
 ++
  oslo_vmware/wsdl/6.0/pbm.wsdl  | 1104
  
  oslo_vmware/wsdl/6.0/pbmService.wsdl   |   16 +
  requirements-py3.txt   |   27 -
  requirements.txt   |8 +-
  setup.cfg 

Re: [openstack-dev] [nova] Availability of device names for operations with volumes and BDM and other features.

2015-05-27 Thread Nikola Đipanov
On 05/27/2015 09:47 AM, Alexandre Levine wrote:
 Hi all,
 
 I'd like to bring up this matter again, although it was at some extent
 discussed during the recent summit.
 
 The problem arises from the fact that the functionality exposing device
 names for usage through public APIs is deteriorating in nova. It's being
 deliberately removed because, as I understand, it doesn't universally and
 consistently work in all of the backends. This has been happening since
 Icehouse and the introduction of BDM v2. The following very recent review is
 one of the
 ongoing efforts in this direction:
 https://review.openstack.org/#/c/185438/
 

I've abandoned the change as it is clear we need to discuss how to go
about this some more.

But first let me try to give a bit more detailed explanation and
background to what the deal is with device names. Supplying device names
that will be honoured by the guests is really only possible with Xen PV
guests (meaning the guest needs to be running PV-enabled kernel and
drivers).

Back in Havana, when we were working on [1] (see [2] for more details)
the basic idea was that we will still accept device names because
removing them from the public API is not likely to happen (mostly
because of the EC2 compatibility), but in case of libvirt driver, we
will treat them as hints only, and provide our own (by mostly
replicating the logic libvirt uses to order devices [3]). We also
allowed for device names to not be specified by the user as this is
really what anyone not using the EC2 API should be doing (users using
the EC2 API do, however, need to be aware of the fact that it may not be
honoured).

[1]
https://blueprints.launchpad.net/nova/+spec/improve-block-device-handling
[2] https://wiki.openstack.org/wiki/BlockDeviceConfig
[3]
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/blockinfo.py

 The reason for my concern is that EC2 API have some important cases
 relying on this information (some of them have no workarounds). Namely:
 1. Change of parameters set by image for instance booting.
 2. Showing instance's devices information by euca2ools.
 3. Providing additional volumes for instance booting
 4. Attaching volume
 etc...
 

So based on the above - it seems to me that you think we are removing
the information about device names completely. That's not the case -
currently it is simply not mandatory for the Nova boot API call (it was
never mandatory for volume attach afaict) - you can still pass it in,
though libvirt may not honour it. It will still be tracked by the Nova
DB and available for users to refer to.

 Related to device names and additional related features we have troubles
 with now:
 1. All device name related features

As I said - they are not removed, in addition, you can still completely
disregard the BDMv2 syntax as Nova should transparently handle old-style
syntax when passed in (actually since BDM info is stored with images
when snapshotting and it may have been v1 syntax - it is likely that we
will never remove this support). If you are seeing some bugs related to
this - please report them.
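
For reference, a v2 block device entry in the boot request body looks
roughly like the sketch below (illustrative only - device_name is just the
optional hint that libvirt may end up overriding):

    "block_device_mapping_v2": [{
        "source_type": "volume",
        "destination_type": "volume",
        "uuid": "<volume uuid>",
        "boot_index": 0,
        "device_name": "vdb",
        "delete_on_termination": false
    }]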

 2. Modification of deleteOnTermination flag

I don't have enough details on this but if some behaviour has changed
when using the old syntax - it is likely a bug so please report it.

 3. Modification of parameters for instance booting

Again - I am not sure what this is related to exactly - but none of the
parameters have changed really (only new ones were added). It would be
good to get more information on this (preferably a bug report).

 4. deleteOnTermination and size of volume aren't stored into instance
 snapshots now.
 

This does sound like a bug - and hopefully an easy to fix one.

 Discussions during the summit on the matter were complicated because
 nobody present really understood in details why and what is happening
 with this functionality in nova. It was decided though, that overall
 direction would be to add necessary features or restore them unless
 there is something really showstopping:
 https://etherpad.openstack.org/p/YVR-nova-contributor-meetup
 
 As I understand, Nikola Đipanov has been working on the matter for
 some time and is obviously the best person who can help to resolve the
 situation. Nikola, if possible, could you help with it and clarify the
 issue.
 
 My suggestion, based on my limited knowledge at the moment, still is to
 restore or add all of the necessary APIs and provide tickets or
 known issues for the cases where the functionality is suffering from the
 backend limitations.
 
 Please let me know what you think.
 

As explained above - nothing was intentionally removed, and if something
broke - it's a bug that we should fix, so I urge the team behind the EC2
API on stackforge to report those, and I will try to at least look into
them, if not fix them. We might want to have a tag for EC2 related bugs
in LP (I seem to remember there being such a thing before).

Device names though are not something we can easily resolve without
having the users 

[openstack-dev] [stable] stable/kilo and stable/juno blocked on zake 0.2.2 release

2015-05-27 Thread Matt Riedemann
All changes to stable/kilo (and probably stable/juno) are broken due to 
a zake 0.2.2 release today which excludes kazoo 2.1.


tooz 0.12 requires uncapped zake and kazoo so it's pulling in kazoo 2.1 
which zake 0.2.2 doesn't allow.


ceilometer pulls in tooz.

There is no stable/juno branch for tooz so that we can cap requirements 
for kazoo (since stable/juno g-r caps kazoo<=2.0).


We need the oslo team to create a stable/juno branch for tooz, sync g-r 
from stable/juno to tooz on stable/juno and then do a release of tooz 
that will work for stable/juno - else fix kazoo and put out a 2.1.1 
release so that zake will start working with latest kazoo.
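
To be explicit, 'capping' here just means adding upper bounds to the stable
requirements lines, along these lines (version numbers are illustrative
only, not the actual fix):

    kazoo>=1.3.1,<=2.0
    zake>=0.1.6,<0.2.2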


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] stable/kilo and stable/juno blocked on zake 0.2.2 release

2015-05-27 Thread Matt Riedemann



On 5/27/2015 10:57 AM, Matt Riedemann wrote:

All changes to stable/kilo (and probably stable/juno) are broken due to
a zake 0.2.2 release today which excludes kazoo 2.1.

tooz 0.12 requires uncapped zake and kazoo so it's pulling in kazoo 2.1
which zake 0.2.2 doesn't allow.

ceilometer pulls in tooz.

There is no stable/juno branch for tooz so that we can cap requirements
for kazoo (since stable/juno g-r caps kazoo<=2.0).

We need the oslo team to create a stable/juno branch for tooz, sync g-r
from stable/juno to tooz on stable/juno and then do a release of tooz
that will work for stable/juno - else fix kazoo and put out a 2.1.1
release so that zake will start working with latest kazoo.



Here is a link to the type of failure you'll see with this:

http://logs.openstack.org/56/183656/4/check/check-grenade-dsvm/3acba73/logs/old/screen-s-proxy.txt.gz

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] stable/kilo and stable/juno blocked on zake 0.2.2 release

2015-05-27 Thread Matt Riedemann



On 5/27/2015 10:58 AM, Matt Riedemann wrote:



On 5/27/2015 10:57 AM, Matt Riedemann wrote:

All changes to stable/kilo (and probably stable/juno) are broken due to
a zake 0.2.2 release today which excludes kazoo 2.1.

tooz 0.12 requires uncapped zake and kazoo so it's pulling in kazoo 2.1
which zake 0.2.2 doesn't allow.

ceilometer pulls in tooz.

There is no stable/juno branch for tooz so that we can cap requirements
for kazoo (since stable/juno g-r caps kazoo<=2.0).

We need the oslo team to create a stable/juno branch for tooz, sync g-r
from stable/juno to tooz on stable/juno and then do a release of tooz
that will work for stable/juno - else fix kazoo and put out a 2.1.1
release so that zake will start working with latest kazoo.



Here is a link to the type of failure you'll see with this:

http://logs.openstack.org/56/183656/4/check/check-grenade-dsvm/3acba73/logs/old/screen-s-proxy.txt.gz




Here is the tooz bug I reported for tracking:

https://bugs.launchpad.net/python-tooz/+bug/1459322

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: Return request-id to caller

2015-05-27 Thread Ian Cordasco


On 5/27/15, 10:15, Doug Hellmann d...@doughellmann.com wrote:

Excerpts from Kekane, Abhishek's message of 2015-05-27 07:06:56 +:
 Hi Devs,
 
 Each OpenStack service sends a request ID header with HTTP responses.
This request ID can be useful for tracking down problems in the logs.
However, when operation crosses service boundaries, this tracking can
become difficult, as each service has its own request ID. Request ID is
not returned to the caller, so it is not easy to track the request. This
becomes especially problematic when requests are coming in parallel. For
example, glance will call cinder for creating image, but that cinder
instance may be handling several other requests at the same time. By
using same request ID in the log, user can easily find the cinder
request ID that is same as glance request ID in the g-api log. It will
help operators/developers to analyse logs effectively.
 
 To address this issue we have come up with following solutions:
 
 Solution 1: Return tuple containing headers and body from respective
clients (also favoured by Joe Gordon)
 Reference: 
 https://review.openstack.org/#/c/156508/6/specs/log-request-id-mappings.rst
 
 Pros:
 1. Maintains backward compatibility
 2. Effective debugging/analysing of the problem as both calling service
request-id and called service request-id are logged in same log message
 3. Build a full call graph
 4. The end user will be able to know the request-id of the request and can
 approach the service provider to find the cause of failure of a particular
 request.
 
 Cons:
 1. The changes need to be done first in cross-projects before making
changes in clients
 2. Applications which are using python-*clients need to make the required
 changes (check the return type of the response)
 
 
 Solution 2:  Use thread local storage to store 'x-openstack-request-id'
returned from headers (suggested by Doug Hellmann)
 Reference: 
 https://review.openstack.org/#/c/156508/9/specs/log-request-id-mappings.rst
 
 Add new method 'get_openstack_request_id' to return this request-id to
the caller.
 
 Pros:
 1. Doesn't break compatibility
 2. Minimal changes are required in client
 3. Build a full call graph
 
 Cons:
 1. Malicious user can send long request-id to fill up the disk-space,
resulting in potential DoS
 2. Changes need to be done in all python-*clients
 3. Last request id should be flushed out in a subsequent call otherwise
it will return wrong request id to the caller
 
 
 Solution 3: Unique request-id across OpenStack Services (suggested by
Jamie Lennox)
 Reference: 
 https://review.openstack.org/#/c/156508/10/specs/log-request-id-mappings.rst
 
 Get 'x-openstack-request-id' from auth plugin and add it to the request
headers. If 'x-openstack-request-id' key is present in the request
header, then it will use the same one further or else it will generate a
new one.
 
 Dependencies:
 https://review.openstack.org/#/c/164582/ - Include request-id in auth
plugin and add it to request headers
 https://review.openstack.org/#/c/166063/ - Add session-object for
glance client
 Add 'UserAuthPlugin' and '_ContextAuthPlugin' same as nova in cinder
and neutron
 
 
 Pros:
 1. Using same request id for the request crossing multiple service
boundaries will help operators/developers identify the problem quickly
 2. Required changes only in keystonemiddleware and oslo_middleware
libraries. No changes are required in the python client bindings or
OpenStack core services
 
 Cons:
 1. As 'x-openstack-request-id' in the request header will be visible to
 the user, it is possible to send the same request id for multiple requests,
 which in turn could create more problems when troubleshooting the cause of
 a failure, as the request_id middleware will not check for its uniqueness
 in the scope of the running OpenStack service.
 2. Having the same request ID for all services for a single user API
call means you cannot generate a full call graph. For example if a
single user's nova API call produces 2 calls to glance you want to be
able to differentiate the two different calls.
 
 
 During the Liberty design summit, I had a chance to discuss these
 designs with some of the core members like Doug, Joe Gordon, Jamie
 Lennox etc., but we were not able to come to any conclusion on the final
 design or learn the community's direction on how they want to use this
 request-id effectively.
 
 However IMO, solution 1 sounds more useful as the debugger will be able to
 build the full call graph, which can be helpful for analysing gate
 failures effectively, and the end user will be able to know their
 request-id and track their request.
 
 I request all community members to go through these solutions and let
us know which is the appropriate way to improve the logs by logging
request-id.

Robert Collins pointed out that os-profiler is already tracking
requests across REST calls. Does it use one of the proposed methods?
Since os-profiler is seeing more adoption, could we combine efforts
here?

Doug

Speaking with my 

Re: [openstack-dev] [Horizon] dashboard-app split in horizon

2015-05-27 Thread Thai Q Tran
Yes Rob, you are correct. ToastService was something Cindy wrote to replace horizon.alert (aka messages). We can't remove it because legacy still uses it.

-----"Rob Cresswell (rcresswe)" rcres...@cisco.com wrote: -----
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
From: "Rob Cresswell (rcresswe)" rcres...@cisco.com
Date: 05/26/2015 11:29 PM
Subject: Re: [openstack-dev] [Horizon] dashboard-app split in horizon
Went through the files myself and I concur. Most of these files define pieces specific to our implementation of the dashboard, so should be moved.


I'm not entirely sure where _messages should sit. As we move forward, won't that file just end up as a toast element and nothing more? Maybe I'm misinterpreting it; I'm not familiar with toastService.


Rob


From: Richard Jones r1chardj0...@gmail.com
Reply-To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Date: Tuesday, 26 May 2015 01:35
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Cc: "Johanson, Tyr H" t...@hp.com
Subject: Re: [openstack-dev] [Horizon] dashboard-app split in horizon

As a follow-up to this [in the misguided hope that anyone will actually read this conversation with myself ;-)] I've started looking at the base.html split. At the summit last week, we agreed to:


1. move base.html over from the framework to the dashboard, and
2. move the _conf.html and _scripts.html over as well, since they configure the application (dashboard).


Upon starting the work it occurs to me that all of the other files referenced by base.html should also move. So, here's the complete list of base.html components and whether they should move over in my opinion:


- horizon/_custom_meta.html
 Yep, is an empty file in horizon, intended as an extension point in dashboard. The empty file (plus an added comment) should move.
 - horizon/_stylesheets.html
 Is just a dummy in horizon anyway, should move.
- horizon/_conf.html
 Yep, should move.
- horizon/client_side/_script_loader.html
 Looks to be a framework component not intended for override, so we should leave it there.
- horizon/_custom_head_js.html

 Yep, is an empty file in horizon, intended as an extension point in dashboard. Move, with a comment added.

- horizon/_header.html
 There is a basic implementation in framework but the real (used) implementation is in dashboard, so should move.
- horizon/_messages.html
 This is a framework component, so I think should stay there. I'm not sure whether anyone would ever wish to override this. Also the bulk of it is probably going to be replaced by the toast implementation anyway... hmm...
- horizon/common/_sidebar.html
 This is an overridable component that I think should move.
- horizon/common/_page_header.html
 This is an overridable component that I think should move.
- horizon/_scripts.html
 Yep, should move.

Thoughts, anyone who has read this far?
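
One reason the _custom_* files matter is that they are the hooks deployers
override from their own template dirs, with something as small as this
(contents hypothetical, of course):

    {# openstack_dashboard/templates/horizon/_custom_meta.html #}
    <meta name="description" content="Example Cloud dashboard">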
  Richard


On Sat, 23 May 2015 at 11:46 Richard Jones r1chardj0...@gmail.com wrote:


As part of the ongoing Horizon project code reorganisation, we today agreed to clean up the Horizon-the-Framework and OpenStack Dashboard separation issue by doing a couple of things:


1. nuke (the recently-created) horizon dashboard-app by moving the angular app over to dashboard and the other contents to appropriate places (mostly under the heading of "tech-debt" :)
2. move base.html, _conf.html and _scripts.html from horizon over to dashboard.


Thanks to Cindy, Sean and Thai for the pair (er triple?) programming keeping me honest today.


The first step is done and captured in several linked patches based off your leaf patch "ngReorg - Create dashboard-app" https://review.openstack.org/#/c/184597/ (yes, I am nuking
 the thing created by your patch).


I've not done the second step, but might find some time since I have 6 hours to waste in LAX tomorrow.


  Richard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar][all] Zaqar will stay... Lots of work ahead

2015-05-27 Thread Hayes, Graham
On 05/26/2015 09:29 AM, Flavio Percoco wrote:
 Greetings,
 
 TL;DR: Thanks everyone for your feedback. Based on the discussed plans
 at the summit - I'll be writing more about these later - Zaqar will
 stick around and play its role in the community.
 
 Summit Summary
 ==
 
 I'm happy to say that several use cases were discussed at the summit
 and the difference from previous summits is that we left the room with
 some action items to make them happen.
 
 Cross-project user-facing notifications
 ===
 
 https://etherpad.openstack.org/p/liberty-cross-project-user-notifications
 
 Besides brainstorming a bit on what things should/should not be
 notified and what format should be used, we also talked a bit about
 the available technologies that could be used for these tasks. Zaqar
 was among those and, AFAICT, at the end of the session we agreed on
 giving this a try. It'll likely not happen as fast as we want but the
 action item out of this session was to write a cross-project spec
 describing the things discussed and the technology that will be
 adopted.
 
 Heat + Zaqar
 
 
 The 2 main areas where Zaqar will be used in Heat are Software Config
 and Hooks. The minimum requirements (server side) for this are in
 place already. There's some work to do on the client side that the
 team will get to asap.
 
 
 Sahara (or other guest agent based services) + Zaqar
 
 
 We discussed 3 different ways to enable services to communicate with
 their guest agents using Zaqar:
 
 1) Using notification hooks: Assuming the guest agents doesn't need to
 communicate with the controller, the controller can register a
 notification hook that will push messages to the guest agent.
 
 2) Inject keystone credentials: The controller would inject keystone
 credentials into the VM to allow the guest agent to send/receive
 messages using Zaqar.
 
 3) PreSigned URLs: The controller injects a PreSigned URL in the
 controller that will grant the guest agent access to a specific
 tenant/queue with either read or readwrite access.

I think it is important to note that these 3 options are for a subset
of guest agent based services.

It was agreed that Zaqar would look at an oslo_messaging driver for the
services that did not agree with the 3 options presented.

 
 
 Hallway Discussions
 ===
 
 We had a chance to talk to some other folks from teams like Horizon
 that were also interested in doing some actual integration work with
 Zaqar as well. Not to mention that some other folks from the puppet
 team showed interest in helping out with the creation of these
 manifests.
 
 
 Next Steps
 ==
 
 In light of the above, and as mentioned in the TL;DR, Zaqar will stick
 around and the team, as promised, will focus on making those
 integrations happen. The team is small, which means we'll carefully
 pick the tasks we'll be spending time on.
 
 As a first step, we should restore our meetings and get to work right
 away. To favor our contributors in NZ, next week's meeting will be at
 21:00 UTC and we'll keep it at that time for 2 weeks.
 
 For the Zaqar team (and folks interested), I'll be sending out further
 emails to sync on the work to do.
 
 Special thanks for all the folks that showed interest, participated in
 sessions and that are committed on making this happen.
 
 Lets now make it happen,
 Flavio
 


-- 
Graham Hayes
Software Engineer
DNS as a Service
HP Public Cloud - Platform Services

GPG Key: 7D28E972


graham.ha...@hp.com
M +353 87 377 8315

P +353 1 524 2175
Dublin,
Ireland

HP http://www.hp.com/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] IRC meetings agenda is now driven from Gerrit !

2015-05-27 Thread Sean M. Collins
On Wed, May 27, 2015 at 10:53:38AM EDT, Thierry Carrez wrote:
 For future meeting additions and schedule modifications, please propose
 changes to openstack-infra/irc-meetings via Gerrit !

Fantastic work everyone! I knew this was always a tough manual task,
it's great to see it finally become automated!

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [all] Liberty summit: Updates in Glance

2015-05-27 Thread Nikhil Komawar
I kinda agree with all 3 of you, possibly because there are grey areas 
on how best we can actually do this. We are talking about at least one 
cycle ahead. While that is a great thing to do, I think we should focus 
on making the current Artifacts implementation stable and bringing it to 
a state where it works great with the different data asset requirements 
within the OpenStack realm (to support the main motivation behind this 
concept and Glance's mission statement).


Cheers
Nikhil

On 5/27/15 8:30 AM, Kuvaja, Erno wrote:

-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com]
Sent: 27 May 2015 00:58
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] [all] Liberty summit: Updates in Glance

Jesse, you beat me on this one :)

On 26/05/15 13:54 -0400, Nikhil Komawar wrote:

Thank you Jesse for your valuable input (here and at the summit) as
well as intent to clarify the discussion.

Just trying to ensure people are aware of the EXPERIMENTAL nature of
the v3 API and the reasons behind it. Please find my responses in-line.
However, I do want to assure you all that we will strive hard to move
away from the EXPERIMENTAL nature and go with a rock solid
implementation as and when interest grows in the code-base (that helps
stabilize it).

On 5/26/15 12:57 PM, Jesse Cook wrote:


On 5/22/15, 4:28 PM, Nikhil Komawar nik.koma...@gmail.com

wrote:


Hi all,

tl;dr; Artifacts IS staying in Glance.

 1. We had a nice discussion at the contributors' meet-up at the
Vancouver summit this morning. After weighing in many possibilities
and evolution of the Glance program, we have decided to go ahead
with the Artifacts implementation within Glance program under the
EXPERIMENTAL v3 API.

Want to clarify a bit here. My understanding is: s/Artifacts/v3 API/g. That
is to say, Artifacts is the technical implementation of the v3 API. This
also means the v3 API is an objects API vs just an images API.


Generic data assets' API would be a nice term along the lines of the
mission statement. Artifacts seemed fitting as that was the focus of
discussion at various sessions.

Regardless of what we call it, I do appreciate the clarity on the fact that
Artifacts - data assets - is just the technical implementation of what will be
Glance's API v3. It's an important distinction to avoid sending the wrong
message about what is going to be done there.


We also had some hallway talk about putting the v1 and v2 APIs on top of
the v3 API. This forces faster adoption, verifies supportability via v1 and
v2 tests, increases supportability of v1 and v2 APIs, and pushes out the
need to kill v1 API.

Let's discuss more as time and development progresses on that
possibility. The v3 API should stay EXPERIMENTAL for now as that would help
us understand use-cases across programs as it gets adopted by various
code-bases. Putting v1/v2 on top of v3 would be tricky for now as we
may have breaking changes with the code being relatively less stable due to
a narrow review domain.

I actually think we'd benefit more from having V2 on top of V3 than not doing
it. I'd probably advocate to make this M material rather than L but I think it'd
be good.

We perhaps would, but that would realistically push v2 adoption across the 
projects to somewhere around the O release. Just look at how long it took the v2 
code base to mature enough that we're seriously talking about moving to use it 
in production.

I think regardless of what we do, I'd like to kill v1 as it has a sharing model
that is not secure.

The above would postpone this one to somewhere around Q-R (which is btw not so 
far from U anymore).

The more I think about this, the more convinced I am that we should focus on 
moving our consumers to v2 and deprecating v1; after that we can start talking 
about moving v2 on top of the v3 codebase if possible, not the other way around 
in the hope that it would speed up v3 adoption.

- Erno

Flavio


 1.
 2. The effort would primarily be conducted as a sub-team-like
structure within the program and the co-coordinators and drivers of
the necessary Artifacts features would be given core-reviewer
status temporarily with an informal agreement to merge code that is
only related to Artifacts.
 3. The entire Glance team would give reviews as time and priorities
permit. The approval (+A/+WorkFlow) of any code within the

program

would need to come from core-reviewers who are not temporarily
authorized. The list of such individuals and updated time-line
would be documented in phases during the course of Liberty cycle.
 4. We will continue to evaluate  update the governance, maturity of
the code and future plans for the v1, v2 and v3 Glance APIs as time
progresses. 

Re: [openstack-dev] [nova-docker] Looking for volunteers to take care of nova-docker

2015-05-27 Thread Davanum Srinivas
Bich,

Sure thing, just start on #nova-docker irc channel and we can talk there

-- dims

On Wed, May 27, 2015 at 11:28 AM, Bich Le l...@platform9.com wrote:
 I'd like to contribute.
 But I may need some help / pointers in getting started (I have experience
 with running and hacking openstack, but this would be my first time
 contributing).

 Thanks.

 Bich Le
 Platform9

 On Wed, May 27, 2015 at 3:48 AM, Davanum Srinivas dava...@gmail.com wrote:

 Hi all,

 So the feedback during the Vancouver summit from some of the nova
 cores was that, we needed volunteers to take care of the nova-docker
 driver before it can be considered to merge in the Nova tree.

 As an exercise is resposibility, we need people who can reinstate the
 nova-docker non-voting job (essentially revert [1]) and keep an eye on
 the output of the job every day to make sure when the CI jobs run
 against the nova reviews, they stay green.

 I've cc'ed some folks who expressed interest in the past, please reply
 back to this thread if you wish to join this effort and specifically
 if you can volunteer for watching and fixing the CI as issues arise
 (keeping up with Nova trunk and requirements etc).

 If there are no volunteers here, nova-docker will stay in sourceforge.
 So folks who are using it, please step up.

 Thanks,
 dims

 [1] https://review.openstack.org/#/c/150887/

 --
 Davanum Srinivas :: https://twitter.com/dims





-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][plugin][astute][UI] DSL restrictions with an action: none to display a message?

2015-05-27 Thread Julia Aranovich
Hi,

That's an issue of course. Settings definitely should support 'none' action
in their restrictions. Thank you for catching it!
And we've prepared the *fix*: https://review.openstack.org/#/c/186049/. It
should be merged ASAP.

Best regards,
Julia

On Wed, May 27, 2015 at 5:57 PM, Swann Croiset scroi...@mirantis.com
wrote:

 Folks,

 With our plugin UI definition [0] I'm trying to use a restriction with
 'action: none' to display a message, but nothing happens.
 According to the doc this should just work [1]; btw I didn't find any
 similar example in fuel-web/nailgun.
 So I guess I hit a bug here, or something is wrong with the plugin
 integration, or I missed something.

 Can somebody confirm the bug and help determine whether it should be
 filed against the 'fuel-plugin' or 'fuel' launchpad project?

 Thanks

 [0] https://review.openstack.org/#/c/184981/4/environment_config.yaml,cm
 [1]
 https://github.com/stackforge/fuel-web/blob/master/docs/develop/nailgun/customization/settings.rst#restrictions


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kind Regards,
Julia Aranovich,
Software Engineer,
Mirantis, Inc
+7 (905) 388-82-61 (cell)
Skype: juliakirnosova
www.mirantis.ru
jaranov...@mirantis.com jkirnos...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [DevStack][Neutron] PHYSICAL_NETWORK vs. PUBLIC_PHYSICAL_NETWORK - rant

2015-05-27 Thread Sean M. Collins
Hi,

We have a *lot* of configuration knobs in DevStack for Neutron. I am not
a smart man, so I think we may need to wrap our arms around this and
simplify.

Here's an example.

Can you tell me the difference between

PUBLIC_PHYSICAL_NETWORK and PHYSICAL_NETWORK? 

I had a local.conf with the following:

[[local|localrc]]
HOST_IP=203.0.113.2
FLAT_INTERFACE=eth1
PUBLIC_INTERFACE=eth1
FIXED_RANGE=10.0.0.0/24
FLOATING_RANGE=203.0.113.0/24
PUBLIC_NETWORK_GATEWAY=203.0.113.1

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-meta
enable_service q-l3

Q_USE_SECGROUP=True
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=3001:4000
PHYSICAL_NETWORK=default
OVS_PHYSICAL_BRIDGE=br-ex
Q_L3_ENABLED=True
Q_FLOATING_ALLOCATION_POOL=start=203.0.113.3,end=203.0.113.254

Q_USE_PROVIDERNET_FOR_PUBLIC=True


Which causes the following error during creation:

++ neutron net-create public -- --router:external=True 
--provider:network_type=flat --provider:physical_network=public
++ grep ' id '
++ get_field 2
++ local data field
++ read data
Invalid input for operation: physical_network 'public' unknown for flat 
provider network.
+ EXT_NET_ID=
+ die_if_not_set 586 EXT_NET_ID 'Failure creating EXT_NET_ID for public'


Because the bridge mappings file is set as:

bridge_mappings = default:br-ex

Now, fixing the --physical_network to be default, which I defined in
PHYSICAL_NETWORK in local.conf allows the creation.

vagrant@vagrant-ubuntu-trusty-64:~/devstack$ neutron net-create public -- 
--router:external=True --provider:network_type=flat 
--provider:physical_network=default
Created a new network:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| c2501278-d77b-4af1-af35-55ad8f864c18 |
| mtu   | 0|
| name  | public   |
| provider:network_type | flat |
| provider:physical_network | default  |
| provider:segmentation_id  |  |
| router:external   | True |
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tenant_id | fc618c8151ad4c53b0fccbca89502b8e |
+---+--+

Basically, this boils down to the fact that with PHYSICAL_NETWORK,
DevStack populates the bridge_mappings file, while
PUBLIC_PHYSICAL_NETWORK does not.

Confused yet?
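
For anyone who just wants the local.conf above to work, the obvious (if
untested here) sketch is to make the two variables agree, so the public
network's physical_network matches what lands in bridge_mappings:

    PHYSICAL_NETWORK=default
    PUBLIC_PHYSICAL_NETWORK=default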


-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Liberty Priorities

2015-05-27 Thread Nikhil Komawar
I added some metadata to the etherpad and linked it to the main Glance 
etherpad here https://etherpad.openstack.org/p/liberty-glance . We can 
collaborate better across different product groups on image conversion 
as well as other subjects therein.


Cheers,
Nikhil

On 5/27/15 9:23 AM, Flavio Percoco wrote:

On 26/05/15 17:14 +, Jesse Cook wrote:
I created an etherpad with priorities the RAX team I work on will be focusing
on based on our talks at the summit: https://etherpad.openstack.org/p/
liberty-priorities-rax. Input, guidance, feedback, and collaboration is not
just welcome, it is encouraged and appreciated.


May I ask what Image conversions follow up is meant to do?

I'll be working on a follow up as well and I want to make sure we
don't overlap.

Thanks,
Flavio



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] IRC meetings agenda is now driven from Gerrit !

2015-05-27 Thread Thierry Carrez
Hi everyone,

TL;DR:
IRC meetings list now lives at http://eavesdrop.openstack.org
ical file with meetings is now at
http://eavesdrop.openstack.org/irc-meetings.ical

Long version:

I'm very pleased to announce that we solved a long-standing pain in how
we organize the IRC meetings agenda.

We used to change a wiki page, and then a human had to notice and pick
up the change, check for conflicts and (if all goes well) add it
manually to the iCal. There were a number of back-and-forth as people
kept on suggesting weird recurrence rules or non-UTC times. That was a
generally painful, error-prone process which I used to do, and that Tony
Breeds was kind enough to take over last cycle.

We have wanted to replace this system with a Gerrit-driven automation
for a long time. The meetings description and schedule would live as a
set of YAML files in a git repository, with a specific format to limit
errors. New meetings or meeting changes would be proposed in Gerrit, and
check/gate tests would make sure that there aren't any conflicts. Humans
would just check that the change looks legit, and post-job automation
would take care of updating the iCal and the HTML list of meetings.

This started as a student project at NDSU driven by Lance Bragstad. It
was never really finished though, so I ended up picking up the pieces.
During a work session at the summit in Vancouver last week, the
Infrastructure team pushed the last bits, and it is now live.

It is now two things:

openstack-infra/yaml2ical
This is the Python library piece that generates ical (as well as a
templated meeting index) from a set of YAML descriptions, checking for
conflicts

openstack-infra/irc-meetings
This is the list of OpenStack IRC meetings and the template we use for
the index

The resulting index file is now posted at:
http://eavesdrop.openstack.org
Yes, it is pretty ugly. Feel free to suggest improvements to the
template! It lives as meetingindex.jinja in the irc-meetings repo.

The resulting iCal file is now posted at:
http://eavesdrop.openstack.org/irc-meetings.ical
Please update your calendar application(s) as we'll discontinue usage of
the old one very soon.

For future meeting additions and schedule modifications, please propose
changes to openstack-infra/irc-meetings via Gerrit !
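
For the curious, a meeting definition in that repo is just a small YAML
file, roughly like this (values illustrative):

    project: Example Team Meeting
    chair: Jane Doe
    description: Weekly sync for the Example team.
    schedule:
      - time: '1400'
        day: Tuesday
        irc: openstack-meeting
        frequency: weekly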

The old wiki page at https://wiki.openstack.org/wiki/Meetings now
redirects to the new system.

Thanks to everyone involved for making this possible. Let's keep on
automating the boring tasks !

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: Return request-id to caller

2015-05-27 Thread Doug Hellmann
Excerpts from Kekane, Abhishek's message of 2015-05-27 07:06:56 +:
 Hi Devs,
 
 Each OpenStack service sends a request ID header with HTTP responses. This 
 request ID can be useful for tracking down problems in the logs. However, 
 when operation crosses service boundaries, this tracking can become 
 difficult, as each service has its own request ID. Request ID is not returned 
 to the caller, so it is not easy to track the request. This becomes 
 especially problematic when requests are coming in parallel. For example, 
 glance will call cinder for creating image, but that cinder instance may be 
 handling several other requests at the same time. By using same request ID in 
 the log, user can easily find the cinder request ID that is same as glance 
 request ID in the g-api log. It will help operators/developers to analyse 
 logs effectively.
 
 To address this issue we have come up with following solutions:
 
 Solution 1: Return tuple containing headers and body from respective clients 
 (also favoured by Joe Gordon)
 Reference: 
 https://review.openstack.org/#/c/156508/6/specs/log-request-id-mappings.rst
 
 Pros:
 1. Maintains backward compatibility
 2. Effective debugging/analysing of the problem as both calling service 
 request-id and called service request-id are logged in same log message
 3. Build a full call graph
 4. The end user will be able to know the request-id of the request and can 
 approach the service provider to find the cause of failure of a particular request.
 
 Cons:
 1. The changes need to be done first in cross-projects before making changes 
 in clients
 2. Applications which are using python-*clients need to make the required 
 changes (check the return type of the response)
 
 
 Solution 2:  Use thread local storage to store 'x-openstack-request-id' 
 returned from headers (suggested by Doug Hellmann)
 Reference: 
 https://review.openstack.org/#/c/156508/9/specs/log-request-id-mappings.rst
 
 Add new method 'get_openstack_request_id' to return this request-id to the 
 caller.
 
 Pros:
 1. Doesn't break compatibility
 2. Minimal changes are required in client
 3. Build a full call graph
 
 Cons:
 1. Malicious user can send long request-id to fill up the disk-space, 
 resulting in potential DoS
 2. Changes need to be done in all python-*clients
 3. Last request id should be flushed out in a subsequent call otherwise it 
 will return wrong request id to the caller
 
 
 Solution 3: Unique request-id across OpenStack Services (suggested by Jamie 
 Lennox)
 Reference: 
 https://review.openstack.org/#/c/156508/10/specs/log-request-id-mappings.rst
 
 Get 'x-openstack-request-id' from auth plugin and add it to the request 
 headers. If 'x-openstack-request-id' key is present in the request header, 
 then it will use the same one further or else it will generate a new one.
 
 Dependencies:
 https://review.openstack.org/#/c/164582/ - Include request-id in auth plugin 
 and add it to request headers
 https://review.openstack.org/#/c/166063/ - Add session-object for glance 
 client
 Add 'UserAuthPlugin' and '_ContextAuthPlugin' same as nova in cinder and 
 neutron
 
 
 Pros:
 1. Using same request id for the request crossing multiple service boundaries 
 will help operators/developers identify the problem quickly
 2. Required changes only in keystonemiddleware and oslo_middleware libraries. 
 No changes are required in the python client bindings or OpenStack core 
 services
 
 Cons:
 1. As 'x-openstack-request-id' in the request header will be visible to the 
 user, it is possible to send the same request id for multiple requests, which 
 in turn could create more problems when troubleshooting the cause of a 
 failure, as the request_id middleware will not check for its uniqueness in 
 the scope of the running OpenStack service.
 2. Having the same request ID for all services for a single user API call 
 means you cannot generate a full call graph. For example if a single user's 
 nova API call produces 2 calls to glance you want to be able to differentiate 
 the two different calls.
 
 
 During the Liberty design summit, I had a chance to discuss these designs 
 with some of the core members like Doug, Joe Gordon, Jamie Lennox etc., but 
 we were not able to come to any conclusion on the final design or learn the 
 community's direction on how they want to use this request-id effectively.
 
 However IMO, solution 1 sounds more useful as the debugger will be able to build 
 the full call graph, which can be helpful for analysing gate failures 
 effectively, and the end user will be able to know their request-id and 
 track their request.
 
 I request all community members to go through these solutions and let us know 
 which is the appropriate way to improve the logs by logging request-id.

Robert Collins pointed out that os-profiler is already tracking
requests across REST calls. Does it use one of the proposed methods?
Since os-profiler is seeing more adoption, could we combine efforts
here?
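
For what it's worth, Solution 2 amounts to very little code on the client
side - a minimal sketch, with the helper names assumed rather than taken
from any actual client:

    import threading

    _local = threading.local()

    def _record_request_id(resp_headers):
        # called on every HTTP response the client receives
        _local.request_id = resp_headers.get('x-openstack-request-id')

    def get_openstack_request_id():
        # request-id of the most recent call made by this thread
        return getattr(_local, 'request_id', None)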

Doug


[openstack-dev] [fuel][plugin][astute][UI] DSL restrictions with an action: none to display a message?

2015-05-27 Thread Swann Croiset
Folks,

With our plugin UI definition [0] I'm trying to use a restriction with
'action: none' to display a message, but nothing happens.
According to the doc this should just work [1]; btw I didn't find any
similar example in fuel-web/nailgun.
So I guess I hit a bug here, or something is wrong with the plugin
integration, or I missed something.

Can somebody confirm the bug and help determine whether it should be
filed against the 'fuel-plugin' or 'fuel' launchpad project?

Thanks

[0] https://review.openstack.org/#/c/184981/4/environment_config.yaml,cm
[1]
https://github.com/stackforge/fuel-web/blob/master/docs/develop/nailgun/customization/settings.rst#restrictions
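
For reference, the kind of restriction being attempted looks roughly like
the sketch below (simplified - the field names and the condition are made
up; the real thing is in [0]):

    attributes:
      my_checkbox:
        type: checkbox
        label: "Enable the feature"
        value: false
        restrictions:
          - condition: "settings:common.debug.value == true"
            action: "none"
            message: "Debug mode is enabled; this option may be noisy."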
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar][all] Zaqar will stay... Lots of work ahead

2015-05-27 Thread Zane Bitter

On 27/05/15 12:42, Clint Byrum wrote:


== Crazy idea section ==

One thing I never had a chance to discuss with any of the Zaqar devs that
I would find interesting is an email-only backend for Zaqar. Basically
make Zaqar an HTTP-to-email gateway. There are quite a few hyper-scale
options for SMTP and IMAP, and they're inherently multi-tenant, so I'd
find it interesting to see if the full Zaqar API could be mapped onto
that. This would probably be more comfortable to scale for some deployers
than Redis or MongoDB, and might have the nice side-effect that a deployer
could expose IMAP IDLE for efficient end-user subscription,


Can you guarantee delivery end-to-end (and still get the scaling 
benefits)? Because AIUI SMTP is only best effort, and that makes this 
idea a non-starter IMHO.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][infra] Functional gate expansion

2015-05-27 Thread Steven Dake (stdake)


On 5/26/15, 2:16 PM, jpee...@redhat.com jpee...@redhat.com wrote:

(Trying to summarize discussions from earlier on IRC)

On Mon, May 25, 2015 at 06:54:43PM +, Steven Dake (stdake) wrote:
Hey fellow Kolla devs,

With Sam's recent change to add build from source as an option and build
from Debuntian binaries as an option, we will end up in a situation
where our gate will take 4+ hours to build all of the images and run the
functional tests.  I would like to separate each major distro and source
type with a separate functional gate, for example:

centos-rdo
fedora-rdo
ubuntu-binary
debian-binary
centos-source
fedora-source
debian-source
ubuntu-source

I propose separating each of these as a separate non-voting check job.
What needs to happen in our image building scripts, our functional
tests, and the project-config repo to make this happen?

Sam said he was working on a patch that allowed CLI args to be passed in
to set the prefix. Then it appears that tox supports passing arguments
to underlying tests, so a new argument could be passed to
test_images.py, and that file be modified to pass the different prefix
to build-all-docker-images. (The project-config repo would be nearly
identical, just with new jobs executing tox with the different
argument.)
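
Concretely, that would just mean the per-distro jobs vary the tox
invocation, e.g. (flag names hypothetical):

    tox -e functional -- --base centos --type source
    tox -e functional -- --base ubuntu --type binary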

I really wish we were somehow using caching, though there is one
downside. If caching were used and a network resource goes down that was
cached, the build would succeed, which could be confusing. Then again
that very con could be considered a pro as being more robust to third
party failures.

Today on IRC it was mentioned that pushing images would take hours, so I
think we should leverage Docker's trusted builds and not even bother
trying to push images from our test run (another potential solution).
The flow would look something like:
submitted to gerrit
approved
gating performed
commit permitted to land in repo
github commit hook triggers trusted build
trusted build updated to latest and available for all to download from
docker registry

Looking into this further, we'd need infrastructure help to get
permissions on github to create the webhooks.

I am pretty sure webhooks for our use case are a nonstarter for the infra
team.  It was discussed in the past at the start of kolla and I believe
this was the conclusion of the infra team.  But I've added the [infra] tag
for confirmation.


Once we get to this point we should be able to do image pulls of trusted
builds, greatly accelerating the build process. However, if this is
deemed too risky, the trusted builds would at least allow community
users to stay up to date easily without any long build times.

I'd like to make our current functional gate voting asap, but don't want
to block build from source to make that happen.

ASAP? I thought we discussed leaving the job to run for a while before
making it voting. But if voting is to be turned on in the near term,
obviously we'd start without caching and just using the centos images
for now. As far as I can tell, the review is ready:
https://review.openstack.org/#/c/183417/

Thoughts?

I changed my mind re voting gate.  My rationale is if the gate votes,
nobody will break it :)

We do need a week or two of soak time to make sure what we have isn't
producing heisenbugs :)

Regards
-steve




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: Return request-id to caller

2015-05-27 Thread Mike Bayer



On 5/27/15 3:06 AM, Kekane, Abhishek wrote:


Hi Devs,

Each OpenStack service sends a request ID header with HTTP responses. 
This request ID can be useful for tracking down problems in the logs. 
However, when operation crosses service boundaries, this tracking can 
become difficult, as each service has its own request ID. Request ID 
is not returned to the caller, so it is not easy to track the request. 
This becomes especially problematic when requests are coming in 
parallel. For example, glance will call cinder for creating image, but 
that cinder instance may be handling several other requests at the 
same time. By using same request ID in the log, user can easily find 
the cinder request ID that is same as glance request ID in the g-api 
log. It will help operators/developers to analyse logs effectively.


To address this issue we have come up with following solutions:

Solution 1: Return tuple containing headers and body from respective 
clients (also favoured by Joe Gordon)


Reference: 
https://review.openstack.org/#/c/156508/6/specs/log-request-id-mappings.rst




I like solution 1 as well as solution 3 at the same time, in fact. 
There's usefulness to being able to easily identify a set of requests as 
all part of the same operation as well as being able to identify a 
call's location in the hierarchy.


In fact, does solution #1 make the hierarchy apparent? I'd want it to
do that, e.g. if call A calls B, which calls C and D, I'd want to know
that the dependency tree is A -> B -> (C, D), and not just a bucket of (A,
B, C, D).
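
With solution 1 the child's id is at least right there for the caller to
log against its own - a rough sketch, method and helper names invented:

    def list_images(self):
        resp, body = self.http_client.get('/v2/images')
        # hand both back so the caller can log the child request-id
        return resp.headers, body

    # caller side:
    # headers, images = glance.list_images()
    # LOG.debug('glance request-id %s (parent %s)',
    #           headers.get('x-openstack-request-id'), context.request_id)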


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Updates from the Summit

2015-05-27 Thread Davanum Srinivas
Josh,

thanks. yes, the kombu vs pika is just a research thing someone should
look at. no blueprint, no spec, so not approved in any way :)

-- dims

On Wed, May 27, 2015 at 1:24 PM, Joshua Harlow harlo...@outlook.com wrote:
 Thanks dims!

 Good write up,

 Other things I noted (more controversial ones): we need to come up
 with a concurrency strategy (and/or guidelines, and/or best practices). At
 least I feel this is a way that works, and imho it implies that one
 concurrency strategy will (likely) not fit every project; the best we can
 do is try to offer best practices.

 Also for the pika one, I'd really like to understand why not kombu. I don't
 know enough of the background, but from that session it looks like we need
 to do some comparative analysis (and imho get feedback from asksol[1] and
 others) before we go too deep down that rabbit hole (no jump to another
 'greener pasture' imho should be done without all of this, to avoid pissing
 off the two [kombu, pika] communities).

 My 2 cents :-P

 [1] https://github.com/ask

 -Josh

 Davanum Srinivas wrote:

 Hi Team,

 Here are the etherpads from the summit[1].
 Some highlights are as follows:
 Oslo.messaging : Took status of the existing zmq driver, proposed a
 new driver in parallel to the existing zmq driver. Also looked at
 possibility of using Pika with RabbitMQ. Folks from pivotal promised
 to help with our scenarios as well.
 Oslo.rootwrap : Debated daemon vs a new privileged service. The Nova
 change to add rootwrap as daemon is on hold pending progress on the
 privsep proposal/activity.
 Oslo.versionedobjects : We had a nice presentation from Dan about what
 o.vo can do and a deepdive into what we could do in next release.
 Taskflow : Josh and team came up with several new features and how to
 improve usability

 We will also have several new libraries in Liberty (oslo.cache,
 oslo.service, oslo.reports, futurist, automaton etc). We talked about
 our release processes, functional testing, deprecation strategies and
 debated a bit about how best to move to async models as well. Please
 see etherpads for detailed information.

 thanks,
 dims

 [1] https://wiki.openstack.org/wiki/Design_Summit/Liberty/Etherpads#Oslo


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] dropping Py26 support

2015-05-27 Thread John Dickinson
All,

A month ago I sent a question out asking "should we drop Py26 support in 
Swift?". I heard zero people asking us to keep support for it on the mailing 
list. At the summit, we brought it up again with operators, and the consensus 
of the room was to drop support.

What does this mean? Simply, we'll turn off the Py26 unit test checks. Pretty 
soon, I'm sure, we'll start bringing in things that are only py27+, probably 
first in tests, and then elsewhere. One practical result is that this will make 
any eventual move to py3 much simpler. Longer term, we'll be able to 
intentionally remove some things that are specifically there because of py26 
(like simplejson).
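
That one is the familiar py26-era fallback import, which a py27-only floor
lets us delete outright - roughly:

    try:
        import simplejson as json
    except ImportError:
        import json  # stdlib json is fine once python 2.7 is the floor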

Swift 2.3.0 (ie the version released in Kilo) is the very last version of Swift 
that supports Py26.

I'd especially like to thank Rackspace for supporting this decision. Rackspace 
has the oldest Swift clusters, and they are currently in the process of moving 
off of some original hardware. Until that happens, they will still have some 
older software that uses Py26 and can't be quickly moved off because of 
hardware driver issues. However, at the summit Rackspace was supportive of the 
decision. Everyone else is already using newer versions of Python.


--John








signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar][all] Zaqar will stay... Lots of work ahead

2015-05-27 Thread Clint Byrum
Excerpts from Flavio Percoco's message of 2015-05-26 01:28:06 -0700:
 Greetings,
 
 TL;DR: Thanks everyone for your feedback. Based on the discussed plans
 at the summit - I'll be writing more about these later - Zaqar will
 stick around and play its role in the community.
 
 Summit Summary
 ==
 
 I'm happy to say that several use cases were discussed at the summit
 and the difference from previous summits is that we left the room with
 some action items to make them happen.
 
 Cross-project user-facing notifications
 ===
 
 https://etherpad.openstack.org/p/liberty-cross-project-user-notifications
 
 Besides brainstorming a bit on what things should/should not be
 notified and what format should be used, we also talked a bit about
 the available technologies that could be used for these tasks. Zaqar
 was among those and, AFAICT, at the end of the session we agreed on
 giving this a try. It'll likely not happen as fast as we want but the
 action item out of this session was to write a cross-project spec
 describing the things discussed and the technology that will be
 adopted.
 

My takeaway from that session was that there's a need for something
like yagi to filter the backend notifications into user-consumable
tenant-scoped messages, and that Zaqar would be an interesting target for
those messages along with Atom feeds or perhaps other pub/sub oriented
things that deployers would be comfortable hosting for their users.

 Heat + Zaqar
 
 
 The 2 main areas where Zaqar will be used in Heat are Software Config
 and Hooks. The minimum requirements (server side) for this are in
 place already. There's some work to do on the client side that the
 team will get to asap.
 

The bigger one to me is just user notification, which I think is covered
in the cross project session, but it's worth noting that Heat is one
of the projects that already does some user notification and suffers
problems because of it (the events API is what I mean here).

 Next Steps
 ==
 
 In light of the above, and as mentioned in the TL;DR, Zaqar will stick
 around and the team, as promised, will focus on making those
 integrations happen. The team is small, which means we'll carefully
 pick the tasks we'll be spending time on.
 
 As a first step, we should restore our meetings and get to work right
 away. To favor our contributors in NZ, next week's meeting will be at
 21:00 UTC and we'll keep it at that time for 2 weeks.
 
 For the Zaqar team (and folks interested), I'll be sending out further
 emails to sync on the work to do.
 
 Special thanks for all the folks that showed interest, participated in
 sessions and that are committed on making this happen.
 

Thanks for setting things up for success before the summit. I think we
all went into the discussions with an open mind because we knew where
the team stood.


== Crazy idea section ==

One thing I never had a chance to discuss with any of the Zaqar devs that
I would find interesting is an email-only backend for Zaqar. Basically
make Zaqar an HTTP-to-email gateway. There are quite a few hyper-scale
options for SMTP and IMAP, and they're inherently multi-tenant, so I'd
find it interesting to see if the full Zaqar API could be mapped onto
that. This would probably be more comfortable to scale for some deployers
than Redis or MongoDB, and might have the nice side-effect that a deployer
could expose IMAP IDLE for efficient end-user subscription, and it could
also allow Zaqar to serve as email-as-a-service for senders too (to
prevent getting all your vms' IPs added to spam lists overnight. ;)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Updates from the Summit

2015-05-27 Thread Joshua Harlow

Thanks dims!

Good write up,

Other things I noted (more controversial ones): we need to come 
up with a concurrency strategy (and/or guidelines, and/or best 
practices). At least I feel this is a way that works, and imho it 
implies that one concurrency strategy will (likely) not fit every 
project; the best we can do is try to offer best practices.


Also for the pika one, I'd really like to understand why not kombu. I 
don't know enough of the background, but from that session it looks like 
we need to do some comparative analysis (and imho get feedback from 
asksol[1] and others) before we go too deep down that rabbit hole (no 
jump to another 'greener pasture' imho should be done without all of 
this, to avoid pissing off the two [kombu, pika] communities).
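
To make the comparison concrete, here is a minimal, purely illustrative sketch
(not oslo.messaging code) of publishing one message with each library, assuming
a local RabbitMQ with default credentials and made-up queue names:

import pika
from kombu import Connection, Exchange, Queue

# kombu: declarative Exchange/Queue objects, publish through a Producer
exchange = Exchange('demo', type='direct')
queue = Queue('demo_queue', exchange, routing_key='demo')
with Connection('amqp://guest:guest@localhost//') as conn:
    producer = conn.Producer()
    producer.publish({'hello': 'world'}, exchange=exchange,
                     routing_key='demo', declare=[queue])

# pika: a thin wrapper around the raw AMQP channel operations
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='demo_queue')
channel.basic_publish(exchange='', routing_key='demo_queue', body='hello world')
connection.close()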


My 2 cents :-P

[1] https://github.com/ask

-Josh

Davanum Srinivas wrote:

Hi Team,

Here are the etherpads from the summit[1].
Some highlights are as follows:
Oslo.messaging : Took status of the existing zmq driver, proposed a
new driver in parallel to the existing zmq driver. Also looked at
possibility of using Pika with RabbitMQ. Folks from pivotal promised
to help with our scenarios as well.
Oslo.rootwrap : Debated daemon vs a new privileged service. The Nova
change to add rootwrap as daemon is on hold pending progress on the
privsep proposal/activity.
Oslo.versionedobjects : We had a nice presentation from Dan about what
o.vo can do and a deepdive into what we could do in next release.
Taskflow : Josh and team came up with several new features and how to
improve usability

We will also have several new libraries in Liberty (oslo.cache,
oslo.service, oslo.reports, futurist, automaton etc). We talked about
our release processes, functional testing, deprecation strategies and
debated a bit about how best to move to async models as well. Please
see etherpads for detailed information.

thanks,
dims

[1] https://wiki.openstack.org/wiki/Design_Summit/Liberty/Etherpads#Oslo



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Liberty Priorities

2015-05-27 Thread Jesse Cook
On 5/27/15, 8:23 AM, Flavio Percoco fla...@redhat.com wrote:


On 26/05/15 17:14 +, Jesse Cook wrote:
I created an etherpad with priorities the RAX team I work on will be
focusing
on based on our talks at the summit: https://etherpad.openstack.org/p/
liberty-priorities-rax. Input, guidance, feedback, and collaboration are not
just welcome, they are encouraged and appreciated.

May I ask what Image conversions follow up is meant to do?

I'll be working on a follow up as well and I want to make sure we
don't overlap.

Thanks,
Flavio

-- 
@flaper87
Flavio Percoco

Not sure. I requested clarification on the etherpad. I'll be focusing on
the first 6 items as of right now.

Thanks,

Jesse


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] [infra] App Catalog infrastructure integration (was: App Catalog next steps)

2015-05-27 Thread Jeremy Stanley
[Operators ML dropped from Cc as my reply is off-topic there]

On 2015-05-22 21:06:32 -0700 (-0700), Christopher Aedo wrote:
[...]
 - I'll be working with the OpenStack infra team to get the server
 and CI set up in their environment (though that work will not
 impact the catalog as it stands today).
[...]

I'm extremely excited about this, and eager to help. OpenStack
project infrastructure is maintained by the community through
collaborative configuration management and automation kept in Git
repositories and updated via changes proposed to our code review
system just like all other OpenStack projects.

http://docs.openstack.org/infra/system-config/project.html

I see a deployment subdirectory in your stackforge/apps-catalog
repo, but it looks like it's for building a third-party CI test
system... it's also possible I'm just not understanding your
deployment mechanisms.

Anyway, how would be most convenient for you to engage with me and
other members of our community so we can help get started on this
with you? We can continue this here on the -dev mailing list, or
move the subthread to the openstack-infra ML, or just do some ad-hoc
planning in IRC if you prefer (perhaps in the #openstack-infra
channel).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Expected Manila behavior for creation of share from snapshot

2015-05-27 Thread yang, xing
Hi Valeriy,

VNX can support creating a share from snapshot using a different share network.

Thanks,
Xing



From: Valeriy Ponomaryov [mailto:vponomar...@mirantis.com]
Sent: Wednesday, May 27, 2015 8:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Manila] Expected Manila behavior for creation of 
share from snapshot

Hi everyone,

At the last IRC meeting
(http://eavesdrop.openstack.org/meetings/manila/2015/manila.2015-05-14-15.00.log.html)
the following question was raised:

Whether Manila should allow us to create shares from snapshots with different 
share networks or not?

What do users/admins expect in that case?

For the moment, Manila restricts creation of shares from a snapshot with a share 
network that is different from the parent's.

From the user's point of view, they may want to copy a share and use the copy in a 
different network, which is a valid case.

From the developer's point of view, they will be forced to rework the share 
server creation logic for the driver they maintain.

Also, how many back-ends are able to support such a feature?

Regards,
Valeriy Ponomaryov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [app-catalog] [infra] App Catalog infrastructure integration (was: App Catalog next steps)

2015-05-27 Thread Fox, Kevin M
Should we talk about having a repo where contributed resources could be gated? 
I have some templates I'd like to eventually commit, but it would be good to 
have some infrastructure in place to make sure they don't bit rot.

Thanks,
Kevin

From: Jeremy Stanley [fu...@yuggoth.org]
Sent: Wednesday, May 27, 2015 12:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [app-catalog] [infra] App Catalog infrastructure 
integration (was: App Catalog next steps)

[Operators ML dropped from Cc as my reply is off-topic there]

On 2015-05-22 21:06:32 -0700 (-0700), Christopher Aedo wrote:
[...]
 - I'll be working with the OpenStack infra team to get the server
 and CI set up in their environment (though that work will not
 impact the catalog as it stands today).
[...]

I'm extremely excited about this, and eager to help. OpenStack
project infrastructure is maintained by the community through
collaborative configuration management and automation kept in Git
repositories and updated via changes proposed to our code review
system just like all other OpenStack projects.

http://docs.openstack.org/infra/system-config/project.html

I see a deployment subdirectory in your stackforge/apps-catalog
repo, but it looks like it's for building a third-party CI test
system... it's also possible I'm just not understanding your
deployment mechanisms.

Anyway, how would be most convenient for you to engage with me and
other members of our community so we can help get started on this
with you? We can continue this here on the -dev mailing list, or
move the subthread to the openstack-infra ML, or just do some ad-hoc
planning in IRC if you prefer (perhaps in the #openstack-infra
channel).
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Some Changes to Cinder Core

2015-05-27 Thread Eric Harney
On 05/22/2015 07:34 PM, Mike Perez wrote:
 This is long overdue, but it gives me great pleasure to nominate Sean
 McGinnis for
 Cinder core.
 
 
 Cinder core, please reply with a +1 for approval. This will be left
 open until May 29th. Assuming there are no objections, this will go
 forward after voting is closed.
 

+1 from me!



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [app-catalog] [infra] App Catalog infrastructure integration (was: App Catalog next steps)

2015-05-27 Thread Jeremy Stanley
On 2015-05-27 21:15:58 + (+), Fox, Kevin M wrote:
 Should we talk about having a repo where contributed resources
 could be gated? I have some templates I'd like to eventually
 commit, but it would be good to have some infrastructure in place
 to make sure they don't bit rot.

It looks like right now the templates are just represented by URLs
in structured data files within the static Web content tree, and
point to random locations around the Internet. It seems reasonable
to assume that some of these might be developed as individual or
aggregate repos within the OpenStack community and served
from/tested on our infrastructure... at least the current design of
the app-catalog doesn't appear to make that particularly
complicated.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-27 Thread Derek Higgins

On 27/05/15 09:14, Thomas Goirand wrote:


Hi all,

tl;dr:
- We'd like to push distribution packaging of OpenStack on upstream
gerrit with reviews.
- The intention is to better share the workload, and improve the overall
QA for packaging *and* upstream.
- The goal is *not* to publish packages upstream
- There's an ongoing discussion about using stackforge or openstack.
This isn't, IMO, that important, what's important is to get started.
- There's an ongoing discussion about using a distribution specific
namespace; my own opinion here is that using /openstack-pkg-{deb,rpm} or
/stackforge-pkg-{deb,rpm} would be the most convenient because of a
number of technical reasons like the number of Git repositories.
- Finally, let's not discuss for too long and let's do it!!! :)

Longer version:

Before I start: some stuff below is just my own opinion, others are just
given facts. I'm sure the reader is smart enough to guess which is what,
and we welcome anyone involved in the project to voice an opinion if
he/she differs.

During the Vancouver summit, operators, Canonical, Fedora and Debian
people gathered and collectively expressed the will to maintain
packaging artifacts within upstream OpenStack Gerrit infrastructure. We
haven't decided all the details of the implementation, but spent the
Friday morning together with members of the infra team (hi Paul!) trying
to figure out what and how.

A number of topics have been raised, which needs to be shared.

First, we've been told that such a topic deserved a message to the dev
list, in order to let groups who were not present at the summit know. Yes,
there was a consensus among distributions that this should happen, but
still, it's always nice to let everyone know.

So here it is. Suse people (and other distributions), you're welcome to
join the effort.

- Why do this

It's been clear to both Canonical/Ubuntu teams, and Debian (ie: myself)
that we'd be a way more effective if we worked better together, on a
collaborative fashion using a review process like on upstream Gerrit.
But also, we'd like to welcome anyone, and especially the operations
folks, to contribute and give feedback. Using Gerrit is the obvious way
to give everyone a say on what we're implementing.

As OpenStack is welcoming every day more and more projects, it's making
even more sense to spread the workload.

This is becoming easier for Ubuntu guys as Launchpad now understands not
only BZR, but also Git.

We'd start by merging all of our packages that aren't core packages
(like all the non-OpenStack maintained dependencies, then the Oslo libs,
then the clients). Then we'll see how we can try merging core packages.

Another reason is that we believe working with the infra of OpenStack
upstream will improve the overall quality of the packages. We want to be
able to run a set of tests at build time, which we already do on each
distribution, but now we want this on every proposed patch. Later on,
when we have everything implemented and working, we may explore doing a
package based CI on every upstream patch (though, we're far from doing
this, so let's not discuss this right now please, this is a very long
term goal only, and we will have a huge improvement already *before*
this is implemented).

- What it will *not* be
===
We do not have the intention (yet?) to publish the resulting packages
built on upstream infra. Yes, we will share the same Git repositories,
and yes, the infra will need to keep a copy of all builds (for example,
because core packages will need oslo.db to build and run unit tests).
But we will still upload to each distribution's own repositories separately.
So publishing packages from the infra isn't currently being discussed. We could
get to this topic once everything is implemented, which may be nice
(because we'd have packages following trunk), though please refrain from
engaging in this topic right now: getting the implementation done is more
important for the moment. Let's try to stay on track and be constructive.

- Let's keep efficiency in mind
===
Over the last few years, I've been able to maintain all of OpenStack in
Debian with little to no external contribution. Let's hope that the
Gerrit workflow will not slow down the packaging work too much, even if
there's an unavoidable overhead. Hopefully, we can implement some
liberal ACL policies for the core reviewers so that the Gerrit workflow
doesn't slow anyone down too much. For example we may be able to create
new repositories very fast, and it may be possible to self-approve some
of the most trivial patches (for things like typo in a package
description, adding new debconf translations, and such obvious fixes, we
shouldn't waste our time).

There's a middle ground between the current system (ie: only write
access ACLs for git.debian.org with no other check whatsoever) and a
too restrictive fully protected gerrit 

Re: [openstack-dev] [Openstack] PCI pass-through SRIOV

2015-05-27 Thread Kamsali, RaghavendraChari (Artesyn)
Hi,

I resolved it by adding (or uncommenting) /dev/vfio/vfio in the cgroup_device_acl 
list in the /etc/libvirt/qemu.conf file and restarting the libvirtd service.
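
For reference, the relevant stanza looks roughly like this on most distros (the
exact file contents may differ on your system; the important part is that
/dev/vfio/vfio is present in the list):

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/vfio/vfio"
]

followed by a libvirtd restart (e.g. systemctl restart libvirtd, or service
libvirtd restart, depending on the distro).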

Now when I list the nova instances, no networks are shown:

[stack@Controller images]$ nova list
+--------------------------------------+----------+--------+------------+-------------+----------+
| ID                                   | Name     | Status | Task State | Power State | Networks |
+--------------------------------------+----------+--------+------------+-------------+----------+
| 6cd81ab3-d60f-4511-ad5d-7d337af3cacd | ubuntuvm | ACTIVE | -          | Running     |          |
+--------------------------------------+----------+--------+------------+-------------+----------+
[stack@Controller images]$



-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: Tuesday, May 26, 2015 9:45 PM
To: Moshe Levi
Cc: Kamsali, RaghavendraChari [ENGINEERING/IN]; OpenStack Development Mailing 
List (not for usage questions); openst...@lists.openstack.org
Subject: Re: [Openstack] [openstack-dev] PCI pass-through SRIOV

- Original Message -
 From: Moshe Levi mosh...@mellanox.com
 To: RaghavendraChari Kamsali (Artesyn) 
 raghavendrachari.kams...@artesyn.com, OpenStack Development Mailing 
 List
 
 This is a different  error
 
 2015-05-26 13:34:08.081 TRACE nova.compute.manager [instance:
 101776a0-cd2e-47b9-bdc4-1097782201c6] if ret == -1: raise libvirtError
 ('virDomainCreateWithFlags() failed', dom=self)
 2015-05-26 13:34:08.081 TRACE nova.compute.manager [instance:
 101776a0-cd2e-47b9-bdc4-1097782201c6] libvirtError: internal error: 
 process exited while connecting to monitor: 2015-05-26T04:34:07.980897Z 
 qemu-kvm:
 -device vfio-pci,host=81:02.3,id=hostdev0,bus=pci.0,addr=0x4: vfio: 
 failed to open /dev/vfio/vfio: Operation not permitted
 2015-05-26 13:34:08.081 TRACE nova.compute.manager [instance:
 101776a0-cd2e-47b9-bdc4-1097782201c6] 2015-05-26T04:34:07.980951Z qemu-kvm:
 -device vfio-pci,host=81:02.3,id=hostdev0,bus=pci.0,addr=0x4: vfio: 
 failed to setup container for group 49
 2015-05-26 13:34:08.081 TRACE nova.compute.manager [instance:
 101776a0-cd2e-47b9-bdc4-1097782201c6] 2015-05-26T04:34:07.980970Z qemu-kvm:
 -device vfio-pci,host=81:02.3,id=hostdev0,bus=pci.0,addr=0x4: vfio: 
 failed to get group 49
 2015-05-26 13:34:08.081 TRACE nova.compute.manager [instance:
 101776a0-cd2e-47b9-bdc4-1097782201c6] 2015-05-26T04:34:07.980995Z qemu-kvm:
 -device vfio-pci,host=81:02.3,id=hostdev0,bus=pci.0,addr=0x4: Device 
 initialization failed.
 2015-05-26 13:34:08.081 TRACE nova.compute.manager [instance:
 101776a0-cd2e-47b9-bdc4-1097782201c6] 2015-05-26T04:34:07.981019Z qemu-kvm:
 -device vfio-pci,host=81:02.3,id=hostdev0,bus=pci.0,addr=0x4: Device 
 'vfio-pci' could not be initialized
 
 You are using intel card therefore I think you should contact them and 
 ask if this card is supported.

In addition there are a number of Intel cards for which ACS quirks had to be 
added to the kernel, this was done fairly recently in patches like this one:

http://www.spinics.net/lists/kernel/msg1951202.html

You may want to check whether a) your card is one of those impacted and b) your 
kernel has these patches, though your output does not appear to be an exact 
match for what we were seeing in 
https://bugzilla.redhat.com/show_bug.cgi?id=1141399

Thanks,

--
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] dashboard-app split in horizon

2015-05-27 Thread Rob Cresswell (rcresswe)
Went through the files myself and I concur. Most of these files define pieces 
specific to our implementation of the dashboard, so should be moved.

I’m not entirely sure on where _messages should sit. As we move forward, won’t 
that file just end up as a toast element and nothing more? Maybe I’m 
misinterpreting it, I’m not familiar with toastService.

Rob


From: Richard Jones r1chardj0...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, 26 May 2015 01:35
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: Johanson, Tyr H t...@hp.com
Subject: Re: [openstack-dev] [Horizon] dashboard-app split in horizon

As a follow-up to this [in the misguided hope that anyone will actually read 
this conversation with myself ;-)] I've started looking at the base.html split. 
At the summit last week, we agreed to:

1. move base.html over from the framework to the dashboard, and
2. move the _conf.html and _scripts.html over as well, since they configure the 
application (dashboard).

Upon starting the work it occurs to me that all of the other files referenced 
by base.html should also move. So, here's the complete list of base.html 
components and whether they should move over in my opinion:

- horizon/_custom_meta.html
  Yep, is an empty file in horizon, intended as an extension point in 
dashboard. The empty file (plus an added comment) should move.
- horizon/_stylesheets.html
  Is just a dummy in horizon anyway, should move.
- horizon/_conf.html
  Yep, should move.
- horizon/client_side/_script_loader.html
  Looks to be a framework component not intended for override, so we should 
leave it there.
- horizon/_custom_head_js.html
  Yep, is an empty file in horizon, intended as an extension point in 
dashboard. Move, with a comment added.
- horizon/_header.html
  There is a basic implementation in framework but the real (used) 
implementation is in dashboard, so should move.
- horizon/_messages.html
  This is a framework component, so I think it should stay there. I'm not sure 
whether anyone would ever wish to override this. Also the bulk of it is 
probably going to be replaced by the toast implementation anyway... hmm...
- horizon/common/_sidebar.html
  This is an overridable component that I think should move.
- horizon/common/_page_header.html
  This is an overridable component that I think should move.
- horizon/_scripts.html
  Yep, should move.

Thoughts, anyone who has read this far?


Richard


On Sat, 23 May 2015 at 11:46 Richard Jones 
r1chardj0...@gmail.com wrote:
As part of the ongoing Horizon project code reorganisation, we today agreed to 
clean up the Horizon-the-Framework and OpenStack Dashboard separation issue by 
doing a couple of things:

1. nuke (the recently-created) horizon dashboard-app by moving the angular app 
over to dashboard and the other contents to appropriate places (mostly under 
the heading of tech-debt :)
2. move base.html, _conf.html and _scripts.html from horizon over to dashboard.

Thanks to Cindy, Sean and Thai for the pair (er triple?) programming keeping me 
honest today.

The first step is done and captured in several linked patches based off your 
leaf patch ngReorg - Create dashboard-app 
https://review.openstack.org/#/c/184597/ (yes, I am nuking the thing created 
by your patch).

I've not done the second step, but might find some time since I have 6 hours to 
waste in LAX tomorrow.


 Richard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] patch about pagination info at top and bottom of table

2015-05-27 Thread Tzu Lao
Hi All,
This is my first time contributing code to OpenStack.

I made a patch for pagination info at top and bottom of table.
https://review.openstack.org/#/c/183963/

Cindy said:
Tried this out on the flavors table. Set page number to 2, but it still
displayed all flavors.

But there is no way to set the page number to 2, even though I created 140 flavors.
Horizon just shows all flavors on one page.

What am I doing wrong?

Thanks

Kuo-tung Kao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] cross project communication: Return request-id to caller

2015-05-27 Thread Kekane, Abhishek
Hi Devs,

Each OpenStack service sends a request ID header with HTTP responses. This 
request ID can be useful for tracking down problems in the logs. However, when 
an operation crosses service boundaries, this tracking can become difficult, as 
each service has its own request ID. The request ID is not returned to the caller, 
so it is not easy to track the request. This becomes especially problematic 
when requests are coming in parallel. For example, glance will call cinder to 
create an image, but that cinder instance may be handling several other requests 
at the same time. If the cinder request ID were logged together with the glance 
request ID, a user could easily find the matching cinder request in the g-api log. 
It would help operators/developers analyse logs effectively.

To address this issue we have come up with following solutions:

Solution 1: Return tuple containing headers and body from respective clients 
(also favoured by Joe Gordon)
Reference: 
https://review.openstack.org/#/c/156508/6/specs/log-request-id-mappings.rst

Pros:
1. Maintains backward compatibility
2. Effective debugging/analysing of the problem as both calling service 
request-id and called service request-id are logged in same log message
3. Build a full call graph
4. The end user will be able to know the request-id of the request and can approach 
the service provider to learn the cause of failure of a particular request.

Cons:
1. The changes need to be done first in cross-projects before making changes in 
clients
2. Applications which are using python-*clients need to make the required changes 
(check the return type of the response)


Solution 2:  Use thread local storage to store 'x-openstack-request-id' 
returned from headers (suggested by Doug Hellmann)
Reference: 
https://review.openstack.org/#/c/156508/9/specs/log-request-id-mappings.rst

Add new method 'get_openstack_request_id' to return this request-id to the 
caller.

Pros:
1. Doesn't break compatibility
2. Minimal changes are required in client
3. Build a full call graph

Cons:
1. A malicious user can send a long request-id to fill up the disk space, resulting 
in a potential DoS
2. Changes need to be done in all python-*clients
3. The last request id should be flushed out on a subsequent call, otherwise it will 
return the wrong request id to the caller
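
As a purely illustrative sketch of solution 2 (not actual client code; the
helper names are made up), the thread-local storage could look like this:

import threading

_local = threading.local()

def _store_request_id(response):
    # The client would call this after every HTTP request it makes.
    _local.request_id = response.headers.get('x-openstack-request-id')

def get_openstack_request_id():
    # Returns the request-id of the most recent call made in this thread,
    # or None if no call has been made yet.
    return getattr(_local, 'request_id', None)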


Solution 3: Unique request-id across OpenStack Services (suggested by Jamie 
Lennox)
Reference: 
https://review.openstack.org/#/c/156508/10/specs/log-request-id-mappings.rst

Get 'x-openstack-request-id' from auth plugin and add it to the request 
headers. If the 'x-openstack-request-id' key is present in the request header, then 
the same one will be used downstream; otherwise a new one will be generated.

Dependencies:
https://review.openstack.org/#/c/164582/ - Include request-id in auth plugin 
and add it to request headers
https://review.openstack.org/#/c/166063/ - Add session-object for glance client
Add 'UserAuthPlugin' and '_ContextAuthPlugin' same as nova in cinder and neutron


Pros:
1. Using the same request id for a request crossing multiple service boundaries 
will help operators/developers identify the problem quickly
2. Required changes only in keystonemiddleware and oslo_middleware libraries. 
No changes are required in the python client bindings or OpenStack core services

Cons:
1. As 'x-openstack-request-id' in the request header will be visible to the 
user, it is possible to send the same request id for multiple requests, which in 
turn could create more problems when troubleshooting the cause of a failure, 
as the request_id middleware will not check its uniqueness within the scope of the 
running OpenStack service.
2. Having the same request ID for all services for a single user API call means 
you cannot generate a full call graph. For example if a single user's nova API 
call produces 2 calls to glance you want to be able to differentiate the two 
different calls.


During the Liberty design summit, I had a chance to discuss these designs 
with some of the core members like Doug, Joe Gordon, Jamie Lennox etc., but we were 
not able to come to any conclusion on the final design or learn which direction 
the community wants to take to use this request-id effectively.

However IMO, solution 1 sounds more useful, as the person debugging will be able 
to build the full call graph, which can be helpful for analysing gate failures 
effectively, and the end user will be able to know their request-id and can 
track their request.

I request all community members to go through these solutions and let us know 
which is the appropriate way to improve the logs by logging request-id.


Thanks  Regards,

Abhishek Kekane

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and 

Re: [openstack-dev] [Manila] About how to hide the dummy destination record during migration

2015-05-27 Thread Valeriy Ponomaryov
Hello Vincent Hou,

We, the Manila folks, are about to merge one of the new features - private driver
storage [1]. It is going to serve as non-user-facing data storage
related to any resource that can be reached by both the API and the share driver.

And in the case of share migration, it will be possible to avoid creating a
temporary share DB record and use this data storage to store all
required data for each share.

Please look at it and provide feedback on whether such an approach can be used
in your case or not, and why.

[1] - https://review.openstack.org/#/c/176877/

Kind regards,

Valeriy Ponomaryov

On Wed, May 27, 2015 at 7:28 AM, Sheng Bo Hou sb...@cn.ibm.com wrote:

 Hi everyone working for Manila,

 This is Vincent Hou from IBM. I am working on all the migration issues in
 Cinder.

 I had one session for the Cinder migration issue in Vancouver and some of
 you folks attended it. The etherpad link is
 https://etherpad.openstack.org/p/volume-migration-improvement
 Per the issue that we had better not let the user see the target volume
 during migration when issuing the command cinder list, we can add an
 additional flag to the volume table, for example hidden. The
 default value is 0, meaning the volume will be displayed by cinder list. For the
 target volume during migration, we can set it to 1, so the user will not be
 able to see it with the command cinder list. I think it is a
 straightforward approach we can go with. I just want to sync up with you folks, so
 that we can have a consistent way to resolve this issue in both Cinder and
 Manila, and make sure we are on the same page. Is this solution
 OK with you folks? Especially @Rodrigo Barbieri and @Erlon Cruz, etc.
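
As a purely illustrative sketch of the proposed flag (plain SQLAlchemy only,
not actual Cinder or Manila code; all names are made up), the idea is simply:

from sqlalchemy import Boolean, Column, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Volume(Base):
    __tablename__ = 'volumes'
    id = Column(String(36), primary_key=True)
    display_name = Column(String(255))
    # Migration targets would be created with hidden=True.
    hidden = Column(Boolean, default=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

def list_volumes(session):
    # What "cinder list" would show: skip the hidden migration targets.
    return session.query(Volume).filter_by(hidden=False).all()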

 Looking forward to hearing from you. Thanks.

 Best wishes,
 Vincent Hou (侯胜博)

 Staff Software Engineer, Open Standards and Open Source Team, Emerging
 Technology Institute, IBM China Software Development Lab

 Tel: 86-10-82450778 Fax: 86-10-82453660
 Notes ID: Sheng Bo Hou/China/IBM@IBMCN  E-mail: sb...@cn.ibm.com
 Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang
 West Road, Haidian District, Beijing, P.R.C.100193
 地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Setting cluster status when provisioning a node

2015-05-27 Thread Oleg Gelbukh
Roman,

This looks like a great solution to me, and I like your proposal very much.
The status of the cluster derived directly from the statuses of the nodes is exactly
what I was thinking about.

I have two notes on the proposal, and I can copy them to the etherpad if you
think they deserve it:

1) the status name 'operational' seems a bit unclear to me, as it sounds more
like something Monitoring should report: it implies that the actual
OpenStack environment is operational, which might or might not be the case,
and Fuel has no way to tell. I would really prefer if that status name was
'Deployed' or something along those lines.

2) I'm not sure if we need to keep the complex status of the cluster
explicitly in the 'cluster' table in the format you suggest. This information
can be taken directly from the 'nodes' table in the Nailgun DB. For example,
getting it in the second form you propose is as simple as:

nailgun= SELECT status,count(status) FROM nodes GROUP BY status;
discover|1
ready|5

What do you think about making it a method rather than an element of the data
model? Or is that exactly the complexity you want to get rid of?
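
For illustration only (not actual Nailgun code), the two forms could be derived
from the node list with something as small as:

from collections import Counter

def cluster_status_breakdown(node_statuses):
    # e.g. ['discover', 'ready', 'ready'] -> {'discover': 1, 'ready': 2}
    return dict(Counter(node_statuses))

def cluster_is_deployed(node_statuses):
    # 'deployed' (a.k.a. 'operational') once every node is ready
    return bool(node_statuses) and all(s == 'ready' for s in node_statuses)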

--
Best regards,
Oleg Gelbukh


On Tue, May 26, 2015 at 4:16 PM, Roman Prykhodchenko m...@romcheg.me wrote:

 Oleg,

 Aleksander also proposed a nice solution [1], which is to have a complex
 status for the cluster. That, however, looks like a BP so I've created an
 excerpt [2] for it and we will try to discuss it and scope it for
 7.0, if there is a consensus.


 References:

 1. http://lists.openstack.org/pipermail/openstack-dev/2015-May/064670.html
 2. https://etherpad.openstack.org/p/fuel-cluster-complex-status


 - romcheg

 22 трав. 2015 о 22:32 Oleg Gelbukh ogelb...@mirantis.com написав(ла):

 Roman,

 I'm totally for fixing Nailgun. However, the status of environment is not
 simply function of statuses of nodes in it. Ideally, it should depend on
 whether appropriate number of nodes of certain roles are in 'ready' status.
 For the meantime, it would be enough if environment was set to
 'operational' when all nodes in it become 'ready', no matter how they were
 deployed (i.e. via Web UI or CLI).

 --
 Best regards,
 Oleg Gelbukh

 On Fri, May 22, 2015 at 5:33 PM, Roman Prykhodchenko m...@romcheg.me
 wrote:

 Hi folks!

 Recently I encountered an issue [1] that the Deploy Changes button in the
 web ui is still active when a provisioning of single node is started using
 the command line client.
 The background for that issue is that the provisioning task does not seem
 to update the cluster status correctly and Nailgun’s API returns it as NEW
 even while some of the nodes are being provisioned.

 The reason for raising this thread in the mailing list is that
 provisioning a node is a feature for developers and basically end-users
 should not do that. What is the best solution for that: fix Nailgun to set
 the correct status, or make this provisioning feature available only for
 developers?

 1. https://bugs.launchpad.net/fuel/7.0.x/+bug/1449086


 - romcheg


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][oslo] Stepping down from oslo-ironic liaison

2015-05-27 Thread Haomeng, Wang
Thank you Ghe! Miss you!

On Wed, May 27, 2015 at 9:04 AM, Tan, Lin lin@intel.com wrote:
 Hi Doug and guys,

 I would like to work as oslo-ironic liasison to sync Ironic with Oslo.
 I will attend the regular Oslo meeting for sure. My IRC name is lintan, and 
 Launchpad id is tan-lin-good

 Thanks

 Tan

 -Original Message-
 From: Doug Hellmann [mailto:d...@doughellmann.com]
 Sent: Tuesday, May 26, 2015 9:17 PM
 To: openstack-dev
 Subject: Re: [openstack-dev] [Ironic][oslo] Stepping down from oslo-ironic 
 liaison

 Excerpts from Ghe Rivero's message of 2015-05-25 09:45:47 -0700:
 My focus on the Ironic project has been decreasing in the last cycles,
 so it's about time to relinquish my position as a oslo-ironic liaison
 so new contributors can take over it and help ironic to be the vibrant
 project it is.

 So long, and thanks for all the fish,

 Ghe Rivero

 Thanks for your help as liaison, Ghe, the Oslo team appreciates your effort!

 Doug

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Fox, Kevin M
Should RefStack be involved here? To integrate tightly with the App Catalog, 
the Cloud Provider would be required to run RefStack against their cloud, the 
results getting registered to an App Catalog service in that Cloud. The App 
Catalog UI in Horizon could then filter out from the global App Catalog any 
apps that would be incompatible with their cloud. I think the Android app store 
works kind of like that...

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Wednesday, May 27, 2015 4:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

In-line responses.  Thanks for chipping in Monty.
-Keith

On 5/27/15 6:03 PM, Monty Taylor mord...@inaugust.com wrote:

On 05/27/2015 06:35 PM, Keith Bray wrote:
 Joe, regarding apps-catalog for any app deployable on OpenStack
 (regardless of deployment technology), my two cents is that it is a good
 idea.  I also believe, however, that the app-catalog needs to evolve
 first with features that make it super simple to understand which
 artifacts will work on which clouds (out-of-the-box) vs. needing
 additional required dependencies or cloud operator software.   My
 guess is there will be a lot of discussions related to defcore,
 and/or tagging artifacts with known public/private cloud
 distributions  the artifacts are known to work on. To the extent an
 openstack operator or end user has to download/install 3rd party or
 stack forge or non defcore openstack components in order to deploy an
 artifact, the more sophisticated and complicated it becomes and we
 need a way to depict that for items shown in the catalog.

 For example, I'd like to see a way to tag items in the catalog as
 known-to-work on HP or Rackspace public cloud, or known to work on
 RDO.  Even a basic Heat template optimized for one cloud won't
 necessarily work on another cloud without modification.

That's an excellent point - I have two opposing thoughts to it.

a) That we have to worry about the _vendor_ side of that is a bug and
should be fixed. Since all clouds already have a service catalog,
mapping out a "this app requires trove" should be easy enough. The other
differences are ... let's just say as a user they do not provide me value

I wouldn't call it a bug.  By design, Heat is pluggable with different
resource implementations. And different clouds run different plug-ins,
hence a template written for one cloud won't necessarily run on another
cloud unless that cloud also runs the same Heat plug-ins.


b) The state you describe is today's reality, and as much as wringing
our hands and spitting may feel good, it doesn't get us anywhere. You
do, in _fact_ need to know those things to use even basic openstack
functions today- so we might as well deal with it.

I don't buy the argument that you need to know those things to make
openstack function, because:  The catalog _today_ is targeted more at the
end user, not the operator.  The end user shouldn't need to know whether
trove is or is not set up, let alone how to do it.  Maybe that isn't the
intention of the catalog, and probably worth sorting out.


I'll take this as an opportunity to point people towards work in this
direction that grew out of a collaboration between infra and ansible:

http://git.openstack.org/cgit/openstack-infra/shade/
and
http://git.openstack.org/cgit/openstack/os-client-config

os-client-config knows about the differences between the clouds. It has,
sadly, this file:

http://git.openstack.org/cgit/openstack/os-client-config/tree/os_client_config/vendors.py

Which lists as much knowledge as we've figured out so far about the
differences between clouds.

shade presents business logic to users so that they don't have to know.
For instance:

I'm all +1 on different artifact types with different deployment
mechanisms, including Ansible, in case that wasn't clear. As long as the
app-catalog supports letting the consumer know what they are in for and
expectations.  I'm not clear on how the infra stuff works, but agree we
don't want cloud specific logic... I especially don't want the application
architect authors (e.g. The folks writing Heat templates and/or Murano
packages) to have to account for Cloud specific checks in their authoring
files. It'd be better to automate this on the catalog testing side at
best, or use author submission + voting as a low cost human method (but
not without problems in up-keep).


import shade
cloud = shade.openstack_cloud()
cloud.create_image(
    name='ubuntu-trusty',
    filename='ubuntu-trusty.qcow2',
    wait=True)

Should upload an image to an openstack cloud no matter the deployer
choices that are made.

The new upstream ansible modules build on this - so if you say:

os_server: name=ubuntu-test flavor_ram=1024 image='Ubuntu 14.04 LTS'
   config_drive=yes

It _should_ just work. Of course, image names and image content across

Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Fox, Kevin M
There's an alternate vision, which is simply: resources that can be launched in 
an OpenStack environment directly.

Solum, Mistral, Glance, etc fit into that definition. This is kind of how it is 
arranged today.

Even with the high level app store only vision I proposed, these other types of 
catalog entries will probably be needed since the higher level app store's apps 
will probably depend on them...

Say I want to create a Cloud Application that as part of it, launches a rather 
obscure, hard to build database software as the backend of a set of web 
servers... Having that part of the app be a Glance Image might be the best way 
to go. So your Cloud App would depend on maybe a heat template, and a couple of 
Glance Images (the appliance and a generic Linux one being loaded too).

I know the glance folks want to support storing these things as part of their 
artefact api, which is good, but there still is the discovery part of it that 
the app catalog can provide...

So, maybe we really need two different types of things in the catalog.
 * Applications (thing that a user will launch, and optionally answer a few 
questions)
 * Resources (OpenStack resources. Glance, Heat, Solum, Mistral, etc Artefacts 
that Applications use, or users can manually use directly if they have the know 
how.)

Then when application catalog users who are not developers go to the UI, it 
would only present the first category of things.

Thanks,
Kevin


From: Keith Bray [keith.b...@rackspace.com]
Sent: Wednesday, May 27, 2015 4:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

Kevin, I like your vision.  Today we have images, heat templates, Murano 
packages.  What are your thoughts on how to manage additions?  Should it be 
restricted to things in the OpenStack namespace under the big tent?  E.g., I'd 
like to see Solum language packs get added to the app-catalog.  Solum is 
currently in stack forge, but meets all the criteria I believe to enter 
OpenStack namespace.  We plan to propose it soon. Folks from various companies 
did a lot of work the past few summits to clearly distinguish, Heat, Murano, 
Mistral, and Solum as differentiated enough to co-exist and add value to the 
ecosystem.

Thanks,
-Keith

From: Fox, Kevin M kevin@pnnl.gov
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, May 27, 2015 6:27 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

I'd say, tools that utilize OpenStack, like the knife openstack plugin, are not 
something that you would probably go to the catalog to find. And also, the 
recipes that you would use with knife would not be specific to OpenStack in any 
way, so you would just be duplicating the config management system's own 
catalog in the OpenStack catalog, which would be error prone. Duplicating all 
the chef recipes, and docker containers, puppet stuff, and . is a lot of 
work...

The vision I have for the Catalog (I can be totally wrong here, lets please 
discuss) is a place where users (non computer scientists) can visit after 
logging into their Cloud, pick some app of interest, hit launch, and optionally 
fill out a form. They then have a running piece of software, provided by the 
greater OpenStack Community, that they can interact with, and their Cloud can 
bill them for. Think of it as the Apple App Store for OpenStack.  Having a 
reliable set of deployment engines (Murano, Heat, whatever) involved is 
critical to the experience I think. Having too many of them though will mean it 
will be rare to have a cloud that has all of them, restricting the utility of 
the catalog. Too much choice here may actually be a detriment.

If chef, or whatever other configuration management system became multitenant 
aware, and integrated into OpenStack and provided by the Cloud providers, then 
maybe it would fit into the app store vision?

Thanks,
Kevin

From: Joe Gordon [joe.gord...@gmail.com]
Sent: Wednesday, May 27, 2015 3:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps



On Fri, May 22, 2015 at 9:06 PM, Christopher Aedo 
ca...@mirantis.com wrote:
I want to start off by thanking everyone who joined us at the first
working session in Vancouver, and those folks who have already started
adding content to the app catalog. I was happy to see the enthusiasm
and excitement, and am looking forward to working with all of you to
build this into something that has a major impact on OpenStack

Re: [openstack-dev] [neutron] [fwaas] - Collecting use cases for API improvements

2015-05-27 Thread Vikram Choudhary
Hi German,

Thanks for the initiative. I am currently working on a few of the FWaaS BPs
proposed for Liberty and would definitely like to be a part of this effort.

BTW, did you mean the FWaaS IRC meeting to take up this discussion further?

Thanks
Vikram


On Thu, May 28, 2015 at 4:20 AM, Kyle Mestery mest...@mestery.com wrote:

 On Wed, May 27, 2015 at 5:36 PM, Eichberger, German 
 german.eichber...@hp.com wrote:

 All,


 During the FWaaS session in Vancouver [1] it became apparent that both
 the FWaaS API and the Security Groups API are lacking in functionality and
 the connection between the two is not well defined.


 For instance if a cloud user opens up all ports in the security groups
 they still can’t connect and might figure out days later that there is a
 second API (FWaaS) which prevents him from connecting to his service. This
 will probably make for a frustrating experience.


 Similarly, the operators I spoke to all said that the current FWaaS
 implementation isn’t going far enough and needs a lot of missing
 functionality added to fulfill their requirements on a Firewall
 implementation.


 With that backdrop I am proposing to take a step back and assemble a
 group of operators and users to collect use cases for the firewall service
 – both FWaaS and Security Groups based. I believe it is important at this
 juncture to really focus on the users and less on technical limitations. I
 also think this reset is necessary to make a service which meets the needs
 of operators and users better.


 Once we have collected the use cases we can evaluate our current API’s
 and functionality and start making the necessary improvements to turn FWaaS
 into a service which covers most of the use cases and requirements.


 Please join me in this effort. We have set up an etherpad [2] to start
 collecting the use cases and will discuss them in an upcoming meeting.


 Thanks for sending this out German. I took home the same impressions that
 you did. Similar to what we did with the LBaaS project (to great success
 last summer), I think we should look at FWaaS API V2 with the new
 contributors coming on as equals and helping to define the new operator
 focused API. My suggestion is we look at doing the work to lay the
 foundation during Liberty for a successful launch of this API during the
 Mxx cycle. I'm happy to step in here and guide the new group of
 contributors similar to what we did for LBaaS.

 Thanks,
 Kyle



 Thanks,

 German





 [1]
 https://etherpad.openstack.org/p/YVR-neutron-sg-fwaas-future-direction

 [2] https://etherpad.openstack.org/p/fwaas_use_cases


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] technology choices ops working group summary

2015-05-27 Thread Robert Collins
In vancouver we had an ops working group session about technology choices.

I've created a wiki page from that meeting here:
https://wiki.openstack.org/wiki/TechnologyChoices - it's not rigorous
enough to be a 'policy' as such - yet - but I think this can serve as
a frame of reference for developers and developer teams considering
new technologies, and we can iterate on this over a few cycles with an
eye to making it more formal.

Please do chime in (here, or in the page, or $however) to provide more
information or context.

Thanks!
-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Fox, Kevin M
I'd say, tools that utilize OpenStack, like the knife openstack plugin, are not 
something that you would probably go to the catalog to find. And also, the 
recipes that you would use with knife would not be specific to OpenStack in any 
way, so you would just be duplicating the config management system's own 
catalog in the OpenStack catalog, which would be error prone. Duplicating all 
the chef recipes, and docker containers, puppet stuff, and ... is a lot of 
work...

The vision I have for the Catalog (I can be totally wrong here, lets please 
discuss) is a place where users (non computer scientists) can visit after 
logging into their Cloud, pick some app of interest, hit launch, and optionally 
fill out a form. They then have a running piece of software, provided by the 
greater OpenStack Community, that they can interact with, and their Cloud can 
bill them for. Think of it as the Apple App Store for OpenStack.  Having a 
reliable set of deployment engines (Murano, Heat, whatever) involved is 
critical to the experience I think. Having too many of them though will mean it 
will be rare to have a cloud that has all of them, restricting the utility of 
the catalog. Too much choice here may actually be a detriment.

If chef, or whatever other configuration management system became multitenant 
aware, and integrated into OpenStack and provided by the Cloud providers, then 
maybe it would fit into the app store vision?

Thanks,
Kevin

From: Joe Gordon [joe.gord...@gmail.com]
Sent: Wednesday, May 27, 2015 3:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps



On Fri, May 22, 2015 at 9:06 PM, Christopher Aedo 
ca...@mirantis.com wrote:
I want to start off by thanking everyone who joined us at the first
working session in Vancouver, and those folks who have already started
adding content to the app catalog. I was happy to see the enthusiasm
and excitement, and am looking forward to working with all of you to
build this into something that has a major impact on OpenStack
adoption by making it easier for our end users to find and share the
assets that run on our clouds.

Great job. This is very exciting to see, I have been wanting something like 
this for some time now.


The catalog: http://apps.openstack.org
The repo: https://github.com/stackforge/apps-catalog
The wiki: https://wiki.openstack.org/wiki/App-Catalog

Please join us via IRC at #openstack-app-catalog on freenode.

Our initial core team is Christopher Aedo, Tom Fifield, Kevin Fox,
Serg Melikyan.

I’ve started a doodle poll to vote on the initial IRC meeting
schedule, if you’re interested in helping improve and build up this
catalog please vote for the day/time that works best and get involved!
http://doodle.com/vf3husyn4bdkui8w

At the summit we managed to get one planning session together. We
captured that on etherpad[1], but I’d like to highlight here a few of
the things we talked about working on together in the near term:

-More information around asset dependencies (like clarifying
requirements for Heat templates or Glance images for instance),
potentially just by providing better guidance in what should be in the
description and attributes sections.
-With respect to the assets that are listed in the catalog, there’s a
need to account for tagging, rating/scoring, and a way to have
comments or a forum for each asset so potential users can interact
outside of the gerrit review system.
-Supporting more resource types (Sahara, Trove, Tosca, others)

What about expanding the scope of the application catalog to any application 
that can run *on* OpenStack, versus the implied scope of applications that can 
be deployed *by* (heat, murano, etc.) OpenStack and *on* OpenStack services 
(nova, cinder etc.). This would mean adding room for Ansible roles that 
provision openstack resources [0]. And more generally it would reinforce the 
point that there is no 'blessed' method of deploying applications on OpenStack, 
you can use tools developed specifically for OpenStack or tools developed 
elsewhere.


[0] 
https://github.com/ansible/ansible-modules-core/blob/1f99382dfb395c1b993b2812122761371da1bad6/cloud/openstack/os_server.py

-Discuss using glance artifact repository as the backend rather than
flat YAML files
-REST API, enable searching/sorting, this would ease native
integration with other projects
-Federated catalog support (top level catalog including contents from
sub-catalogs)
- I’ll be working with the OpenStack infra team to get the server and
CI set up in their environment (though that work will not impact the
catalog as it stands today).

I am pleased to see moving this to OpenStack Infra is a high priority.

A quick nslookup of http://apps.openstack.org shows it is currently hosted on 
linode at http://nb-23-239-6-45.fremont.nodebalancer.linode.com/. And last I 
checked linode isn't OpenStack 

Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Jeremy Stanley
On 2015-05-28 00:20:48 + (+), Keith Bray wrote:
 Maybe. I'm not up to speed on defcore/refstack requirements.. But,
 to put the question on the table, do folks want the OpenStack
 App-catalog to only have support for the
 lowest-common-denominator of artifacts and cloud capabilities,
 or instead allow for showcasing all that is possible when using
 cloud technology that major vendors have adopted but are not yet
 part of refstack/defcore?

I sort of like the idea that the App Catalog can steer service
providers and solutions deployers to support more official OpenStack
services. Say I run ProviderX and haven't yet implemented/exposed
Heat to my customers. Now someone adds KillerAppY to
apps.openstack.org as a Heat template. It's entirely likely that
more customers may try to convince me to add Heat support (or will
start leaving for other providers who do). That's market pressure
driving companies toward better support for OpenStack, with a lot
more carrot than stick.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Joe Gordon
On Wed, May 27, 2015 at 4:27 PM, Fox, Kevin M kevin@pnnl.gov wrote:

  I'd say, tools that utilize OpenStack, like the knife openstack plugin,
 are not something that you would probably go to the catalog to find. And
 also, the recipes that you would use with knife would not be specific to
 OpenStack in any way, so you would just be duplicating the config
 management system's own catalog in the OpenStack catalog, which would be
 error prone. Duplicating all the chef recipes, and docker containers,
puppet stuff, and ... is a lot of work...


I am very much against duplicating things, including chef recipes that use
the openstack plugin for knife. But we can still easily point to external
resources from apps.openstack.org. In fact we already do (
http://apps.openstack.org/#tab=heat-templates&asset=Lattice).



 The vision I have for the Catalog (I can be totally wrong here, lets
 please discuss) is a place where users (non computer scientists) can visit
 after logging into their Cloud, pick some app of interest, hit launch, and
 optionally fill out a form. They then have a running piece of software,
 provided by the greater OpenStack Community, that they can interact with,
 and their Cloud can bill them for. Think of it as the Apple App Store for
 OpenStack.  Having a reliable set of deployment engines (Murano, Heat,
 whatever) involved is critical to the experience I think. Having too many
 of them though will mean it will be rare to have a cloud that has all of
 them, restricting the utility of the catalog. Too much choice here may
 actually be a detriment.


calling this a catalog, while it sounds accurate, is confusing since
keystone already has a catalog. Naming things is unfortunately a
difficult problem.

I respectfully disagree with this vision. I mostly agree with the first
part about it being somewhere users can go to find applications that can be
quickly deployed on OpenStack (note all the gotchas that Monty described
here). The part I disagree with is about limiting the deployment engines to
those invented here. Even if we have 100 deployment engines on apps.openstack.org,
it would be very easy for a user to filter by the deployment engines they
use, so I do not agree with your concern about too many choices here being a
detriment (after all, isn't OpenStack about choices?).

Secondly, IMHO the notion that 'if it wasn't invented here we shouldn't
support it' [0] is a dangerous one that results in us constantly
re-inventing the wheel while alienating the larger developer community by
saying their solutions are no good and they should use the OpenStack version
instead.


OpenStack isn't a single 'thing'; it is a collection of 'things', and users
should be able to pick and choose which components they want and which
components they want to get from elsewhere.

[0] http://en.wikipedia.org/wiki/Not_invented_here


If chef, or whatever other configuration management system became
 multitenant aware, and integrated into OpenStack and provided by the Cloud
 providers, then maybe it would fit into the app store vision?


I am not sure why this matters?  As a dependency you simply state chef, and
either require users to provide it or tell them to use a chef heat
template, glance image, etc.



 Thanks,
 Kevin
 --
 *From:* Joe Gordon [joe.gord...@gmail.com]
 *Sent:* Wednesday, May 27, 2015 3:20 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [new][app-catalog] App Catalog next steps



 On Fri, May 22, 2015 at 9:06 PM, Christopher Aedo ca...@mirantis.com
 wrote:

 I want to start off by thanking everyone who joined us at the first
 working session in Vancouver, and those folks who have already started
 adding content to the app catalog. I was happy to see the enthusiasm
 and excitement, and am looking forward to working with all of you to
 build this into something that has a major impact on OpenStack
 adoption by making it easier for our end users to find and share the
 assets that run on our clouds.


  Great job. This is very exciting to see, I have been wanting something
 like this for some time now.



 The catalog: http://apps.openstack.org
 The repo: https://github.com/stackforge/apps-catalog
 The wiki: https://wiki.openstack.org/wiki/App-Catalog

 Please join us via IRC at #openstack-app-catalog on freenode.

 Our initial core team is Christopher Aedo, Tom Fifield, Kevin Fox,
 Serg Melikyan.

 I’ve started a doodle poll to vote on the initial IRC meeting
 schedule, if you’re interested in helping improve and build up this
 catalog please vote for the day/time that works best and get involved!
 http://doodle.com/vf3husyn4bdkui8w

 At the summit we managed to get one planning session together. We
 captured that on etherpad[1], but I’d like to highlight here a few of
 the things we talked about working on together in the near term:

 -More information around asset dependencies (like clarifying
 requirements for Heat templates 

Re: [openstack-dev] [app-catalog] [infra] App Catalog infrastructure integration (was: App Catalog next steps)

2015-05-27 Thread Christopher Aedo
Jeremy, thanks.  You're right, the deployment directory is not what
we'll need to merge this with the openstack infra.  Glad to see your
response though - at the summit I spoke with some folks from the
foundation side and they offered to support the transition however
it's needed.

I think the easiest way to get a jump on this will be a quick chat on
IRC (I'll find you on #openstack-infra).  The site content itself is
minimal, the only extra part we'll need to work out is access to
storage on swift (right now the binaries like zip files and glance
images are hosted on Rackspace).

-Christopher

On Wed, May 27, 2015 at 12:26 PM, Jeremy Stanley fu...@yuggoth.org wrote:
 [Operators ML dropped from Cc as my reply is off-topic there]

 On 2015-05-22 21:06:32 -0700 (-0700), Christopher Aedo wrote:
 [...]
 - I'll be working with the OpenStack infra team to get the server
 and CI set up in their environment (though that work will not
 impact the catalog as it stands today).
 [...]

 I'm extremely excited about this, and eager to help. OpenStack
 project infrastructure is maintained by the community through
 collaborative configuration management and automation kept in Git
 repositories and updated via changes proposed to our code review
 system just like all other OpenStack projects.

 http://docs.openstack.org/infra/system-config/project.html

 I see a deployment subdirectory in your stackforge/apps-catalog
 repo, but it looks like it's for building a third-party CI test
 system... it's also possible I'm just not understanding your
 deployment mechanisms.

 Anyway, how would be most convenient for you to engage with me and
 other members of our community so we can help get started on this
 with you? We can continue this here on the -dev mailing list, or
 move the subthread to the openstack-infra ML, or just do some ad-hoc
 planning in IRC if you prefer (perhaps in the #openstack-infra
 channel).
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] taskflow usage

2015-05-27 Thread Joshua Harlow

So I'll give it a shot, and let me know if this explanation helps,

The idea is that you have some work (composed of tasks, flows, ...); that 
work is run by some engine[1]. Hopefully that makes sense so far. Those 
engines track (and can save) the execution state of what has been 
executed and what is to be executed (using [2]). Doing this kind of 
'check-pointing' allows for that engine (or a new instance of it) to be 
resumed (or restarted after crashes...)


So then the question becomes what 'thing/entity' performs this resumption 
and where is the work to be done retained and transferred upon crashes. 
That is the purpose/goal of a job (to track/maintain ownership and to 
retain enough metadata to know what to resume/run). Hopefully the docs 
at [3] help make this more obvious (and the diagram @ [4]). The idea is 
that some 'entity' (program, user, or other) would place a job to be 
done on some location and then some other entities (specialized workers, 
conductors[5]) would attempt to 'claim' that job (and then work on its 
contents). During that time when that entity is working on its claimed 
job, it may 'crash' or die (as often happens in distributed systems), 
and a side-effect of this is that the claim *will* be lost (or would 
expire) and another entity would be able to acquire that claim and 
resume that job (using the jobs internal metadata about what was done or 
needs to be done...); so this in a way makes the job (and the work it 
contains) highly available (in that if those set of entities keep on 
crashing, as long as some forward progress is made, that the job and its 
associated work will eventually complete).


[1] http://docs.openstack.org/developer/taskflow/engines.html
[2] http://docs.openstack.org/developer/taskflow/persistence.html
[3] http://docs.openstack.org/developer/taskflow/jobs.html
[4] https://wiki.openstack.org/wiki/TaskFlow#Big_picture
[5] http://docs.openstack.org/developer/taskflow/conductors.html
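
For concreteness, here is a minimal sketch of the tasks/flows/engine piece
described above (the task names are made up purely for illustration):

from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow

class CreateVolume(task.Task):
    def execute(self):
        print("creating volume")          # real provisioning work goes here

    def revert(self, **kwargs):
        print("undoing volume creation")  # called if a later task fails

class AttachVolume(task.Task):
    def execute(self):
        print("attaching volume")

flow = linear_flow.Flow("volume-setup")
flow.add(CreateVolume(), AttachVolume())

# The engine runs the flow and tracks its execution state; with a persistence
# backend configured, that saved state is what allows a crashed run to be
# resumed rather than restarted from scratch.
engines.run(flow)

A job, then, is essentially a pointer to work like this plus its persisted
state, posted on a jobboard so that any conductor can claim it and run (or
resume) it.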

Hopefully that helps :)

If not feel free to jump on #openstack-state-management or 
#openstack-oslo and poke the members there.


-Josh

ESWAR RAO wrote:

Hi All,

I am looking into taskflow userguide and examples.

http://www.giantflyingsaucer.com/blog/?p=4896

Can anyone please help me how the job/job-board is related to task and
flows.

I understood an atom is similar to an abstract interface and taskflow is an
atom that has execute()/revert() methods, and a flow is a structure that
links these tasks.

Is it that a job is broken into tasks??
Can a job be broken into a set of tasks???

Thanks
Eswar Rao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Keith Bray
In-line responses.  Thanks for chipping in Monty.
-Keith

On 5/27/15 6:03 PM, Monty Taylor mord...@inaugust.com wrote:

On 05/27/2015 06:35 PM, Keith Bray wrote:
 Joe, regarding apps-catalog for any app deployable on OpenStack
 (regardless of deployment technology), my two cents is that is a good
 idea.  I also believe, however, that the app-catalog needs to evolve
 first with features that make it super simple to understand which
 artifacts will work on which clouds (out-of-the-box) vs. needing
 additional required dependencies or cloud operator software.   My
 guess is there will be a lot of discussions related to defcore,
 and/or tagging artifacts with known public/private cloud
 distributions  the artifacts are known to work on. To the extent an
 openstack operator or end user has to download/install 3rd party or
 stack forge or non defcore openstack components in order to deploy an
 artifact, the more sophisticated and complicated it becomes and we
 need a way to depict that for items shown in the catalog.
 
 For example, I'd like to see a way to tag items in the catalog as
 known-to-work on HP or Rackspace public cloud, or known to work on
 RDO.  Even a basic Heat template optimized for one cloud won't
 necessarily work on another cloud without modification.

That's an excellent point - I have two opposing thoughts to it.

a) That we have to worry about the _vendor_ side of that is a bug and
should be fixed. Since all clouds already have a service catalog,
mapping out a 'this app requires trove' should be easy enough. The other
differences are ... let's just say as a user they do not provide me value

I wouldn't call it a bug.  By design, Heat is pluggable with different
resource implementations. And, different clouds run different plug-ins,
hence a template written for one cloud won't necessarily run on another
cloud unless that cloud also runs the same Heat plug-ins.


b) The state you describe is today's reality, and as much as wringing
our hands and spitting may feel good, it doesn't get us anywhere. You
do, in _fact_ need to know those things to use even basic openstack
functions today- so we might as well deal with it.

I don't buy the argument of 'you need to know those things to make
openstack function', because: the catalog _today_ is targeted more at the
end user, not the operator.  The end user shouldn't need to know whether
trove is or is not set up, let alone how to do it.  Maybe that isn't the
intention of the catalog, and probably worth sorting out.


I'll take this as an opportunity to point people towards work in this
direction that grew out of a collaboration between infra and ansible:

http://git.openstack.org/cgit/openstack-infra/shade/
and
http://git.openstack.org/cgit/openstack/os-client-config

os-client-config knows about the differences between the clouds. It has,
sadly, this file:

http://git.openstack.org/cgit/openstack/os-client-config/tree/os_client_co
nfig/vendors.py

Which lists as much knowledge as we've figured out so far about the
differences between clouds.

shade presents business logic to users so that they don't have to know.
For instance:

I'm all +1 on different artifact types with different deployment
mechanisms, including Ansible, in case that wasn't clear. As long as the
app-catalog supports letting the consumer know what they are in for and
sets expectations.  I'm not clear on how the infra stuff works, but agree we
don't want cloud-specific logic... I especially don't want the application
architect authors (e.g. the folks writing Heat templates and/or Murano
packages) to have to account for Cloud specific checks in their authoring
files. It'd be better to automate this on the catalog testing side at
best, or use author submission + voting as a low cost human method (but
not without problems in up-keep).


import shade
cloud = shade.openstack_cloud()
cloud.create_image(
    name='ubuntu-trusty',
    filename='ubuntu-trusty.qcow2',
    wait=True)

Should upload an image to an openstack cloud no matter the deployer
choices that are made.

The new upstream ansible modules build on this - so if you say:

os_server: name=ubuntu-test flavor_ram=1024 image='Ubuntu 14.04 LTS'
   config_drive=yes

It _should_ just work. Of course, image names and image content across
clouds vary - so you probably want:

os_image: name=ubuntu-trusty file=ubuntu-trusty.qcow2 wait=yes
  register=image
os_server: name=ubuntu-test flavor_ram=1024 image={{ image.id }}
   config_drive=yes

And it should mostly just work everywhere. It's not strictly true -
image uploading takes slightly more work (you need to know the needed
format per-cloud) - but there is a role for that:

https://github.com/emonty/ansible-build-image

point being - this SHOULD be as easy as the above, but it's not. We're
working on it out on the edges - but that work sadly has to be redone
for each language and each framework.

So - a) we should take note of how hard this is and 

Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Fox, Kevin M
Ah. So maybe rather than filter out, gray out?

Thanks,
Kevin


From: Jeremy Stanley
Sent: Wednesday, May 27, 2015 5:31:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

On 2015-05-28 00:20:48 + (+), Keith Bray wrote:
 Maybe. I'm not up to speed on defcore/refstack requirements.. But,
 to put the question on the table, do folks want the OpenStack
 App-catalog to only have support for the
 lowest-common-denominator of artifacts and cloud capabilities,
 or instead allow for showcasing all that is possible when using
 cloud technology that major vendors have adopted but are not yet
 part of refstack/defcore?

I sort of like the idea that the App Catalog can steer service
providers and solutions deployers to support more official OpenStack
services. Say I run ProviderX and haven't yet implemented/exposed
Heat to my customers. Now someone adds KillerAppY to
apps.openstack.org as a Heat template. It's entirely likely that
more customers may try to convince me to add Heat support (or will
start leaving for other providers who do). That's market pressure
driving companies toward better support for OpenStack, with a lot
more carrot than stick.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [fwaas] - Collecting use cases for API improvements

2015-05-27 Thread Sridar Kandaswamy (skandasw)
Hi All:

Thanks German for articulating this – we did have this discussion last Friday 
as well on the need to have more user input. FWaaS has been in a bit of a 
Catch-22 situation with the experimental state. Regarding feature velocity – it 
has definitely been frustrating, and we also lost contributors along the journey 
due to their frustration with moving things forward, making things worse.

Kilo has been interesting in that there are more new contributors, a repo split, 
and more vendor support has gone in than ever before. We hope that 
this will improve traction for the customers they represent as well. Adding 
more user input and having a concerted conversation will definitely help. I 
echo Kyle and can certainly speak for all the current contributors in also 
helping out in any way possible to get this going. New contributors are always 
welcome – Slawek & Vikram, among the most recent new contributors, know this 
well.

Thanks

Sridar

From: Vikram Choudhary viks...@gmail.com
Date: Wednesday, May 27, 2015 at 5:54 PM
To: OpenStack List 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [fwaas] - Collecting use cases for API 
improvements

Hi German,

Thanks for the initiative. I am currently working on a few of the FWaaS BPs 
proposed for Liberty and definitely would like to be a part of this effort.

BTW, did you mean the FWaaS IRC meeting to take up this discussion further?

Thanks
Vikram


On Thu, May 28, 2015 at 4:20 AM, Kyle Mestery 
mest...@mestery.com wrote:
On Wed, May 27, 2015 at 5:36 PM, Eichberger, German 
german.eichber...@hp.com wrote:
All,


During the FWaaS session in Vancouver [1] it became apparent that both the 
FWaaS API and the Security Groups API are lacking in functionality and the 
connection between the two is not well defined.


For instance, if a cloud user opens up all ports in the security groups they 
still can’t connect, and might figure out days later that there is a second API 
(FWaaS) which prevents them from connecting to their service. This will probably 
make for a frustrating experience.


Similarly, the operators I spoke to all said that the current FWaaS 
implementation isn’t going far enough and needs a lot of missing functionality 
added to fulfill their requirements on a Firewall implementation.


With that backdrop I am proposing to take a step back and assemble a group of 
operators and users to collect use cases for the firewall service – both FWaaS 
and Security Groups based. I believe it is important at this juncture to really 
focus on the users and less on technical limitations. I also think this reset 
is necessary to make a service which meets the needs of operators and users 
better.


Once we have collected the use cases we can evaluate our current API’s and 
functionality and start making the necessary improvements to turn FWaaS into a 
service which covers most of the use cases and requirements.


Please join me in this effort. We have set up an etherpad [2] to start 
collecting the use cases and will discuss them in an upcoming meeting.


Thanks for sending this out German. I took home the same impressions that you 
did. Similar to what we did with the LBaaS project (to great success last 
summer), I think we should look at FWaaS API V2 with the new contributors 
coming on as equals and helping to define the new operator focused API. My 
suggestion is we look at doing the work to lay the foundation during Liberty 
for a successful launch of this API during the Mxx cycle. I'm happy to step in 
here and guide the new group of contributors similar to what we did for LBaaS.

Thanks,
Kyle


Thanks,

German





[1] https://etherpad.openstack.org/p/YVR-neutron-sg-fwaas-future-direction

[2] https://etherpad.openstack.org/p/fwaas_use_cases


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Fox, Kevin M
Oh and one more thing... I think one of the first cloud apps we may want to 
consider is refstack. :)

That way users can easily deploy and test.

Thanks,
Kevin


From: Keith Bray
Sent: Wednesday, May 27, 2015 5:20:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

Maybe.  I'm not up to speed on defcore/refstack requirements.. But, to put
the question on the table, do folks want the OpenStack App-catalog to only
have support for the lowest-common-denominator of artifacts and cloud
capabilities, or instead allow for showcasing all that is possible when
using cloud technology that major vendors have adopted but are not yet
part of refstack/defcore?

-Keith

On 5/27/15 6:58 PM, Fox, Kevin M kevin@pnnl.gov wrote:

Should RefStack be involved here? To integrate tightly with the App
Catalog, the Cloud Provider would be required to run RefStack against
their cloud, the results getting registered to an App Catalog service in
that Cloud. The App Catalog UI in Horizon could then filter out from the
global App Catalog any apps that would be incompatible with their cloud.
I think the Android app store works kind of like that...

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Wednesday, May 27, 2015 4:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

In-line responses.  Thanks for chipping in Monty.
-Keith

On 5/27/15 6:03 PM, Monty Taylor mord...@inaugust.com wrote:

On 05/27/2015 06:35 PM, Keith Bray wrote:
 Joe, regarding apps-catalog for any app deployable on OpenStack
 (regardless of deployment technology), my two cents is that is a good
 idea.  I also believe, however, that the app-catalog needs to evolve
 first with features that make it super simple to understand which
 artifacts will work on which clouds (out-of-the-box) vs. needing
 additional required dependencies or cloud operator software.   My
 guess is there will be a lot of discussions related to defcore,
 and/or tagging artifacts with known public/private cloud
 distributions  the artifacts are known to work on. To the extent an
 openstack operator or end user has to download/install 3rd party or
 stack forge or non defcore openstack components in order to deploy an
 artifact, the more sophisticated and complicated it becomes and we
 need a way to depict that for items shown in the catalog.

 For example, I'd like to see a way to tag items in the catalog as
 known-to-work on HP or Rackspace public cloud, or known to work on
 RDO.  Even a basic Heat template optimized for one cloud won't
 necessarily work on another cloud without modification.

That's an excellent point - I have two opposing thoughts to it.

a) That we have to worry about the _vendor_ side of that is a bug and
should be fixed. Since all clouds already have a service catalog,
mapping out a this app requires trove should be easy enough. The other
differences are ... let's just say as a user they do not provide me value

I wouldn't call it a bug.  By design, Heat is pluggable with different
resource implementations. And, different cloud run different plug-ins,
hence a template written for one cloud won't necessarily run on another
cloud unless that cloud also runs the same Heat plug-ins.


b) The state you describe is today's reality, and as much as wringing
out hands and spitting may feel good, it doesn't get us anywhere. You
do, in _fact_ need to know those things to use even basic openstack
functions today- so we might as well deal with it.

I don't buy the argument of you need to know those things to make
openstack function, because:  The catalog _today_ is targeted more at the
end user, not the operator.  The end user shouldn't need to know whether
trove is or is not set up, let alone how to do it.  Maybe that isn't the
intention of the catalog, and probably worth sorting out.


I'll take this as an opportunity to point people towards work in this
direction grew out of a collaboration between infra and ansible:

http://git.openstack.org/cgit/openstack-infra/shade/
and
http://git.openstack.org/cgit/openstack/os-client-config

os-client-config knows about the differences between the clouds. It has,
sadly, this file:

http://git.openstack.org/cgit/openstack/os-client-config/tree/os_client_c
o
nfig/vendors.py

Which lists as much knowledge as we've figured out so far about the
differences between clouds.

shade presents business logic to users so that they don't have to know.
For instance:

I'm all +1 on different artifact types with different deployment
mechanisms, including Ansible, in case that wasn't clear. As long as the
app-catalog supports letting the consumer know what they are in for and
expectations.  I'm not clear on how the infra stuff works, but agree we
don't want cloud specific logic... I especially 

Re: [openstack-dev] [nova] I think nova behaves poorly when booting multiple instances

2015-05-27 Thread Chris Friesen

On 05/27/2015 05:09 PM, Fox, Kevin M wrote:

If the current behavior is broken, and the behavior is causing problems with
things like fixing quotas, should it just be deprecated and pushed off to
orchestration rather than changing it?


Is this causing problems with quotas?  The problem I brought up isn't with 
quotas but rather with all the instances being set to an error state when we 
could have actually booted up a bunch of them.


In any case, even if we wanted to deprecate it (and I think an argument could be 
made for that) we still have to decide what the correct behaviour should be 
for the API as it exists today.  In my view the current behaviour doesn't match 
what a reasonable person would expect given the description of the min count 
and max count parameters.
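
For readers not familiar with the parameters in question, a rough sketch of
the call being discussed (credentials and IDs below are placeholders, not
real values):

from novaclient import client

# Placeholder credentials purely for illustration.
nova = client.Client("2", "demo", "secret", "demo-project",
                     "http://keystone.example.com:5000/v2.0")

# Ask for up to 10 instances but accept as few as 2; the question above is
# what should happen to the instances that did boot when the full max_count
# cannot be scheduled.
nova.servers.create(name="worker", image="IMAGE-UUID", flavor="1",
                    min_count=2, max_count=10)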


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Neutron] Ironic/Neutron Integration weekly meeting kick off

2015-05-27 Thread Mitsuhiro SHIGEMATSU
Sukhdev,

Thank you for setting this up! Looking forward to meeting you next week.

-- pshige

2015-05-28 13:59 GMT+09:00 Sukhdev Kapur sukhdevka...@gmail.com:
 Folks,

 Starting next monday (June 1, 2015), we are kicking off weekly meeting to
 discuss and track the integration of Ironic and Neutron (ML2).
 We are hoping to implement the Networking support within Liberty cycle. Come
 join and help us achieve this goal.

 Anybody who is interested in this topic, wants to contribute, share their
 wisdom with the team, are welcome to join us. Here are the details of the
 meeting:

 Weekly on Mondays at 1600 UTC (9am Pacific Time)

 IRC Channel - #openstack-meeting-4

 Meeting Agenda and team charter -
 https://wiki.openstack.org/wiki/Meetings/Ironic-neutron

 Feel free to add a topic to the agenda for discussion.

 Looking forward to meeting you in the channel.

 regards..
 -Sukhdev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Keith Bray
Kevin, I like your vision.  Today we have images, heat templates, Murano 
packages.  What are your thoughts on how to manage additions?  Should it be 
restricted to things in the OpenStack namespace under the big tent?  E.g., I'd 
like to see Solum language packs get added to the app-catalog.  Solum is 
currently in stackforge, but I believe it meets all the criteria to enter the 
OpenStack namespace.  We plan to propose it soon. Folks from various companies 
did a lot of work over the past few summits to clearly distinguish Heat, Murano, 
Mistral, and Solum as differentiated enough to co-exist and add value to the 
ecosystem.

Thanks,
-Keith

From: Fox, Kevin M kevin@pnnl.gov
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, May 27, 2015 6:27 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

I'd say, tools that utilize OpenStack, like the knife openstack plugin, are not 
something that you would probably go to the catalog to find. And also, the 
recipes that you would use with knife would not be specific to OpenStack in any 
way, so you would just be duplicating the config management system's own 
catalog in the OpenStack catalog, which would be error prone. Duplicating all 
the chef recipes, and docker containers, puppet stuff, and . is a lot of 
work...

The vision I have for the Catalog (I can be totally wrong here, lets please 
discuss) is a place where users (non computer scientists) can visit after 
logging into their Cloud, pick some app of interest, hit launch, and optionally 
fill out a form. They then have a running piece of software, provided by the 
greater OpenStack Community, that they can interact with, and their Cloud can 
bill them for. Think of it as the Apple App Store for OpenStack.  Having a 
reliable set of deployment engines (Murano, Heat, whatever) involved is 
critical to the experience I think. Having too many of them though will mean it 
will be rare to have a cloud that has all of them, restricting the utility of 
the catalog. Too much choice here may actually be a detriment.

If chef, or whatever other configuration management system became multitenant 
aware, and integrated into OpenStack and provided by the Cloud providers, then 
maybe it would fit into the app store vision?

Thanks,
Kevin

From: Joe Gordon [joe.gord...@gmail.com]
Sent: Wednesday, May 27, 2015 3:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps



On Fri, May 22, 2015 at 9:06 PM, Christopher Aedo 
ca...@mirantis.com wrote:
I want to start off by thanking everyone who joined us at the first
working session in Vancouver, and those folks who have already started
adding content to the app catalog. I was happy to see the enthusiasm
and excitement, and am looking forward to working with all of you to
build this into something that has a major impact on OpenStack
adoption by making it easier for our end users to find and share the
assets that run on our clouds.

Great job. This is very exciting to see, I have been wanting something like 
this for some time now.


The catalog: http://apps.openstack.org
The repo: https://github.com/stackforge/apps-catalog
The wiki: https://wiki.openstack.org/wiki/App-Catalog

Please join us via IRC at #openstack-app-catalog on freenode.

Our initial core team is Christopher Aedo, Tom Fifield, Kevin Fox,
Serg Melikyan.

I’ve started a doodle poll to vote on the initial IRC meeting
schedule, if you’re interested in helping improve and build up this
catalog please vote for the day/time that works best and get involved!
http://doodle.com/vf3husyn4bdkui8w

At the summit we managed to get one planning session together. We
captured that on etherpad[1], but I’d like to highlight here a few of
the things we talked about working on together in the near term:

-More information around asset dependencies (like clarifying
requirements for Heat templates or Glance images for instance),
potentially just by providing better guidance in what should be in the
description and attributes sections.
-With respect to the assets that are listed in the catalog, there’s a
need to account for tagging, rating/scoring, and a way to have
comments or a forum for each asset so potential users can interact
outside of the gerrit review system.
-Supporting more resource types (Sahara, Trove, Tosca, others)

What about expanding the scope of the application catalog to any application 
that can run *on* OpenStack, versus the implied scope of applications that can 
be deployed *by* (heat, murano, etc.) OpenStack and *on* OpenStack services 
(nova, 

Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Keith Bray
Maybe.  I'm not up to speed on defcore/refstack requirements.. But, to put
the question on the table, do folks want the OpenStack App-catalog to only
have support for the lowest-common-denominator of artifacts and cloud
capabilities, or instead allow for showcasing all that is possible when
using cloud technology that major vendors have adopted but are not yet
part of refstack/defcore?

-Keith

On 5/27/15 6:58 PM, Fox, Kevin M kevin@pnnl.gov wrote:

Should RefStack be involved here? To integrate tightly with the App
Catalog, the Cloud Provider would be required to run RefStack against
their cloud, the results getting registered to an App Catalog service in
that Cloud. The App Catalog UI in Horizon could then filter out from the
global App Catalog any apps that would be incompatible with their cloud.
I think the Android app store works kind of like that...

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Wednesday, May 27, 2015 4:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

In-line responses.  Thanks for chipping in Monty.
-Keith

On 5/27/15 6:03 PM, Monty Taylor mord...@inaugust.com wrote:

On 05/27/2015 06:35 PM, Keith Bray wrote:
 Joe, regarding apps-catalog for any app deployable on OpenStack
 (regardless of deployment technology), my two cents is that is a good
 idea.  I also believe, however, that the app-catalog needs to evolve
 first with features that make it super simple to understand which
 artifacts will work on which clouds (out-of-the-box) vs. needing
 additional required dependencies or cloud operator software.   My
 guess is there will be a lot of discussions related to defcore,
 and/or tagging artifacts with known public/private cloud
 distributions  the artifacts are known to work on. To the extent an
 openstack operator or end user has to download/install 3rd party or
 stack forge or non defcore openstack components in order to deploy an
 artifact, the more sophisticated and complicated it becomes and we
 need a way to depict that for items shown in the catalog.

 For example, I'd like to see a way to tag items in the catalog as
 known-to-work on HP or Rackspace public cloud, or known to work on
 RDO.  Even a basic Heat template optimized for one cloud won't
 necessarily work on another cloud without modification.

That's an excellent point - I have two opposing thoughts to it.

a) That we have to worry about the _vendor_ side of that is a bug and
should be fixed. Since all clouds already have a service catalog,
mapping out a this app requires trove should be easy enough. The other
differences are ... let's just say as a user they do not provide me value

I wouldn't call it a bug.  By design, Heat is pluggable with different
resource implementations. And, different cloud run different plug-ins,
hence a template written for one cloud won't necessarily run on another
cloud unless that cloud also runs the same Heat plug-ins.


b) The state you describe is today's reality, and as much as wringing
out hands and spitting may feel good, it doesn't get us anywhere. You
do, in _fact_ need to know those things to use even basic openstack
functions today- so we might as well deal with it.

I don't buy the argument of you need to know those things to make
openstack function, because:  The catalog _today_ is targeted more at the
end user, not the operator.  The end user shouldn't need to know whether
trove is or is not set up, let alone how to do it.  Maybe that isn't the
intention of the catalog, and probably worth sorting out.


I'll take this as an opportunity to point people towards work in this
direction grew out of a collaboration between infra and ansible:

http://git.openstack.org/cgit/openstack-infra/shade/
and
http://git.openstack.org/cgit/openstack/os-client-config

os-client-config knows about the differences between the clouds. It has,
sadly, this file:

http://git.openstack.org/cgit/openstack/os-client-config/tree/os_client_c
o
nfig/vendors.py

Which lists as much knowledge as we've figured out so far about the
differences between clouds.

shade presents business logic to users so that they don't have to know.
For instance:

I'm all +1 on different artifact types with different deployment
mechanisms, including Ansible, in case that wasn't clear. As long as the
app-catalog supports letting the consumer know what they are in for and
expectations.  I'm not clear on how the infra stuff works, but agree we
don't want cloud specific logic... I especially don't want the application
architect authors (e.g. The folks writing Heat templates and/or Murano
packages) to have to account for Cloud specific checks in their authoring
files. It'd be better to automate this on the catalog testing side at
best, or use author submission + voting as a low cost human method (but
not without problems in up-keep).


import shade
cloud = 

Re: [openstack-dev] [Nova] Using depends-on for patches which require an approved spec

2015-05-27 Thread John Griffith
On Wed, May 27, 2015 at 12:15 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Tue, May 26, 2015 at 8:45 AM, Daniel P. Berrange berra...@redhat.com
 wrote:

 On Fri, May 22, 2015 at 02:57:23PM -0700, Michael Still wrote:
  Hey,
 
  it would be cool if devs posting changes for nova which depend on us
  approving their spec could use Depends-On to make sure their code
  doesn't land until the spec does.

 Does it actually bring any benefit ?  Any change for which there is
 a spec is already supposed to be tagged with 'Blueprint: foo-bar-wiz'
 and nova core devs are supposed to check the blueprint is approved
 before +A'ing it.  So also adding a Depends-on just feels redundant
 to me, and so is one more hurdle for contributors to remember to
 add. If we're concerned people forget the Blueprint tag, or forget
 to check blueprint approval, then we'll just have the same problem with
 depends-on - people will forget to add it, and cores will forget
 to check the dependent change. So this just feels like extra rules
 for no gain and extra pain.


 I think it does have a benefit. Giving a spec implementation patch a
 procedural -2 commonly signals to reviewers to not review that patch (a -2 looks scary).
 If instead there was a depends-on, no scary -2 would be needed; we also wouldn't
 need to hunt down the -2er and ask them to remove it (which can be a delay due to
 timezones). Anything that reduces the number of procedural -2s we need is a
 good thing IMHO. But that doesn't mean we should require folks to do this,
 we can try it out on a few patches and see how it goes.
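
For concreteness, the mechanism being discussed is just a footer in the
commit message of the implementation patch, pointing at the Change-Id of the
spec review (all values below are placeholders):

    Implement foo-bar-wiz

    Implements: blueprint foo-bar-wiz
    Depends-On: I1234567890abcdef1234567890abcdef12345678
    Change-Id: Iabcdefabcdefabcdefabcdefabcdefabcdefabcd

The implementation change then can't merge until the change it depends on
(the spec) has merged, which is what makes the procedural -2 unnecessary.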



 Regards,
 Daniel
 --
 |: http://berrange.com  -o-
 http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o-
 http://virt-manager.org :|
 |: http://autobuild.org   -o-
 http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-
 http://live.gnome.org/gtk-vnc :|

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Seems ok, but I'm wondering if maybe others are doing specs differently.
What I mean is, we seem to be growing a long process tail:
1. spec
2. blueprint
3. patch with link to blueprint
and now
4. patch with tag Depends-On: spec

I think we used to say if there's a bp link and it's not approved, don't
merge, which seems similar.  We've had so many procedural steps
added/removed that who knows if I'm just completely out of sync or not.

Certainly not saying I oppose the idea, just wondering about how much
red-tape we create and what we do with it all.

John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Package updates strategy

2015-05-27 Thread Fox, Kevin M
my gut feeling is that it will take way more work to do this well than to 
dockerize the various parts, and then puppet (or whatever) could simply 
stop/start containers that had a new version. It could easily pick up what 
needed updating then. It would also fix once and for all the issue of wanting 
to run release X service A with release X+1 service B on the same controller. 
So using the same process to upgrade from one release to another, while still 
hard, would potentially be doable with the process, if done very carefully.

I don't think it would be difficult to make some very basic containers where 
the config is passed through as a volume managed by puppet? Most of the 
existing code would still work then?

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Wednesday, May 27, 2015 3:49 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [TripleO] Package updates strategy

Steve is working on a patch to allow package-based updates of overcloud
nodes[1] using the distro's package manager (yum in the case of RDO, but
conceivably apt in others). Note we're talking exclusively about minor
updates, not version-to-version upgrades here.

Dan mentioned at the summit that this approach fails to take into
account the complex ballet of service restarts required to update
OpenStack services. (/me shakes fist at OpenStack services.) And
furthermore, that the Puppet manifests already encode the necessary
relationships to do this properly. (Thanks Puppeteers!) Indeed we'd be
doing the Wrong Thing by Puppet if we changed this stuff from under it.

The problem of course is that neither Puppet nor yum/apt has a view of
the entire system. Yum doesn't know about the relationships between
services and Puppet doesn't know about all of the _other_ packages that
they depend on.

One solution proposed was to do a yum update first but specifically
exclude any packages that Puppet knows about (the --exclude flag
appears sufficient for this); then follow that up with another Puppet
run using ensure => latest.
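
A rough sketch of that two-phase idea (the package patterns and manifest
path are assumptions for illustration, not actual TripleO paths):

import subprocess

# Packages whose services Puppet manages; everything else is safe for a
# plain yum update.
PUPPET_MANAGED = ["openstack-nova*", "openstack-neutron*", "openstack-glance*"]

def phase_one_yum_update():
    cmd = ["yum", "-y", "update"]
    cmd += ["--exclude=%s" % pattern for pattern in PUPPET_MANAGED]
    subprocess.check_call(cmd)

def phase_two_puppet_run():
    # Puppet (with ensure => latest on the packages it manages) then updates
    # the OpenStack services and handles the restart ordering it already
    # encodes.
    subprocess.check_call(["puppet", "apply",
                           "/etc/puppet/manifests/overcloud.pp"])

phase_one_yum_update()
phase_two_puppet_run()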

A problem with that approach is that it still fails to restart services
which have had libraries updated but have not themselves been updated.
That's no worse than the pure yum approach though. We might need an
additional way to just manually trigger a restart of services.

What do folks think of this plan? Anybody got better ideas?

thanks,
Zane.

[1] https://review.openstack.org/#/c/179974

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-27 Thread Fox, Kevin M
I'm thinking that refstack would have tests for things that aren't always 
required, but if they were there, it would ensure they were up to spec? If so, 
then we could use it to detect which standard-but-optional features were there 
and filter appropriately?

Ideally every cloud would provide everything every app would need, but I 
realize that's totally unrealistic. So is catering to the lowest common 
denominator. That would be no NaaS, and a lot of my templates need it. :/ I 
fear lowest common denominator at this point is strictly glance only. :/

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Wednesday, May 27, 2015 5:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

Maybe.  I'm not up to speed on defcore/refstack requirements.. But, to put
the question on the table, do folks want the OpenStack App-catalog to only
have support for the lowest-common-denominator of artifacts and cloud
capabilities, or instead allow for showcasing all that is possible when
using cloud technology that major vendors have adopted but are not yet
part of refstack/defcore?

-Keith

On 5/27/15 6:58 PM, Fox, Kevin M kevin@pnnl.gov wrote:

Should RefStack be involved here? To integrate tightly with the App
Catalog, the Cloud Provider would be required to run RefStack against
their cloud, the results getting registered to an App Catalog service in
that Cloud. The App Catalog UI in Horizon could then filter out from the
global App Catalog any apps that would be incompatible with their cloud.
I think the Android app store works kind of like that...

Thanks,
Kevin

From: Keith Bray [keith.b...@rackspace.com]
Sent: Wednesday, May 27, 2015 4:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

In-line responses.  Thanks for chipping in Monty.
-Keith

On 5/27/15 6:03 PM, Monty Taylor mord...@inaugust.com wrote:

On 05/27/2015 06:35 PM, Keith Bray wrote:
 Joe, regarding apps-catalog for any app deployable on OpenStack
 (regardless of deployment technology), my two cents is that is a good
 idea.  I also believe, however, that the app-catalog needs to evolve
 first with features that make it super simple to understand which
 artifacts will work on which clouds (out-of-the-box) vs. needing
 additional required dependencies or cloud operator software.   My
 guess is there will be a lot of discussions related to defcore,
 and/or tagging artifacts with known public/private cloud
 distributions  the artifacts are known to work on. To the extent an
 openstack operator or end user has to download/install 3rd party or
 stack forge or non defcore openstack components in order to deploy an
 artifact, the more sophisticated and complicated it becomes and we
 need a way to depict that for items shown in the catalog.

 For example, I'd like to see a way to tag items in the catalog as
 known-to-work on HP or Rackspace public cloud, or known to work on
 RDO.  Even a basic Heat template optimized for one cloud won't
 necessarily work on another cloud without modification.

That's an excellent point - I have two opposing thoughts to it.

a) That we have to worry about the _vendor_ side of that is a bug and
should be fixed. Since all clouds already have a service catalog,
mapping out a this app requires trove should be easy enough. The other
differences are ... let's just say as a user they do not provide me value

I wouldn't call it a bug.  By design, Heat is pluggable with different
resource implementations. And, different cloud run different plug-ins,
hence a template written for one cloud won't necessarily run on another
cloud unless that cloud also runs the same Heat plug-ins.


b) The state you describe is today's reality, and as much as wringing
out hands and spitting may feel good, it doesn't get us anywhere. You
do, in _fact_ need to know those things to use even basic openstack
functions today- so we might as well deal with it.

I don't buy the argument of you need to know those things to make
openstack function, because:  The catalog _today_ is targeted more at the
end user, not the operator.  The end user shouldn't need to know whether
trove is or is not set up, let alone how to do it.  Maybe that isn't the
intention of the catalog, and probably worth sorting out.


I'll take this as an opportunity to point people towards work in this
direction grew out of a collaboration between infra and ansible:

http://git.openstack.org/cgit/openstack-infra/shade/
and
http://git.openstack.org/cgit/openstack/os-client-config

os-client-config knows about the differences between the clouds. It has,
sadly, this file:

http://git.openstack.org/cgit/openstack/os-client-config/tree/os_client_c
o
nfig/vendors.py

Which lists as much knowledge as we've figured out so far about the

[openstack-dev] taskflow usage

2015-05-27 Thread ESWAR RAO
Hi All,

I am looking into taskflow userguide and examples.

http://www.giantflyingsaucer.com/blog/?p=4896

Can anyone please help me how the job/job-board is related to task and
flows.

I understood an atom is similar to an abstract interface and taskflow is an
atom that has execute()/revert() methods, and a flow is a structure that links
these tasks.

Is it that a job is broken into tasks??
Can a job be broken into a set of tasks???

Thanks
Eswar Rao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][Neutron] Ironic/Neutron Integration weekly meeting kick off

2015-05-27 Thread Sukhdev Kapur
Folks,

Starting next monday (June 1, 2015), we are kicking off weekly meeting to
discuss and track the integration of Ironic and Neutron (ML2).
We are hoping to implement the Networking support within Liberty cycle.
Come join and help us achieve this goal.

Anybody who is interested in this topic, wants to contribute, share their
wisdom with the team, are welcome to join us. Here are the details of the
meeting:

Weekly on Mondays at 1600 UTC (9am Pacific Time)

IRC Channel - #openstack-meeting-4

Meeting Agenda and team charter -
https://wiki.openstack.org/wiki/Meetings/Ironic-neutron

Feel free to add a topic to the agenda for discussion.

Looking forward to meeting you in the channel.

regards..
-Sukhdev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Neutron] Ironic/Neutron Integration weekly meeting kick off

2015-05-27 Thread Miguel Ángel Ajo
Thanks for sharing Sukhdev, I’ll join the meetings.

Miguel Ángel Ajo


On Thursday, 28 May 2015 at 6:59, Sukhdev Kapur wrote:

 Folks,  
  
 Starting next monday (June 1, 2015), we are kicking off weekly meeting to 
 discuss and track the integration of Ironic and Neutron (ML2).
 We are hoping to implement the Networking support within Liberty cycle. Come 
 join and help us achieve this goal.  
  
 Anybody who is interested in this topic, wants to contribute, share their 
 wisdom with the team, are welcome to join us. Here are the details of the 
 meeting:  
  
  Weekly on Mondays at 1600 UTC (9am Pacific Time)
  IRC Channel - #openstack-meeting-4
  
  Meeting Agenda and team charter - 
  https://wiki.openstack.org/wiki/Meetings/Ironic-neutron
   
 Feel free to add a topic to the agenda for discussion.  
  
 Looking forward to meeting you in the channel.  
  
 regards..
 -Sukhdev
  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

