[openstack-dev] DB2 CI enablement on Keystone

2015-06-15 Thread Feng Xi BJ Yan
Hi, Keystone guys,

Could we have a talk about DB2 CI enablement this Monday at 8 PM US Central
time, which is Tuesday 9 AM Beijing time?

For your questions, here are my answers:

1) Is the team going to be responsive to requests unlike last time there
was a problem?
(yanfengxi) Yes, problems will be handled in time. In our current working
process, issue items are opened internally and automatically if the CI
fails, and our maintainer will try to handle the opened issues as soon as
possible. If there are other issues, please send email to
yanfen...@cn.ibm.com, or reach the IRC account yanfengxi on channel
#openstack-infra.

2) Is the job stable? I have no data about the stability, so I couldn't
vouch for it.
(yanfengxi) From the statistics of other running CIs, we have an 88% pass
rate, including environment failures and tempest failures, which I think is
acceptable.
Because the keystone CI is not enabled yet, we do not have statistics for it.
But from several test runs, the keystone CI can run properly.

By the way, I already updated the wiki page
https://wiki.openstack.org/w/index.php?title=IBM/IBM_DB2_CI to include an
IRC nickname.

Best Regards :)
Bruce Yan

Yan, Fengxi (闫凤喜)
Openstack Platform Team
IBM China Systems & Technology Lab, Beijing
E-Mail: yanfen...@cn.ibm.com
Tel: 86-10-82451418  Notes: Feng Xi FX Yan/China/IBM
Address: 3BW239, Ring Building. No.28 Building, ZhongGuanCun Software
Park,No.8
DongBeiWang West Road, ShangDi, Haidian District, Beijing, P.R.China



From:   Brant L Knudson/Rochester/IBM@IBMUS
To: Feng Xi BJ Yan/China/IBM@IBMCN@IBMAU
Cc: Dan Moravec/Rochester/IBM@IBMUS, Zhu ZZ Zhu/China/IBM@IBMCN
Date:   2015/06/10 02:51
Subject:Re: Could you help to talk about DB2 CI enablement on keystone
meeting



I brought this up at the keystone meeting, and the community didn't want DB2
CI reporting enabled until a couple of questions were answered.

They asked a couple of questions that I wasn't able to answer:

1) Is the team going to be responsive to requests unlike last time there
was a problem?
2) Is the job stable? I have no data about the stability, so I couldn't
vouch for it.

They also wanted an IRC nick for someone who can answer questions and
respond to problems, on the wiki page:
https://wiki.openstack.org/w/index.php?title=IBM/IBM_DB2_CI

Rather than go back and forth on this from week to week, they suggested that
we have a meeting on freenode IRC in #openstack-keystone when it's
convenient for all of us. They were willing to meet any day at 8 PM US
Central time, which I think is 9 AM Beijing time. So just pick a date when
you can meet and send a note to the openstack-dev mailing list saying when
it is.

Also, you can include in the note any backup material that you have, such
as answers to the above questions.

Brant Knudson, OpenStack Development - Keystone core member
Phone:   507-253-8621 T/L:553-8621





From:   Brant L Knudson/Rochester/IBM
To: Feng Xi BJ Yan/China/IBM@IBMCN@IBMAU
Cc: Dan Moravec/Rochester/IBM@IBMUS, Zhu ZZ Zhu/China/IBM@IBMCN
Date:   06/03/2015 09:17 AM
Subject:Re: Could you help to talk about DB2 CI enablement on keystone
meeting



Looks OK to me. I'll add it to the agenda for next week.

Brant Knudson, OpenStack Development - Keystone core member
Phone:   507-253-8621 T/L:553-8621





From:   Feng Xi BJ Yan/China/IBM@IBMCN
To: Brant L Knudson/Rochester/IBM@IBMUS@IBMAU
Cc: Dan Moravec/Rochester/IBM@IBMUS, Zhu ZZ Zhu/China/IBM@IBMCN
Date:   06/02/2015 11:58 PM
Subject:Re: Could you help to talk about DB2 CI enablement on keystone
meeting


OK, please take a look at this patch
https://review.openstack.org/#/c/187751/




Best Regards :)
Bruce Yan

Yan, Fengxi (闫凤喜)
Openstack Platform Team
IBM China Systems & Technology Lab, Beijing
E-Mail: yanfen...@cn.ibm.com
Tel: 86-10-82451418  Notes: Feng Xi FX Yan/China/IBM
Address: 3BW239, Ring Building. No.28 Building, ZhongGuanCun Software
Park,No.8
DongBeiWang West Road, ShangDi, Haidian District, Beijing, P.R.China




From:   Brant L Knudson/Rochester/IBM@IBMUS
To: Feng Xi BJ Yan/China/IBM@IBMCN
Cc: Dan Moravec/Rochester/IBM@IBMUS, Zhu ZZ Zhu/China/IBM@IBMCN
Date:   2015/06/02 21:50
Subject:Re: Could you help to talk about DB2 CI enablement on keystone
meeting



You need to provide some evidence that this is working for keystone before
I'll bring it forward. There should be logs for successful keystone runs.

Brant Knudson, OpenStack Development - Keystone core member
Phone:   507-253-8621 T/L:553-8621





From:   Feng Xi BJ Yan/China/IBM@IBMCN
To: Brant L Knudson/Rochester/IBM@IBMUS
Cc: Dan Moravec/Rochester/IBM@IBMUS, Zhu ZZ Zhu/China/IBM@IBMCN
Date:   06/02/2015 04:43 AM
Subject:Could you help to talk about DB2 CI enablement on keystone
meeting


Hi, Brant,
Today is the time for the keystone online meeting, but it's too late for me
(3 AM in my time zone). Could you help to talk about the DB2 CI

Re: [openstack-dev] [all] DevStack switching from MySQL-python to PyMySQL

2015-06-15 Thread Sean Dague
On 06/11/2015 06:29 AM, Sean Dague wrote:
 On 06/09/2015 06:42 PM, Jeremy Stanley wrote:
 As discussed in the Liberty Design Summit Moving apps to Python 3
 cross-project workshop, the way forward in the near future is to
 switch to the pure-python PyMySQL library as a default.

 https://etherpad.openstack.org/p/liberty-cross-project-python3

 To that end, support has already been implemented and tested in
 DevStack, and when https://review.openstack.org/184493 merges in a
 day or two this will become its default. Any last-minute objections
 or concerns?

 Note that similar work is nearing completion in oslo.db with
 https://review.openstack.org/184392 and there are also a lot of
 similar changes in flight for other repos under the same review
 topic (quite a few of which have already merged).
 
 Ok, we've had 2 days fair warning, I'm pushing the merge button here.
 Welcome to the world of pure python mysql.

As a heads up on where we stand: the switch was flipped, but a lot of
neutron jobs (rally & tempest) went into a pretty high failure rate
after it was (all the other high volume jobs seemed fine).

We reverted the change here to unwedge things -
https://review.openstack.org/#/c/191010/

After a long conversation with Henry and Armando we came up with a new
plan, because we want the driver switch, and we want to figure out why
it causes a high Neutron failure rate, but we don't want to block
everything.

https://review.openstack.org/#/c/191121/ - make the default Neutron jobs
set some safe defaults (which are different from the non-Neutron job
defaults), but add a flag to make it possible to expose these issues.

Then add new non-voting check jobs to Neutron queue to expose these
issues - https://review.openstack.org/#/c/191141/. Hopefully allowing
interested parties to get to the bottom of these issues around the db
layer. It's in the check queue instead of the experimental queue to get
enough volume to figure out the pattern for the failures, because they
aren't 100%, and they seem to move around a bit.

Once https://review.openstack.org/#/c/191121/ has landed we'll revert the
revert - https://review.openstack.org/#/c/191113/ - and get everything
else back onto PyMySQL.
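For anyone tracking the switch locally: SQLAlchemy picks its DB-API driver from the connection-URL scheme, so moving from MySQL-python to PyMySQL is essentially a change from `mysql://` to `mysql+pymysql://`. A minimal sketch of that rewrite (the helper name and URLs below are illustrative, not taken from the DevStack patch):

```python
def to_pymysql(url):
    """Rewrite a mysql:// SQLAlchemy URL to use the pure-python PyMySQL driver."""
    prefix = "mysql://"
    if url.startswith(prefix):
        return "mysql+pymysql://" + url[len(prefix):]
    return url  # already driver-qualified, or not a MySQL URL at all

# e.g. a keystone-style connection string
print(to_pymysql("mysql://root:secret@127.0.0.1/keystone"))
# → mysql+pymysql://root:secret@127.0.0.1/keystone
```

Because only the URL scheme changes, the revert (and the revert of the revert) is equally mechanical, which is why the flag-based approach above is cheap to flip back and forth.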

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stackforge projects are not second class citizens

2015-06-15 Thread Boris Pavlovic
Joe,

When looking at stackalytics [2] for each project, we don't see any
 noticeable change in the number of reviews, contributors, or number of commits
 from before and after each project joined OpenStack.


I can't agree on this.

*) Rally is currently facing a core-reviewer bottleneck.
We have about 130 patches on review (40 at the beginning of Kilo).
*) In IRC, 15+ members online on average
*) We merged about 2x as much if we compare kilo-1 vs liberty-1
*) I see a lot of interest from various companies in using Rally (because it
is *official* now)


Best regards,
Boris Pavlovic



On Mon, Jun 15, 2015 at 2:12 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 06/15/2015 06:20 AM, Joe Gordon wrote:

 One of the stated problems the 'big tent' is supposed to solve is:

 'The binary nature of the integrated release results in projects outside
 the integrated release failing to get the recognition they deserve.
 Non-official projects are second- or third-class citizens which can't
 get development resources. Alternative solutions can't emerge in the
 shadow of the blessed approach. Becoming part of the integrated release,
 which was originally designed to be a technical decision, quickly became
 a life-or-death question for new projects, and a political/community
 minefield.' [0]

 Meaning projects should see an uptick in development once they drop
 their second-class citizenship and join OpenStack. Now that we have been
 living in the world of the big tent for several months now, we can see
 if this claim is true.

 Below is a list of the first few projects to join OpenStack after
 the big tent, all of which have now been part of OpenStack for at least
 two months.[1]

 * Magnum -  Tue Mar 24 20:17:36 2015
 * Murano - Tue Mar 24 20:48:25 2015
 * Congress - Tue Mar 31 20:24:04 2015
 * Rally - Tue Apr 7 21:25:53 2015

 When looking at stackalytics [2] for each project, we don't see any
 noticeable change in the number of reviews, contributors, or number of
 commits from before and after each project joined OpenStack.

 So what does this mean? At least in the short term moving from
 Stackforge to OpenStack does not result in an increase in development
 resources (too early to know about the long term).  One of the three
 reasons for the big tent appears to be unfounded, but the other two
 reasons hold.


 You have not given enough time to see the effects of the Big Tent, IMHO.
 Lots of folks in the corporate world just found out about it at the design
 summit, frankly.

  The only thing I think this information changes is what

 people's expectations should be when applying to join OpenStack.


 What is your assumption of what people's expectations are when applying to
 join OpenStack?

 Best,
 -jay



Re: [openstack-dev] [Neutron] Proposing YAMAMOTO Takashi for the Control Plane core team

2015-06-15 Thread Ihar Hrachyshka

Not on the list either, but I want to +1 what Henry said. Yamamoto's
reviews extend across the whole code base and are pretty much always
*very* useful.

On 06/12/2015 11:39 PM, Henry Gessau wrote:
 Although I am not on your list I would like to add my +1! Yamamoto
 shows great attention to detail in code reviews and frequently
 finds real issues that were not spotted by others.
 
 On Thu, Jun 11, 2015, Kevin Benton blak...@gmail.com wrote:
 Hello all!
 
 As the Lieutenant of the built-in control plane[1], I would like
 YAMAMOTO Takashi to be a member of the control plane core
 reviewer team.
 
 He has been extensively reviewing the entire codebase[2] and his
 feedback on patches related to the reference implementation has
 been very useful. This includes everything ranging from the AMQP
 API to OVS flows.
 
 Existing cores that have spent time working on the reference
 implementation (agents and AMQP code), please vote +1/-1 for his
 addition to the team. Aaron, Gary, Assaf, Maru, Kyle, Armando,
 Carl and Oleg; you have all been reviewing things in these areas
 recently so I would like to hear from you specifically.
 
 1. http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy

 2. http://stackalytics.com/report/contribution/neutron-group/90
 
 
 Cheers -- Kevin Benton
 
 
 



[openstack-dev] [trove]Put all alternative configurations in default trove.conf

2015-06-15 Thread 陈迪豪
Hi all,


I have created a blueprint about the default configuration file. I think we
should add essential configuration options like datastore_manager to the
default trove.conf.


The blueprint is here 
https://blueprints.launchpad.net/trove/+spec/default-configuration-items


Any suggestions about this?
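The option named in the blueprint can be illustrated with a sketch of what a shipped default might look like (a hypothetical fragment; the exact option set and values are up to the blueprint review, and `mysql` here is only an example value):

```ini
[DEFAULT]
# Essential option the operator must otherwise discover by trial and error:
# which datastore implementation this Trove deployment manages.
datastore_manager = mysql
```

Shipping such options uncommented in the default trove.conf makes the minimum working configuration visible at a glance.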


Re: [openstack-dev] Getting rid of suds, which is unmaintained, and which we want out of Debian

2015-06-15 Thread Joe Gordon
On Sun, Jun 14, 2015 at 10:46 PM, Thomas Goirand z...@debian.org wrote:

 On 06/11/2015 11:31 PM, Nikhil Manchanda wrote:
  Hi Thomas:
 
  I just checked and I don't see suds as a requirement for trove.
  I don't think it should be a requirement for the trove debian package,
  either.
 
  Thanks,
  Nikhil

 Hi,

 I fixed the package and removed the Suggests: python-suds in both Trove
 & Ironic. Now there's still the issue in:
 - nova


Nova itself doesn't depend on suds anymore. Oslo.vmware has a suds
dependency, but that is only needed if you are using the vmware virt driver
in nova.

So nova's vmware driver depends on suds (it may be suds-jurko these days),
but not nova in general.



 - cinder
 - oslo.vmware

 It'd be nice to do something about them. As I mentioned, I'll do the
 packaging work for anything that will replace it if needed.

 FYI, I filed bugs:
 https://bugs.launchpad.net/oslo.vmware/+bug/1465015
 https://bugs.launchpad.net/nova/+bug/1465016
 https://bugs.launchpad.net/cinder/+bug/1465017

 Cheers,

 Thomas Goirand (zigo)




Re: [openstack-dev] [taskflow] Returning information from reverted flow

2015-06-15 Thread Dulko, Michal
 -Original Message-
 From: Joshua Harlow [mailto:harlo...@outlook.com]
 Sent: Friday, June 12, 2015 5:49 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [taskflow] Returning information from reverted
 flow
 
 Dulko, Michal wrote:
  Hi,
 
  In Cinder we had merged a complicated piece of code[1] to be able to
  return something from flow that was reverted. Basically outside we
  needed an information if volume was rescheduled or not. Right now this
  is done by injecting information needed into exception thrown from the
  flow. Another idea was to use notifications mechanism of TaskFlow.
  Both ways are rather workarounds than real solutions.
 
 Unsure about notifications being a workaround (basically u are notifying to
 some other entities that rescheduling happened, which seems like exactly
 what it was made for) but I get the point ;)

Please take a look at this review - https://review.openstack.org/#/c/185545/. 
Notifications cannot help if some further revert decision needs to be based on 
something that happened earlier.
 
 
  I wonder if TaskFlow couldn't provide a mechanism to mark stored element
  to not be removed when revert occurs. Or maybe another way of returning
  something from reverted flow?
 
  Any thoughts/ideas?
 
 I have a couple, I'll make some paste(s) and see what people think,
 
 How would this look (as pseudo-code or other) to you, what would be your
 ideal, and maybe we can work from there (maybe u could do some paste(s)
 to and we can prototype it), just storing information that is returned
 from revert() somewhere? Or something else? There has been talk about
 task 'local storage' (or something like that/along those lines) that
 could also be used for this similar purpose.

I think that the easiest idea from the perspective of an end user would be to
save items returned from revert into the flow engine's storage *and* not
remove them from storage when the whole flow gets reverted. This is
completely backward compatible, because currently revert doesn't return
anything. And if revert has to record some information for further
processing, this will also work.
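The idea above can be sketched with a toy engine (illustrative pseudocode for the proposed semantics, not TaskFlow's actual API): revert() may return a value, and the engine keeps those values in storage even after the whole flow is rolled back, so the caller can see e.g. that a volume was rescheduled.

```python
class Task:
    """Minimal stand-in for a flow task (not taskflow's real base class)."""
    def __init__(self, name):
        self.name = name

    def execute(self):
        raise NotImplementedError

    def revert(self):
        """May return data that should survive the rollback (the proposal)."""
        return None


class ScheduleTask(Task):
    def execute(self):
        # Simulate the Cinder case: scheduling fails and triggers a revert.
        raise RuntimeError("no valid host found")

    def revert(self):
        return {"rescheduled": True}


def run_flow(tasks):
    """Toy engine: on failure, revert tasks and keep their revert results."""
    storage = {}
    done = []
    try:
        for task in tasks:
            task.execute()
            done.append(task)
    except Exception:
        failed = tasks[len(done)]
        for task in reversed(done + [failed]):
            result = task.revert()
            if result is not None:
                # Proposed semantics: revert results stay in storage.
                storage[task.name + ".revert"] = result
    return storage


storage = run_flow([ScheduleTask("schedule")])
assert storage == {"schedule.revert": {"rescheduled": True}}
```

The point of the sketch is the last assertion: the information injected into an exception in [1] would instead come back through ordinary storage.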

 
 
  [1] https://review.openstack.org/#/c/154920/
 
 



[openstack-dev] stackforge projects are not second class citizens

2015-06-15 Thread Joe Gordon
One of the stated problems the 'big tent' is supposed to solve is:

'The binary nature of the integrated release results in projects outside
the integrated release failing to get the recognition they deserve.
Non-official projects are second- or third-class citizens which can't get
development resources. Alternative solutions can't emerge in the shadow of
the blessed approach. Becoming part of the integrated release, which was
originally designed to be a technical decision, quickly became a
life-or-death question for new projects, and a political/community
minefield.' [0]

Meaning projects should see an uptick in development once they drop their
second-class citizenship and join OpenStack. Now that we have been living
in the world of the big tent for several months now, we can see if this
claim is true.

Below is a list of the first few projects to join OpenStack after the
big tent, all of which have now been part of OpenStack for at least two
months.[1]

* Magnum -  Tue Mar 24 20:17:36 2015
* Murano - Tue Mar 24 20:48:25 2015
* Congress - Tue Mar 31 20:24:04 2015
* Rally - Tue Apr 7 21:25:53 2015

When looking at stackalytics [2] for each project, we don't see any
noticeable change in the number of reviews, contributors, or number of commits
from before and after each project joined OpenStack.

So what does this mean? At least in the short term moving from Stackforge
to OpenStack does not result in an increase in development resources (too
early to know about the long term).  One of the three reasons for the big
tent appears to be unfounded, but the other two reasons hold.  The only
thing I think this information changes is what people's expectations should
be when applying to join OpenStack.

[0]
https://github.com/openstack/governance/blob/master/resolutions/20141202-project-structure-reform-spec.rst
[1] Ignoring OpenStackClient since the repos were always in OpenStack; it
just didn't have a formal home in the governance repo.
[2] http://stackalytics.com/?module=openstackclient-group&metric=commits
http://stackalytics.com/?module=magnum-group&metric=commits


Re: [openstack-dev] [oslo.vmware] Bump oslo.vmware to 1.0.0

2015-06-15 Thread Gary Kotton
Hi,
The email below was a little cryptic, so here is the general plan to
move forwards:

1. Oslo.vmware: rebase and update the patch
https://review.openstack.org/#/c/114503. This will sort out the issues
that we have with the exception hierarchy.
2. Nova: hopefully manage to get reviews for
https://review.openstack.org/191569. This will ensure that Nova works with
the exceptions in the existing and future formats (and ensure that we do
not go through the debacle that we hit with 0.13.0). Please note that I
have tested this with the current version and the aforementioned version,
and it works.
3. We will make a few extra cleanups in Nova, as there are some exceptions
in the driver that are imported from oslo.vmware but are specific to
nova.
So hopefully we can get consensus on the plan and move forwards.

A luta continua
Thanks
Gary

On 6/14/15, 5:34 PM, Gary Kotton gkot...@vmware.com wrote:

Hi,
I agree with Vipin. Can we please address the exception handling. We
already have the patches.
Thanks
Gary

On 6/14/15, 12:41 PM, Vipin Balachandran vbalachand...@vmware.com
wrote:

Dims,

There are some problems with exception hierarchy which need to be fixed.

-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com]
Sent: Tuesday, June 09, 2015 7:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [oslo.vmware] Bump oslo.vmware to 1.0.0

Gary, Tracy, Vipin and other contributors,

Is oslo.vmware API solid enough for us to bump to 1.0.0? if not, what's
left to be done?

thanks,
dims

--
Davanum Srinivas :: https://twitter.com/dims







Re: [openstack-dev] [all][python3] use of six.iteritems()

2015-06-15 Thread Robert Collins
On 12 June 2015 at 05:39, Dolph Mathews dolph.math...@gmail.com wrote:

 On Thu, Jun 11, 2015 at 12:34 AM, Robert Collins robe...@robertcollins.net
 wrote:

 On 11 June 2015 at 17:16, Robert Collins robe...@robertcollins.net
 wrote:

  This test conflates setup and execution. Better like my example,
 ...

 Just had it pointed out to me that I've let my inner asshole out again
 - sorry. I'm going to step away from the thread for a bit; my personal
 state (daughter just had a routine but painful operation) shouldn't be
 taken out on other folk, however indirectly.


 Ha, no worries. You are completely correct about conflating setup and
 execution. As far as I can tell though, even if I isolate the dict setup
 from the benchmark, I get the same relative differences in results.
 iteritems() was introduced for a reason!

Absolutely: the key question is whether that reason is applicable to us.

 If you don't need to go back to .items()'s copy behavior in py2, then
 six.iteritems() seems to be the best general purpose choice.

 I think Gordon said it best elsewhere in this thread:

 again, i just want to reiterate, i'm not saying don't use items(), i just
 think we should not blindly use items() just as we shouldn't blindly use
 iteritems()/viewitems()

I'd like to recap and summarise a bit.

I think its broadly agreed that:

The three view based methods -- iteritems, iterkeys, itervalues -- in
Python 2 became unified with the list-form equivalents in Python 3.

The view based methods are substantially faster and lower overhead
than the list form methods, approximately 3x.

We don't have any services today that expect to hold million item
dicts, or even 10K item dicts in a persistent fashion.

There's some cognitive overhead involved in reading six.iteritems(d)
vs d.items().

We should use d.items() except where it matters.
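For concreteness, the behavior six.iteritems() papers over can be shown with a small stdlib-only shim (illustrative; real code should just use six, or plain d.items() per the recommendation above):

```python
import sys

def iteritems(d):
    """Portable analogue of six.iteritems(d): iterate over (key, value) pairs
    without building an intermediate list on Python 2 (illustrative shim;
    the real helper lives in the six library)."""
    return d.iteritems() if sys.version_info[0] == 2 else iter(d.items())

d = {"a": 1, "b": 2}

# On the small dicts that dominate OpenStack code (as argued below), the two
# spellings are interchangeable; d.items() is simply easier to read.
assert sorted(d.items()) == sorted(iteritems(d)) == [("a", 1), ("b", 2)]
```

The ~3x saving only appears when the list copy on Python 2 is large or made very often, which is the "where it matters" question the rest of this mail works through.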


Where does it matter?
We have several process architectures in OpenStack:
- We have API servers that are eventlet (except keystone) WSGI
servers. They respond to requests on HTTP[S], each request is
independent and loads all its state from the DB and/or memcache each
time. We don't expect large numbers of concurrent active requests per
process. (Where large would be e.g. 1000).
- We have MQ servers that are conceptually the same as WSGI, just a
different listening protocol. They do sometimes have background tasks,
and for some (e.g. neutron-l3-agent) may hold significant cached state
between requests. But thats still scoped to medium size datasets. We
expect moderate numbers of concurrent active requests, as these are
the actual backends doing things for users, but since these servers
are typically working with actual slow things (e.g. the hypervisor)
high concurrency typically goes badly :).
- We have CLIs that start up, process some data and exit. This
includes python-novaclient and nova-manage. They generally work with
very small datasets and have no concurrency at all.

There are two ways that iteritems vs items etc could matter. One A) is
memorycpu on single use of very large dicts. The other B) is
aggregate overhead on many concurrent uses of a single shared dict (or
C) possibly N similar-sized dicts).

A) Doesn't apply to us in any case I can think of.
B) Doesn't apply to us either - our peak concurrency on any single
process is still low (we may manage to make it higher now we're moving
on the PyMYSql thing, but thats still in progress - and of course
there are tradeoffs with high concurrency depending on the ratio of
work-to-wait each request has. Very high concurrency depends on a very
low ratio: to have 1000 concurrent requests that aren't slowing each
other down requires that each request's wall clock be 1000x the time
spent in-process actioning it; and that there be enough backend
capacity (whatever that is) to dispatch the work to without causing
queuing in that part of the system.
C) We can eliminate via both the argument on B, and on relative
overheads: if we had 1000 1000-item dicts in process at once, the
relative overhead of making items() from them all is approx the size
of the dicts: but it's almost certain we have much more state hanging
around in each of those 1000 threads than each dict: so the
incremental cost will not dominate the process overheads.
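The back-of-envelope concurrency bound in (B) can be written down explicitly (a sketch; the 1000x figure is the one used in the argument above):

```python
def max_concurrency(wall_ms, cpu_ms):
    """Requests that can overlap on one process before in-process CPU time
    (not the slow backend they wait on) becomes the bottleneck."""
    return wall_ms // cpu_ms

# A request that spends 1000 ms waiting on a hypervisor but only 1 ms of
# in-process CPU can overlap ~1000 ways -- the ratio cited above.
assert max_concurrency(1000, 1) == 1000

# A request doing 100 ms of real work per second of wall clock caps out
# at ~10-way concurrency, well below where dict-iteration cost matters.
assert max_concurrency(1000, 100) == 10
```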

I'm not - and haven't - said that iteritems() is never applicable *in
general*; rather, I don't believe it's ever applicable *to us* today,
and I'm arguing that we should default to items() and bring in
iteritems() if and when we need it.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] stackforge projects are not second class citizens

2015-06-15 Thread Jay Pipes

On 06/15/2015 07:30 AM, Boris Pavlovic wrote:

Joe,

When looking at stackalytics [2] for each project, we don't see any
noticeable change in the number of reviews, contributors, or number of
commits from before and after each project joined OpenStack.


I can't agree on this.

*) Rally is facing core-reviewers bottleneck currently.
We have about 130 (40 at the begging on kilo) patches on review.
*) In IRC +15 online members in average
*) We merged about x2 if we compare to kilo-1 vs liberty-1
*) I see a lot of interest from various companies to use Rally (because
it is *official* now)


I'd also like to note that Rally is the only project in openstack/ that 
has independent, non-affiliated contributors in the top 5 contributors 
to the project. I think that is an excellent sign that the Rally 
contributor community is growing slowly but surely in ways that we (as a 
community) want to encourage.


What degree the Big Tent has to do with this is, of course, up for 
debate. Just wanted to point out something that differentiates the Rally 
contributor team from other openstack/ project teams.


Best,
-jay



Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-15 Thread Bogdan Dobrelya
 On 06/12/2015 07:58 AM, Bogdan Dobrelya wrote:
 
 I'm actually happy to hear from you, since we were discussing together
 about that over the last 2 summits, without real plan between both groups.

I believe, as a first step, the contribution policy for the Fuel library
should be clear and should *prevent new forks of upstream modules* from
being accepted in the future. This will prevent the technical debt and
fork maintenance cost from increasing even more.

I suggested a few changes to the following wiki section [0]; see Case B,
Adding a new module. Let's please discuss this change, as I took the
initiative and edited the current version in-place (fine by me, there is
a history in the wiki). The main point of the change is to allow only
pulling changes into existing forks, and to prohibit adding new forks,
like [1] or [2] (related revert [3]).

There was also a suggested solution that new upstream modules should be
added to Fuel as plugins [4] and distributed as packages. Any emergency
custom patches may be added as usual patches for packages.
Submodules could also be an option.

[0]
https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Adding_new_puppet_modules_to_fuel-library
[1] https://review.openstack.org/#/c/190612/
[2] https://review.openstack.org/#/c/128575/
[3] https://review.openstack.org/#/c/191769/
[4] https://wiki.openstack.org/wiki/Fuel/Plugins

 
 
 Good feedback from the patch's author.
 
 
 Sounds like another plan here, which sounds great.
 
 
 Can you clarify what must be done by upstream manifests?

OpenStack should be deployed from upstream packages with the help of
upstream puppet modules instead of the forks in the Fuel library; we
should go a bit further in the acceptance criteria. That is what I mean.

-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando



Re: [openstack-dev] [Neutron] Proposing YAMAMOTO Takashi for the Control Plane core team

2015-06-15 Thread Oleg Bondarev
+1

On Mon, Jun 15, 2015 at 12:16 PM, Ihar Hrachyshka ihrac...@redhat.com
wrote:


 Not on the list either, but I want to +1 what Henry said. Yamamoto's
 reviews extend across the whole code base and are almost always *very*
 useful.

 On 06/12/2015 11:39 PM, Henry Gessau wrote:
  Although I am not on your list I would like to add my +1! Yamamoto
  shows great attention to detail in code reviews and frequently
  finds real issues that were not spotted by others.
 
  On Thu, Jun 11, 2015, Kevin Benton blak...@gmail.com wrote:
  Hello all!
 
  As the Lieutenant of the built-in control plane[1], I would like
  YAMAMOTO Takashi to be a member of the control plane core
  reviewer team.
 
  He has been extensively reviewing the entire codebase[2] and his
  feedback on patches related to the reference implementation has
  been very useful. This includes everything ranging from the AMQP
  API to OVS flows.
 
  Existing cores that have spent time working on the reference
  implementation (agents and AMQP code), please vote +1/-1 for his
  addition to the team. Aaron, Gary, Assaf, Maru, Kyle, Armando,
  Carl and Oleg; you have all been reviewing things in these areas
  recently so I would like to hear from you specifically.
 
  1.
  http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
 
 
 2. http://stackalytics.com/report/contribution/neutron-group/90
 
 
  Cheers -- Kevin Benton
 
 
 




Re: [openstack-dev] [Neutron] Proposing Rossella Sblendido for the Control Plane core team

2015-06-15 Thread Oleg Bondarev
+1

On Mon, Jun 15, 2015 at 12:14 PM, Ihar Hrachyshka ihrac...@redhat.com
wrote:


 No doubt +1.

 On 06/12/2015 09:44 PM, Kevin Benton wrote:
  Hello!
 
  As the Lieutenant of the built-in control plane[1], I would like
  Rossella Sblendido to be a member of the control plane core
  reviewer team.
 
  Her review stats are in line with other cores[2] and her feedback
  on patches related to the agents has been great. Additionally, she
  has been working quite a bit on the blueprint to restructure the L2
  agent code so she is very familiar with the agent code and the APIs
  it leverages.
 
  Existing cores that have spent time working on the reference
  implementation (agents and AMQP code), please vote +1/-1 for her
  addition to the team. Aaron, Gary, Assaf, Maru, Kyle, Armando, Carl
  and Oleg; you have all been reviewing things in these areas
  recently so I would like to hear from you specifically.
 
  1.
  http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy
 
 
 2. http://stackalytics.com/report/contribution/neutron-group/30
 
  Cheers -- Kevin Benton
 
 
 




Re: [openstack-dev] [neutron] Missing openvswitch filter rules

2015-06-15 Thread Ihar Hrachyshka

On 06/13/2015 04:38 PM, Jeff Feng wrote:
 I'm using OVSHybridIptablesFirewallDriver in ovs_neutron_plugin.ini:

 [securitygroup]
 firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 enable_security_group = True

 But I can not see any related rules added in iptables after restarting
 neutron-openvswitch-agent. Has anyone seen the same issue before? This is
 in the Juno release. Any idea which configuration could be wrong/missing?

I would start by looking into the OVS agent log. Do the
iptables-save/restore calls succeed there? Also, an obviously dumb
question, but worth asking: have you actually started any instances?

Ihar
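To make that check concrete, here is a minimal sketch (not part of the original reply) that scans iptables-save output for Neutron-managed chains. The "neutron-openvswi" chain prefix is an assumption based on Neutron's iptables manager naming (derived from the truncated agent binary name) and may differ in your deployment.

```python
# Sketch: look for the chains the OVS hybrid firewall driver is expected
# to create in an `iptables-save` dump. The chain prefix is an assumption;
# adjust it for your environment.
import subprocess

CHAIN_PREFIX = "neutron-openvswi"


def neutron_chains(iptables_save_output):
    """Return the set of Neutron-managed chain names found in the dump."""
    chains = set()
    for line in iptables_save_output.splitlines():
        # Chain declarations in iptables-save output start with ':'.
        if line.startswith(":" + CHAIN_PREFIX):
            chains.add(line.split()[0][1:])  # strip the leading ':'
    return chains


def dump_iptables():
    """Call iptables-save; requires root on a real compute node."""
    return subprocess.check_output(["iptables-save"]).decode()


if __name__ == "__main__":
    # Offline demonstration with a fabricated dump fragment.
    sample = "\n".join([
        "*filter",
        ":INPUT ACCEPT [0:0]",
        ":neutron-openvswi-sg-chain - [0:0]",
        ":neutron-openvswi-sg-fallback - [0:0]",
        "COMMIT",
    ])
    print(sorted(neutron_chains(sample)))
```

If the set comes back empty on a node that is hosting instances, the firewall driver most likely never programmed iptables, which points at the agent configuration rather than at the rules themselves.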



Re: [openstack-dev] [oslo.vmware] Bump oslo.vmware to 1.0.0

2015-06-15 Thread Davanum Srinivas
+1 to the plan, garyk, vipin et al.

-- dims

On Mon, Jun 15, 2015 at 7:00 AM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 The email below was a little cryptic, so here is the general plan to
 move forwards:

 1. Oslo.vmware:- rebase and update the patch
 https://review.openstack.org/#/c/114503. This will sort out the issues
 that we have with the exception hierarchy
 2. Nova: hopefully manage to get reviews for
 https://review.openstack.org/191569. This will ensure that Nova works with
 the exceptions in the existing and future formats (and ensure that we do
  not go through the debacle that we hit with 0.13.0). Please note that I
  have tested this with both the current version and the aforementioned
  version, and it works
 3. We will make a few extra cleanups in Nova as there are some exceptions
 in the driver that are imported from oslo.vmware and they are specific for
 nova.

  So hopefully we can get consensus on the plan and move forward.

  A luta continua (the struggle continues)
 Thanks
 Gary

 On 6/14/15, 5:34 PM, Gary Kotton gkot...@vmware.com wrote:

Hi,
I agree with Vipin. Can we please address the exception handling. We
already have the patches.
Thanks
Gary

On 6/14/15, 12:41 PM, Vipin Balachandran vbalachand...@vmware.com
wrote:

Dims,

There are some problems with exception hierarchy which need to be fixed.

-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com]
Sent: Tuesday, June 09, 2015 7:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [oslo.vmware] Bump oslo.vmware to 1.0.0

Gary, Tracy, Vipin and other contributors,

Is oslo.vmware API solid enough for us to bump to 1.0.0? if not, what's
left to be done?

thanks,
dims

--
Davanum Srinivas ::
https://twitter.com/dims









-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [Neutron] Proposing Rossella Sblendido for the Control Plane core team

2015-06-15 Thread Ihar Hrachyshka

No doubt +1.

On 06/12/2015 09:44 PM, Kevin Benton wrote:
 Hello!
 
 As the Lieutenant of the built-in control plane[1], I would like
 Rossella Sblendido to be a member of the control plane core 
 reviewer team.
 
 Her review stats are in line with other cores[2] and her feedback
 on patches related to the agents has been great. Additionally, she
 has been working quite a bit on the blueprint to restructure the L2
 agent code so she is very familiar with the agent code and the APIs
 it leverages.
 
 Existing cores that have spent time working on the reference 
 implementation (agents and AMQP code), please vote +1/-1 for her 
 addition to the team. Aaron, Gary, Assaf, Maru, Kyle, Armando, Carl
 and Oleg; you have all been reviewing things in these areas
 recently so I would like to hear from you specifically.
 
 1.
 http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy

 
2. http://stackalytics.com/report/contribution/neutron-group/30
 
 Cheers -- Kevin Benton
 
 
 



Re: [openstack-dev] stackforge projects are not second class citizens

2015-06-15 Thread Jay Pipes

On 06/15/2015 06:20 AM, Joe Gordon wrote:

One of the stated problems the 'big tent' is supposed to solve is:

'The binary nature of the integrated release results in projects outside
the integrated release failing to get the recognition they deserve.
Non-official projects are second- or third-class citizens which can't
get development resources. Alternative solutions can't emerge in the
shadow of the blessed approach. Becoming part of the integrated release,
which was originally designed to be a technical decision, quickly became
a life-or-death question for new projects, and a political/community
minefield.' [0]

Meaning projects should see an uptick in development once they drop
their second-class citizenship and join OpenStack. Now that we have been
living in the world of the big tent for several months now, we can see
if this claim is true.

Below is a list of the first few projects to join OpenStack after
the big tent, All of which have now been part of OpenStack for at least
two months.[1]

* Magnum -  Tue Mar 24 20:17:36 2015
* Murano - Tue Mar 24 20:48:25 2015
* Congress - Tue Mar 31 20:24:04 2015
* Rally - Tue Apr 7 21:25:53 2015

When looking at stackalytics [2] for each project, we don't see any
noticeable change in the number of reviews, contributors, or number of
commits from before and after each project joined OpenStack.

So what does this mean? At least in the short term, moving from
Stackforge to OpenStack does not result in an increase in development
resources (it is too early to know about the long term). One of the three
reasons for the big tent appears to be unfounded, but the other two
reasons hold.


You have not given enough time to see the effects of the Big Tent, IMHO. 
Lots of folks in the corporate world just found out about it at the 
design summit, frankly.


 The only thing I think this information changes is what
 people's expectations should be when applying to join OpenStack.


What is your assumption of what people's expectations are when applying 
to join OpenStack?


Best,
-jay



Re: [openstack-dev] [trove]Put all alternative configurations in default trove.conf

2015-06-15 Thread Amrith Kumar
Hello!

I’ve never had to set datastore_manager in trove.conf, and I can launch Trove 
just fine with any one of three setup methods: devstack, redstack, or 
following the detailed installation steps provided in the documentation.

My suspicion is that the steps you are using to register your guest image are 
not correct (i.e. the invocation of the trove-manage command or any wrappers 
for it).

I would like to understand the problem you are facing because this solution 
appears baffling.

-amrith

--

Amrith Kumar, CTO Tesora (www.tesora.com)

Twitter: @amrithkumar
IRC: amrith @freenode



From: 陈迪豪 [mailto:chendi...@unitedstack.com]
Sent: Monday, June 15, 2015 7:15 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [trove]Put all alternative configurations in default 
trove.conf

Hi all,

I have created the blueprint about the default configuration file. I think we 
should add the essential configuration like datastore_manager in default 
trove.conf.

The blueprint is here 
https://blueprints.launchpad.net/trove/+spec/default-configuration-items

Any suggestion about this?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

2015-06-15 Thread Ryan Hallisey
+1 Great job with Cinder.

-Ryan

- Original Message -
From: Steven Dake (stdake) std...@cisco.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Sunday, June 14, 2015 1:48:48 PM
Subject: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

Hey folks, 

I am proposing Harm Waites for the Kolla core team. He did a fantastic job 
implementing Designate in a container[1] which I’m sure was incredibly 
difficult and never gave up even though there were 13 separate patch reviews :) 
Beyond Harm’s code contributions, he is responsible for 32% of the 
“independent” reviews[2], where independents compose 20% of our total reviewer 
output. I think we should judge core reviewers on more than just output, and I knew 
Harm was core reviewer material with his fantastic review of the cinder 
container where he picked out 26 specific things that could be broken that 
other core reviewers may have missed ;) [3]. His other reviews are also as 
thorough as this particular review was. Harm is active in IRC and in our 
meetings for which his TZ fits. Finally Harm has agreed to contribute to the 
ansible-multi implementation that we will finish in the liberty-2 cycle. 

Consider my proposal to count as one +1 vote. 

Any Kolla core is free to vote +1, abstain, or vote –1. A –1 vote is a veto for 
the candidate, so if you are on the fence, best to abstain :) Since our core 
team has grown a bit, I’d like 3 core reviewer +1 votes this time around (vs 
Sam’s 2 core reviewer votes). I will leave the voting open until June 21  
UTC. If the vote is unanimous prior to that time or a veto vote is received, 
I’ll close voting and make appropriate adjustments to the gerrit groups. 

Regards 
-steve 

[1] https://review.openstack.org/#/c/182799/ 
[2] 
http://stackalytics.com/?project_type=all&module=kolla&company=%2aindependent 
[3] https://review.openstack.org/#/c/170965/ 




Re: [openstack-dev] [trove]Put all alternative configurations in default trove.conf

2015-06-15 Thread Doug Shelley
I agree with Amrith – we should first try to understand what the problem is.

The datastore_manager config setting is passed down to the guest in the 
guest_info.conf file along with a few other config settings that the guest 
uses. I believe the intention is that trove_guestagent.conf is passed down 
unmodified (i.e. all guests have the same trove_guestagent.conf).

Regards,
Doug
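To illustrate the split Doug describes, here is a minimal sketch of the two files on a guest. This is not from the thread: the file paths and every value other than datastore_manager and guest_id are placeholder assumptions, and real deployments carry many more settings.

```ini
# /etc/trove/trove-guestagent.conf -- illustrative excerpt; identical
# on every guest, since it is passed down unmodified.
[DEFAULT]
log_dir = /var/log/trove

# guest_info.conf -- illustrative; generated per guest at launch time
# and carries the per-guest settings, including datastore_manager.
[DEFAULT]
guest_id = 0a1b2c3d-4e5f-6789-abcd-ef0123456789
datastore_manager = mysql
```

Under this model the guest agent never needs datastore_manager in the shared trove.conf; it arrives via the generated per-guest file instead.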


From: Amrith Kumar amr...@tesora.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Monday, June 15, 2015 at 7:27 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove]Put all alternative configurations in 
default trove.conf
Subject: Re: [openstack-dev] [trove]Put all alternative configurations in 
default trove.conf

Hello!

I’ve never had to set datastore_manager in trove.conf, and I can launch Trove 
just fine with any one of three setup methods: devstack, redstack, or 
following the detailed installation steps provided in the documentation.

My suspicion is that the steps you are using to register your guest image are 
not correct (i.e. the invocation of the trove-manage command or any wrappers 
for it).

I would like to understand the problem you are facing because this solution 
appears baffling.

-amrith

--

Amrith Kumar, CTO Tesora (www.tesora.com)

Twitter: @amrithkumar
IRC: amrith @freenode



From: 陈迪豪 [mailto:chendi...@unitedstack.com]
Sent: Monday, June 15, 2015 7:15 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [trove]Put all alternative configurations in default 
trove.conf

Hi all,

I have created the blueprint about the default configuration file. I think we 
should add the essential configuration like datastore_manager in default 
trove.conf.

The blueprint is here 
https://blueprints.launchpad.net/trove/+spec/default-configuration-items

Any suggestion about this?


Re: [openstack-dev] [Neutron] VLAN-aware VMs meeting

2015-06-15 Thread Ildikó Váncsa
Hi Kyle,

 -Original Message-
 From: Kyle Mestery [mailto:mest...@mestery.com]
 Sent: June 15, 2015 04:26
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] VLAN-aware VMs meeting
 
 On Fri, Jun 12, 2015 at 8:51 AM, Ildikó Váncsa ildiko.van...@ericsson.com
 wrote:
 
 
   Hi,
 
 
 
   Since we reopened the review for this blueprint we’ve got a
 large number of comments. It can be clearly seen that the original proposal
 has to be changed, although it still requires some discussion to define a
 reasonable design that provides the desired feature and is aligned with the
 architecture and guidelines of Neutron. In order to speed up the process to
 fit into the Liberty timeframe, we would like to have a discussion about this.
 The goal is to discuss the alternatives we have, decide which to go on with
 and sort out the possible issues. After this discussion the blueprint will be
 updated with the desired solution.
 
 
 
   I would like to propose a time slot for _next Tuesday (06. 16.),
 17:00UTC – 18:00UTC_. I would like to have the discussion on the
 #openstack-neutron channel, that gives a chance to guys who might be
 interested, but missed this mail to attend. I tried to check the slot, but 
 please
 let me know if it collides with any Neutron related meeting.
 
 
 
 
 This looks to be fine. I would suggest that it may make more sense to have it
 in an #openstack-meeting channel, though we can certainly do a free-form
 chat in #openstack-neutron as well. I think the desired end-goal here should
 be to figure out any remaining nits that are being discussed on the spec so
 we can move forward in Liberty.
 

I wasn’t sure that it is a good idea to bring an unscheduled meeting to the 
meeting channels. Is it acceptable to hold an ad-hoc meeting there, or does it 
have to be registered somewhere even for a single occasion? As far as I saw, 
the #openstack-meeting-4 channel is available, although the list of meetings 
and the .ical file are not in sync; it's not listed in either. Should we try it 
there?

I agree with the desired outcome, so that we can start the implementation as 
soon as possible. I will try to send out an agenda before the meeting with the 
points that we should discuss.

Thanks and Best Regards,
Ildikó

 
 Thanks,
 
 Kyle
 
 
 
 
 
 
   Thanks and Best Regards,
 
   Ildikó
 
 
 
 
 



Re: [openstack-dev] Getting rid of suds, which is unmaintained, and which we want out of Debian

2015-06-15 Thread Davanum Srinivas
Thomas,

Is anyone willing to do the work needed to get the existing Nova vmware
driver working with an alternate python library acceptable to Debian?
If not, this discussion is moot.

-- dims

On Mon, Jun 15, 2015 at 9:16 AM, Thomas Goirand z...@debian.org wrote:
 On 06/15/2015 11:31 AM, Joe Gordon wrote:
 Nova itself doesn't depend on suds anymore.

 A quick grep still shows references to suds (that's in Kilo, but the
 master branch shows similar results):

 etc/nova/logging_sample.conf:qualname = suds

 nova/tests/unit/test_hacking.py: def
 fake_suds_context(calls={}):

 nova/tests/unit/virt/vmwareapi/test_vim_util.py:with
 stubs.fake_suds_context(calls):

 nova/tests/unit/virt/vmwareapi/stubs.py:def fake_suds_context(calls=None):

 nova/tests/unit/virt/vmwareapi/stubs.py:Generate a suds client
 which automatically mocks all SOAP method calls.

 nova/tests/unit/virt/vmwareapi/stubs.py:
 mock.patch('suds.client.Client', fake_client),

 nova/tests/unit/virt/vmwareapi/test_driver_api.py:import suds

 nova/tests/unit/virt/vmwareapi/test_driver_api.py:
 mock.patch.object(suds.client.Client,

 nova/tests/unit/virt/vmwareapi/fake.py:Fake factory class for the
 suds client.

 nova/tests/unit/virt/vmwareapi/fake.py:Initializes the suds
 client object, sets the service content

 nova/virt/vmwareapi/vim_util.py:import suds

 nova/virt/vmwareapi/vim_util.py:for k, v in
 suds.sudsobject.asdict(obj).iteritems():

 nova/config.py:   'qpid=WARN', 'sqlalchemy=WARN',
 'suds=INFO',

 test-requirements.txt:suds=0.4


 Oslo.vmware has a suds
 dependency, but that is only needed if you are using the vmware virt
 driver in nova.

 It's used in unit tests, no?

 So nova's vmware driver depends on suds (it may be suds-jurko these
 days)

 As I wrote, suds-jurko isn't acceptable either, as it's also not
 maintained upstream.

 but not nova in general.

 If we don't want suds, we don't want suds; not just an it's only in some
 parts kind of answer. In particular, it still appears in
 test-requirements.txt and in the vmwareapi unit tests. Don't you think?

 Cheers,

 Thomas Goirand (zigo)





-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [nova] Adding success message to succeed actions

2015-06-15 Thread John Garbutt
On 15 June 2015 at 08:03, 郑振宇 zheng.zhe...@outlook.com wrote:
 Hi All,

 When querying instance actions using the API nova instance-action-list, nova
 will respond with the table shown below:
 root@controller:~# nova instance-action-list
 fcbba82f-60a1-4785-84f2-88bcf2da7e7e
 +--------+------------------------------------------+---------+----------------------------+
 | Action | Request_ID                               | Message | Start_Time                 |
 +--------+------------------------------------------+---------+----------------------------+
 | create | req-78e63d14-5177-4bcf-8d94-7a60af4f276f | -       | 2015-06-11T07:36:20.00     |
 +--------+------------------------------------------+---------+----------------------------+
 This instance has been successfully created, and we can see that the message
 for this action is empty.
 root@controller:~# nova list
 +--------------------------------------+--------+--------+------------+-------------+--------------------------+
 | ID                                   | Name   | Status | Task State | Power State | Networks                 |
 +--------------------------------------+--------+--------+------------+-------------+--------------------------+
 | fcbba82f-60a1-4785-84f2-88bcf2da7e7e | test_1 | ACTIVE | -          | Running     | sample_network=20.20.0.7 |
 +--------------------------------------+--------+--------+------------+-------------+--------------------------+

 On the other hand, when an action uses nova-scheduler and the action fails
 before or within nova-scheduler, the message in the response table will also
 be empty. For example, when an instance fails to be created because there is
 no valid host (an oversized flavor has been chosen), querying instance
 actions using nova instance-action-list will show the table below:

 root@controller:~# nova instance-action-list
 101756c5-6d6b-412b-9cc6-1628fa7c0b9c
 +--------+------------------------------------------+---------+----------------------------+
 | Action | Request_ID                               | Message | Start_Time                 |
 +--------+------------------------------------------+---------+----------------------------+
 | create | req-88d93eeb-fad9-4039-8ba6-1d2f01a0605d | -       | 2015-06-12T04:03:10.00     |
 +--------+------------------------------------------+---------+----------------------------+

 root@controller:~# nova list
 +--------------------------------------+------------+--------+------------+-------------+--------------------------+
 | ID                                   | Name       | Status | Task State | Power State | Networks                 |
 +--------------------------------------+------------+--------+------------+-------------+--------------------------+
 | 101756c5-6d6b-412b-9cc6-1628fa7c0b9c | event_test | ERROR  | -          | NOSTATE     |                          |
 | fcbba82f-60a1-4785-84f2-88bcf2da7e7e | test_1     | ACTIVE | -          | Running     | sample_network=20.20.0.7 |
 +--------------------------------------+------------+--------+------------+-------------+--------------------------+

 but other failed actions will have an error message:

 root@controller:/var/log/nova# nova instance-action-list
 4525360f-75da-4d5e-bed7-a21e62212eab
 +--------+------------------------------------------+---------+----------------------------+
 | Action | Request_ID                               | Message | Start_Time                 |
 +--------+------------------------------------------+---------+----------------------------+
 | create | req-7be58a31-0243-43a1-8a21-45ad1e90d279 | -       | 2015-06-12T07:38:05.00     |
 | resize | req-120f3379-313c-471e-b6e5-4d8d6a7d1357 | Error   | 2015-06-15T06:36:38.00     |
 +--------+------------------------------------------+---------+----------------------------+

 As we can see from the above example, we cannot distinguish such failed
 actions from successful ones using only the nova instance-action-list API.

 I suggest adding success messages to succeeded actions and/or adding error
 messages to the above-mentioned failed actions to fix this problem.

This is a big known weak point in the Nova API right now.

My gut tells me we should add an extra status field for
Pending/Success/Failure, rather than making users parse a string
message.

This general area is being revisited as part of the tasks topic that
we mention here:
http://specs.openstack.org/openstack/nova-specs/priorities/liberty-priorities.html#priorities-without-a-clear-plan

It would be great if you could submit a backlog spec to describe this
issue, so we don't lose it when we draw up the bigger plans:
http://specs.openstack.org/openstack/nova-specs/specs/backlog/index.html

It would be even better if you are happy to help with the Tasks effort.
We are not quite yet in a position where people can help out with that,
but I do hope we can get there soon.
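As an illustration of the ambiguity discussed in this thread, here is a hypothetical client-side sketch (not an existing Nova API): with only a free-form message column, an empty message can mean either success or a failure that happened before or within nova-scheduler, so a client must cross-check the instance status. The explicit status enum below is the kind of field being proposed, not something Nova exposes today.

```python
# Hypothetical sketch: how a client must guess an action's outcome today,
# and the explicit status enum that would remove the guesswork.
from enum import Enum


class ActionStatus(Enum):  # proposed field, not part of the current API
    PENDING = "pending"
    SUCCESS = "success"
    FAILURE = "failure"


def infer_outcome(action_message, instance_status):
    """Today's heuristic: parse the message string and fall back to
    the instance's status, since an empty ('-') message is ambiguous."""
    if action_message == "Error":
        return ActionStatus.FAILURE
    # Empty message: could be success, or a pre-scheduler failure.
    if instance_status == "ERROR":
        return ActionStatus.FAILURE
    return ActionStatus.SUCCESS


if __name__ == "__main__":
    # The 'create' action from the no-valid-host example above: the
    # message is empty, yet the instance ended up in ERROR state.
    print(infer_outcome("-", "ERROR"))   # heuristic concludes failure
    print(infer_outcome("-", "ACTIVE"))  # heuristic concludes success
```

With a first-class Pending/Success/Failure field on each action record, none of this string parsing or cross-referencing would be needed.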


Re: [openstack-dev] [trove]Put all alternative configurationsin default trove.conf

2015-06-15 Thread 陈迪豪
Thanks for your reply @amrith.


The datastore_manager setting refers to the manager you are going to use, 
mysql or another. If you don't set it, the default value is None, and if it is 
None, the guest agent will fail to start up.


That's why I think it is a necessary configuration item we need to focus on. I 
don't know how you can set up without setting it. So what is your datastore if 
you don't configure it? MySQL?
 
-- Original --
From: Amrith Kumar amr...@tesora.com
Date: Mon, Jun 15, 2015 07:27 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove]Put all alternative configurations in
default trove.conf

 
  
Hello!

I’ve never had to set datastore_manager in trove.conf and I can launch Trove
just fine with any one of three setup methods: devstack, redstack, or
following the detailed installation steps provided in the documentation.

My suspicion is that the steps you are using to register your guest image are
not correct (i.e. the invocation of the trove-manage command or any wrappers
for it).

I would like to understand the problem you are facing because this solution
appears baffling.

-amrith

--

Amrith Kumar, CTO Tesora (www.tesora.com)

Twitter: @amrithkumar
IRC: amrith @freenode


From: 陈迪豪 [mailto:chendi...@unitedstack.com]
Sent: Monday, June 15, 2015 7:15 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [trove]Put all alternative configurations in default
trove.conf

Hi all,

I have created the blueprint about the default configuration file. I think we
should add the essential configuration like datastore_manager in default
trove.conf.

The blueprint is here
https://blueprints.launchpad.net/trove/+spec/default-configuration-items

Any suggestion about this?


[openstack-dev] [fuel] Fuel API settings reference

2015-06-15 Thread Oleg Gelbukh
Good day, fellow fuelers

The Fuel API is a powerful tool that allows very fine tuning of deployment
settings and parameters, and we all know that the UI exposes only a fraction
of the full range of attributes a client can pass to the Fuel installer.

However, there is very little documentation explaining which settings
are accepted by Fuel objects, what their meanings are, and what their
syntax is. There is a main reference document for the API [1], but it gives
almost no insight into the payload of parameters that each entity accepts.
What they are and what they are for seems to be mostly scattered as tribal
knowledge.

I would like to understand if there is a need in such a document among
developers and deployers who consume Fuel API? Or might be there is already
such document or effort to create it going on?

--
Best regards,
Oleg Gelbukh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DB2 CI enablement on Keystone

2015-06-15 Thread Lance Bragstad
On Mon, Jun 15, 2015 at 5:00 AM, Feng Xi BJ Yan yanfen...@cn.ibm.com
wrote:

 Hi, Keystone guys,

 Could we have a talk about DB2 CI enablement this Monday, 8 PM central
 US time, which is Tuesday 9 AM Beijing time?

Works for me, I'll make a note to be in the channel at 8 PM central.

Thanks for the update.



 For your questions, here are my answers:

 1) Is the team going to be responsive to requests unlike last time there
 was a problem?
 *(yanfengxi) Yes, problems will be handled in time. In our current working
 process, issue items will be opened internally and automatically if the CI
 fails, and our maintainer will try to handle the opened issues as soon as
 possible. If there are other issues, please send email to
 yanfen...@cn.ibm.com, **or reach IRC account
 yanfengxi on channel #openstack-infra.*

 2) Is the job stable? I have no data about the stability, so I couldn't
 vouch for it.
 *(yanfengxi) From the statistics of other running CIs, we have an 88% pass
 rate, including environment failures and tempest failures, which I think is
 acceptable.*
 *Because the keystone CI is not enabled, we do not have statistics for it.
 But from several test runs, the keystone CI can run properly.*

 By the way, I already updated the wiki page
 https://wiki.openstack.org/w/index.php?title=IBM/IBM_DB2_CI to include an
 IRC nickname.

 Best Regard:)
 Bruce Yan

 Yan, Fengxi (闫凤喜)
 Openstack Platform Team
 IBM China Systems  Technology Lab, Beijing
 E-Mail: yanfen...@cn.ibm.com
 Tel: 86-10-82451418  Notes: Feng Xi FX Yan/China/IBM
 Address: 3BW239, Ring Building. No.28 Building, ZhongGuanCun Software
 Park,No.8
 DongBeiWang West Road, ShangDi, Haidian District, Beijing, P.R.China


 From: Brant L Knudson/Rochester/IBM@IBMUS
 To: Feng Xi BJ Yan/China/IBM@IBMCN@IBMAU
 Cc: Dan Moravec/Rochester/IBM@IBMUS, Zhu ZZ Zhu/China/IBM@IBMCN
 Date: 2015/06/10 02:51
 Subject: Re: Could you help to talk about DB2 CI enablement on keystone
 meeting
 --



 I brought this up at the keystone meeting, and the community didn't want
 DB2 CI reporting enabled until a couple of questions were answered:

 They asked a couple of questions that I wasn't able to answer:

 1) Is the team going to be responsive to requests unlike last time there
 was a problem?
 2) Is the job stable? I have no data about the stability, so I couldn't
 vouch for it.

 They also wanted an IRC nick for someone that can answer questions and
 respond to problems, on the wiki page:
 https://wiki.openstack.org/w/index.php?title=IBM/IBM_DB2_CI

 Rather than go back and forth on this week to week, they suggested that we
 have a meeting on freenode irc in #openstack-keystone when it's convenient
 enough for all of us. They were willing to meet any day at 8 PM central US
 time, which I think is 9 AM Beijing time. So just pick a date when you can
 meet and send a note to the openstack-dev mailing list when it is.

 Also, you can include in the note any backup material that you have, such
 as answers to the above questions.

 Brant Knudson, OpenStack Development - Keystone core member
 Phone:   507-253-8621 T/L:553-8621




 From: Brant L Knudson/Rochester/IBM
 To: Feng Xi BJ Yan/China/IBM@IBMCN@IBMAU
 Cc: Dan Moravec/Rochester/IBM@IBMUS, Zhu ZZ Zhu/China/IBM@IBMCN
 Date: 06/03/2015 09:17 AM
 Subject: Re: Could you help to talk about DB2 CI enablement on keystone
 meeting
 --



 Looks OK to me. I'll add it to the agenda for next week.

 Brant Knudson, OpenStack Development - Keystone core member
 Phone:   507-253-8621 T/L:553-8621




 From: Feng Xi BJ Yan/China/IBM@IBMCN
 To: Brant L Knudson/Rochester/IBM@IBMUS@IBMAU
 Cc: Dan Moravec/Rochester/IBM@IBMUS, Zhu ZZ Zhu/China/IBM@IBMCN
 Date: 06/02/2015 11:58 PM
 Subject: Re: Could you help to talk about DB2 CI enablement on keystone
 meeting
 --


 OK, please take a look at this patch
 https://review.openstack.org/#/c/187751/




 Best Regard:)
 Bruce Yan

 Yan, Fengxi (闫凤喜)
 Openstack Platform Team
 IBM China Systems  Technology Lab, Beijing
 E-Mail: yanfen...@cn.ibm.com
 Tel: 86-10-82451418  Notes: Feng Xi 

Re: [openstack-dev] Online Migrations.

2015-06-15 Thread Philip Schwartz
This weekend, I discussed the requested change at length with Mike. I think,
before moving forward, we need a better understanding of what we are trying to
achieve.

Request: Add the ability to verify migrations are completed prior to contract.

As discussed here previously, I worked out a setup using the .info dict of the
Columns that are to be removed or migrated, but came across an issue that is
more concerning.

In order to contract the DB, the columns need to be removed from the Model
Classes. We can do this just prior to scanning the Model Classes to determine
schema migrations, but the columns would still exist in the Model Class for all
other loading of the model, e.g. for ORM queries and such.

After discussing this with Mike, there look to be three options:


  1.  Remove the columns at contract time and also build a mechanism to scan
for the same .info entry at query time to prevent them from being used.
  2.  Remove the columns at contract time and create a mechanism that, on load
of the data models once the migration is complete, would remove the columns (a
forced restart of the service once the migration is done will allow the ORM
queries to no longer see the column).
  3.  Build the controls into our process with a way of storing the current
release cycle information, to only allow contracts to occur at a set major
release, and maintain the column in the model until it is ready to be removed
at major release + 1 after the migration was added.

Personally, for ease of development and functionality, I think the best option
is the third, as it will not require reinventing the wheel around data loading
and the handling of already loaded data models that could affect ORM queries.
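
For illustration, here is a minimal sketch of the .info-dict marking
discussed above, assuming a helper named removed_field() (the helper name and
the info key are hypothetical; this is not actual Nova code):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


def removed_field(column):
    # Hypothetical helper: flag a still-mapped column as pending removal
    # so online-migration/contract tooling can discover it via .info.
    column.info['pending_removal'] = True
    return column


class Instance(Base):
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    # The column stays mapped, so normal ORM loads and updates keep
    # working while data is migrated; only the flag marks its fate.
    name = removed_field(Column(String(255)))


# A contract-time (or query-time) scan can then collect flagged columns.
pending = [c.name for c in Instance.__table__.columns
           if c.info.get('pending_removal')]
```

A contract step could drop exactly the columns collected in `pending`, while
option 1 above would consult the same flag from a query interceptor.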

-Ph




On Jun 10, 2015, at 5:16 PM, Dan Smith d...@danplanet.com wrote:

Well, as long as you want to be able to load data and perform updates on
Instance.name using normal ORM patterns, you'd still have that column
mapped; if you want to put extra things into it to signal your migration
tool, there is an .info dictionary available on the Column that you can
use to do that.  A function like removed_field() can certainly wrap
around a normal column mapping and populate a hint into the .info
dictionary that online migrations will pick up on.

I intentionally didn't (try to) define RemovedField because I figured it
might contain some nasty proxy bits to make it continue to work.
However, glad to hear that there is already a provision here that we can
use -- that sounds perfect.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Fuel API settings reference

2015-06-15 Thread Andrew Woodward
I think there is some desire to see more documentation around here, as there
are some odd interactions with parts of the data payload, and perhaps
documenting these may improve some of them.

I think the gaps, in order of most used, are:
* node object create / update
* environment networks (the fact that metadata can't be updated kills me)
* environment settings (the separate API for hidden and non-hidden settings
kills me)
* release update
* role add/update

After these are updated, I think we can move on to the common but less used:
* node interface assignment
* node disk assignment



On Mon, Jun 15, 2015 at 8:09 AM Oleg Gelbukh ogelb...@mirantis.com wrote:

 Good day, fellow fuelers

 Fuel API is a powerful tool that allows for very fine-grained tuning of
 deployment settings and parameters, and we all know that the UI exposes
 only a fraction of the full range of attributes a client can pass to the
 Fuel installer.

 However, there is very little documentation explaining which settings are
 accepted by Fuel objects, what they mean, and what their syntax is. There
 is a main reference document for the API [1], but it gives almost no
 insight into the payload of parameters that each entity accepts. What they
 are and what they are for seems to exist mostly as tribal knowledge.

 I would like to understand whether there is a need for such a document
 among developers and deployers who consume the Fuel API. Or perhaps such a
 document, or an effort to create it, already exists?

 --
 Best regards,
 Oleg Gelbukh
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
--
Andrew Woodward
Mirantis
Fuel Community Ambassador
Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Getting rid of suds, which is unmaintained, and which we want out of Debian

2015-06-15 Thread Joe Gordon
On Mon, Jun 15, 2015 at 4:16 PM, Thomas Goirand z...@debian.org wrote:

 On 06/15/2015 11:31 AM, Joe Gordon wrote:
  Nova itself doesn't depend on suds anymore.

 A quick grep still shows references to suds (that's in Kilo, but the
 master branch shows similar results):


Your git repo is out of date.


https://github.com/openstack/nova/search?utf8=%E2%9C%93&q=suds



 etc/nova/logging_sample.conf:qualname = suds


this doesn't actually require suds.

We can remove this line



 nova/tests/unit/test_hacking.py: def
 fake_suds_context(calls={}):

 nova/tests/unit/virt/vmwareapi/test_vim_util.py:with
 stubs.fake_suds_context(calls):

 nova/tests/unit/virt/vmwareapi/stubs.py:def fake_suds_context(calls=None):

 nova/tests/unit/virt/vmwareapi/stubs.py:Generate a suds client
 which automatically mocks all SOAP method calls.

 nova/tests/unit/virt/vmwareapi/stubs.py:
 mock.patch('suds.client.Client', fake_client),

 nova/tests/unit/virt/vmwareapi/test_driver_api.py:import suds

 nova/tests/unit/virt/vmwareapi/test_driver_api.py:
 mock.patch.object(suds.client.Client,

 nova/tests/unit/virt/vmwareapi/fake.py:Fake factory class for the
 suds client.

 nova/tests/unit/virt/vmwareapi/fake.py:Initializes the suds
 client object, sets the service content

 nova/virt/vmwareapi/vim_util.py:import suds


this was removed in https://review.openstack.org/#/c/181554/



 nova/virt/vmwareapi/vim_util.py:for k, v in
 suds.sudsobject.asdict(obj).iteritems():

 nova/config.py:   'qpid=WARN', 'sqlalchemy=WARN',
 'suds=INFO',


We missed this, so here is a patch https://review.openstack.org/#/c/191795/



 test-requirements.txt:suds>=0.4


  Oslo.vmware has a suds
  dependency, but that is only needed if you are using the vmware virt
  driver in nova.

 It's used in unit tests, no?


as explained above, nope.



  So nova's vmware driver depends on suds (it may be suds-jurko these
  days)

 As I wrote, suds-jurko isn't acceptable either, as it's also not
 maintained upstream.


Agreed, we have more work to do.



  but not nova in general.

 If we don't want suds, we don't want suds, not just an it's only in some
 parts kind of answer. Especially since it still appears in
 test-requirements.txt and in the vmwareapi unit tests. Don't you think?

 Cheers,

 Thomas Goirand (zigo)


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VLAN-aware VMs meeting

2015-06-15 Thread Kyle Mestery
On Mon, Jun 15, 2015 at 7:40 AM, Ildikó Váncsa ildiko.van...@ericsson.com
wrote:

 Hi Kyle,

  -Original Message-
  From: Kyle Mestery [mailto:mest...@mestery.com]
  Sent: June 15, 2015 04:26
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron] VLAN-aware VMs meeting
 
  On Fri, Jun 12, 2015 at 8:51 AM, Ildikó Váncsa 
 ildiko.van...@ericsson.com
  wrote:
 
 
Hi,
 
 
 
Since we reopened the review for this blueprint we’ve got a
  large number of comments. It can be clearly seen that the original
 proposal
  has to be changed, although it still requires some discussion to define a
  reasonable design that provides the desired feature and is aligned with
 the
  architecture and guidelines of Neutron. In order to speed up the process
 to
  fit into the Liberty timeframe, we would like to have a discussion about
 this.
  The goal is to discuss the alternatives we have, decide which to go on
 with
  and sort out the possible issues. After this discussion the blueprint
 will be
  updated with the desired solution.
 
 
 
I would like to propose a time slot for _next Tuesday (06. 16.),
  17:00UTC – 18:00UTC_. I would like to have the discussion on the
  #openstack-neutron channel, that gives a chance to guys who might be
  interested, but missed this mail to attend. I tried to check the slot,
 but please
  let me know if it collides with any Neutron related meeting.
 
 
 
 
  This looks to be fine. I would suggest that it may make more sense to
 have it
  in an #openstack-meeting channel, though we can certainly do a free-form
  chat in #openstack-neutron as well. I think the desired end-goal here
 should
  be to figure out any remaining nits that are being discussed on the spec
 so
  we can move forward in Liberty.
 

 I wasn’t sure that it is a good idea to bring an unscheduled meeting to
 the meeting channels. Is it acceptable to hold an ad-hoc meeting there or
 does it have to be registered somewhere even if it's one occasion? As much
 as I saw the #openstack-meeting-4 channel is available although the list of
 meetings and the .ical file is not in sync, it's not taken in either.
 Should we try it there?


I think as long as there isn't a scheduled meeting in place, we can use the
channel for a one-off meeting. It has the added benefit of being logged in
the same way as other meetings, and we can focus the discussion.

I agree with the desired outcome, so that we can start the implementation
 as soon as possible. I will try to send out an agenda before the meeting
 with the points that we should discuss.


Perfect! I've added a note to the Neutron meeting for tomorrow [1] to
highlight this. If you get the agenda written, please add a pointer in the
Neutron meeting agenda in the same place.

[1]
https://wiki.openstack.org/wiki/Network/Meetings#Announcements_.2F_Reminders


 Thanks and Best Regards,
 Ildikó

 
  Thanks,
 
  Kyle
 
 
 
 
 
 
Thanks and Best Regards,
 
Ildikó
 
 
___
  ___
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-
  requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-
  dev
 
 
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] DevStack switching from MySQL-python to PyMySQL

2015-06-15 Thread Kyle Mestery
On Mon, Jun 15, 2015 at 6:30 AM, Sean Dague s...@dague.net wrote:

 On 06/11/2015 06:29 AM, Sean Dague wrote:
  On 06/09/2015 06:42 PM, Jeremy Stanley wrote:
  As discussed in the Liberty Design Summit Moving apps to Python 3
  cross-project workshop, the way forward in the near future is to
  switch to the pure-python PyMySQL library as a default.
 
  https://etherpad.openstack.org/p/liberty-cross-project-python3
 
  To that end, support has already been implemented and tested in
  DevStack, and when https://review.openstack.org/184493 merges in a
  day or two this will become its default. Any last-minute objections
  or concerns?
 
  Note that similar work is nearing completion is oslo.db with
  https://review.openstack.org/184392 and there are also a lot of
  similar changes in flight for other repos under the same review
  topic (quite a few of which have already merged).
 
  Ok, we've had 2 days fair warning, I'm pushing the merge button here.
  Welcome to the world of pure python mysql.

 As a heads up for where we stand: the switch was flipped, but a lot of
 neutron jobs (rally and tempest) went into a pretty high failure rate
 after it was (all the other high volume jobs seemed fine).

 We reverted the change here to unwedge things -
 https://review.openstack.org/#/c/191010/

 After a long conversation with Henry and Armando we came up with a new
 plan, because we want the driver switch, and we want to figure out why
 it causes a high Neutron failure rate, but we don't want to block
 everything.

 https://review.openstack.org/#/c/191121/ - make the default Neutron jobs
 set some safe defaults (which are different than non Neutron job
 defaults), but add a flag to make it possible to expose these issues.

 Then add new non-voting check jobs to Neutron queue to expose these
 issues - https://review.openstack.org/#/c/191141/. Hopefully allowing
 interested parties to get to the bottom of these issues around the db
 layer. It's in the check queue instead of the experimental queue to get
 enough volume to figure out the pattern for the failures, because they
 aren't 100%, and they seem to move around a bit.

 Once https://review.openstack.org/#/c/191121/ is landed we'll revert the
 revert - https://review.openstack.org/#/c/191113/ - and get everything
 else back onto pymysql.


Thanks for the excellent summary of where things stand, Sean! And thanks to
Armando for jumping on this last week to get things moving so we can
stabilize everything else and sort out the issues Neutron has with pymysql.

Kyle


 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
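
At the SQLAlchemy level, the DevStack switch discussed above boils down to
which DBAPI driver the connection URL names; a small illustration (the
credentials and host below are made up):

```python
# The dialect+driver prefix of an SQLAlchemy database URL selects the
# DBAPI driver: "mysql+mysqldb" is the C-based MySQL-python driver,
# while "mysql+pymysql" is the pure-Python PyMySQL driver being
# switched to.
old_url = "mysql+mysqldb://nova:secret@127.0.0.1/nova?charset=utf8"


def to_pymysql(url):
    # Rewrite a mysqldb-style URL to name the PyMySQL driver instead;
    # everything after the scheme (credentials, host, database, query
    # arguments) stays the same.
    return url.replace("mysql+mysqldb://", "mysql+pymysql://", 1)


new_url = to_pymysql(old_url)
```

Nothing else in the application needs to change for most workloads, which is
why the switch could be made centrally in DevStack and oslo.db.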


Re: [openstack-dev] stackforge projects are not second class citizens

2015-06-15 Thread Flavio Percoco

On 15/06/15 19:20 +0900, Joe Gordon wrote:

One of the stated problems the 'big tent' is supposed to solve is:

'The binary nature of the integrated release results in projects outside the
integrated release failing to get the recognition they deserve. Non-official
projects are second- or third-class citizens which can't get development
resources. Alternative solutions can't emerge in the shadow of the blessed
approach. Becoming part of the integrated release, which was originally
designed to be a technical decision, quickly became a life-or-death question
for new projects, and a political/community minefield.' [0]

Meaning projects should see an uptick in development once they drop their
second-class citizenship and join OpenStack. Now that we have been living in
the world of the big tent for several months, we can see if this claim is
true.

Below is a list of the first few few projects to join OpenStack after the big
tent, All of which have now been part of OpenStack for at least two months.[1]

* Magnum - Tue Mar 24 20:17:36 2015
* Murano - Tue Mar 24 20:48:25 2015
* Congress - Tue Mar 31 20:24:04 2015
* Rally - Tue Apr 7 21:25:53 2015 


We should also add Zaqar to this list. It was *incubated* when the Big
Tent came in and that's the only (?) reason why the project was not
requested to go through the Big Tent request process.

Zaqar has gotten more contributors - most of them at the end of Kilo -
from the OpenStack community. Some of them without affiliation.

I don't believe it's completely related to the Big Tent change but I
do think not having that integrated tag helped the project to gain
more attention from the rest of the community.

Cheers,
Flavio



When looking at stackalytics [2] for each project, we don't see any noticeable
change in the number of reviews, contributors, or number of commits from
before and after each project joined OpenStack.

So what does this mean? At least in the short term, moving from Stackforge to
OpenStack does not result in an increase in development resources (it is too
early to know about the long term). One of the three reasons for the big tent
appears to be unfounded, but the other two reasons hold. The only thing I
think this information changes is what people's expectations should be when
applying to join OpenStack.

[0] https://github.com/openstack/governance/blob/master/resolutions/
20141202-project-structure-reform-spec.rst
[1] Ignoring OpenStackClient, since the repos were always in OpenStack; it
just didn't have a formal home in the governance repo.
[2] http://stackalytics.com/?module=magnum-group&metric=commits



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Volume creation fails in Horizon

2015-06-15 Thread Jayanthi, Swaroop
Hi All,

I am trying to create a volume for VMFS with a volume type (the selected
volume type has extra_specs). I am receiving a Volume creation failed error
in case the volume type has extra_specs.

Does Cinder not support volume creation if the volume type has extra_specs?
Is this expected behavior? Please let me know your thoughts.

If not, how can I overcome this issue from the Horizon UI in case the volume
type has extra_specs?

Thanks and Regards,
--Swaroop


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [fuel] more collaboration request

2015-06-15 Thread Bogdan Dobrelya
On 15.06.2015 13:59, Bogdan Dobrelya wrote:
 
 I believe as a first steep, the contribution policy to Fuel library

Sorry, the step, it is not so steep.

 should be clear and *prevent new forks of upstream modules* from being
 accepted in future. This will prevent the technical debt and fork
 maintenance cost from increasing even more in the future.
 
 I suggested a few changes to the following wiki section [0], see Case B,
 Adding a new module. Let's please discuss this change, as I took the
 initiative and edited the current version in-place (good for me,
 there is a history in the wiki). The main point of the change is to allow
 only pulling in changes to existing forks, and to prohibit adding new
 forks, like [1] or [2] (related revert [3]).
 
 There was also a suggested solution that new upstream modules should be
 added to Fuel as plugins [4] and distributed as packages. Any emergency
 custom patches may be added as usual patches for packages.
 Submodules could also be an option.
 
 [0]
 https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Adding_new_puppet_modules_to_fuel-library
 [1] https://review.openstack.org/#/c/190612/
 [2] https://review.openstack.org/#/c/128575/
 [3] https://review.openstack.org/#/c/191769/
 [4] https://wiki.openstack.org/wiki/Fuel/Plugins
 

And the second step, as I can see it, is for the Fuel build system to be
switched to upstream puppet modules plus custom patches - as was
suggested above in this mail thread, similar to the way we do it for the
fuel-library6.1 package in the related bp [0].

Fuel library components should be bundled as packages, like
fuel-puppet-nova, fuel-puppet-neutron and so on. These packages should
be built from upstream repositories of the corresponding release branches.
All custom changes in the Fuel library should be applied atop of these
builds and be maintained (rebased if the build fails) by the Fuel dev team,
until contributed upstream and removed from the custom patches. Here is
the related blueprint for this step [1]

[0] https://blueprints.launchpad.net/fuel/+spec/package-fuel-components
[1]
https://blueprints.launchpad.net/fuel/+spec/build-fuel-library-from-upstream

 
 OpenStack should be deployed from upstream packages with the
 help of upstream puppet modules instead of forks in the Fuel library; we
 should go a bit further in the acceptance criteria, that is what I mean.
 


-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Getting rid of suds, which is unmaintained, and which we want out of Debian

2015-06-15 Thread Thomas Goirand
On 06/15/2015 11:31 AM, Joe Gordon wrote:
 Nova itself doesn't depend on suds anymore.

A quick grep still shows references to suds (that's in Kilo, but the
master branch shows similar results):

etc/nova/logging_sample.conf:qualname = suds

nova/tests/unit/test_hacking.py: def
fake_suds_context(calls={}):

nova/tests/unit/virt/vmwareapi/test_vim_util.py:with
stubs.fake_suds_context(calls):

nova/tests/unit/virt/vmwareapi/stubs.py:def fake_suds_context(calls=None):

nova/tests/unit/virt/vmwareapi/stubs.py:Generate a suds client
which automatically mocks all SOAP method calls.

nova/tests/unit/virt/vmwareapi/stubs.py:
mock.patch('suds.client.Client', fake_client),

nova/tests/unit/virt/vmwareapi/test_driver_api.py:import suds

nova/tests/unit/virt/vmwareapi/test_driver_api.py:
mock.patch.object(suds.client.Client,

nova/tests/unit/virt/vmwareapi/fake.py:Fake factory class for the
suds client.

nova/tests/unit/virt/vmwareapi/fake.py:Initializes the suds
client object, sets the service content

nova/virt/vmwareapi/vim_util.py:import suds

nova/virt/vmwareapi/vim_util.py:for k, v in
suds.sudsobject.asdict(obj).iteritems():

nova/config.py:   'qpid=WARN', 'sqlalchemy=WARN',
'suds=INFO',

test-requirements.txt:suds>=0.4


 Oslo.vmware has a suds
 dependency, but that is only needed if you are using the vmware virt
 driver in nova.

It's used in unit tests, no?

 So nova's vmware driver depends on suds (it may be suds-jurko these
 days)

As I wrote, suds-jurko isn't acceptable either, as it's also not
maintained upstream.

 but not nova in general.

If we don't want suds, we don't want suds, not just an it's only in some
parts kind of answer. Especially since it still appears in
test-requirements.txt and in the vmwareapi unit tests. Don't you think?

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Fuel API settings reference

2015-06-15 Thread Przemyslaw Kaminski
Well, I suggest continuing

https://review.openstack.org/#/c/179051/

It basically requires updating the docstrings of handler functions
according to [1]. This way the documentation is as close to the code as
possible.

With some work, one could probably add automatic generation of docs out of
the JSONSchema.

P.

[1] http://pythonhosted.org/sphinxcontrib-httpdomain/
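
For illustration, a handler docstring in the sphinxcontrib-httpdomain style
that [1] describes might look like this (the handler name, URL, and status
codes are made-up examples, not actual Fuel code):

```python
class NodeHandler(object):
    # Hypothetical API handler, used only to illustrate the docstring
    # style that Sphinx + sphinxcontrib-httpdomain can render.
    def GET(self, node_id):
        """Return the JSON-serialized node object.

        .. http:get:: /api/nodes/(int:node_id)

           :param node_id: the node's unique identifier
           :statuscode 200: no error
           :statuscode 404: node not found in db
        """
        raise NotImplementedError("documentation sketch only")
```

Sphinx with the httpdomain extension enabled can then pull such docstrings
into the API reference, keeping the docs next to the code.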

On 06/15/2015 03:21 PM, Andrew Woodward wrote:
 I think there is some desire to see more documentation around here, as
 there are some odd interactions with parts of the data payload, and
 perhaps documenting these may improve some of them.
 
 I think the gaps, in order of most used, are:
 * node object create / update
 * environment networks (the fact that metadata can't be updated kills me)
 * environment settings (the separate API for hidden and non-hidden
 settings kills me)
 * release update
 * role add/update
 
 After these are updated, I think we can move on to the common but less used:
 * node interface assignment
 * node disk assignment
 
 
 
 On Mon, Jun 15, 2015 at 8:09 AM Oleg Gelbukh ogelb...@mirantis.com
 mailto:ogelb...@mirantis.com wrote:
 
 Good day, fellow fuelers
 
 Fuel API is a powerful tool that allows for very fine-grained tuning of
 deployment settings and parameters, and we all know that the UI exposes
 only a fraction of the full range of attributes a client can pass to
 the Fuel installer.
 
 However, there is very little documentation explaining which settings
 are accepted by Fuel objects, what they mean, and what their syntax is.
 There is a main reference document for the API [1], but it gives almost
 no insight into the payload of parameters that each entity accepts. What
 they are and what they are for seems to exist mostly as tribal knowledge.
 
 I would like to understand whether there is a need for such a document
 among developers and deployers who consume the Fuel API. Or perhaps such
 a document, or an effort to create it, already exists?
 
 --
 Best regards,
 Oleg Gelbukh
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 -- 
 --
 Andrew Woodward
 Mirantis
 Fuel Community Ambassador
 Ceph Community 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release]oslo mox3 release 0.8.0 (liberty)

2015-06-15 Thread Doug Hellmann
We are happy to announce the release of:

mox3 0.8.0: Mock object framework for Python

This release is part of the liberty release series.

With source available at:

http://git.openstack.org/cgit/openstack/mox3

For more details, please see the git log history below and:

http://launchpad.net/python-mox3/+milestone/0.8.0

Please report issues through launchpad:

http://bugs.launchpad.net/python-mox3

This is the first release since importing the project into our gerrit
server. It includes changes to the requirements for pbr.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DB2 CI enablement on Keystone

2015-06-15 Thread Steve Martinelli
It would be great to see the DB2 CI tests for Keystone. Can we see the 
results of a passing run first, before enabling the CI?

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:   Feng Xi BJ Yan yanfen...@cn.ibm.com
To: openstack-dev@lists.openstack.org
Cc: Dan Moravec mora...@us.ibm.com
Date:   06/15/2015 06:01 AM
Subject:[openstack-dev] DB2 CI enablement on Keystone



Hi, Keystone guys,

Could we have a talk about DB2 CI enablement this Monday at 8 PM central 
US time, which is Tuesday 9 AM Beijing time?

For your questions, here are my answers:

1) Is the team going to be responsive to requests, unlike last time there 
was a problem?
(yanfengxi) Yes, problems will be handled in time. In our current working 
process, issue items are opened internally and automatically if the CI 
fails, and our maintainer will try to handle the opened issues as soon as 
possible. If there are other issues, please send email to 
yanfen...@cn.ibm.com, or reach the IRC account yanfengxi on channel 
#openstack-infra.

2) Is the job stable? I have no data about the stability, so I couldn't 
vouch for it.
(yanfengxi) From the statistics of the other running CIs, we have an 88% 
pass rate (including environment failures and tempest failures), which I 
think is acceptable.
Because the Keystone CI is not enabled, we do not have statistics for it, 
but from several test runs, the Keystone CI runs properly.

By the way, I have already updated the wiki page 
https://wiki.openstack.org/w/index.php?title=IBM/IBM_DB2_CI to include an 
IRC nickname.

Best Regards :)
Bruce Yan

Yan, Fengxi (闫凤喜)
Openstack Platform Team
IBM China Systems & Technology Lab, Beijing
E-Mail: yanfen...@cn.ibm.com
Tel: 86-10-82451418  Notes: Feng Xi FX Yan/China/IBM
Address: 3BW239, Ring Building. No.28 Building, ZhongGuanCun Software 
Park,No.8 
DongBeiWang West Road, ShangDi, Haidian District, Beijing, P.R.China


From: Brant L Knudson/Rochester/IBM@IBMUS
To: Feng Xi BJ Yan/China/IBM@IBMCN@IBMAU
Cc: Dan Moravec/Rochester/IBM@IBMUS, Zhu ZZ Zhu/China/IBM@IBMCN
Date: 2015/06/10 02:51
Subject: Re: Could you help to talk about DB2 CI enablement on keystone 
meeting



I brought this up at the keystone meeting, and the community didn't want 
DB2 CI reporting enabled until a couple of questions were answered:

They asked a couple of questions that I wasn't able to answer:

1) Is the team going to be responsive to requests, unlike last time there 
was a problem?
2) Is the job stable? I have no data about the stability, so I couldn't 
vouch for it.

They also wanted an IRC nick for someone who can answer questions and 
respond to problems, on the wiki page: 
https://wiki.openstack.org/w/index.php?title=IBM/IBM_DB2_CI

Rather than go back and forth on this from week to week, they suggested 
that we have a meeting on freenode IRC in #openstack-keystone when it's 
convenient for all of us. They were willing to meet any day at 8 PM 
central US time, which I think is 9 AM Beijing time. So just pick a date 
when you can meet and send a note to the openstack-dev mailing list 
saying when it is.

Also, you can include in the note any backup material that you have, such 
as answers to the above questions.

Brant Knudson, OpenStack Development - Keystone core member
Phone:   507-253-8621 T/L:553-8621




From: Brant L Knudson/Rochester/IBM
To: Feng Xi BJ Yan/China/IBM@IBMCN@IBMAU
Cc: Dan Moravec/Rochester/IBM@IBMUS, Zhu ZZ Zhu/China/IBM@IBMCN
Date: 06/03/2015 09:17 AM
Subject: Re: Could you help to talk about DB2 CI enablement on keystone 
meeting



Looks OK to me. I'll add it to the agenda for next week.

Brant Knudson, OpenStack Development - Keystone core member
Phone:   507-253-8621 T/L:553-8621




From: Feng Xi BJ Yan/China/IBM@IBMCN
To: Brant L Knudson/Rochester/IBM@IBMUS@IBMAU
Cc: Dan Moravec/Rochester/IBM@IBMUS, Zhu ZZ Zhu/China/IBM@IBMCN
Date: 06/02/2015 11:58 PM
Subject: Re: Could you help to talk about DB2 CI enablement on keystone 
meeting


OK, please take a look at this patch 
https://review.openstack.org/#/c/187751/




Best Regards :)
Bruce Yan

Yan, Fengxi (闫凤喜)
Openstack Platform Team
IBM China Systems & Technology Lab, Beijing
E-Mail: yanfen...@cn.ibm.com
Tel: 86-10-82451418  Notes: Feng Xi FX Yan/China/IBM
Address: 3BW239, Ring Building. No.28 Building, ZhongGuanCun Software 
Park,No.8 
DongBeiWang West Road, ShangDi, Haidian District, Beijing, P.R.China


Brant L Knudson---2015/06/02 21:50:42---You need to provide some evidence 
that this is working for keystone before I'll bring it forward. Th

From: Brant L Knudson/Rochester/IBM@IBMUS
To: Feng Xi BJ 

Re: [openstack-dev] [Solum] Why do app names have to be unique?

2015-06-15 Thread Devdatta Kulkarni
Hi Adrian,


The new app resource that is being implemented 
(https://review.openstack.org/#/c/185147/)
does not enforce name uniqueness.


This issue was discussed here some time back.
Earlier thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-March/058858.html


The main argument for requiring unique app names within a tenant was 
usability. In customer research, it was found that users were more 
comfortable using names than UUIDs. Without the unique app-name 
constraint, users would have to resort to using UUIDs when they create 
multiple apps with the same name.


As a way to accommodate both requirements -- unique names when desired, 
and making them optional when it's not an issue -- we had said that we 
could make per-tenant app-name uniqueness configurable.
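To make the trade-off concrete, here is a toy sketch of what per-tenant configurable uniqueness could look like (hypothetical code, not Solum's actual implementation; all names are illustrative):

```python
import uuid

# Hypothetical sketch (not Solum's actual implementation): app-name
# uniqueness within a tenant is enforced only when a config flag says so.
class AppRegistry:
    def __init__(self, enforce_unique_names=True):
        self.enforce_unique_names = enforce_unique_names
        self._apps = {}  # tenant_id -> list of (app_uuid, name)

    def create(self, tenant_id, name):
        apps = self._apps.setdefault(tenant_id, [])
        if self.enforce_unique_names and any(n == name for _, n in apps):
            raise ValueError("app name %r already in use for this tenant" % name)
        app_id = str(uuid.uuid4())
        apps.append((app_id, name))
        return app_id

    def find(self, tenant_id, ref):
        """Resolve by UUID first, then by name; ambiguous names raise."""
        apps = self._apps.get(tenant_id, [])
        for app_id, _ in apps:
            if app_id == ref:
                return app_id
        matches = [app_id for app_id, n in apps if n == ref]
        if len(matches) > 1:
            raise LookupError("name %r is ambiguous; use the UUID" % ref)
        if matches:
            return matches[0]
        raise LookupError(ref)
```

With enforcement off, a name lookup can become ambiguous, which is exactly the usability concern described above: users then have to fall back to UUIDs.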

So I would be okay if we open a new blueprint to track this work.


Thanks,

- Devdatta



From: Adrian Otto adrian.o...@rackspace.com
Sent: Friday, June 12, 2015 6:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Solum] Why do app names have to be unique?

Team,

While triaging this bug, I got to thinking about unique names:

https://bugs.launchpad.net/solum/+bug/1434293

Should our app names be unique? Why? Should I open a blueprint for a new 
feature to make name uniqueness optional, defaulting it to on? If not, why?

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-15 Thread James Page
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi All

On 27/05/15 09:14, Thomas Goirand wrote:
 tl;dr: - We'd like to push distribution packaging of OpenStack on
 upstream gerrit with reviews. - The intention is to better share
 the workload, and improve the overall QA for packaging *and*
 upstream. - The goal is *not* to publish packages upstream -
 There's an ongoing discussion about using stackforge or openstack. 
 This isn't, IMO, that important, what's important is to get
 started. - There's an ongoing discussion about using a distribution
 specific namespace, my own opinion here is that using
 /openstack-pkg-{deb,rpm} or /stackforge-pkg-{deb,rpm} would be the
 most convenient because of a number of technical reasons like the
 amount of Git repository. - Finally, let's not discuss for too long
 and let's do it!!!

While working to re-align the dependency chain for OpenStack between
Debian and Ubuntu 15.10, and in preparation for the first Liberty
milestone, my team and I have been reflecting on the proposal to make
Ubuntu and Debian packaging of OpenStack a full OpenStack project.

We’ve come to the conclusion that for Debian and Ubuntu the best place
for us to collaborate on packaging is actually in the distributions,
as we do today, and not upstream in OpenStack.

Ubuntu has collaborated effectively with Debian since its inception
and has an effective set of tools to support the flow of packaging
(from Debian), bug reports and patches (to Debian) that have proven
effective, in terms of both efficiency and value, ever since I’ve been
working across both distributions.

This process allows each distribution to maintain its own direction,
whilst ensuring that any value that might be derived from
collaboration is supported with as minimal overhead as possible.

We understand and have communicated from the start of this
conversation that we will need to be able to maintain deltas between
Debian and Ubuntu - for both technical reasons, in the way the
distributions work (think Ubuntu main vs universe), as well as
objectives that each distribution has in terms of the way packaging
should work.

We don’t think that’s going to be made any easier by moving all of the
packaging under the OpenStack project - it just feels like we’re
trying to push a solved problem somewhere else, and then re-solve it
in a whole new set of ways.

The problem of managing delta and allowing a good level of
distribution independence is still going to continue to exist and will
be more difficult to manage due to the tighter coupling of development
process and teams than we have today.

On this basis, we're -1 on taking this proposal forward.

That said, we do appreciate that the Ubuntu packaging for OpenStack is
not as accessible as it might be using Bazaar as a VCS. In order to
provide a more familiar experience to developers and operators looking
to contribute to the wider Openstack ecosystem we will be moving our
OpenStack packaging branches over to the new Git support in Launchpad
in the next few weeks.

Regards

James

- -- 
James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQIcBAEBCAAGBQJVfudNAAoJEL/srsug59jD2isQAKtp9gSQ7FC0dy664wkSIqp3
ztmtIGPu5k0kZsg0sSxie3lA6mmRtv0m3sZWweNXfObXKwrWSgJSNynYDOhhiC7u
zlijQwEoY264byd7I+qacCdGBPi8fkXImB+6yx6OdJuHO+DcF/lBhF/5XW+wEwMa
j5GLN/UML+AO/Vp1BNBWholdCy/vm8SYDWtD3952R3fasBusCzpGj/52Pe3JifV6
kYWhnoihSQn+U02SXUc4/JETl/3o94EKp5/eu9We49sEdgHudSF3o6MdyLom2NfM
BNMpWs4iNWz7BlgqoDULotrFORRjQawru9R5StouB+wORUJrgVG+5lFINiR4RA+h
EMGXAshda+xwqm3KrdtHDLHRgFyfYov6w7s+caUMyV7gky1zmrB/NR+vG8Di2U2/
wyK+4y/c/Qt1CFhZSmuZ0zqzRzX7J2oxlT4P9FVdapnL5AYfXe6hZWhHJERjXmeS
GPovCQO/tBqRUiL9RwX6rcYbxykh9oseP4yxp5QZwLIO7cuIaStgMIN8z1vpZoBf
r7l3Bbd+ppRcq8NDqa7elRP0uiHm0wg7gMMcMWJJOJMU2Jm5DAw7PvZSA2FbksDL
Cu8WAloTsFCg11at6oTz6IcxdXsXTkpNp8O8Qv2yICj3Kw7gwX22Mc4V/CclrtFp
8lKFkTacJVbMkihgOFpu
=QN+b
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Put all alternative configurations in default trove.conf

2015-06-15 Thread Amrith Kumar
Hi,

The value is sent down from the Task Manager based on a value that is set up 
as part of the datastore.

It is actually a bad thing to set a value for datastore_manager in the 
taskmanager.conf file on the Trove controller and then attempt to launch some 
other datastore.

The way this works is that the Task Manager reads the information from the 
datastore configuration, dynamically renders a configuration file (two files, 
actually), and sends them down to the newly spawned instance.

Thanks,

-amrith

From: 陈迪豪 [mailto:chendi...@unitedstack.com]
Sent: Monday, June 15, 2015 8:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] Put all alternative configurations in 
default trove.conf

Thanks for your reply @amrith.

The datastore_manager refers to the manager you are going to use, mysql or 
another. If you don't set it, the default value is None, and if it's None, the 
guest agent will fail to start up.

That's why I think it's a necessary configuration option we need to focus on. 
I don't know why you can set up without setting it. So what's your datastore 
if you don't configure it? MySQL?
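For reference, the option under discussion would look roughly like this in a rendered guest configuration (illustrative sketch; the exact section and defaults depend on the Trove release):

```ini
[DEFAULT]
# Which datastore this guest agent manages; with no value the agent
# gets None and fails to start. Normally this file is rendered by the
# Task Manager from the registered datastore, not hand-edited.
datastore_manager = mysql
```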

-- Original --
From: Amrith Kumar amr...@tesora.com
Date: Mon, Jun 15, 2015 07:27 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove] Put all alternative configurations in 
default trove.conf

Hello!

I’ve never had to set datastore_manager in trove.conf, and I can launch Trove 
just fine with any one of three setup methods: devstack, redstack, or 
following the detailed installation steps provided in the documentation.

My suspicion is that the steps you are using to register your guest image are 
not correct (i.e. the invocation of the trove-manage command or any wrappers 
for it).

I would like to understand the problem you are facing because this solution 
appears baffling.

-amrith

--

Amrith Kumar, CTO Tesora (www.tesora.com)

Twitter: @amrithkumar
IRC: amrith @freenode



From: 陈迪豪 [mailto:chendi...@unitedstack.com]
Sent: Monday, June 15, 2015 7:15 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [trove] Put all alternative configurations in default 
trove.conf

Hi all,

I have created a blueprint about the default configuration file. I think we 
should add essential configuration options like datastore_manager to the 
default trove.conf.

The blueprint is here 
https://blueprints.launchpad.net/trove/+spec/default-configuration-items

Any suggestion about this?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][vpnaas] Next meeting June 16th at 1600 UTC

2015-06-15 Thread Paul Michali
Planning on weekly meetings for a while, since there are several things to
discuss. See the agenda on the wiki page:
https://wiki.openstack.org/wiki/Meetings/VPNaaS

There are a bunch of questions to discuss.

See you Tuesday!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Fox, Kevin M
If you're asking the cloud provider to go through the effort of installing 
Magnum, it's not that much extra effort to install Barbican at the same time. 
Making it a dependency isn't too bad then, IMHO.

Thanks,
Kevin

From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Sunday, June 14, 2015 11:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Madhuri,

On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.com wrote:

Hi All,

This is to bring the blueprint secure-kubernetes 
(https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes) into 
discussion. I have been trying to figure out the possible change areas to 
support this feature in Magnum. Below is a rough idea of how to proceed.

This task can be further broken into smaller pieces.

1. Add support for TLS in python-k8sclient.
The current auto-generated code doesn't support TLS, so this work will be to 
add TLS support to the Kubernetes Python APIs.

2. Add support for Barbican in Magnum.
Barbican will be used to store all the keys and certificates.

Keep in mind that not all clouds will support Barbican yet, so this approach 
could impair adoption of Magnum until Barbican is universally supported. It 
might be worth considering a solution that would generate all keys on the 
client, and copy them to the Bay master for communication with other Bay nodes. 
This is less secure than using Barbican, but would allow for use of Magnum 
before Barbican is adopted.
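As a rough sketch of that client-side generation step, here is a hypothetical helper (it shells out to the openssl CLI rather than using any Barbican or Magnum API, and all names are illustrative, not Magnum's actual implementation):

```python
import os
import subprocess
import tempfile


def generate_self_signed(common_name, days=365):
    """Generate an RSA key and a self-signed certificate on the client
    using the openssl CLI; returns (key_path, cert_path). Illustrative
    only -- a real deployment would sign node certs with a bay CA."""
    workdir = tempfile.mkdtemp(prefix="bay-tls-")
    key_path = os.path.join(workdir, "%s-key.pem" % common_name)
    cert_path = os.path.join(workdir, "%s-cert.pem" % common_name)
    subprocess.check_call(
        ["openssl", "req", "-x509", "-newkey", "rsa:2048",
         "-keyout", key_path, "-out", cert_path,
         "-days", str(days), "-nodes", "-subj", "/CN=%s" % common_name],
        stderr=subprocess.DEVNULL)  # silence key-generation progress noise
    return key_path, cert_path
```

Whether the resulting keys then go to Barbican or are copied to the Bay master is orthogonal to this generation step.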

If both methods were supported, the Barbican method should be the default, and 
we should put warning messages in the config file so that when the 
administrator relaxes the setting to use the non-Barbican configuration he/she 
is made aware that it requires a less secure mode of operation.

My suggestion is to completely implement the Barbican support first, and follow 
up that implementation with a non-Barbican option as a second iteration for the 
feature.

Another possibility would be for Magnum to use its own private installation of 
Barbican in cases where it is not available in the service catalog. I dislike 
this option because it creates an operational burden for maintaining the 
private Barbican service, and additional complexities with securing it.

3. Add support of TLS in Magnum.
This work mainly involves supporting the use of key and certificates in magnum 
to support TLS.

The user generates the keys and certificates and stores them in Barbican. Now 
there are two ways to access these keys while creating a bay.

Rather than “the user generates the keys…”, perhaps it might be better to word 
that as “the magnum client library code generates the keys for the user…”.

1. Heat will access Barbican directly.
While creating bay, the user will provide this key and heat templates will 
fetch this key from Barbican.

I think you mean that Heat will use the Barbican key to fetch the TLS key for 
accessing the native API service running on the Bay.

2. Magnum-conductor access Barbican.
While creating bay, the user will provide this key and then Magnum-conductor 
will fetch this key from Barbican and provide this key to heat.

Then Heat will copy these files onto the Kubernetes master node, and the bay 
will use these keys to start Kubernetes services signed with them.

Make sure that the Barbican keys used by Heat and magnum-conductor to store the 
various TLS certificates/keys are unique per tenant and per bay, and are not 
shared among multiple tenants. We don’t want it to ever be possible to trick 
Magnum into revealing secrets belonging to other tenants.

After discussion when we all come to same point, I will create separate 
blueprints for each task.
I am currently working on configuring Kubernetes services with TLS keys.

Please provide your suggestions if any.

Thanks for kicking off this discussion.

Regards,

Adrian



Regards,
Madhuri
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-15 Thread Thomas Goirand
On 06/08/2015 01:55 PM, Kuvaja, Erno wrote:
 One thing I like about plan D
 is that it would give also indicator how much the stable branch has moved in
 each individual project. 

The only indication you will get is how many patches it has. I fail to
see how this is valuable information: there is no info on how important
they are or anything of that kind, which is far more important.

Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VLAN-aware VMs meeting

2015-06-15 Thread Ildikó Váncsa
Hi Kyle,

Thanks for your support. Let's go for the meeting-channel option then, so the 
final details for tomorrow's meeting are:

Time: Tuesday (06. 16.), 17:00UTC - 18:00UTC
Location: #openstack-meeting-4

I will add a pointer to the agenda, when we have it!

Best Regards,
Ildikó

 -Original Message-
 From: Kyle Mestery [mailto:mest...@mestery.com]
 Sent: June 15, 2015 15:52
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] VLAN-aware VMs meeting
 
 On Mon, Jun 15, 2015 at 7:40 AM, Ildikó Váncsa ildiko.van...@ericsson.com 
 wrote:
 
 
   Hi Kyle,
 
 
-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: June 15, 2015 04:26
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] VLAN-aware VMs meeting
   
On Fri, Jun 12, 2015 at 8:51 AM, Ildikó Váncsa 
 ildiko.van...@ericsson.com
wrote:
   
   
  Hi,
   
   
   
  Since we reopened the review for this blueprint we’ve got a
large number of comments. It can be clearly seen that the original 
 proposal
has to be changed, although it still requires some discussion to 
 define a
reasonable design that provides the desired feature and is aligned 
 with the
architecture and guidelines of Neutron. In order to speed up the 
 process to
fit into the Liberty timeframe, we would like to have a discussion 
 about this.
The goal is to discuss the alternatives we have, decide which to go 
 on with
and sort out the possible issues. After this discussion the blueprint 
 will be
updated with the desired solution.
   
   
   
  I would like to propose a time slot for _next Tuesday (06. 16.),
17:00UTC – 18:00UTC_. I would like to have the discussion on the
#openstack-neutron channel, that gives a chance to guys who might be
interested, but missed this mail to attend. I tried to check the 
 slot, but please
let me know if it collides with any Neutron related meeting.
   
   
   
   
This looks to be fine. I would suggest that it may make more sense to 
 have it
in an #openstack-meeting channel, though we can certainly do a 
 free-form
chat in #openstack-neutron as well. I think the desired end-goal here 
 should
be to figure out any remaining nits that are being discussed on the 
 spec so
we can move forward in Liberty.
   
 
 
   I wasn’t sure that it is a good idea to bring an unscheduled meeting to 
 the meeting channels. Is it acceptable to hold an
 ad-hoc meeting there or does it have to be registered somewhere even if it's 
 one occasion? As much as I saw the #openstack-
 meeting-4 channel is available although the list of meetings and the .ical 
 file is not in sync, it's not taken in either. Should we try it
 there?
 
 
 
 
 I think as long as there isn't a scheduled meeting in place, we can use the 
 channel for a one-off meeting. It has the added benefit of
 being logged in the same way as other meetings, and we can focus the 
 discussion.
 
 
 
   I agree with the desired outcome, so that we can start the 
 implementation as soon as possible. I will try to send out an
 agenda before the meeting with the points that we should discuss.
 
 
 
 
 Perfect! I've added a note to the Neutron meeting for tomorrow [1] to 
 highlight this. If you get the agenda written, please add a
 pointer in the Neutron meeting agenda in the same place.
 
 [1] 
 https://wiki.openstack.org/wiki/Network/Meetings#Announcements_.2F_Reminders
 
 
 
   Thanks and Best Regards,
   Ildikó
 
   
Thanks,
   
Kyle
   
   
   
   
   
   
  Thanks and Best Regards,
   
  Ildikó
   
   
   
   
   
 
   
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-15 Thread Thomas Goirand
On 06/10/2015 03:46 PM, Thierry Carrez wrote:
 The main issue with B is that it doesn't work well once server component
 versions start to diverge, which will be the case starting with Liberty.

Review that policy then.

 We already couldn't (with Swift using separate versioning), but we worked
 around that by ignoring Swift from stable releases. As more projects opt
 into that, that will no longer be a possible workaround.

The only issue is having a correct version number. Swift could well
iterate after 2015.1, for example doing 2015.1.1 then 2015.1.2, etc.

 So we could do what you're asking (option B) for Kilo stable releases,
 but I think it's not really a viable option for stable/liberty onward.

I fail to understand how version numbers are related to doing
synchronized point releases. Sure, we have a problem with the version
numbers, but we can still do a point release of all the server projects
at once.

 This is really one of the things I think we want to get away from...
 If *every* stable commit is treated with the seriousness of it
 creating a release, lets make every commit a release.

 This means that Debian may be using a (micro)patch release newer or
 older than a different distro, but the key is that it empowers the
 vendors and/or users to select a release cadence that best fits them,
 rather than being tied to an arbitrary upstream community wide date.
 
 +1

FYI, after a CVE, I uploaded cinder with this version number:
2015.1.0+2015.06.06.git24.493b6c7f12-1

IMO, it looks ugly and is not comprehensible to Debian users. If I really
have to go this way, I will, but I would rather not.

 It also removes the stupid encouragement to use all components from the
 same date. With everything tagged at the same date, you kinda send the
 message that those various things should be used together. With
everything tagged separately, you send the message that you can mix and
 match components from stable/* as you see fit. I mean, it's totally
 valid to use stable branch components from various points in time
 together, since they are all supposed to work.

Though there's now zero guidance about what the cadence of releasing
server packages to our users should be.

 So I totally get that we should still have reference points to be able
 to tell this is fixed in openstack/nova stable/liberty starting with
 12.1.134post34 (or whatever we settle with). I totally get that any of
 those should ship with relevant release notes. But that's about all I
 think we need ?

That's not a little thing that you're pointing at: having reference
points is very important. But yeah, it may be the only thing... like
we will lose track of what is fixed in which version. :(

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-15 Thread Devdatta Kulkarni
Yes, the log deletion should be optional.

The question is what should be the default behavior. Should the default be to 
delete the logs and provide a flag to keep them, or keep the logs by default 
and provide a override flag to delete them?

Delete-by-default is consistent with the view that when an app is deleted, all 
its artifacts are deleted (the app's meta data, the deployment units (DUs), and 
the logs). This behavior is also useful in our current state when the app 
resource and the CLI are in flux. For now, without a way to specify a flag, 
either to delete the logs or to keep them, delete-by-default behavior helps us 
clean all the log files from the application's cloud files container when an 
app is deleted.

This is very useful for our CI jobs. Without it, we end up with lots of log 
files in the application's container and have to resort to separate scripts 
to delete them after an app is deleted.


Once the app resource and CLI stabilize, it should be straightforward to 
change the default behavior if required.


- Devdatta



From: Adrian Otto adrian.o...@rackspace.com
Sent: Friday, June 12, 2015 6:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

Team,

We currently delete logs for an app when we delete the app[1].

https://bugs.launchpad.net/solum/+bug/1463986

Perhaps there should be an optional setting at the tenant level that 
determines whether your logs are deleted by default (set to off initially), 
and an optional parameter to our DELETE calls that allows the opposite of the 
default to be specified if the user wants to override it at the time of 
deletion. Thoughts?
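The tenant-default-plus-override resolution could be as simple as this (hypothetical sketch; the parameter names are illustrative, not Solum's API):

```python
# Hypothetical sketch (names are illustrative, not Solum's actual API):
# resolve whether an app's logs should be deleted on DELETE. Solum today
# always deletes; here a tenant-level default is combined with an
# optional per-request override, and the override wins when given.
def should_delete_logs(tenant_default=True, override=None):
    return tenant_default if override is None else override
```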

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-15 Thread Thomas Goirand
On 06/10/2015 11:27 AM, Dave Walker wrote:
 On 10 June 2015 at 09:53, Thomas Goirand z...@debian.org wrote:
 On 06/05/2015 02:46 PM, Thierry Carrez wrote:
 So.. summarizing the various options again:

 Plan A
 Just drop stable point releases.
 (-) No more release notes
 (-) Lack of reference points to compare installations

 Plan B
 Push date-based tags across supported projects from time to time.
 (-) Encourages to continue using same version across the board
 (-) Almost as much work as making proper releases

 Plan C
 Let projects randomly tag point releases whenever
 (-) Still a bit costly in terms of herding cats

 Plan D
 Drop stable point releases, publish per-commit tarballs
 (-) Requires some infra changes, takes some storage space

 Plans B, C and D also require some release note / changelog generation
 from data maintained *within* the repository.

 Personally I think the objections raised against plan A are valid. I
 like plan D, since it's more like releasing every commit than not
 releasing anymore. I think it's the most honest trade-off. I could go
 with plan C, but I think it's added work for no additional value to the
 user.

 What would be your preferred option ?

 I see no point of doing D. I already don't use tarballs, and those who
 do could as well switch to generating them (how hard is it to run
 python setup.py sdist or git archive?).

 What counts is having a schedule date, where all distros are releasing a
 point release, so we have a common reference point. If that is a fully
 automated process, then great, less work for everyone, and it wont
 change anything from what we had in the past (we can even collectively
 decide for point release dates...).

 Cheers,

 Thomas Goirand (zigo)
 
 This is really one of the things I think we want to get away from...
 If *every* stable commit is treated with the seriousness of it
 creating a release, lets make every commit a release.
 
 This means that Debian may be using a (micro)patch release newer or
 older than a different distro, but the key is that it empowers the
 vendors and/or users to select a release cadence that best fits them,
 rather than being tied to an arbitrary upstream community wide date.

What you don't get here is that downstream distributions *DO* want the
arbitrary upstream community wide date, and don't want to just use any
commit.

 Yes, this might mean that your cadence might be more or less regular
 than an alternative vendor / distribution, but the key is that it
 empowers the vendor to meet the needs of their users/customers.

When did distribution vendors express this as an issue? Having
community-wide release dates doesn't prevent any vendor from doing a patch
release if they want to.

 For example, you could select a cadence of rebasing to a release every
 6 months - where as another consumer could choose to do one every 6
 weeks.

Which is what would be nice to avoid, so we have the same code base.
Otherwise we may be affected differently by a CVE.

 The difference is how much of a jump, at which intervals..
 Alternatively, a vendor might choose just to go with stock release +
 their own select cherry picked patches from stable/*, which is also a
 model that works.

This already happens, we don't need to remove point releases to do that.

 The issue around producing tarballs is really about having forwards
 and backwards verification by means of sha/md5 sums, which is hard to
 do when generating your own orig tarball.

Opposite way. When generating my tarball, I do it with a GPG signed tag.
This is verifiable very easily. By the way, sha & md5 are in no way tools
to sign a release; for that, we have PGP (and cross-signing of keys).
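The integrity-versus-authenticity distinction drawn here can be illustrated with the Python stdlib; hmac stands in for GPG purely to show that a keyed signature, unlike a bare checksum, cannot be recomputed by an attacker who lacks the key:

```python
import hashlib
import hmac

release = b'openstack-1.4.0.tar.gz contents'
tampered = b'openstack-1.4.0.tar.gz contents with a backdoor'

# A bare checksum proves integrity only: an attacker who swaps the
# tarball can simply publish the matching new checksum alongside it.
print(hashlib.sha256(tampered).hexdigest() !=
      hashlib.sha256(release).hexdigest())  # True

# A keyed signature (GPG in practice; hmac as a stand-in here) cannot
# be reproduced without the maintainer's key, so consumers can verify
# the origin of the tarball, not just its contents.
key = b'maintainer-private-key'
sig = hmac.new(key, release, hashlib.sha256).digest()
print(hmac.compare_digest(sig, hmac.new(key, release, hashlib.sha256).digest()))   # True
print(hmac.compare_digest(sig, hmac.new(key, tampered, hashlib.sha256).digest()))  # False
```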

 Debian, Ubuntu and I
 believe Arch have made varying use of 'pristine-tar' - which was an
 effort to make tarballs reproducible using xdelta so the sums match.
 However, maintainers seem to be moving away from this now.

As far as I know, I'm the only one using git archive ... | xz ... to
generate my own tarballs, but maybe this will change some day.
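As an aside on why reproducibility is hard: an archive's checksum changes whenever metadata such as mtimes or owner ids differ, even for identical file contents, which is exactly what pristine-tar works around after the fact. A small Python sketch (not any project's actual release tooling) showing that normalizing the metadata yields byte-identical archives:

```python
import hashlib
import io
import tarfile

def deterministic_tar_bytes(files):
    # Build a tar archive with normalized metadata (fixed mtime, uid/gid,
    # sorted member order) so the same content always produces the same
    # bytes, and therefore the same checksum, on every run.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode='w') as tar:
        for name, data in sorted(files.items()):
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            info.mtime = 0
            info.uid = info.gid = 0
            info.uname = info.gname = 'root'
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

files = {'pkg/__init__.py': b'', 'setup.py': b'print("hi")\n'}
a = hashlib.sha256(deterministic_tar_bytes(files)).hexdigest()
b = hashlib.sha256(deterministic_tar_bytes(files)).hexdigest()
print(a == b)  # True: byte-identical archives across runs
```

A plain `git archive` is deterministic in the same way because it derives all metadata from the commit, which is why a signed tag plus `git archive` gives a verifiable tarball.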

 When I perform source NEW reviews for Ubuntu Archive, I always check
 that getting the source orig tarball can be done with either
 get-orig-source (inspecting the generation method) or uscan and then
 diff the tarballs with the one included on the upload and the one
 generated.  Timestamps (or even shasums) haven't been an important
 issue for me, but the actual content and verifiable source is what has
 mattered more.

Correct. Though for me, a signed git tag is way better than any md5 in
the world.

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][openstackclient] python-openstackclient release 1.4.0 (liberty)

2015-06-15 Thread doug
We are thrilled to announce the release of:

python-openstackclient 1.4.0: OpenStack Command-line Client

This release is part of the liberty release series.

With source available at:

https://git.openstack.org/cgit/openstack/python-openstackclient

For more details, please see the git log history below and:

https://launchpad.net/python-openstackclient/+milestone/1.4.0

Please report issues through launchpad:

https://bugs.launchpad.net/python-openstackclient

Changes in python-openstackclient 1.3.0..1.4.0
----------------------------------------------

9f69b43 Improve the hint message
b328960 Fix the typo in `openstackclient/shell.py`
ec903a1 Add oidc plugin for listing federation projects
aac0d58 Skip trying to set project_domain_id if not using password
f3725b4 Updated from global requirements
18991ab Updated from global requirements
1f1ed4c Imported Translations from Transifex
4afd308 Include links to developer workflow documentation
f7feef7 Enable specifying domain for group and role commands
7cf7790 Not use the deprecated argument
b808566 Create 1.4.0 release notes
43d12db Updated from global requirements
31d785e Allow --insecure to override --os-cacert
3fa0bbc Clean up ec2 credentials help text
8d185a6 Add functional tests for volume set and unset
7665d52 Add domain support for ec2creds in v3 identity
15d3717 Add EC2 support for identity v3 API
db7d4eb Imported Translations from Transifex
bf99218 Add a reference to the IRC channels
226fc6c Change Credentials header to Blob from data
f737160 Get rid of oslo_i18n deprecation notice
b2cf651 Fix security group list command
a05cbf4 Rework shell tests
746f642 Add image functional tests
f9fa307 Add volume functional tests
0c9f5c2 Ignore cover directory from git
3ae247f Set tenant options on parsed namespace
5361652 Add support for volume v2 API
d14316a add domain scope arguments to v3 role add in doc
01573be project create is missing --parent in doc
542f587 add --domain argument to v3 project set
224d375 Add --wait to server delete
ae29f7f Use ostestr for test runs
2c4b878 Add cli tests for --verify and friends
da083d1 Small tweaks to osc plugin docs
211c14c Fix shell tests

Diffstat (except docs and test files)
-------------------------------------

.gitignore |   1 +
README.rst |   5 +
examples/object_api.py |   4 +-
examples/osc-lib.py|   4 +-
functional/common/test.py  |   2 +-
openstackclient/api/auth.py|  10 +-
openstackclient/common/clientmanager.py|   4 +
openstackclient/common/utils.py|  46 ++
openstackclient/compute/v2/security_group.py   |   2 +-
openstackclient/compute/v2/server.py   |  17 +
openstackclient/i18n.py|   4 +-
openstackclient/identity/common.py |  16 +-
openstackclient/identity/v2_0/ec2creds.py  |  18 +-
openstackclient/identity/v3/credential.py  |   5 +-
openstackclient/identity/v3/ec2creds.py| 251 +++
openstackclient/identity/v3/group.py   | 116 ++-
openstackclient/identity/v3/project.py |   8 +
openstackclient/identity/v3/role.py| 321 +++--
openstackclient/identity/v3/unscoped_saml.py   |   2 +-
openstackclient/identity/v3/user.py|   2 +-
openstackclient/shell.py   |  50 +-
openstackclient/volume/client.py   |   3 +-
openstackclient/volume/v2/__init__.py  |   0
openstackclient/volume/v2/backup.py|  70 ++
openstackclient/volume/v2/snapshot.py  |  71 ++
openstackclient/volume/v2/volume.py|  83 +++
openstackclient/volume/v2/volume_type.py   |  68 ++
.../de/LC_MESSAGES/python-openstackclient.po   | 783 +++--
.../locale/python-openstackclient.pot  | 208 +++---
.../zh_TW/LC_MESSAGES/python-openstackclient.po| 687 +-
requirements.txt   |   8 +-
setup.cfg  |  18 +
test-requirements.txt  |   1 +
tox.ini|   4 +-
60 files changed, 2907 insertions(+), 1337 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 415d27a..d420b1a 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10 +10 @@ cliff-tablib>=1.0
-os-client-config
+os-client-config>=1.2.0
@@ -14,2 +14,2 @@ oslo.utils>=1.4.0   # Apache-2.0
-python-glanceclient>=0.17.1
-python-keystoneclient>=1.3.0
+python-glanceclient>=0.18.0
+python-keystoneclient>=1.6.0
@@ -20 +20 @@ requests>=2.5.2
-stevedore>=1.3.0  # Apache-2.0
+stevedore>=1.5.0  # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 24f5c68..f82e48d 

Re: [openstack-dev] [Congress] summit events

2015-06-15 Thread D'ANDREA, JOE (JOE)
Tim,

 On Jun 9, 2015, at 10:04 AM, Tim Hinrichs t...@styra.com wrote:
 
 Hi Joe,
 
 The telco slides are powerpoint, but are hosted on google drive ...

Thanks very much for the OpEx slides (and corresponding design doc)!

Is there a deck link for "Congress: Introduction, Status, and Future Plans" as
well?

The OpEx slides look similar to a degree, but I recall the other deck went into 
different detail and made for a really nice slide-based summary of things.

Please advise. Thanks!

jd
 
—
Joe D’Andrea
AT&T Labs - Research
Cloud Technologies & Services
Bedminster, NJ



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] summit events

2015-06-15 Thread Tim Hinrichs
Hi Joe,

Here's the link to the Intro to Congress slide deck.
https://docs.google.com/file/d/0ByDz-eYOtswScTlmamlhLXpmTXc/edit

While I'm at it, here are the instructions for the Congress Hands On Lab.
https://docs.google.com/document/u/1/d/1lXmMkUhiSZYK45POd5ungPjVR--Fs_wJHeQ6bXWwP44/pub

All these links are available on the wiki.
https://wiki.openstack.org/wiki/Congress

Tim


On Mon, Jun 15, 2015 at 7:51 AM D'ANDREA, JOE (JOE) 
jdand...@research.att.com wrote:

 Tim,

  On Jun 9, 2015, at 10:04 AM, Tim Hinrichs t...@styra.com wrote:
 
  Hi Joe,
 
  The telco slides are powerpoint, but are hosted on google drive ...

 Thanks very much for the OpEx slides (and corresponding design doc)!

 Is there a deck link for "Congress: Introduction, Status, and Future
 Plans" as well?

 The OpEx slides look similar to a degree, but I recall the other deck went
 into different detail and made for a really nice slide-based summary of
 things.

 Please advise. Thanks!

 jd

 —
 Joe D’Andrea
 AT&T Labs - Research
 Cloud Technologies & Services
 Bedminster, NJ



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-06-15 Thread Ian Cordasco
On 6/15/15, 09:24, Thomas Goirand z...@debian.org wrote:

On 06/08/2015 01:55 PM, Kuvaja, Erno wrote:
 One thing I like about plan D
 is that it would also give an indicator of how much the stable branch
 has moved in each individual project.

The only indication you will get is how many patches it has. I fail to
see how this is valuable information. No info on how important they are
or anything of this kind, which is way more important.

Thomas

Are you implying that stable point releases as they exist today provide
importance? How is that the case? They're coordinated to happen at (nearly)
the same time and that's about all. Perhaps the most important changes are
CVE fixes. Let's look at two cases for a stable point release now:

1. A point release without a CVE fix
2. A point release with a CVE fix (or more than one)

In the first case, how does a tagged version provide information about
importance? A release would have been tagged whether the latest commit (or
N commits) had been merged or not. In the second case, downstream
redistributors (or at least Debian) have already shipped a new version with
the fix. The importance of that CVE fix being included in a tag that was
created arbitrarily is then different from the importance it might have if
Debian didn't patch the existing versions. (Note, I'm not advocating you
change this practice.) I don't see how tags detail the importance of the
included commits any more than their existence on a stable branch does.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] Difference between Sahara and CloudBreak

2015-06-15 Thread Jay Lau
Hi Sahara Team,

Just noticed that CloudBreak (https://github.com/sequenceiq/cloudbreak)
also supports running on top of OpenStack. Can anyone explain the
differences between Sahara and CloudBreak when both of them use OpenStack
as the infrastructure manager?

-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] summit events

2015-06-15 Thread D'ANDREA, JOE (JOE)

 On Jun 15, 2015, at 11:06 AM, Tim Hinrichs t...@styra.com wrote:
 
 Hi Joe,
 
 Here's the link to the Intro to Congress slide deck.

Thank you! I should have re-checked the wiki. :)

jd

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Fox, Kevin M
Please see https://review.openstack.org/#/c/186617 - Nova Instance Users - and
review.

We're working hard on trying to get the heat -> nova -> instance -> barbican
secret storage workflow working smoothly.

Also related are: https://review.openstack.org/#/c/190404/ - Barbican ACLs - and
https://review.openstack.org/#/c/190732/ - Unscoped Service Catalog.

Thanks,
Kevin

From: Madhuri Rai [madhuri.ra...@gmail.com]
Sent: Sunday, June 14, 2015 10:30 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Magnum] TLS Support in Magnum

Hi All,

This is to bring the blueprint secure-kubernetes
(https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes) into
discussion. I have been trying to figure out what the possible change areas
are to support this feature in Magnum. Below is just a rough idea on how to
proceed further on it.

This task can be further broken into smaller pieces.

1. Add support for TLS in python-k8sclient.
The current auto-generated code doesn't support TLS. So this work will be to 
add TLS support in kubernetes python APIs.

2. Add support for Barbican in Magnum.
Barbican will be used to store all the keys and certificates.

3. Add support of TLS in Magnum.
This work mainly involves supporting the use of key and certificates in magnum 
to support TLS.

The user generates the keys and certificates and stores them in Barbican. Now
there are two ways to access these keys while creating a bay.

1. Heat will access Barbican directly.
While creating a bay, the user will provide this key and the heat templates
will fetch it from Barbican.


2. Magnum-conductor will access Barbican.
While creating a bay, the user will provide this key and then magnum-conductor
will fetch it from Barbican and provide it to heat.

Then heat will copy these files to the kubernetes master node, and the bay will
use these keys to start Kubernetes services signed with them.


Once we all come to the same point after discussion, I will create separate
blueprints for each task.
I am currently working on configuring Kubernetes services with TLS keys.
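For item 1 above (TLS support in python-k8sclient), the client side largely amounts to building an SSL context that carries the CA bundle plus a client certificate and key. Below is a minimal stdlib sketch; make_tls_context and its parameters are illustrative assumptions, not the actual k8sclient API:

```python
import ssl

def make_tls_context(ca_file=None, cert_file=None, key_file=None):
    # Verify the kube-apiserver against our CA bundle, and present a
    # client certificate if one is configured (mutual TLS).
    ctx = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH,
                                     cafile=ca_file)
    if cert_file:
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx

ctx = make_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # server certs are always verified
```

Such a context could then be handed to the HTTP layer the generated kubernetes client uses for its API calls.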

Please provide your suggestions if any.


Regards,
Madhuri
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Online Migrations.

2015-06-15 Thread Mike Bayer



On 6/15/15 9:21 AM, Philip Schwartz wrote:
This weekend, I discussed the requested change at length with Mike. I 
think before moving forward, we need a better understanding of what we 
are trying to achieve.


Request: Add the ability to verify migrations are completed prior to 
contract.


As discussed here previously, I worked out a setup using the .info 
dict of the Columns that are to be removed or migrated, but came 
across an issue that is more concerning.


In order to contract the DB, the columns need to be removed from the 
Model Classes. We can do this just prior to scanning the Model Classes 
to determine schema migrations, but the columns would still exist in 
the Model Class for all other loading of the model, e.g. in ORM 
queries and such.


After discussing this with Mike, there looks to be 3 options:

 1. Remove the columns at contract time and also build a mechanism to
scan for the same .info entry at Query time to prevent it from
being used.
 2. Remove the columns at contract time and create a mechanism that,
once the migration is complete, removes the columns on load of the
data models (a forced restart of the service once migration is done
will let the ORM queries no longer see the column)
 3. Build the controls into our process with a way of storing the
current release cycle information, to only allow contracts to
occur at a set major release, and maintain the column in the model
until it is ready to be removed at major release + 1 after the
migration was added.


Personally, for ease of development and functionality, I think the 
best option is the 3rd, as it will not require reinventing the wheel 
around data loading and handling of already-loaded data models that 
could affect ORM queries.
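For reference, the .info-dict marking described above can be sketched with plain SQLAlchemy; the model, the marker key, and the helper below are made-up illustrations of the approach, not Nova code:

```python
from sqlalchemy import Column, Integer, String, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Instance(Base):
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    hostname = Column(String(255))
    # Mark the column as pending removal via the .info dict, as
    # discussed; the key name 'remove_in' is a hypothetical marker.
    old_flavor = Column(String(255), info={'remove_in': 'contract'})

def removable_columns(model):
    # Scan the mapped columns for the marker: a contract step could drop
    # these, and a query-time check (option 1 above) could refuse to
    # load them once the migration has completed.
    return [c.key for c in inspect(model).columns
            if c.info.get('remove_in') == 'contract']

print(removable_columns(Instance))  # ['old_flavor']
```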


Well, one advantage to the way contract is totally automated here is 
that as long as the ORM models have a column X present, contract won't 
remove it. What problem would storing the release cycle information 
solve? (Also, by store do you mean a DB table?)


I'm writing out a spec for Neutron this week that reboots the OSM 
concept without dropping the concept of versioned migration files and 
versioning, as I've discussed. As I've mentioned elsewhere, I hope to 
enhance Alembic's functionality so that Nova's current OSM approach as 
well as the file/version-number based approach can both leverage the 
same codebase for generating the stream of expand/contract steps.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Madhuri
Adrian,

On Tue, Jun 16, 2015 at 2:39 AM, Adrian Otto adrian.o...@rackspace.com
wrote:

  Madhuri,

  On Jun 15, 2015, at 12:47 AM, Madhuri Rai madhuri.ra...@gmail.com
 wrote:

  Hi,

 Thanks Adrian for the quick response. Please find my response inline.

 On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 Madhuri,

  On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.com
 wrote:

Hi All,

 This is to bring the blueprint  secure-kubernetes
 https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes in 
 discussion.
 I have been trying to figure out what could be the possible change area
 to support this feature in Magnum. Below is just a rough idea on how to
 proceed further on it.

 This task can be further broken in smaller pieces.

 *1. Add support for TLS in python-k8sclient.*
 The current auto-generated code doesn't support TLS. So this work will be
 to add TLS support in kubernetes python APIs.

 *2. Add support for Barbican in Magnum.*
  Barbican will be used to store all the keys and certificates.


  Keep in mind that not all clouds will support Barbican yet, so this
 approach could impair adoption of Magnum until Barbican is universally
 supported. It might be worth considering a solution that would generate all
 keys on the client, and copy them to the Bay master for communication with
 other Bay nodes. This is less secure than using Barbican, but would allow
 for use of Magnum before Barbican is adopted.


 +1, I agree. One question here: we are trying to secure the communication
 between magnum-conductor and kube-apiserver, right?


  We need API services that are on public networks to be secured with TLS,
 or another approach that will allow us to implement access control so that
 these API’s can only be accessed by those with the correct keys. This need
 extends to all places in Magnum where we are exposing native API’s.


Ok, I understand.


  If both methods were supported, the Barbican method should be the
 default, and we should put warning messages in the config file so that when
 the administrator relaxes the setting to use the non-Barbican configuration
 he/she is made aware that it requires a less secure mode of operation.


  In the non-Barbican case, the client will generate the keys and pass the
 location of the keys to the magnum services. Then, again, the heat template
 will copy them and configure the kubernetes services on the master node.
 Same as the step below.


  Good!

  My suggestion is to completely implement the Barbican support first,
 and follow up that implementation with a non-Barbican option as a second
 iteration for the feature.


  How about implementing the non-Barbican support first, as this would be
 easy to implement, so that we can first concentrate on Points 1 and 3? And
 then after it, we can work on Barbican support with more insight.


  Another possibility would be for Magnum to use its own private
 installation of Barbican in cases where it is not available in the service
 catalog. I dislike this option because it creates an operational burden for
 maintaining the private Barbican service, and additional complexities with
 securing it.


 In my opinion, installation of Barbican should be independent of Magnum.
 My idea here is, if a user wants to store his/her keys in Barbican then
 he/she will install it.
 We will have a config parameter like store_secure; when True, it means we
 store the keys in Barbican, otherwise not.
  What do you think?


*3. Add support of TLS in Magnum.*
  This work mainly involves supporting the use of key and certificates in
 magnum to support TLS.

 The user generates the keys, certificates and store them in Barbican. Now
 there is two way to access these keys while creating a bay.


  Rather than “the user generates the keys…”, perhaps it might be better
 to word that as “the magnum client library code generates the keys for the
 user…”.


 It is the user here. In my opinion, there could be users who don't want to
 use the magnum client but rather the APIs directly; in that case the user
 will generate the keys themselves.


  Good point.

In our first implementation, we can support the user generating the
keys, and later add support for the client generating the keys.


  Users should not require any knowledge of how TLS works, or related
 certificate management tools in order to use Magnum. Let’s aim for this.

  I do agree that’s a good logical first step, but I am reluctant to agree
 to it without confidence that we will add the additional security later. I
 want to achieve a secure-by-default configuration in Magnum. I’m happy to
 take measured forward progress toward this, but I don’t want the less
 secure option(s) to be the default once more secure options come along. By
 doing the more secure one first, and making it the default, we allow other
 options only when the administrator makes a conscious action to relax
 security to meet their constraints.


Barbican will be the default option.


  So, if 

Re: [openstack-dev] [nova] Adding success message to succeed actions

2015-06-15 Thread 郑振宇
Hi, John
Thanks for your reply; I will follow your instructions.
It would be great if I could help out with any part of the plan; I'm really
happy to do it.
BR,
Zheng

 Date: Mon, 15 Jun 2015 14:50:41 +0100
 From: j...@johngarbutt.com
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Adding success message to succeed actions
 
 On 15 June 2015 at 08:03, 郑振宇 zheng.zhe...@outlook.com wrote:
  Hi All,
 
  When querying instance actions using API: nova instance-action-list, nova
  will response with a table show in below:
   root@controller:~# nova instance-action-list fcbba82f-60a1-4785-84f2-88bcf2da7e7e
   +--------+------------------------------------------+---------+----------------------------+
   | Action | Request_ID                               | Message | Start_Time                 |
   +--------+------------------------------------------+---------+----------------------------+
   | create | req-78e63d14-5177-4bcf-8d94-7a60af4f276f | -       | 2015-06-11T07:36:20.00     |
   +--------+------------------------------------------+---------+----------------------------+
   this instance has been successfully created, and we can see that the message
   about this action is empty.
   root@controller:~# nova list
   +--------------------------------------+--------+--------+------------+-------------+--------------------------+
   | ID                                   | Name   | Status | Task State | Power State | Networks                 |
   +--------------------------------------+--------+--------+------------+-------------+--------------------------+
   | fcbba82f-60a1-4785-84f2-88bcf2da7e7e | test_1 | ACTIVE | -          | Running     | sample_network=20.20.0.7 |
   +--------------------------------------+--------+--------+------------+-------------+--------------------------+
 
   On the other hand, when an action uses nova-scheduler and the action fails
   before or within nova-scheduler, the message in the response table will also
   be empty. For example, when an instance fails to be created because there is
   no valid host (an oversized flavor has been chosen), querying instance
   actions using nova instance-action-list will show the below table:
 
   root@controller:~# nova instance-action-list 101756c5-6d6b-412b-9cc6-1628fa7c0b9c
   +--------+------------------------------------------+---------+----------------------------+
   | Action | Request_ID                               | Message | Start_Time                 |
   +--------+------------------------------------------+---------+----------------------------+
   | create | req-88d93eeb-fad9-4039-8ba6-1d2f01a0605d | -       | 2015-06-12T04:03:10.00     |
   +--------+------------------------------------------+---------+----------------------------+

   root@controller:~# nova list
   +--------------------------------------+------------+--------+------------+-------------+--------------------------+
   | ID                                   | Name       | Status | Task State | Power State | Networks                 |
   +--------------------------------------+------------+--------+------------+-------------+--------------------------+
   | 101756c5-6d6b-412b-9cc6-1628fa7c0b9c | event_test | ERROR  | -          | NOSTATE     |                          |
   | fcbba82f-60a1-4785-84f2-88bcf2da7e7e | test_1     | ACTIVE | -          | Running     | sample_network=20.20.0.7 |
   +--------------------------------------+------------+--------+------------+-------------+--------------------------+
 
   but other failed actions will have an error message:
 
   root@controller:/var/log/nova# nova instance-action-list 4525360f-75da-4d5e-bed7-a21e62212eab
   +--------+------------------------------------------+---------+----------------------------+
   | Action | Request_ID                               | Message | Start_Time                 |
   +--------+------------------------------------------+---------+----------------------------+
   | create | req-7be58a31-0243-43a1-8a21-45ad1e90d279 | -       | 2015-06-12T07:38:05.00     |
   | resize | req-120f3379-313c-471e-b6e5-4d8d6a7d1357 | Error   | 2015-06-15T06:36:38.00     |
   +--------+------------------------------------------+---------+----------------------------+
 
   As we can see from the above example, we cannot distinguish such failed
   actions from others using only the nova instance-action-list API.

   I suggest adding success messages to succeeded actions and/or adding error
   messages to the above-mentioned failed actions to fix this problem.
 
 This is a big known weak point in the Nova API right now.
 
 My gut tells me we should add an extra status field for
 Pending/Success/Failure, rather than making users parse a string
 message.
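The extra status field suggested above could be modeled as a small enum; the field names and values below are hypothetical illustrations, not the actual Nova API:

```python
from enum import Enum

class ActionStatus(Enum):
    # Hypothetical machine-readable status values for instance actions,
    # so clients no longer have to parse the Message column.
    PENDING = 'pending'
    SUCCESS = 'success'
    FAILURE = 'failure'

action = {
    'action': 'create',
    'request_id': 'req-88d93eeb-fad9-4039-8ba6-1d2f01a0605d',
    # Set even when no error message exists, e.g. the no-valid-host
    # failure above, which currently shows an empty Message.
    'status': ActionStatus.FAILURE,
    'message': None,
}
print(action['status'] is ActionStatus.FAILURE)  # True
```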
 
 This general area is being revisited as part of the tasks topic that
 we mention here:
 

Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint

2015-06-15 Thread Kai Qiang Wu
Hi Adrian,

If I summarize your suggestion, it would be:

1) Have a function like this,

 magnum bay-create --name swarmbay --baymodel swarmbaymodel
--baymodel-property-override apiserver_port=8766


And then magnum passes that property to override the baymodel default
properties and creates the bay.


2) You mentioned another BP, about adjusting bay api_address to be a URL: the
bay attribute api_address should be returned in a format like
tcp://192.168.45.12:7622 or http://192.168.45.12:8234.

Is that right?


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Adrian Otto adrian.o...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   06/13/2015 02:04 PM
Subject:Re: [openstack-dev] [Magnum]Discuss
configurable-coe-api-port   Blueprint



Hongbin,

Good use case. I suggest that we add a parameter to magnum bay-create that
will allow the user to override the baymodel.apiserver_port attribute with
a new value that will end up in the bay.api_address attribute as part of
the URL. This approach assumes implementation of the magnum-api-address-url
blueprint. This way we solve for the use case, and don't need a new
attribute on the bay resource that requires users to concatenate multiple
attribute values in order to get a native client tool working.
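The override flow described here could be sketched as follows; the helper name and protocol handling are illustrative assumptions, not Magnum code:

```python
def build_api_address(ip, port, protocol='http'):
    # Hypothetical helper: the port from
    # --baymodel-property-override apiserver_port=... would end up
    # in the URL returned as bay.api_address, so native clients can
    # use it directly without concatenating attributes.
    return '{}://{}:{}'.format(protocol, ip, port)

print(build_api_address('192.168.45.12', 8766))           # http://192.168.45.12:8766
print(build_api_address('192.168.45.12', 7622, 'tcp'))    # tcp://192.168.45.12:7622
```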

Adrian

On Jun 12, 2015, at 6:32 PM, Hongbin Lu hongbin...@huawei.com wrote:

  A use case could be the cloud is behind a proxy and the API port is
  filtered. In this case, users have to start the service in an
  alternative port.

  Best regards,
  Hongbin

  From: Adrian Otto [mailto:adrian.o...@rackspace.com]
  Sent: June-12-15 2:22 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Magnum] Discuss
  configurable-coe-api-port Blueprint

  Thanks for raising this for discussion. Although I do think that the
  API port number should be expressed in a URL that the local client
  can immediately use for connecting a native client to the API, I am
  not convinced that this needs to be a separate attribute on the Bay
  resource.

  In general, I think it’s a reasonable assumption that nova instances
  will have unique IP addresses assigned to them (public or private is
  not an issue here) so unique port numbers for running the API
  services on alternate ports seems like it may not be needed. I’d like
  to have input from at least one Magnum user explaining an actual use
  case for this feature before accepting this blueprint.

  One possible workaround for this would be to instruct those who want
  to run nonstandard ports to copy the heat template, and specify a new
  heat template as an alternate when creating the BayModel, which can
  implement the port number as a parameter. If we learn that this
  happens a lot, we should revisit this as a feature in Magnum rather
  than allowing it through an external workaround.

  I’d like to have a generic feature that allows for arbitrary
  key/value pairs for parameters and values to be passed to the heat
  stack create call so that this, and other values can be passed in
  using the standard magnum client and API without further
  modification. I’m going to look to see if we have a BP for this, and
  if not, I will make one.

  Adrian



On Jun 11, 2015, at 6:05 PM, Kai Qiang Wu(Kennan) 
wk...@cn.ibm.com wrote:

If I understand the bp correctly,

the apiserver_port is for the public access or API call service
endpoint. If that is the case, the user would use that info as

http(s)://ip:port

so the port is good information for users.


If we believe above assumption is right. Then

1) Some users do not need to change the port, since heat has a
default hard-coded port in it.

2) If some users want to change the port (through heat, we can do
that), we need to add such flexibility for users.
That's what the bp

https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port
 tries to solve.

It depends on how end-users use with magnum.


More input about this is welcome. If many of us think it is not
necessary to customize the ports, we can drop the bp.


Thanks


Best Wishes,


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Adam Young

On 06/15/2015 08:45 PM, Madhuri wrote:
+1 Kevin. We will make Barbican a dependency to make it the default 
option to secure keys.


Regards,
Madhuri

On Tue, Jun 16, 2015 at 12:48 AM, Fox, Kevin M kevin@pnnl.gov wrote:


If your asking the cloud provider to go through the effort to
install Magnum, its not that much extra effort to install Barbican
at the same time. Making it a dependency isn't too bad then IMHO.



Please use Certmonger on the Magnum side, with the understanding that 
the Barbican team is writing a Certmonger plugin.


Certmonger can do self-signed, and can talk to Dogtag if you need a real 
CA. If we need to talk to other CAs, you write a helper script that 
Certmonger calls to post the CSR and fetch the signed cert, but 
certmonger does the openssl/NSS work to properly manage the signing requests.




Thanks,
Kevin

*From:* Adrian Otto [adrian.o...@rackspace.com]
*Sent:* Sunday, June 14, 2015 11:09 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [Magnum] TLS Support in Magnum

Madhuri,


On Jun 14, 2015, at 10:30 PM, Madhuri Rai
madhuri.ra...@gmail.com wrote:

Hi All,

This is to bring the blueprint secure-kubernetes
(https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes) into
discussion. I have been trying to figure out what the possible
change areas are to support this feature in Magnum. Below is
just a rough idea on how to proceed further on it.

This task can be further broken in smaller pieces.

*1. Add support for TLS in python-k8sclient.*
The current auto-generated code doesn't support TLS. So this work
will be to add TLS support in kubernetes python APIs.

*2. Add support for Barbican in Magnum.*
Barbican will be used to store all the keys and certificates.


Keep in mind that not all clouds will support Barbican yet, so
this approach could impair adoption of Magnum until Barbican is
universally supported. It might be worth considering a solution
that would generate all keys on the client, and copy them to the
Bay master for communication with other Bay nodes. This is less
secure than using Barbican, but would allow for use of Magnum
before Barbican is adopted.

If both methods were supported, the Barbican method should be the
default, and we should put warning messages in the config file so
that when the administrator relaxes the setting to use the
non-Barbican configuration he/she is made aware that it requires a
less secure mode of operation.

My suggestion is to completely implement the Barbican support
first, and follow up that implementation with a non-Barbican
option as a second iteration for the feature.
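The default-with-warning behaviour described above could be sketched roughly as follows. The `cert_manager_type` option name and the config-dict shape are hypothetical, not an actual Magnum setting.

```python
import warnings


def select_cert_backend(conf):
    """Pick the certificate storage backend from a (hypothetical) config dict.

    Barbican is the default; anything else triggers a loud warning so the
    operator knows they have opted into a less secure mode.
    """
    backend = conf.get("cert_manager_type", "barbican")
    if backend != "barbican":
        warnings.warn(
            "cert_manager_type=%s stores TLS material outside Barbican; "
            "this is less secure and intended only for clouds without "
            "Barbican." % backend
        )
    return backend


assert select_cert_backend({}) == "barbican"
assert select_cert_backend({"cert_manager_type": "local"}) == "local"
```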

Another possibility would be for Magnum to use its own private
installation of Barbican in cases where it is not available in the
service catalog. I dislike this option because it creates an
operational burden for maintaining the private Barbican service,
and additional complexities with securing it.


*3. Add support of TLS in Magnum.*
This work mainly involves supporting the use of key and
certificates in magnum to support TLS.

The user generates the keys and certificates and stores them in
Barbican. Now there are two ways to access these keys while
creating a bay.


Rather than “the user generates the keys…”, perhaps it might be
better to word that as “the magnum client library code generates
the keys for the user…”.


1. Heat will access Barbican directly.
While creating bay, the user will provide this key and heat
templates will fetch this key from Barbican.


I think you mean that Heat will use the Barbican key to fetch the
TLS key for accessing the native API service running on the Bay.


2. Magnum-conductor access Barbican.
While creating bay, the user will provide this key and then
Magnum-conductor will fetch this key from Barbican and provide
this key to heat.

Then heat will copy these files onto the Kubernetes master node.
Then the bay will use these keys to start Kubernetes services
signed with these keys.


Make sure that the Barbican keys used by Heat and magnum-conductor
to store the various TLS certificates/keys are unique per tenant
and per bay, and are not shared among multiple tenants. We don’t
want it to ever be possible to trick Magnum into revealing secrets
belonging to other tenants.
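One minimal way to guarantee that is to embed both the tenant and bay identifiers in every secret name, so names can never collide across tenants or bays. The `magnum/<tenant>/<bay>` layout below is an assumption for illustration, not the project's actual convention.

```python
def bay_secret_name(tenant_id, bay_uuid):
    """Build a secret-container name scoped to one tenant and one bay.

    The scheme is illustrative; the point is that both identifiers appear
    in the name, so two tenants (or two bays) can never share one.
    """
    return "magnum/%s/%s/tls" % (tenant_id, bay_uuid)


a = bay_secret_name("tenant-1", "bay-a")
b = bay_secret_name("tenant-2", "bay-a")
assert a != b  # same bay name, different tenants: still distinct
```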


After discussion when we all come to same point, I will create
separate blueprints for each task.
I am currently working on configuring Kubernetes services with
TLS keys.

Please provide your suggestions if any.


Thanks for kicking off this discussion.

Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-15 Thread Keith Bray
Regardless of what the API defaults to, could we have the CLI prompt/warn so 
that the user easily knows that both options exist?  Is there a precedent 
within OpenStack for a similar situation?

E.g.
 solum app delete MyApp
 Do you want to also delete your logs? (default is Yes):  [YES/no]
  NOTE, if you choose No, application logs will remain on your 
account. Depending on your service provider, you may incur on-going storage 
charges.
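A prompt like the one sketched above could be implemented roughly as follows. The function name and wording are illustrative only, and `input_fn` is injected so the behaviour can be exercised without a terminal.

```python
def confirm_log_delete(input_fn=input):
    """Ask whether logs should be deleted too; an empty answer means Yes.

    Mirrors the CLI prompt sketched above; ``input_fn`` is injected so the
    function can be tested without a TTY.
    """
    answer = input_fn(
        "Do you want to also delete your logs? (default is Yes) [YES/no]: "
    ).strip().lower()
    return answer in ("", "y", "yes")


assert confirm_log_delete(lambda prompt: "") is True
assert confirm_log_delete(lambda prompt: "no") is False
```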

Thanks,
-Keith

From: Devdatta Kulkarni 
devdatta.kulka...@rackspace.commailto:devdatta.kulka...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Monday, June 15, 2015 9:56 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
app?


Yes, the log deletion should be optional.

The question is what should be the default behavior. Should the default be to 
delete the logs and provide a flag to keep them, or keep the logs by default 
and provide an override flag to delete them?

Delete-by-default is consistent with the view that when an app is deleted, all 
its artifacts are deleted (the app's metadata, the deployment units (DUs), and 
the logs). This behavior is also useful in our current state when the app 
resource and the CLI are in flux. For now, without a way to specify a flag, 
either to delete the logs or to keep them, delete-by-default behavior helps us 
clean all the log files from the application's cloud files container when an 
app is deleted.

This is very useful for our CI jobs. Without this, we end up with lots of log 
files in the application's container, and have to resort to separate scripts to 
delete them after an app is deleted.


Once the app resource and CLI stabilize it should be straightforward to change 
the default behavior if required.


- Devdatta



From: Adrian Otto adrian.o...@rackspace.commailto:adrian.o...@rackspace.com
Sent: Friday, June 12, 2015 6:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

Team,

We currently delete logs for an app when we delete the app[1].

https://bugs.launchpad.net/solum/+bug/1463986

Perhaps there should be an optional setting at the tenant level that determines 
whether your logs are deleted or not by default (set to off initially), and an 
optional parameter to our DELETE calls that allows for the opposite action from 
the default to be specified if the user wants to override it at the time of the 
deletion. Thoughts?
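The tenant-level default plus per-request override proposed above could resolve roughly like the sketch below; the parameter names are hypothetical, not Solum's actual API.

```python
def should_delete_logs(tenant_keep_logs_default=False, override=None):
    """Resolve whether an app's logs are deleted along with the app.

    ``tenant_keep_logs_default`` models the (hypothetical) tenant-level
    setting, off initially; ``override`` models the optional per-DELETE
    parameter, which wins when provided.
    """
    if override is not None:
        return override
    return not tenant_keep_logs_default


assert should_delete_logs() is True                       # delete by default
assert should_delete_logs(tenant_keep_logs_default=True) is False
assert should_delete_logs(tenant_keep_logs_default=True, override=True) is True
```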

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [javascript] [horizon] [merlin] [refstack] Javascript Linting

2015-06-15 Thread Richard Jones
JSCS in Horizon has been extended with the John Papa style guidelines to
enforce consistent AngularJS code style*. It's no longer just a findbug
tool. I don't have time to investigate - can ESLint perform the same role
for Horizon?

Current Horizon activity involves a whole lot of bringing code into line
with that style (and other JSCS check fails).


  Richard

* https://review.openstack.org/#/c/181311/

On Tue, 16 Jun 2015 at 09:40 Michael Krotscheck krotsch...@gmail.com
wrote:

 I'm restarting this thread with a different subject line to get a broader
 audience. Here's the original thread:
 http://lists.openstack.org/pipermail/openstack-dev/2015-June/066040.html

 The question at hand is “What will be OpenStack's JavaScript equivalent of
 flake8?” I'm going to consider the need for common formatting rules to be
 self-evident. Here's the lay of the land so far:

- Horizon currently uses JSCS.
- Refstack uses Eslint.
- Merlin doesn't use anything.
- StoryBoard (deprecated) uses eslint.
- Nobody agrees on rules.

 *JSCS*
 JSCS stands for JavaScript CodeStyle. Its mission is to enforce a style
 guide, yet it does not check for potential bugs, variable overrides, etc.
 For those tests, the team usually defers to (preferred) JSHint, or ESLint.

 *JSHint*
 Ever since JSCS was extracted from JSHint, it has actively removed rules
 that enforce code style, and focused on findbug-style tests instead. JSHint
 still contains the “Do no evil” license, and is therefore not an option for
 OpenStack; it has been disqualified.

 *ESLint*
 ESLint's original mission was to be an OSI compliant replacement for
 JSHint, before the JSCS split. It wants to be a one-tool solution.

 My personal opinion/recommendation: Based on the above, I recommend we use
 ESLint. My reasoning: It's one tool, it's extensible, it does both
 codestyle things and bug finding things, and it has a good license. JSHint
 is disqualified because of the license. JSCS is disqualified because it is
 too focused, and only partially useful on its own.

 I understand that this will mean some work by the Horizon team to bring
 their code in line with a new parser, however I personally consider this to
 be a good thing. If the code is good to begin with, it shouldn't be that
 difficult.

 This thread is not there to argue about which rules to enforce. Right now
 I just want to nail down a tool, so that we can (afterwards) have a
 discussion about which rules to activate.

 Michael
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Why do app names have to be unique?

2015-06-15 Thread Murali Allada
I think app names should be unique per tenant. My opinion is that this should 
be Solum's default behavior and does not require an extra setting in the config 
file.

I feel the pain of non-unique names when I play with my own vagrant environment 
and deploy multiple 'test' apps. To identify the right app, I either need to 
save the UUID in a notepad to refer to later, or I just deploy them with unique 
names, which makes non-unique names unnecessary.

I wouldn't want to list my apps and see multiple apps with the same name. If I 
do, and don't have the UUID saved somewhere, I'm totally lost.



On Jun 15, 2015, at 9:37 AM, Devdatta Kulkarni 
devdatta.kulka...@rackspace.commailto:devdatta.kulka...@rackspace.com wrote:


Hi Adrian,


The new app resource that is being implemented 
(https://review.openstack.org/#/c/185147/)
does not enforce name uniqueness.


This issue was discussed here sometime back.
Earlier thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-March/058858.html


The main argument for requiring unique app names within a tenant was usability. 
In customer research it was found that users were more comfortable with using 
names than UUIDs. Without the unique app name constraint, users would have to 
resort to using UUIDs when they create multiple apps with the same name.


As a way to accommodate both requirements -- unique names when desired, and 
making them optional when it's not an issue -- we had said that we could make 
app name uniqueness per tenant configurable.
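A configurable per-tenant uniqueness check of that kind might look roughly like this sketch; the function and option names are hypothetical, not Solum's actual code.

```python
def check_app_name(name, tenant_apps, unique_names_required=True):
    """Validate a new app name against a tenant's existing apps.

    ``tenant_apps`` is the list of names the tenant already has;
    ``unique_names_required`` models the proposed config option.
    """
    if unique_names_required and name in tenant_apps:
        raise ValueError("app name %r already exists for this tenant" % name)
    return name


assert check_app_name("web", ["api"]) == "web"
assert check_app_name("web", ["web"], unique_names_required=False) == "web"
```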

So I would be okay if we open a new blueprint to track this work.


Thanks,

- Devdatta



From: Adrian Otto adrian.o...@rackspace.commailto:adrian.o...@rackspace.com
Sent: Friday, June 12, 2015 6:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Solum] Why do app names have to be unique?

Team,

While triaging this bug, I got to thinking about unique names:

https://bugs.launchpad.net/solum/+bug/1434293

Should our app names be unique? Why? Should I open a blueprint for a new 
feature to make name uniqueness optional, and default it to on. If not, why?

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] and [lbaas] - GSLB API and backend support

2015-06-15 Thread Doug Wiegley
Hi all,

We don’t have a rough draft API doc yet, so I’m suggesting that we postpone 
tomorrow morning’s meeting until next week. Does anyone have any other agenda 
items, or want the meeting tomorrow?

Thanks,
doug


 On Jun 2, 2015, at 10:52 AM, Doug Wiegley doug...@parksidesoftware.com 
 wrote:
 
 Hi all,
 
 The initial meeting logs can be found at 
 http://eavesdrop.openstack.org/meetings/gslb/2015/ , and we will be having 
 another meeting next week, same time, same channel.
 
 Thanks,
 doug
 
 
 On May 31, 2015, at 1:27 AM, Samuel Bercovici samu...@radware.com wrote:
 
  Good for me - Tuesday at 1600 UTC
 
 
 -Original Message-
 From: Doug Wiegley [mailto:doug...@parksidesoftware.com] 
 Sent: Thursday, May 28, 2015 10:37 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [designate] and [lbaas] - GSLB API and backend 
 support
 
 
 On May 28, 2015, at 12:47 PM, Hayes, Graham graham.ha...@hp.com wrote:
 
 On 28/05/15 19:38, Adam Harwell wrote:
 I haven't seen any responses from my team yet, but I know we'd be 
 interested as well - we have done quite a bit of work on this in the 
 past, including dealing with the Designate team on this very subject. 
 We can be available most hours between 9am-6pm Monday-Friday CST.
 
 --Adam
 
 https://keybase.io/rm_you
 
 
 From: Rakesh Saha rsahaos...@gmail.com 
 mailto:rsahaos...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Date: Thursday, May 28, 2015 at 12:22 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [designate] and [lbaas] - GSLB API and 
 backend support
 
  Hi Kunal,
  I would like to participate as well.
  Mon-Fri morning US Pacific time works for me.
 
  Thanks,
  Rakesh Saha
 
  On Tue, May 26, 2015 at 8:45 PM, Vijay Venkatachalam
  vijay.venkatacha...@citrix.com
  mailto:vijay.venkatacha...@citrix.com wrote:
 
  We would like to participate as well.
 
 
  Monday-Friday Morning US time works for me..
 
 
  Thanks,
 
  Vijay V.
 
 
  *From:*Samuel Bercovici [mailto:samu...@radware.com
  mailto:samu...@radware.com]
  *Sent:* 26 May 2015 21:39
 
 
  *To:* OpenStack Development Mailing List (not for usage questions)
  *Cc:* kunalhgan...@gmail.com mailto:kunalhgan...@gmail.com;
  v.jain...@gmail.com mailto:v.jain...@gmail.com;
  do...@a10networks.com mailto:do...@a10networks.com
  *Subject:* Re: [openstack-dev] [designate] and [lbaas] - GSLB
  API and backend support
 
 
  Hi,
 
 
  I would also like to participate.
 
  Friday is a non-working day in Israel (same as Saturday for most
  of you).
 
  So Monday- Thursday works best for me.
 
 
  -Sam.
 
 
 
  *From:*Doug Wiegley [mailto:doug...@parksidesoftware.com]
  *Sent:* Saturday, May 23, 2015 8:45 AM
  *To:* OpenStack Development Mailing List (not for usage questions)
  *Cc:* kunalhgan...@gmail.com mailto:kunalhgan...@gmail.com;
  v.jain...@gmail.com mailto:v.jain...@gmail.com;
  do...@a10networks.com mailto:do...@a10networks.com
  *Subject:* Re: [openstack-dev] [designate] and [lbaas] - GSLB
  API and backend support
 
 
  Of those two options, Friday would work better for me.
 
 
  Thanks,
 
  doug
 
 
  On May 22, 2015, at 9:33 PM, ki...@macinnes.ie
  mailto:ki...@macinnes.ie wrote:
 
 
  Hi Kunal,
 
  Thursday/Friday works for me - early morning PT works best,
  as I'm based in Ireland.
 
  I'll find some specific times the Designate folks are
  available over the next day or two and provide some
  options.. 
 
  Thanks,
  Kiall
 
  On 22 May 2015 7:24 pm, Gandhi, Kunal
  kunalhgan...@gmail.com mailto:kunalhgan...@gmail.com
  wrote:
 
  Hi All
 
 
  I wanted to start a discussion about adding support for GSLB
  to neutron-lbaas and designate. To be brief for folks who
  are new to GLB, GLB stands for Global Load Balancing and we
  use it for load balancing traffic across various
  geographical regions. A more detail description of GLB can
  be found at my talk at the summit this week here
  https://www.youtube.com/watch?v=fNR0SW3vj_s.
 
 
  To my understanding, there are two sides to a GSLB - DNS
  side and LB side. 
 
 
  DNS side
 
   Most of the GSLB's provided by various vendors
  

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Madhuri
+1 Kevin. We will make Barbican a dependency to make it the default option
to secure keys.

Regards,
Madhuri

On Tue, Jun 16, 2015 at 12:48 AM, Fox, Kevin M kevin@pnnl.gov wrote:

  If you're asking the cloud provider to go through the effort to install
 Magnum, it's not that much extra effort to install Barbican at the same
 time. Making it a dependency isn't too bad then, IMHO.

 Thanks,
 Kevin
  --
 *From:* Adrian Otto [adrian.o...@rackspace.com]
 *Sent:* Sunday, June 14, 2015 11:09 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Magnum] TLS Support in Magnum

  Madhuri,

  On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.com
 wrote:

Hi All,

 This is to bring the blueprint  secure-kubernetes
 https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes in 
 discussion.
 I have been trying to figure out what could be the possible change area
 to support this feature in Magnum. Below is just a rough idea on how to
 proceed further on it.

 This task can be further broken in smaller pieces.

 *1. Add support for TLS in python-k8sclient.*
 The current auto-generated code doesn't support TLS. So this work will be
 to add TLS support in kubernetes python APIs.

 *2. Add support for Barbican in Magnum.*
  Barbican will be used to store all the keys and certificates.


  Keep in mind that not all clouds will support Barbican yet, so this
 approach could impair adoption of Magnum until Barbican is universally
 supported. It might be worth considering a solution that would generate all
 keys on the client, and copy them to the Bay master for communication with
 other Bay nodes. This is less secure than using Barbican, but would allow
 for use of Magnum before Barbican is adopted.

  If both methods were supported, the Barbican method should be the
 default, and we should put warning messages in the config file so that when
 the administrator relaxes the setting to use the non-Barbican configuration
 he/she is made aware that it requires a less secure mode of operation.

  My suggestion is to completely implement the Barbican support first, and
 follow up that implementation with a non-Barbican option as a second
 iteration for the feature.

  Another possibility would be for Magnum to use its own private
 installation of Barbican in cases where it is not available in the service
 catalog. I dislike this option because it creates an operational burden for
 maintaining the private Barbican service, and additional complexities with
 securing it.

*3. Add support of TLS in Magnum.*
  This work mainly involves supporting the use of key and certificates in
 magnum to support TLS.

 The user generates the keys and certificates and stores them in Barbican. Now
 there are two ways to access these keys while creating a bay.


  Rather than “the user generates the keys…”, perhaps it might be better to
 word that as “the magnum client library code generates the keys for the
 user…”.

 1. Heat will access Barbican directly.
 While creating bay, the user will provide this key and heat templates will
 fetch this key from Barbican.


  I think you mean that Heat will use the Barbican key to fetch the TLS key
 for accessing the native API service running on the Bay.

 2. Magnum-conductor access Barbican.
 While creating bay, the user will provide this key and then
 Magnum-conductor will fetch this key from Barbican and provide this key to
 heat.

 Then heat will copy these files onto the Kubernetes master node. Then the
 bay will use these keys to start Kubernetes services signed with these keys.


  Make sure that the Barbican keys used by Heat and magnum-conductor to
 store the various TLS certificates/keys are unique per tenant and per bay,
 and are not shared among multiple tenants. We don’t want it to ever be
 possible to trick Magnum into revealing secrets belonging to other tenants.

 After discussion when we all come to same point, I will create
 separate blueprints for each task.
 I am currently working on configuring Kubernetes services with TLS keys.

  Please provide your suggestions if any.


  Thanks for kicking off this discussion.

  Regards,

  Adrian



  Regards,
  Madhuri
  __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Glance] [all] Proposal for Weekly Glance Drivers meeting.

2015-06-15 Thread Fei Long Wang

Awesome, glad to see it's proposed.

On 16/06/15 11:41, Nikhil Komawar wrote:

Hi,

As per the discussion during the last weekly Glance meeting (14:51:42 at
http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-06-11-14.00.log.html
), we will begin a short drivers' meeting where anyone can come and get
more feedback.

The purpose is to enable those who need multiple drivers in the same
place; easily co-ordinate, schedule & collaborate on the specs, get
core-reviewers assigned to their specs etc. This will also enable more
synchronous style feedback, help with more collaboration as well as with
dedicated time for giving quality input on the specs. All are welcome to
attend and attendance from drivers is not mandatory but encouraged.
Initially it would be a 30 min meeting and if need persists we will
extend the period.

Please vote on the proposed time and date:
https://review.openstack.org/#/c/192008/ (Note: Run the tests for your
vote to ensure we are considering feasible & non-conflicting times.) We
will start the meeting next week unless there are strong conflicts.



--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Proposal for the Resume Feature

2015-06-15 Thread BORTMAN, Limor (Limor)
+1,
I just have one question: do we want to enable resume for a WF in an error state?
I mean, that isn't really a resume; it should be more of a rerun, don't you think?
So in an error state we would create a new execution and just re-run it.
Thanks, Limor



-Original Message-
From: Lingxian Kong [mailto:anlin.k...@gmail.com] 
Sent: Tuesday, June 16, 2015 5:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Mistral] Proposal for the Resume Feature

Thanks Winson for the write-up, very detailed information. (The format was good.)

I'm totally in favor of your idea; actually, I really think your proposal is 
complementary to my proposal in 
https://etherpad.openstack.org/p/vancouver-2015-design-summit-mistral,
please see the 'Workflow rollback/recovery' section.

What I wanna do is configure some 'checkpoints' throughout the workflow, and if 
some task fails, we could roll back the execution to some checkpoint, and 
resume the whole workflow after we have fixed the problem, as if the 
execution had never failed.

It's just an initial idea; I'm waiting for our discussion to see if it really 
makes sense to users, to get feedback, and then we can talk about the 
implementation and cooperation.

On Tue, Jun 16, 2015 at 7:51 AM, W Chan m4d.co...@gmail.com wrote:
 Resending to see if this fixes the formatting for outlines below.


 I want to continue the discussion on the workflow resume feature.


 Resuming from our last conversation @
 http://lists.openstack.org/pipermail/openstack-dev/2015-March/060265.h
 tml. I don't think we should limit how users resume. There may be 
 different possible scenarios. User can fix the environment or 
 condition that led to the failure of the current task and the user 
 wants to just re-run the failed task.  Or user can actually fix the 
 environment/condition which include fixing what the task was doing, 
 then just want to continue the next set of task(s).


 The following is a list of proposed changes.


 1. A new CLI operation to resume WF (i.e. mistral workflow-resume).

 A. If no additional info is provided, assume this WF is manually 
 paused and there are no task/action execution errors. The WF state is 
 updated to RUNNING. Update using the put method @ 
 ExecutionsController. The put method checks that there's no task/action 
 execution errors.

 B. If WF is in an error state

 i. To resume from failed task, the workflow-resume command 
 requires the WF execution ID, task name, and/or task input.

 ii. To resume from failed with-items task

 a. Re-run the entire task (re-run all items) requires WF
 execution ID, task name and/or task input.

 b. Re-run a single item requires WF execution ID, task 
 name, with-items index, and/or task input for the item.

 c. Re-run selected items requires WF execution ID, task 
 name, with-items indices, and/or task input for each items.

 - To resume from the next task(s), the workflow-resume 
 command requires the WF execution ID, failed task name, output for the 
 failed task, and a flag to skip the failed task.


 2. Make ERROR -> RUNNING a valid state transition @ 
 is_valid_transition function.


 3. Add a comments field to Execution model. Add a note that indicates 
 the execution is launched by workflow-resume. Auto-populated in this case.


 4. Resume from failed task.

 A. Re-run task with the same task inputs  POST new action 
 execution for the task execution @ ActionExecutionsController

 B. Re-run task with different task inputs  POST new action 
 execution for the task execution, allowed for different input @ 
 ActionExecutionsController


 5. Resume from next task(s).

 A. Inject a noop task execution or noop action execution 
 (undecided yet) for the failed task with appropriate output.  The spec 
 is an adhoc spec that copies conditions from the failed task. This 
 provides some audit functionality and should trigger the next set of 
 task executions (in case of branching and such).



 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2015-06-15 15:59:18 -0700:
 No, I was confused by your statement:
 “When we create a bay, we have an ssh keypair that we use to inject the ssh 
 public key onto the nova instances we create.”
 
 It sounded like you were using that keypair to inject a public key. I just 
 misunderstood.
 
 It does raise the question though, are you using ssh between the controller 
 and the instance anywhere? If so, we will still run into issues when we go to 
 try and test it at our site. Sahara does currently, and we're forced to put a 
 floating IP on every instance. It's less than ideal...
 

Why not just give each instance a port on a network which can route
directly to the controller's network? Is there some reason you feel
forced to use a floating IP?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Proposal for the Resume Feature

2015-06-15 Thread Lingxian Kong
Thanks Winson for the write-up, very detailed information. (The format was good.)

I'm totally in favor of your idea; actually, I really think your
proposal is complementary to my proposal in
https://etherpad.openstack.org/p/vancouver-2015-design-summit-mistral,
please see the 'Workflow rollback/recovery' section.

What I wanna do is configure some 'checkpoints' throughout the
workflow, and if some task fails, we could roll back the execution to
some checkpoint and resume the whole workflow after we have fixed
the problem, as if the execution had never failed.

It's just an initial idea; I'm waiting for our discussion to see if it
really makes sense to users, to get feedback, and then we can talk about
the implementation and cooperation.

On Tue, Jun 16, 2015 at 7:51 AM, W Chan m4d.co...@gmail.com wrote:
 Resending to see if this fixes the formatting for outlines below.


 I want to continue the discussion on the workflow resume feature.


 Resuming from our last conversation @
 http://lists.openstack.org/pipermail/openstack-dev/2015-March/060265.html. I
 don't think we should limit how users resume. There may be different
 possible scenarios. User can fix the environment or condition that led to
 the failure of the current task and the user wants to just re-run the failed
 task.  Or user can actually fix the environment/condition which include
 fixing what the task was doing, then just want to continue the next set of
 task(s).


 The following is a list of proposed changes.


 1. A new CLI operation to resume WF (i.e. mistral workflow-resume).

 A. If no additional info is provided, assume this WF is manually paused
 and there are no task/action execution errors. The WF state is updated to
 RUNNING. Update using the put method @ ExecutionsController. The put method
 checks that there's no task/action execution errors.

 B. If WF is in an error state

 i. To resume from failed task, the workflow-resume command requires
 the WF execution ID, task name, and/or task input.

 ii. To resume from failed with-items task

 a. Re-run the entire task (re-run all items) requires WF
 execution ID, task name and/or task input.

 b. Re-run a single item requires WF execution ID, task name,
 with-items index, and/or task input for the item.

 c. Re-run selected items requires WF execution ID, task name,
 with-items indices, and/or task input for each items.

 - To resume from the next task(s), the workflow-resume
 command requires the WF execution ID, failed task name, output for the
 failed task, and a flag to skip the failed task.


 2. Make ERROR -> RUNNING a valid state transition @ is_valid_transition
 function.


 3. Add a comments field to Execution model. Add a note that indicates the
 execution is launched by workflow-resume. Auto-populated in this case.


 4. Resume from failed task.

 A. Re-run task with the same task inputs  POST new action execution
 for the task execution @ ActionExecutionsController

 B. Re-run task with different task inputs  POST new action execution
 for the task execution, allowed for different input @
 ActionExecutionsController


 5. Resume from next task(s).

 A. Inject a noop task execution or noop action execution (not yet decided)
 for the failed task with appropriate output. The spec is an ad-hoc spec that
 copies conditions from the failed task. This provides some audit
 functionality and should trigger the next set of task executions (in case of
 branching and such).
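
To make change 2 above concrete, here is a minimal sketch of what an
is_valid_transition check allowing ERROR -> RUNNING might look like. The
state names and transition table are assumptions for illustration only, not
Mistral's actual implementation.

```python
# Hypothetical state-transition table. The point being illustrated is that
# ERROR -> RUNNING becomes a legal transition, so a failed workflow
# execution can be resumed by the proposed workflow-resume command.
VALID_TRANSITIONS = {
    'RUNNING': {'PAUSED', 'SUCCESS', 'ERROR'},
    'PAUSED': {'RUNNING'},
    'ERROR': {'RUNNING'},  # proposed addition: allow resume after failure
}


def is_valid_transition(old_state, new_state):
    """Return True if moving from old_state to new_state is allowed."""
    return new_state in VALID_TRANSITIONS.get(old_state, set())
```

With this table, is_valid_transition('ERROR', 'RUNNING') returns True while
is_valid_transition('SUCCESS', 'RUNNING') stays False, which matches the
intent of the proposal: only failed (not completed) executions can resume.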



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards!
---
Lingxian Kong



Re: [openstack-dev] [javascript] [horizon] [merlin] [refstack] Javascript Linting

2015-06-15 Thread Robert Collins
On 16 June 2015 at 16:21, Richard Jones r1chardj0...@gmail.com wrote:
 JSCS in Horizon has been extended with the John Papa style guidelines to
 enforce consistent angularjs code style*. It's no longer just a findbug
 tool. I don't have time to investigate - can ESLint perform the same role
 for Horizon?

Yes - https://twitter.com/john_papa/status/574074485944385536

[I haven't dug into the details, but that's a tweet announcing ESLint
John Papa support.]

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [javascript] [horizon] [merlin] [refstack] Javascript Linting

2015-06-15 Thread Richard Jones
Sorry, wrong patch. That one added the style requirement to the project
contribution guidelines. This is the one that added the .jscsrc config:

https://review.openstack.org/#/c/185725/

On Tue, 16 Jun 2015 at 14:21 Richard Jones r1chardj0...@gmail.com wrote:

 JSCS in Horizon has been extended with the John Papa style guidelines to
 enforce consistent angularjs code style*. It's no longer just a findbug
 tool. I don't have time to investigate - can ESLint perform the same role
 for Horizon?

 Current Horizon activity involves a whole lot of bringing code into line
 with that style (and other JSCS check fails).


   Richard

 * https://review.openstack.org/#/c/181311/

 On Tue, 16 Jun 2015 at 09:40 Michael Krotscheck krotsch...@gmail.com
 wrote:

 I'm restarting this thread with a different subject line to get a broader
 audience. Here's the original thread:
 http://lists.openstack.org/pipermail/openstack-dev/2015-June/066040.html

 The question at hand is: what will be OpenStack's JavaScript equivalent
 of flake8? I'm going to consider the need for common formatting rules to
 be self-evident. Here's the lay of the land so far:

    - Horizon currently uses JSCS.
    - Refstack uses ESLint.
    - Merlin doesn't use anything.
    - StoryBoard (deprecated) uses ESLint.
    - Nobody agrees on rules.

 *JSCS*
 JSCS stands for JavaScript CodeStyle. Its mission is to enforce a style
 guide, yet it does not check for potential bugs, variable overrides, etc.
 For those tests, the team usually defers to (preferred) JSHint, or ESLint.

 *JSHint*
 Ever since JSCS was extracted from JSHint, JSHint has actively removed rules
 that enforce code style, and focused on findbug-style tests instead. JSHint
 still contains the "Do no evil" license and is therefore not an option for
 OpenStack, so it has been disqualified.

 *ESLint*
 ESLint's original mission was to be an OSI compliant replacement for
 JSHint, before the JSCS split. It wants to be a one-tool solution.

 My personal opinion/recommendation: Based on the above, I recommend we
 use ESLint. My reasoning: It's one tool, it's extensible, it does both
 codestyle things and bug finding things, and it has a good license. JSHint
 is disqualified because of the license. JSCS is disqualified because it is
 too focused, and only partially useful on its own.

 I understand that this will mean some work by the Horizon team to bring
 their code in line with a new parser, however I personally consider this to
 be a good thing. If the code is good to begin with, it shouldn't be that
 difficult.

 This thread is not here to argue about which rules to enforce. Right now
 I just want to nail down a tool, so that we can (afterwards) have a
 discussion about which rules to activate.

 Michael




Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-15 Thread Jay Pipes

On 06/15/2015 01:16 PM, Dmitry Tantsur wrote:

On 06/15/2015 07:07 PM, Jay Pipes wrote:

It has come to my attention in [1] that the microversion spec for Nova
[2] and Ironic [3] have used the project name -- i.e. Nova and Ironic --
instead of the name of the API -- i.e. OpenStack Compute and
OpenStack Bare Metal -- in the HTTP header that a client passes to
indicate a preference for or knowledge of a particular API microversion.

The original spec said that the HTTP header should contain the name of
the service type returned by the Keystone service catalog (which is also
the official name of the REST API). I don't understand why the spec was
changed retroactively and why Nova has been changed to return
X-OpenStack-Nova-API-Version instead of X-OpenStack-Compute-API-Version
HTTP headers [4].

To be blunt, Nova is the *implementation* of the OpenStack Compute API.
Ironic is the *implementation* of the OpenStack BareMetal API.

The HTTP headers should never have been changed like this, IMHO, and I'm
disappointed that they were. In fact, it looks like a very select group
of individuals pushed through this change [5] with little to no input
from the mailing list or community.

Since no support for these headers has yet to land in the client
packages, can we please reconsider this?


ironicclient has had support for the header for a while, and anyway we
released it with Ironic Kilo, so I guess it would be a breaking change.


Would it be possible to add support and deprecate the 
X-OpenStack-Ironic-API-Version HTTP header?
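
For illustration, the disagreement boils down to which token goes into the
microversion header. A toy sketch of the two competing schemes (this helper
is hypothetical, not the actual client code in Nova or Ironic):

```python
# Toy helper contrasting the two header-naming schemes discussed here.
def microversion_header(name, version):
    """Build a microversion HTTP header dict for the given name token."""
    return {'X-OpenStack-%s-API-Version' % name: version}


# Project-name scheme that was actually merged (Nova, Ironic):
nova_style = microversion_header('Nova', '2.4')
# Service-catalog-type scheme the original spec called for (Compute):
compute_style = microversion_header('Compute', '2.4')
```

The deprecation path Jay suggests would amount to accepting both header
names for a cycle while documenting only the service-type form.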



Also, while the only implementation and source of authority for the
baremetal API is Ironic, I'm not sure there's a point in calling it the
baremetal API, but I'm neutral on this suggestion modulo the compatibility
break.


What does the Ironic endpoint show up in the keystone service catalog as?

-jay


Thanks,
-jay

[1] https://review.openstack.org/#/c/187112/
[2]
https://github.com/openstack/nova-specs/blob/master/specs/kilo/implemented/api-microversions.rst


[3]
https://github.com/openstack/ironic-specs/blob/master/specs/kilo/api-microversions.rst


[4] https://review.openstack.org/#/c/155611/
[5] https://review.openstack.org/#/c/153183/








Re: [openstack-dev] [puppet] OpenStack Puppet configuration for HA deployment

2015-06-15 Thread Richard Raseley

Cristina Aiftimiei wrote:

The puppetlabs-openstack clearly states:





Limitations

  * High availability and SSL-enabled endpoints are not provided by this
module.




As Matt touched on, you really should be building your own 'composition 
layer' for deploying production services and their supporting 
components, not consuming a pre-canned composition layer like 
'puppetlabs-openstack', which has value - but primarily as a 
demonstration and testing tool.


In this model, each of the classes contained in the puppet-* modules (e.g. 
puppet-nova, puppet-keystone, et al.) will be wrapped with your own 
custom classes (likely in a role-and-profile pattern) in order to define 
those relationships.


Regards,

Richard



Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-15 Thread Ruby Loo
On 15 June 2015 at 13:07, Jay Pipes jaypi...@gmail.com wrote:

 It has come to my attention in [1] that the microversion spec for Nova [2]
 and Ironic [3] have used the project name -- i.e. Nova and Ironic --
 instead of the name of the API -- i.e. OpenStack Compute and OpenStack
 Bare Metal -- in the HTTP header that a client passes to indicate a
 preference for or knowledge of a particular API microversion.

 The original spec said that the HTTP header should contain the name of the
 service type returned by the Keystone service catalog (which is also the
 official name of the REST API). I don't understand why the spec was changed
 retroactively and why Nova has been changed to return
 X-OpenStack-Nova-API-Version instead of X-OpenStack-Compute-API-Version
 HTTP headers [4].

 To be blunt, Nova is the *implementation* of the OpenStack Compute API.
 Ironic is the *implementation* of the OpenStack BareMetal API.

 The HTTP headers should never have been changed like this, IMHO, and I'm
 disappointed that they were. In fact, it looks like a very select group of
 individuals pushed through this change [5] with little to no input from the
 mailing list or community.

 Since no support for these headers has yet to land in the client packages,
 can we please reconsider this?

 Thanks,
 -jay


Hi Jay,

When I reviewed the changes in Ironic, that was one of the things I
noticed. I looked at the nova implementation at the time, and I saw 'nova'
so even though I thought it should have been 'compute' (and 'baremetal' for
Ironic), I thought it was OK to use 'ironic'. Sorry, that was the wrong
time for me to be a lamb ;)

I think we should deprecate and change to use 'baremetal' (if it isn't
going to be too painful).

--ruby


Re: [openstack-dev] [Magnum] Make functional tests voting

2015-06-15 Thread Adrian Otto
Bug ticket for python-magnumclient gate test:
https://bugs.launchpad.net/python-magnumclient/+bug/1465375

On Jun 15, 2015, at 11:07 AM, Tom Cammann tom.camm...@hp.com wrote:

Review up for making it voting: https://review.openstack.org/#/c/191921/

It would probably be a good plan to run the functional tests on the 
python-magnumclient,
although I'm not too sure how to set that up.

Tom

On 15/06/15 18:47, Adrian Otto wrote:
Tom,

Yes, let’s make it voting now. I went through the full review queue over the 
weekend, and I did find a few examples of functional tests timing out after 2 
hours. It did not look to me like they actually ran, suggesting a malfunction 
in the setup of the test nodes. There were only one or two like that in the few 
dozen patches I looked at. I think I ordered one recheck. If that happens a 
lot, we might need to revisit this decision, but in all honesty I’d prefer to 
have the pressure of a malfunctioning gate blocking us so that we are motivated 
to resolve the issue at the root cause.

Thanks,

Adrian

On Jun 15, 2015, at 10:21 AM, Davanum Srinivas dava...@gmail.com wrote:

+1 from me Tom.

On Mon, Jun 15, 2015 at 1:15 PM, Tom Cammann tom.camm...@hp.com wrote:
Hello,

I haven't seen any false positives in a few weeks from the functional
tests, but I have seen a couple of reviewers missing the -1 from the
non-voting job when there has been a legitimate failure.

I think now would be a good time to turn 'check-functional-dsvm-magnum' to
voting.

Thanks,
Tom




--
Davanum Srinivas :: https://twitter.com/dims





Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-15 Thread Thomas Goirand
On 06/15/2015 04:55 PM, James Page wrote:
 Hi All
 
 On 27/05/15 09:14, Thomas Goirand wrote:
 tl;dr: - We'd like to push distribution packaging of OpenStack on
 upstream gerrit with reviews. - The intention is to better share
 the workload, and improve the overall QA for packaging *and*
 upstream. - The goal is *not* to publish packages upstream -
 There's an ongoing discussion about using stackforge or openstack.
 This isn't, IMO, that important, what's important is to get
 started. - There's an ongoing discussion about using a distribution
 specific namespace, my own opinion here is that using
 /openstack-pkg-{deb,rpm} or /stackforge-pkg-{deb,rpm} would be the
 most convenient because of a number of technical reasons like the
 amount of Git repository. - Finally, let's not discuss for too long
 and let's do it!!!
 
 While working to re-align the dependency chain for OpenStack between
 Debian and Ubuntu 15.10, and in preparation for the first Liberty
 milestone, my team and I have been reflecting  on the proposal to make
 Ubuntu and Debian packaging of OpenStack a full OpenStack project.
 
 We’ve come to the conclusion that for Debian and Ubuntu the best place
 for us to collaborate on packaging is actually in the distributions,
 as we do today, and not upstream in OpenStack.
 
 Ubuntu has collaborated effectively with Debian since its inception
 and has an effective set of tools to support the flow of packaging
 (from Debian), bug reports and patches (to Debian) that have proven
 effective, in terms of both efficiency and value, ever since I’ve been
 working across both distributions.
 
 This process allows each distribution to maintain its own direction,
 whilst ensuring that any value that might be derived from
 collaboration is supported with as minimal overhead as possible.
 
 We understand and have communicated from the start of this
 conversation that we will need to be able to maintain deltas between
 Debian and Ubuntu - for both technical reasons, in the way the
 distributions work (think Ubuntu main vs universe), as well as
 objectives that each distribution has in terms of the way packaging
 should work.
 
 We don’t think that’s going to be made any easier by moving all of the
 packaging under the OpenStack project - it just feels like we’re
 trying to push a solved problem somewhere else, and then re-solve it
 in a whole new set of ways.
 
 The problem of managing delta and allowing a good level of
 distribution independence is still going to continue to exist and will
 be more difficult to manage due to the tighter coupling of development
 process and teams than we have today.
 
 On this basis, we're -1 on taking this proposal forward.
 
 That said, we do appreciate that the Ubuntu packaging for OpenStack is
 not as accessible as it might be using Bazaar as a VCS. In order to
 provide a more familiar experience to developers and operators looking
 to contribute to the wider Openstack ecosystem we will be moving our
 OpenStack packaging branches over to the new Git support in Launchpad
 in the next few weeks.
 
 Regards
 
 James

James,

During our discussions at the Summit, you seemed to be enthusiastic
about pushing our packaging to Stackforge. Then others told me to push
it to the /openstack namespace to make it more big tent-ish, which
made me very excited about the idea.

So far, I've been very happy with the reboot of our collaboration, and
felt like it was just an awesome new atmosphere. So I have to admit I'm a
bit disappointed to read the above, even though I do understand the
reasoning.

Anyway, does this mean that you don't want to push packaging to
/stackforge either, which was the idea we shared at the summit?

I'm a bit lost on what I should do now, as what was exciting was
enabling operations people to contribute. I'll think about it and see
what to do next.

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-15 Thread Allison Randal
On 06/15/2015 11:48 AM, Thomas Goirand wrote:
 On 06/15/2015 04:55 PM, James Page wrote:
 The problem of managing delta and allowing a good level of
 distribution independence is still going to continue to exist and will
 be more difficult to manage due to the tighter coupling of development
 process and teams than we have today.

 On this basis, we're -1 on taking this proposal forward.

 That said, we do appreciate that the Ubuntu packaging for OpenStack is
 not as accessible as it might be using Bazaar as a VCS. In order to
 provide a more familiar experience to developers and operators looking
 to contribute to the wider Openstack ecosystem we will be moving our
 OpenStack packaging branches over to the new Git support in Launchpad
 in the next few weeks.
[...]
 During our discussions at the Summit, you seemed to be enthusiastic
 about pushing our packaging to Stackforge. Then others told me to push
 it to the /openstack namespace to make it more big tent-ish, which
 made me very excited about the idea.
 
 So far, I've been very happy of the reboot of our collaboration, and
 felt like it was just awesome new atmosphere. So I have to admit I'm a
 bit disappointed to read the above, even though I do understand the
 reasoning.

James is right. This discussion thread put a lot of faith in the
possibility that moving packaging efforts under the OpenStack umbrella
would magically solve our key blocking issues. (I'm guilty of it as much
as anyone else.) But really, we the collaborators are the ones who have
to solve those blocking issues, and we'll have to do it together, no
matter what banner we do it under.

 Anyway, does this mean that you don't want to push packaging to
 /stackforge either, which was the idea we shared at the summit?
 
 I'm a bit lost on what I should do now, as what was exciting was
 enabling operation people to contribute. I'll think about it and see
 what to do next.

It doesn't really matter where the repos are located, we can still
collaborate. Just moving Ubuntu's openstack repos to git and the Debian
Python Modules Team repos to git will be a massive step forward.

Allison



Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Tom Cammann
My main issue with having the user generate the keys/certs for the kube nodes
is that the keys have to be insecurely moved onto the kube nodes. Barbican can
talk to Heat, but Heat must still copy them across to the nodes, exposing the
keys on the wire. Perhaps there are ways of moving secrets correctly which I
have missed.

I also agree that we should opt for a non-Barbican deployment first.

At the summit we talked about using Magnum as a CA and signing the
certificates, and we seemed to have some consensus about doing this with the
possibility of using Anchor. This would take a lot of the onus off of the
user to fiddle around with openssl and craft the right signed certs safely.
Using Magnum as a CA, the user would generate a key/cert pair and then get
the cert signed by Magnum, and the kube node would do the same. The main
downside of this technique is that the user MUST trust Magnum and the
administrator, as they would have access to the CA signing cert.

An alternative, where the user holds the CA cert/key, is to have the user:


- generate a CA cert/key (or use existing corp one etc)
- generate own cert/key
- sign their cert with their CA cert/key
- spin up kubecluster
- each node would generate key/cert
- each node exposes this cert to be signed
- user signs each cert and returns it to the node.

This is going to be quite manual unless they have a CA that the kube nodes
can call into. However, this is the most secure way I could come up with.

Tom

On 15/06/15 17:52, Egor Guz wrote:

+1 for non-Barbican support first; unfortunately Barbican is not very well
adopted in existing installations.

Madhuri, also please keep in mind that we should come up with a solution
which will also work with Swarm and Mesos in the future.

—
Egor

From: Madhuri Rai madhuri.ra...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, June 15, 2015 at 0:47
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Hi,

Thanks Adrian for the quick response. Please find my response inline.

On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto adrian.o...@rackspace.com wrote:
Madhuri,

On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.com wrote:

Hi All,

This is to bring the blueprint secure-kubernetes
(https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes) into
discussion. I have been trying to figure out the possible areas of change to
support this feature in Magnum. Below is just a rough idea of how to proceed
further on it.

This task can be further broken in smaller pieces.

1. Add support for TLS in python-k8sclient.
The current auto-generated code doesn't support TLS. So this work will be to 
add TLS support in kubernetes python APIs.

2. Add support for Barbican in Magnum.
Barbican will be used to store all the keys and certificates.

Keep in mind that not all clouds will support Barbican yet, so this approach 
could impair adoption of Magnum until Barbican is universally supported. It 
might be worth considering a solution that would generate all keys on the 
client, and copy them to the Bay master for communication with other Bay nodes. 
This is less secure than using Barbican, but would allow for use of Magnum 
before Barbican is adopted.

+1, I agree. One question here, we are trying to secure the communication 
between magnum-conductor and kube-apiserver. Right?


If both methods were supported, the Barbican method should be the default, and 
we should put warning messages in the config file so that when the 
administrator relaxes the setting to use the non-Barbican configuration he/she 
is made aware that it requires a less secure mode of operation.

In the non-Barbican approach, the client will generate the keys and pass the
location of the keys to the Magnum services. Then the Heat template will copy
and configure the Kubernetes services on the master node, same as the step
below.


My suggestion is to completely implement the Barbican support first, and follow 
up that implementation with a non-Barbican option as a second iteration for the 
feature.

How about implementing the non-Barbican support first, as this would be easier
to implement, so that we can first concentrate on points 1 and 3? Then
afterwards we can work on the Barbican support with more insight.

Another possibility would be for Magnum to use its own private installation of 
Barbican in cases where it is not available in the service catalog. I dislike 
this option because it creates an operational burden for maintaining the 
private Barbican service, and additional complexities with securing it.

In my opinion, installation of Barbican should be independent of Magnum. My 
idea here 

Re: [openstack-dev] [oslo.messaging][zeromq] Next step

2015-06-15 Thread Alec Hothan (ahothan)



On 6/12/15, 3:55 PM, Clint Byrum cl...@fewbar.com wrote:


 

I think you missed "it is not tested in the gate" as a root cause for
some of the ambiguity. Anecdotes and bug reports are super important for
knowing where to invest next, but a test suite would at least establish a
base line and prevent the sort of thrashing and confusion that comes from
such a diverse community of users feeding bug reports into the system.

I agree with you that zmq needs to pass whatever oslo.messaging test is
currently available; however, this won't remove all the
semantic/behavioral ambiguities.
This kind of ambiguity could be fixed by enhancing the API documentation
- always good to do, even if a bit late - and by developing the associated
test cases (although they tend to be harder to write).
Another (ugly) strategy could be to simply say that the intended behavior
is the one exposed by the RabbitMQ-based implementation (by means of
seniority and/or actual deployment mileage).

For example, what happens if a recipient of a CALL or CAST message dies
before the message is sent?
Is the API supposed to return an error, and if so, how quickly? The
RabbitMQ-based implementation will likely return success (since the
message will sit in a queue in the broker until the consumer reconnects,
which could be a long time) while the ZMQ-based one will depend on the
type of pattern used. Which is the behavior desired by apps, and which is
the behavior advertised by the oslo.messaging API?
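
A toy model of that behavioral difference (assumption: these classes are
simplified stand-ins for illustration, not oslo.messaging code or any real
transport driver):

```python
import queue


class BrokeredTransport:
    """RabbitMQ-like: a CAST 'succeeds' even with no consumer attached,
    because the message simply waits in the broker's queue."""

    def __init__(self):
        self._queue = queue.Queue()

    def cast(self, msg):
        self._queue.put(msg)  # always succeeds from the sender's view
        return True


class BrokerlessTransport:
    """ZMQ-like (greatly simplified): with no consumer connected, the send
    cannot complete and the caller sees an error instead of silent queueing."""

    def __init__(self, consumer_connected=False):
        self.consumer_connected = consumer_connected

    def cast(self, msg):
        if not self.consumer_connected:
            raise ConnectionError('no consumer for message')
        return True
```

An app written against the first behavior (fire-and-forget always "works")
would need error-handling changes to run correctly on the second, which is
exactly the migration risk described above.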

Another example relates to flow control conditions (sender sends lots of
CASTs, receiver is very slow to consume). Should the sender:
- always receive success, with all messages queued without limit,
- always receive success, with messages queued up to a certain point and
new messages dropped silently,
- or receive an EAGAIN error (socket behavior)?
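
The three options can be sketched with a bounded queue (illustrative only;
the Sender class and policy names are assumptions, and no real driver is
claimed to behave exactly this way):

```python
import queue


class Sender:
    """policy is one of 'unbounded', 'drop', or 'eagain'."""

    def __init__(self, policy, maxsize=2):
        self.policy = policy
        # maxsize=0 means an unlimited queue in the Python stdlib.
        self.q = queue.Queue(maxsize=0 if policy == 'unbounded' else maxsize)

    def send(self, msg):
        try:
            self.q.put_nowait(msg)
            return 'ok'
        except queue.Full:
            if self.policy == 'drop':
                return 'ok'  # message silently dropped; sender still told ok
            raise BlockingIOError('EAGAIN')  # surface a socket-style error
```

The point is that all three policies are self-consistent; the ambiguity is
only in which one the oslo.messaging API promises to its callers.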

In these unclear conditions, switching to a different transport driver is
going to be tricky because apps may have been written/patched to assume a
certain behavior and might no longer behave properly if the expected
behavior changes (even if it is for the better) and may require adjusting
existing apps (to support a different behavior of the API).
Note that switching to a different transport is not just about testing
it in devstack but also about deploying it at scale on real production
environment and testing at scale.




Also, not having a test in the gate is a serious infraction now, and will
lead to zmq's removal from oslo.messaging now that we have a ratified
policy requiring this. I suggest a first step being to strive to get a
devstack-gate job that runs using zmq instead of rabbitmq. You can
trigger it in oslo.messaging's check pipeline, and make it non-voting,
but eventually it needs to get into nova, neutron, cinder, heat, etc.
etc. Without that, you'll find that the community of potential
benefactors of any effort you put into zmq will shrink dramatically when
we are forced to remove the driver from oslo.messaging (it can of course
live on out of tree).





[openstack-dev] [Ironic] weekly subteam status report

2015-06-15 Thread Ruby Loo
Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)


(as of Mon, 15 Jun 15:00 UTC, diff since 1 Jun)
Open: 159 (+9)
5 new (0), 51 in progress (+6), 0 critical, 12 high (+2) and 12 incomplete


Neutron/Ironic work (jroll)
===

Two specs around this work
- https://review.openstack.org/#/c/188528/
- https://review.openstack.org/#/c/187829/

Both specs have rough consensus from working group, just need some
clarifications and such.

Would love to start getting reviews from Ironic folks on them.

Also will probably need a short Nova blueprint/spec.

Oslo (lintan)
=

automaton release 0.2.0 (liberty), Friendly state machines for python.
- http://git.openstack.org/cgit/openstack/automaton
- We might need to replace our fsm state machine with that.

oslo.versionedobjects release 0.4.0
- http://git.openstack.org/cgit/openstack/oslo.versionedobjects
- I plan to migrate our object implementation to that, but am still
waiting until the replacement has been done in Nova by Dan Smith
- (dtantsur) suggestion to adopt futurist for periodic tasks:
https://review.openstack.org/#/c/191710/

Doc (pshige)
==

start discussion with docs team
-
http://eavesdrop.openstack.org/meetings/docteam/2015/docteam.2015-06-10-00.32.log.html

Drivers
==

iRMC (naohirot)
-
https://review.openstack.org//#/q/owner:+naohirot%2540jp.fujitsu.com+status:+open,n,z

Status: Reactive (soliciting the core team's review and approval; no review
comments for the last 10 days, since Jun. 5th)
- iRMC Virtual Media Deploy Driver
Deploy Driver Base patch which implemented:
bp/irmc-virtualmedia-deploy-driver
bp/non-glance-image-refs
bp/automate-uefi-bios-iso-creation
On top of the base, following up patches implemented:
bp/local-boot-support-with-partition-images
bp/whole-disk-image-support
bp/ipa-as-default-ramdisk

Status: Active (spec review is on going)
Enhance Power Interface for Soft Reboot and NMI
bp/enhance-power-interface-for-soft-reboot-and-nmi

Status: Active (started coding)
iRMC out of band inspection
bp/ironic-node-properties-discovery



Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard


Re: [openstack-dev] [Magnum] Make functional tests voting

2015-06-15 Thread Tom Cammann

Review up for making it voting: https://review.openstack.org/#/c/191921/

It would probably be a good plan to run the functional tests on the 
python-magnumclient,

although I'm not too sure how to set that up.

Tom

On 15/06/15 18:47, Adrian Otto wrote:

Tom,

Yes, let’s make it voting now. I went through the full review queue over the 
weekend, and I did find a few examples of functional tests timing out after 2 
hours. It did not look to me like they actually ran, suggesting a malfunction 
in the setup of the test nodes. There were only one or two like that in the few 
dozen patches I looked at. I think I ordered one recheck. If that happens a 
lot, we might need to revisit this decision, but in all honesty I’d prefer to 
have the pressure of a malfunctioning gate blocking us so that we are motivated 
to resolve the issue at the root cause.

Thanks,

Adrian


On Jun 15, 2015, at 10:21 AM, Davanum Srinivas dava...@gmail.com wrote:

+1 from me Tom.

On Mon, Jun 15, 2015 at 1:15 PM, Tom Cammann tom.camm...@hp.com wrote:

Hello,

I haven't seen any false positives in a few weeks from the functional
tests, but I have seen a couple of reviewers missing the -1 from the non-voting
job
when there has been a legitimate failure.

I think now would be a good time to turn 'check-functional-dsvm-magnum' to
voting.

Thanks,
Tom


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Adrian Otto
Tom,

 On Jun 15, 2015, at 10:59 AM, Tom Cammann tom.camm...@hp.com wrote:
 
 My main issue with having the user generate the keys/certs for the kube nodes
 is that the keys have to be insecurely moved onto the kube nodes. Barbican can
 talk to heat but heat must still copy them across to the nodes, exposing the
 keys on the wire. Perhaps there are ways of moving secrets correctly which I
 have missed.

When we create a bay, we have an ssh keypair that we use to inject the ssh 
public key onto the nova instances we create. We can use scp to securely 
transfer the keys over the wire using that keypair.

 I also agree that we should opt for a non-Barbican deployment first.
 
 At the summit we talked about using Magnum as a CA and signing the
 certificates, and we seemed to have some consensus about doing this with the
 possibility of using Anchor. This would take a lot of the onus off of the 
 user to
 fiddle around with openssl and craft the right signed certs safely. Using
 Magnum as a CA the user would generate a key/cert pair, and then get the
 cert signed by Magnum, and the kube node would do the same. The main
 downside of this technique is that the user MUST trust Magnum and the
 administrator as they would have access to the CA signing cert.
 
 An alternative to that where the user holds the CA cert/key, is to have the 
 user:
 
 - generate a CA cert/key (or use existing corp one etc)
 - generate own cert/key
 - sign their cert with their CA cert/key
 - spin up kubecluster
 - each node would generate key/cert
 - each node exposes this cert to be signed
 - user signs each cert and returns it to the node.
 
 This is getting quite manual unless they have a CA that the kube nodes can call
 into. However, this is the most secure way I could come up with.
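Under the assumption that the openssl CLI is available, the manual flow listed above can be sketched end to end; the file names and subject CNs here are invented for illustration and are not from Magnum or Kubernetes.

```python
import os
import subprocess
import tempfile

# Hypothetical sketch of the manual CA flow: user-held CA, node-generated
# key/CSR, user-signed certificate returned to the node.
def run(*cmd):
    subprocess.run(cmd, check=True, capture_output=True)

workdir = tempfile.mkdtemp()
os.chdir(workdir)

# 1. The user generates a CA key and a self-signed CA cert (or reuses a corp CA).
run("openssl", "genrsa", "-out", "ca.key", "2048")
run("openssl", "req", "-x509", "-new", "-key", "ca.key",
    "-subj", "/CN=user-ca", "-days", "365", "-out", "ca.crt")

# 2. Each kube node generates its own key and a certificate signing request.
run("openssl", "genrsa", "-out", "node.key", "2048")
run("openssl", "req", "-new", "-key", "node.key",
    "-subj", "/CN=kube-node", "-out", "node.csr")

# 3. The user signs the node's CSR with the CA key and returns the cert.
run("openssl", "x509", "-req", "-in", "node.csr", "-CA", "ca.crt",
    "-CAkey", "ca.key", "-CAcreateserial", "-out", "node.crt", "-days", "365")

print(os.path.exists("node.crt"))
```

Automating only step 3 (a signing endpoint the nodes can call, e.g. via Anchor) is what removes the manual burden while keeping the CA key out of the cloud operator's hands.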

Perhaps we can expose a “replace keys” feature that could be used to facilitate 
this after initial setup of the bay. This way you could establish a trust that 
excludes the administrator. This approach potentially lends itself to 
additional automation to make the replacement process a bit less manual.

Thanks,

Adrian

 
 Tom
 
 On 15/06/15 17:52, Egor Guz wrote:
  +1 for non-Barbican support first; unfortunately, Barbican is not very well 
  adopted in existing installations.
  
  Madhuri, also please keep in mind we should come up with a solution which 
  works with Swarm and Mesos as well in the future.
 
 —
 Egor
 
  From: Madhuri Rai madhuri.ra...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
 Date: Monday, June 15, 2015 at 0:47
 To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum
 
 Hi,
 
 Thanks Adrian for the quick response. Please find my response inline.
 
 On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto 
  adrian.o...@rackspace.com wrote:
 Madhuri,
 
 On Jun 14, 2015, at 10:30 PM, Madhuri Rai 
  madhuri.ra...@gmail.com wrote:
 
 Hi All,
 
 This is to bring the blueprint  
  secure-kubernetes (https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes)
  in discussion. I have been trying to figure out what could be the possible 
 change area to support this feature in Magnum. Below is just a rough idea on 
 how to proceed further on it.
 
 This task can be further broken in smaller pieces.
 
 1. Add support for TLS in python-k8sclient.
 The current auto-generated code doesn't support TLS. So this work will be to 
 add TLS support in kubernetes python APIs.
 
 2. Add support for Barbican in Magnum.
 Barbican will be used to store all the keys and certificates.
 
 Keep in mind that not all clouds will support Barbican yet, so this approach 
 could impair adoption of Magnum until Barbican is universally supported. It 
 might be worth considering a solution that would generate all keys on the 
 client, and copy them to the Bay master for communication with other Bay 
 nodes. This is less secure than using Barbican, but would allow for use of 
 Magnum before Barbican is adopted.
 
 +1, I agree. One question here, we are trying to secure the communication 
 between magnum-conductor and kube-apiserver. Right?
 
 
 If both methods were supported, the Barbican method should be the default, 
 and we should put warning messages in the config file so that when the 
 administrator relaxes the setting to use the non-Barbican configuration 
 he/she is made aware that it requires a less secure mode of operation.
 
  In the non-Barbican approach, the client will generate the keys and pass the 
  location of the keys to the magnum services. Then, again, the heat template 
  will copy and configure the kubernetes services on the master node, same as 
  in the step below.
 
 
 My suggestion is to completely implement the Barbican support first, and 
 follow 

Re: [openstack-dev] Online Migrations.

2015-06-15 Thread Dan Smith
  3. Build the controls into our process with a way of storing the
 current release cycle information, to only allow contracts to
 occur at a set major release, and maintain the column in the model
 until it is ready to be removed at major release + 1 after the
 migration was added.


 Personally, for ease of development and functionality, I think the
 best option is the 3rd as it will not require reinventing the wheel
 around data loading and handling of already loaded data models that
 could affect ORM queries.
 
 Well, one advantage to the way contract is totally automated here is
 that as long as the ORM models have a column X present, contract won't
 remove it. What problem would storing the release cycle information
 solve? (Also, by store do you mean a DB table?)

Tying this to the releases is less desirable from my perspective. It
means that landing a thing requires more than six months of developer
and reviewer context. We have that right now, and we get along, but it's
much harder to plan, execute, and cleanup those sorts of longer-lived
changes. It also means that CDers have to wait for the contract to be
landed well after they should have been able to clean up their database,
and may imply that people _have_ to do a contract at some point,
depending on how it's exposed.

The goal for this was to separate the three phases. Tying one of them to
the releases kinda hampers the utility of it to some degree, IMHO.
Making it declarative (even when part of what is declared are the
condition(s) upon which a particular contraction can proceed) is much
more desirable to me.
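A declarative condition could be as small as a predicate evaluated per column before a contraction runs, independent of release boundaries. This is a hypothetical sketch with invented names, not Nova's actual migration code:

```python
# Hypothetical sketch: a contraction proceeds only when its declared
# preconditions hold, rather than at a fixed release boundary.
def can_contract(column_in_models, unmigrated_rows):
    """Decide whether a column contraction may proceed."""
    # Never drop a column the ORM models still reference.
    if column_in_models:
        return False
    # Only contract once every row has been migrated off the column.
    return unmigrated_rows == 0

print(can_contract(column_in_models=False, unmigrated_rows=0))   # True
print(can_contract(column_in_models=True, unmigrated_rows=0))    # False
print(can_contract(column_in_models=False, unmigrated_rows=42))  # False
```

A CD deployer can then contract as soon as the predicate is satisfied, instead of waiting for a release-tagged migration to land.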

That said, I still think we should get the original thing merged. Even
if we did contractions purely with the manual migrations for the
foreseeable future, that'd be something we could deal with.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-15 Thread Dmitry Tantsur
2015-06-15 19:50 GMT+02:00 Clint Byrum cl...@fewbar.com:

 Excerpts from Jay Pipes's message of 2015-06-15 10:07:39 -0700:
  It has come to my attention in [1] that the microversion spec for Nova
  [2] and Ironic [3] have used the project name -- i.e. Nova and Ironic --
  instead of the name of the API -- i.e. OpenStack Compute and
  OpenStack Bare Metal -- in the HTTP header that a client passes to
  indicate a preference for or knowledge of a particular API microversion.
 
  The original spec said that the HTTP header should contain the name of
  the service type returned by the Keystone service catalog (which is also
  the official name of the REST API). I don't understand why the spec was
  changed retroactively and why Nova has been changed to return
  X-OpenStack-Nova-API-Version instead of X-OpenStack-Compute-API-Version
  HTTP headers [4].
 
  To be blunt, Nova is the *implementation* of the OpenStack Compute API.
  Ironic is the *implementation* of the OpenStack BareMetal API.
 
  The HTTP headers should never have been changed like this, IMHO, and I'm
  disappointed that they were. In fact, it looks like a very select group
  of individuals pushed through this change [5] with little to no input
  from the mailing list or community.
 
  Since no support for these headers has yet to land in the client
  packages, can we please reconsider this?

 I tend to agree with you.

 The juxtaposition is somewhat surprising. [1] Is cited as the reason for
 making the change, but that governance change is addressing the way we
 govern projects, not API's. The goal of the change itself is to encourage
 competition amongst projects. However, publishing an OpenStack API with
 a project name anywhere in it is the opposite: it discourages alternative
 implementations. If we really believe there are no sacred cows and a nova
 replacement (or proxy, or accelerator, or or or..) could happen inside the
 OpenStack community, then we should be more careful about defining the API


If Ironic is still the main authority defining the baremetal API,
renaming the header won't help the alternative implementations.

Also, what should we use for services that do not have a direct program
mapping? E.g., I'm planning to add microversioning to ironic-inspector. Who is
going to define a proper service name? Myself? The ironic team? Should I bother
the TC?
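For illustration, the two conventions in dispute differ only in the name segment baked into the header. This hypothetical helper (not part of any real client library) makes the contrast concrete:

```python
# Hypothetical sketch contrasting the two header-naming conventions
# debated in this thread; the function is illustrative only.
def microversion_header(name):
    """Build a microversion header from an API or project name."""
    return "X-OpenStack-{}-API-Version".format(name.capitalize())

# Project-name convention, as Nova currently returns:
print(microversion_header("nova"))     # X-OpenStack-Nova-API-Version
# Service-type convention, as the original spec described:
print(microversion_header("compute"))  # X-OpenStack-Compute-API-Version
```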



 However, if we do believe that Nova and Ironic are special, then the API
 can stand as-is, though I still find it sub-optimal.

 I'm a little bit worried that we don't have a guiding principle to point
 at somewhere. Perhaps the API WG can encode guidance either way (We use
 project names, or we use service types).

 [1] https://review.openstack.org/#/c/145740/

 
  Thanks,
  -jay
 
  [1] https://review.openstack.org/#/c/187112/
  [2]
 
 https://github.com/openstack/nova-specs/blob/master/specs/kilo/implemented/api-microversions.rst
  [3]
 
 https://github.com/openstack/ironic-specs/blob/master/specs/kilo/api-microversions.rst
  [4] https://review.openstack.org/#/c/155611/
  [5] https://review.openstack.org/#/c/153183/
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
-- Dmitry Tantsur
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-15 Thread Chris Dent

On Mon, 15 Jun 2015, Clint Byrum wrote:


I'm a little bit worried that we don't have a guiding principle to point
at somewhere. Perhaps the API WG can encode guidance either way (We use
project names, or we use service types).


I think it's a good idea to encode the principle, whatever it is,
but it feels like we are a rather long way from consensus on the
way to go.

There's a visible camp that would like to say that competing
projects should be competing over the effectiveness of their
implementation of a canonical (or even platonic) API.

In principle I have a lot of sympathy for this idea but it sort of begs
or presumes that the APIs that exist are:

* in the realm of or at least on their way to being good enough
* have some measure of stability
* should not themselves be overly subject to competitive forces

(at least two of these items are not true)

If that's the case, then we can imagine two different services both
of which implement the official compute API versions 2.blah to 3.foop
inclusive. That's probably an awful lot of work for everyone
involved?

Another way to look at things is that each project is seeking
knowledge about how to form a good API for a particular service but
nobody is quite there yet. In the meantime, if you use implementation X,
it's got microversions, please keep track.

And maybe someday microversion X of implementation Y will become
_the_ declared API for service Z? That could make a lot of people
feel like they've wasted effort.

I don't know where things are. Does anyone?

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (re)centralizing library release management

2015-06-15 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2015-06-09 16:08:16 -0400:
 Excerpts from Doug Hellmann's message of 2015-06-09 13:25:26 -0400:
  Until now we have encouraged project teams to prepare their own
  library releases as new versions of projects were needed. We've
  started running into a couple of problems with that, with releases
  not coming often enough, or at a bad time in the release cycle, or
  with version numbering not being applied consistently, or without
  proper announcements. To address these issues, the release management
  team is proposing to create a small team of library release managers
  to handle the process around all library releases (clients,
  non-application projects, middleware, Oslo libraries, etc.). This
  will give us a chance to collaborate and review the version numbers
  for new releases, as well as stay on top of stale libraries with
  fixes or features that sit unreleased for a period of time. It will
  also be the first step to creating an automated review process for
  release tags.
  
  The process still needs to be worked out, but I envision it working
  something like this:
  
  The release liaison/PTL for the project team submits a request for
  a new release, including the repo name, the SHA, the series (liberty,
  kilo, etc.), and the proposed version number. The release team
  checks the proposed version number against the unreleased changes
  up to that SHA, and then either updates it to reflect semver or
  uses it as it is provided. The release team would then handle the
  tag push and the resulting announcements.
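The version-number check described above amounts to computing the minimal semver bump implied by the unreleased changes. A hypothetical sketch, with change-type labels invented for illustration rather than taken from any release tooling:

```python
# Hypothetical semver check: given the kinds of unreleased changes,
# propose (or validate) the next version number.
def next_version(current, change_types):
    """Return the smallest semver bump covering the given change types."""
    major, minor, patch = (int(part) for part in current.split("."))
    if "incompatible" in change_types:
        return "{}.0.0".format(major + 1)
    if "feature" in change_types:
        return "{}.{}.0".format(major, minor + 1)
    return "{}.{}.{}".format(major, minor, patch + 1)

print(next_version("1.4.2", ["feature", "bugfix"]))  # 1.5.0
print(next_version("1.4.2", ["bugfix"]))             # 1.4.3
print(next_version("1.4.2", ["incompatible"]))       # 2.0.0
```

The release team could compare a liaison's proposed number against this computed floor and bump it when the proposal undershoots.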
  
  There would be a few conditions under which a new release might not
  be immediately approved, but really only when the gate is wedged
  and during freeze periods. The point of the change isn't to block
  releases, just ensure consistency and good timing.
  
  We have some tools to let us look for unreleased changes, and
  eventually we can set up a recurring job to build that report so
  we can propose releases to project teams with a large release
  backlog. That will likely come later, though.
  
  We can also pre-announce proposed releases if folks find that useful.
  
  We will need a small number of volunteers to join this team, and
  start building the required expertise in understanding the overall
  state of the project, and in semantic versioning. We do not necessarily
  want a liaison from every project -- think of this as the proto-team
  for the group that eventually has core reviewer rights on the release
  automation repository.
 
 The change to update the ACLs is https://review.openstack.org/189856
 
 I would appreciate a review to ensure I've not missed any library-like
 things, and so that all projects understand which repositories this
 affects.

The spec describing a repository and tools for starting to automate tag
reviewing is in gerrit at https://review.openstack.org/#/c/191193/

If you have any interest in library releases, please review and comment
on the spec.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Security] Nominating Michael McCune for Security CoreSec

2015-06-15 Thread McPeak, Travis
I'd like to propose Michael McCune for CoreSec membership.

I've worked with Michael (elmiko) on numerous security tasks and
bugs, and he has a great grasp on security concepts and is very active
in the OpenStack security community.  I think he would be a natural
choice for CoreSec.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stackforge projects are not second class citizens

2015-06-15 Thread Thierry Carrez
Joe Gordon wrote:
 [...]
 Below is a list of the first few projects to join OpenStack after
 the big tent, all of which have now been part of OpenStack for at least
 two months.[1]
 
 * Magnum - Tue Mar 24 20:17:36 2015
 * Murano - Tue Mar 24 20:48:25 2015
 * Congress - Tue Mar 31 20:24:04 2015
 * Rally - Tue Apr 7 21:25:53 2015 
 
 When looking at stackalytics [2] for each project, we don't see any
 noticeably change in number of reviews, contributors, or number of
 commits from before and after each project joined OpenStack.

Also note that release and summit months are traditionally less active
(some would say totally dead), so comparing April-May to anything else
is likely to not mean much. I'd wait for a complete cycle before
answering this question. Or at the very least compare it to
October-November from the previous cycle.

If we do so for the few projects that existed in October 2014, that
would point to a rather steep increase:

Look at Oct/Nov in:
http://stackalytics.com/?module=murano-group&metric=commits&release=kilo

And compare to April/May in:
http://stackalytics.com/?module=murano-group&metric=commits&release=liberty

Same for Rally:
http://stackalytics.com/?module=rally-group&metric=commits&release=kilo
http://stackalytics.com/?module=rally-group&metric=commits&release=liberty

Only Congress was slightly more active in the first months of Kilo than
in the first months of Liberty.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Egor Guz
+1 for non-Barbican support first; unfortunately, Barbican is not very well 
adopted in existing installations.

Madhuri, also please keep in mind we should come up with a solution which 
works with Swarm and Mesos as well in the future.

—
Egor

From: Madhuri Rai madhuri.ra...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, June 15, 2015 at 0:47
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Hi,

Thanks Adrian for the quick response. Please find my response inline.

On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto 
adrian.o...@rackspace.com wrote:
Madhuri,

On Jun 14, 2015, at 10:30 PM, Madhuri Rai 
madhuri.ra...@gmail.com wrote:

Hi All,

This is to bring the blueprint  
secure-kubernetes (https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes)
 in discussion. I have been trying to figure out what could be the possible 
change area to support this feature in Magnum. Below is just a rough idea on 
how to proceed further on it.

This task can be further broken in smaller pieces.

1. Add support for TLS in python-k8sclient.
The current auto-generated code doesn't support TLS. So this work will be to 
add TLS support in kubernetes python APIs.

2. Add support for Barbican in Magnum.
Barbican will be used to store all the keys and certificates.

Keep in mind that not all clouds will support Barbican yet, so this approach 
could impair adoption of Magnum until Barbican is universally supported. It 
might be worth considering a solution that would generate all keys on the 
client, and copy them to the Bay master for communication with other Bay nodes. 
This is less secure than using Barbican, but would allow for use of Magnum 
before Barbican is adopted.

+1, I agree. One question here, we are trying to secure the communication 
between magnum-conductor and kube-apiserver. Right?


If both methods were supported, the Barbican method should be the default, and 
we should put warning messages in the config file so that when the 
administrator relaxes the setting to use the non-Barbican configuration he/she 
is made aware that it requires a less secure mode of operation.

In the non-Barbican approach, the client will generate the keys and pass the 
location of the keys to the magnum services. Then, again, the heat template 
will copy and configure the kubernetes services on the master node, same as in 
the step below.


My suggestion is to completely implement the Barbican support first, and follow 
up that implementation with a non-Barbican option as a second iteration for the 
feature.

How about implementing the non-Barbican support first, as this would be easy to 
implement, so that we can first concentrate on Points 1 and 3? Then, after 
that, we can work on Barbican support with more insight.

Another possibility would be for Magnum to use its own private installation of 
Barbican in cases where it is not available in the service catalog. I dislike 
this option because it creates an operational burden for maintaining the 
private Barbican service, and additional complexities with securing it.

In my opinion, the installation of Barbican should be independent of Magnum. My 
idea here is that if a user wants to store his/her keys in Barbican, then 
he/she will install it.
We could have a config parameter like store_secure: when True, the keys are 
stored in Barbican; otherwise they are not.
What do you think?
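A minimal sketch of how such a toggle might be read, using Python's stdlib configparser; the option name store_secure comes from this thread, but the section name and file layout are invented for illustration and are not Magnum's actual configuration:

```python
import configparser

# Hypothetical config fragment illustrating the proposed toggle.
sample = """
[certificates]
store_secure = True
"""

conf = configparser.ConfigParser()
conf.read_string(sample)

# When True, keys go to Barbican; when False, the less secure
# client-generated-key path is used instead.
use_barbican = conf.getboolean("certificates", "store_secure")
print(use_barbican)  # True
```

In practice Magnum would register this through its normal configuration machinery; the point is only that the Barbican path should be the default and relaxing it should be an explicit operator action.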

3. Add support of TLS in Magnum.
This work mainly involves supporting the use of key and certificates in magnum 
to support TLS.

The user generates the keys, certificates and store them in Barbican. Now there 
is two way to access these keys while creating a bay.

Rather than "the user generates the keys…", perhaps it might be better to word 
that as "the magnum client library code generates the keys for the user…".

It is the user here. In my opinion, there could be users who don't want to use 
the magnum client but rather the APIs directly; in that case, the user will 
generate the keys themselves.

In our first implementation, we can support the user generating the keys, and 
then later the client generating them.

1. Heat will access Barbican directly.
While creating bay, the user will provide this key and heat templates will 
fetch this key from Barbican.

I think you mean that Heat will use the Barbican key to fetch the TLS key for 
accessing the native API service running on the Bay.
Yes.

2. Magnum-conductor access Barbican.
While creating bay, the user will provide this key and then Magnum-conductor 
will fetch this key from Barbican and provide this key to heat.

Then heat will copy this files on kubernetes master node. Then bay will use 
this key to start a Kubernetes services signed with 

[openstack-dev] [Magnum] Make functional tests voting

2015-06-15 Thread Tom Cammann

Hello,

I haven't seen any false positives in a few weeks from the functional
tests, but I have seen a couple of reviewers missing the -1 from the 
non-voting job
when there has been a legitimate failure.

I think now would be a good time to turn 'check-functional-dsvm-magnum' to
voting.

Thanks,
Tom


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-15 Thread Dmitry Tantsur

On 06/15/2015 07:07 PM, Jay Pipes wrote:

It has come to my attention in [1] that the microversion spec for Nova
[2] and Ironic [3] have used the project name -- i.e. Nova and Ironic --
instead of the name of the API -- i.e. OpenStack Compute and
OpenStack Bare Metal -- in the HTTP header that a client passes to
indicate a preference for or knowledge of a particular API microversion.

The original spec said that the HTTP header should contain the name of
the service type returned by the Keystone service catalog (which is also
the official name of the REST API). I don't understand why the spec was
changed retroactively and why Nova has been changed to return
X-OpenStack-Nova-API-Version instead of X-OpenStack-Compute-API-Version
HTTP headers [4].

To be blunt, Nova is the *implementation* of the OpenStack Compute API.
Ironic is the *implementation* of the OpenStack BareMetal API.

The HTTP headers should never have been changed like this, IMHO, and I'm
disappointed that they were. In fact, it looks like a very select group
of individuals pushed through this change [5] with little to no input
from the mailing list or community.

Since no support for these headers has yet to land in the client
packages, can we please reconsider this?


ironicclient has had support for the header for a while, and anyway we 
released it with Ironic Kilo, so I guess changing it now would be a breaking 
change.


Also, while the only implementation and source of authority for the 
baremetal API is Ironic, I'm not sure there's a point in calling it the 
baremetal API, but I'm neutral on this suggestion modulo the compatibility 
break.




Thanks,
-jay

[1] https://review.openstack.org/#/c/187112/
[2]
https://github.com/openstack/nova-specs/blob/master/specs/kilo/implemented/api-microversions.rst

[3]
https://github.com/openstack/ironic-specs/blob/master/specs/kilo/api-microversions.rst

[4] https://review.openstack.org/#/c/155611/
[5] https://review.openstack.org/#/c/153183/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Magnum] Make functional tests voting

2015-06-15 Thread Davanum Srinivas
+1 from me Tom.

On Mon, Jun 15, 2015 at 1:15 PM, Tom Cammann tom.camm...@hp.com wrote:
 Hello,

 I haven't seen any false positives in a few weeks from the functional
 tests, but I have seen a couple of reviewers missing the -1 from the non-voting
 job
 when there has been a legitimate failure.

 I think now would be a good time to turn 'check-functional-dsvm-magnum' to
 voting.

 Thanks,
 Tom


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

2015-06-15 Thread Jeff Peeler

On Sun, Jun 14, 2015 at 05:48:48PM +, Steven Dake (stdake) wrote:

I am proposing Harm Waites for the Kolla core team.


+1!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Make functional tests voting

2015-06-15 Thread Adrian Otto
Tom,

Yes, let’s make it voting now. I went through the full review queue over the 
weekend, and I did find a few examples of functional tests timing out after 2 
hours. It did not look to me like they actually ran, suggesting a malfunction 
in the setup of the test nodes. There were only one or two like that in the few 
dozen patches I looked at. I think I ordered one recheck. If that happens a 
lot, we might need to revisit this decision, but in all honesty I’d prefer to 
have the pressure of a malfunctioning gate blocking us so that we are motivated 
to resolve the issue at the root cause.

Thanks,

Adrian

 On Jun 15, 2015, at 10:21 AM, Davanum Srinivas dava...@gmail.com wrote:
 
 +1 from me Tom.
 
 On Mon, Jun 15, 2015 at 1:15 PM, Tom Cammann tom.camm...@hp.com wrote:
 Hello,
 
 I haven't seen any false positives in a few weeks from the functional
  tests, but I have seen a couple of reviewers missing the -1 from the non-voting
  job
  when there has been a legitimate failure.
 
 I think now would be a good time to turn 'check-functional-dsvm-magnum' to
 voting.
 
 Thanks,
 Tom
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 -- 
 Davanum Srinivas :: https://twitter.com/dims
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova][ironic] Microversion API HTTP header

2015-06-15 Thread Sean Dague
On 06/15/2015 01:07 PM, Jay Pipes wrote:
 It has come to my attention in [1] that the microversion spec for Nova
 [2] and Ironic [3] have used the project name -- i.e. Nova and Ironic --
 instead of the name of the API -- i.e. OpenStack Compute and
 OpenStack Bare Metal -- in the HTTP header that a client passes to
 indicate a preference for or knowledge of a particular API microversion.
 
 The original spec said that the HTTP header should contain the name of
 the service type returned by the Keystone service catalog (which is also
 the official name of the REST API). I don't understand why the spec was
 changed retroactively and why Nova has been changed to return
 X-OpenStack-Nova-API-Version instead of X-OpenStack-Compute-API-Version
 HTTP headers [4].
 
 To be blunt, Nova is the *implementation* of the OpenStack Compute API.
 Ironic is the *implementation* of the OpenStack BareMetal API.
 
 The HTTP headers should never have been changed like this, IMHO, and I'm
 disappointed that they were. In fact, it looks like a very select group
 of individuals pushed through this change [5] with little to no input
 from the mailing list or community.
 
 Since support for these headers has yet to land in the client
 packages, can we please reconsider this?

I think you are seeing demons where there are none. I don't think it was
ever really clear in the specification that the official project short
moniker was critical to the spec, versus the code name that everyone uses. While
I didn't weigh in on the review in question, I wouldn't have really seen
an issue with it at the time.

Honestly, we should work through standardization of the service catalog
(as was discussed at Summit) first, before we push out a microversion
on these projects to change this header, especially as that header is the hook
by which projects are versioning now.

-Sean

-- 
Sean Dague
http://dague.net
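To make the disagreement concrete, here is a small sketch of the two header spellings a client would send; the helper function and the version value are purely illustrative, not any client library's API.

```python
# Sketch of the two microversion header spellings under discussion.
# Per the thread, Nova currently emits the project-name form.

def microversion_header(version, use_project_name=True):
    """Build the HTTP header advertising the desired API microversion."""
    name = "Nova" if use_project_name else "Compute"
    return {"X-OpenStack-%s-API-Version" % name: version}

# Project-name form (what Nova changed to):
assert microversion_header("2.10") == {"X-OpenStack-Nova-API-Version": "2.10"}
# Service-type form (what the original spec described):
assert microversion_header("2.10", use_project_name=False) == \
    {"X-OpenStack-Compute-API-Version": "2.10"}
```

The difference is only the middle token, which is exactly why the choice matters: whichever spelling lands in client packages becomes the de facto contract.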

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Adrian Otto
Madhuri,

On Jun 15, 2015, at 12:47 AM, Madhuri Rai madhuri.ra...@gmail.com wrote:

Hi,

Thanks Adrian for the quick response. Please find my response inline.

On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto adrian.o...@rackspace.com wrote:
Madhuri,

On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.com wrote:

Hi All,

This is to bring the blueprint secure-kubernetes
(https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes)
into discussion. I have been trying to figure out what the possible change
areas could be to support this feature in Magnum. Below is just a rough idea of
how to proceed further on it.

This task can be further broken in smaller pieces.

1. Add support for TLS in python-k8sclient.
The current auto-generated code doesn't support TLS, so this work will be to
add TLS support to the Kubernetes Python APIs.

2. Add support for Barbican in Magnum.
Barbican will be used to store all the keys and certificates.

Keep in mind that not all clouds will support Barbican yet, so this approach 
could impair adoption of Magnum until Barbican is universally supported. It 
might be worth considering a solution that would generate all keys on the 
client, and copy them to the Bay master for communication with other Bay nodes. 
This is less secure than using Barbican, but would allow for use of Magnum 
before Barbican is adopted.

+1, I agree. One question here: we are trying to secure the communication
between magnum-conductor and kube-apiserver, right?

We need API services that are on public networks to be secured with TLS, or
another approach that will allow us to implement access control, so that these
APIs can only be accessed by those with the correct keys. This need extends to
all places in Magnum where we expose native APIs.
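As a minimal sketch of the access-control property being described, mutual TLS can be set up with only the Python standard library; the hostname, port, and file paths below are placeholders, not anything Magnum prescribes.

```python
import http.client
import ssl

def make_tls_connection(host, port, ca_cert=None,
                        client_cert=None, client_key=None):
    """Open an HTTPS connection that verifies the server against a CA
    bundle and, when a client certificate is given, presents it so the
    server can authenticate the caller (mutual TLS)."""
    context = ssl.create_default_context(cafile=ca_cert)
    if client_cert and client_key:
        # Present our certificate; a server that requires client auth
        # rejects the handshake for callers without the right keys.
        context.load_cert_chain(certfile=client_cert, keyfile=client_key)
    return http.client.HTTPSConnection(host, port, context=context)

# Connection is lazy: nothing is sent until a request is made.
conn = make_tls_connection("kube-master.example.com", 6443)
```

With the API server configured to require client certificates, only holders of a signed key pair can reach the native API at all, which is the access control discussed above.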

If both methods were supported, the Barbican method should be the default, and 
we should put warning messages in the config file so that when the 
administrator relaxes the setting to use the non-Barbican configuration he/she 
is made aware that it requires a less secure mode of operation.

With non-Barbican support, the client will generate the keys and pass the
location of the keys to the Magnum services. The Heat template will then copy
and configure the Kubernetes services on the master node, the same as in the step below.

Good!
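The client-side generation step could be as simple as the following sketch; the file names and subject CN are placeholders, not anything the thread specifies.

```shell
# Generate a private key and a certificate signing request on the
# client. The signed certificate would then be copied to the Bay
# master by the Heat template, per the flow described above.
openssl genrsa -out magnum-client.key 2048
openssl req -new -key magnum-client.key \
    -subj "/CN=magnum-client" -out magnum-client.csr
```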

My suggestion is to completely implement the Barbican support first, and follow 
up that implementation with a non-Barbican option as a second iteration for the 
feature.

How about implementing the non-Barbican support first, as this would be easy to
implement, so that we can first concentrate on Points 1 and 3? Then, after
that, we can work on Barbican support with more insight.

Another possibility would be for Magnum to use its own private installation of 
Barbican in cases where it is not available in the service catalog. I dislike 
this option because it creates an operational burden for maintaining the 
private Barbican service, and additional complexities with securing it.

In my opinion, installation of Barbican should be independent of Magnum. My
idea here is: if users want to store their keys in Barbican, then they will
install it.
We could have a config parameter like store_secure which, when True, means we
store the keys in Barbican, and otherwise not.
What do you think?
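Something like the following sketch, using stdlib configparser purely for illustration (the option name store_secure comes from this thread; the section name, values, and helper are assumptions):

```python
import configparser

SAMPLE_CONF = """
[certificates]
# When True, keys and certificates go to Barbican (the more secure
# option); when False, Magnum falls back to local key handling.
store_secure = True
"""

def cert_backend(conf_text):
    """Pick the certificate storage backend from the config text."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    # Default to the secure backend when the option is absent.
    if parser.getboolean("certificates", "store_secure", fallback=True):
        return "barbican"
    return "local"

assert cert_backend(SAMPLE_CONF) == "barbican"
assert cert_backend("[certificates]\nstore_secure = False") == "local"
```

Defaulting to the Barbican backend when the option is unset matches the secure-by-default position argued later in this thread.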

3. Add support of TLS in Magnum.
This work mainly involves supporting the use of key and certificates in magnum 
to support TLS.

The user generates the keys and certificates and stores them in Barbican. Now there
are two ways to access these keys while creating a bay.

Rather than "the user generates the keys…", perhaps it might be better to word
that as "the magnum client library code generates the keys for the user…".

It is the user here. In my opinion, there could be users who don't want to use the
magnum client but rather the APIs directly; in that case, the user will generate the
keys themselves.

Good point.

In our first implementation, we can support the user generating the keys, and
then later the client generating them.

Users should not require any knowledge of how TLS works, or related certificate 
management tools in order to use Magnum. Let’s aim for this.

I do agree that’s a good logical first step, but I am reluctant to agree to it 
without confidence that we will add the additional security later. I want to 
achieve a secure-by-default configuration in Magnum. I’m happy to take measured 
forward progress toward this, but I don’t want the less secure option(s) to be 
the default once more secure options come along. By doing the more secure one 
first, and making it the default, we allow other options only when the 
administrator makes a conscious action to relax security to meet their 
constraints.

So, if our team agrees that doing simple key management without Barbican should 
be our first step, I will agree to that under the condition that we adjust the 

Re: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

2015-06-15 Thread Daneyon Hansen (danehans)

+1

Regards,
Daneyon Hansen
Software Engineer
Email: daneh...@cisco.com
Phone: 303-718-0400
http://about.me/daneyon_hansen

From: Steven Dake (stdake) std...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Sunday, June 14, 2015 at 10:48 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

Hey folks,

I am proposing Harm Waites for the Kolla core team.  He did a fantastic job 
implementing Designate in a container[1], which I’m sure was incredibly 
difficult, and he never gave up even though there were 13 separate patch reviews :) 
 Beyond Harm’s code contributions, he is responsible for 32% of the 
“independent” reviews[2], where independents compose 20% of our total reviewer 
output.  I think we should judge core reviewers on more than output, and I knew 
Harm was core reviewer material from his fantastic review of the cinder 
container, where he picked out 26 specific things that could be broken that 
other core reviewers may have missed ;) [3].  His other reviews are as 
thorough as this particular review was.  Harm is active in IRC and in our 
meetings, for which his TZ fits.  Finally, Harm has agreed to contribute to the 
ansible-multi implementation that we will finish in the liberty-2 cycle.

Consider my proposal to count as one +1 vote.

Any Kolla core is free to vote +1, abstain, or vote –1.  A –1 vote is a veto 
for the candidate, so if you are on the fence, best to abstain :)  Since our 
core team has grown a bit, I’d like 3 core reviewer +1 votes this time around 
(vs Sam’s 2 core reviewer votes).  I will leave the voting open until June 21 
 UTC.  If the vote is unanimous prior to that time or a veto vote is 
received, I’ll close voting and make appropriate adjustments to the gerrit 
groups.

Regards
-steve

[1] https://review.openstack.org/#/c/182799/
[2] 
http://stackalytics.com/?project_type=all&module=kolla&company=%2aindependent
[3] https://review.openstack.org/#/c/170965/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

2015-06-15 Thread Sam Yaple
+1 for me as well. Designate looks great
On Jun 15, 2015 6:43 AM, Ryan Hallisey rhall...@redhat.com wrote:

 +1 Great job with Cinder.

 -Ryan

 - Original Message -
 From: Steven Dake (stdake) std...@cisco.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Sunday, June 14, 2015 1:48:48 PM
 Subject: [openstack-dev] [kolla] Proposal for new core-reviewer Harm Waites

 Hey folks,

 I am proposing Harm Waites for the Kolla core team. He did a fantastic job
 implementing Designate in a container[1] which I’m sure was incredibly
 difficult and never gave up even though there were 13 separate patch
 reviews :) Beyond Harm’s code contributions, he is responsible for 32% of
 the “independent” reviews[2], where independents compose 20% of our total
 reviewer output. I think we should judge core reviewers on more than
 output, and I knew Harm was core reviewer material with his fantastic
 review of the cinder container where he picked out 26 specific things that
 could be broken that other core reviewers may have missed ;) [3]. His other
 reviews are also as thorough as this particular review was. Harm is active
 in IRC and in our meetings for which his TZ fits. Finally Harm has agreed
 to contribute to the ansible-multi implementation that we will finish in
 the liberty-2 cycle.

 Consider my proposal to count as one +1 vote.

 Any Kolla core is free to vote +1, abstain, or vote –1. A –1 vote is a
 veto for the candidate, so if you are on the fence, best to abstain :)
 Since our core team has grown a bit, I’d like 3 core reviewer +1 votes this
 time around (vs Sam’s 2 core reviewer votes). I will leave the voting open
 until June 21  UTC. If the vote is unanimous prior to that time or a
 veto vote is received, I’ll close voting and make appropriate adjustments
 to the gerrit groups.

 Regards
 -steve

 [1] https://review.openstack.org/#/c/182799/
 [2]
 http://stackalytics.com/?project_type=all&module=kolla&company=%2aindependent
 [3] https://review.openstack.org/#/c/170965/

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Difference between Sahara and CloudBrak

2015-06-15 Thread Andrew Lazarev
Hi Jay,

Cloudbreak is a Hadoop installation tool driven by Hortonworks. The main
difference from Sahara is the point of control. In the Hortonworks world you have
Ambari and different platforms (AWS, OpenStack, etc.) on which to run Hadoop. From
Sahara's point of view, you have an OpenStack cluster and want to control everything
from Horizon (Hadoop from any vendor, Murano apps, etc.).

So:
If you are tied to Hortonworks, spend most of your working time in Ambari, and run
Hadoop on different types of clouds - choose CloudBreak.
If you have an OpenStack infrastructure and want to run Hadoop on top of it -
choose Sahara.

Thanks,
Andrew.

On Mon, Jun 15, 2015 at 9:03 AM, Jay Lau jay.lau@gmail.com wrote:

 Hi Sahara Team,

 Just notice that the CloudBreak (https://github.com/sequenceiq/cloudbreak)
 also support running on top of OpenStack, can anyone show me some
 difference between Sahara and CloudBreak when both of them using OpenStack
 as Infrastructure Manager?

 --
 Thanks,

 Jay Lau (Guangya Liu)

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

