Re: [openstack-dev] Step by step OpenStack Icehouse Installation Guide

2014-08-04 Thread Qiming Teng
Thanks for the efforts.  Just want to add some comments on installing
and configuring Heat, since an incomplete setup may cause bizarre
problems later on when users start experimenting.

Please refer to devstack script below for proper configuration of Heat:

https://github.com/openstack-dev/devstack/blob/master/lib/heat#L68

and the function create_heat_accounts at the link below, which helps
create the required Heat accounts:

https://github.com/openstack-dev/devstack/blob/master/lib/heat#L214

Regards,
  Qiming

On Sun, Aug 03, 2014 at 12:49:22PM +0200, chayma ghribi wrote:
 Dear All,
 
 I want to share with you our OpenStack Icehouse Installation Guide for
 Ubuntu 14.04.
 
 https://github.com/ChaimaGhribi/OpenStack-Icehouse-Installation/blob/master/OpenStack-Icehouse-Installation.rst
 
 An additional guide for Heat service installation is also available ;)
 
 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst
 
 Hope these manuals will be helpful and simple!
 Your contributions are welcome, as are questions and suggestions :)
 
 Regards,
 Chaima Ghribi



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Boot an instance assigned to a particular Neutron Port.

2014-08-04 Thread Parikshit Manur
Hi All,
I am trying to boot an instance assigned to a particular
Neutron port. The CLI command nova boot with the argument "--nic port-id=port-uuid"
works as expected. But the same option in the API, passed as
"networks": [{"port": "port-uuid"}], fails to boot the instance with the port attached.

Am I missing some options in the API during the POST to /servers?

Can you suggest any changes in the API to make this work?
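For reference, here is a minimal sketch of the request body the compute v2 servers API expects when attaching to an existing port. The names and UUIDs below are placeholders, and auth/endpoint handling is omitted:

```python
import json

def build_server_request(name, image_ref, flavor_ref, port_id):
    """Build the JSON body for POST /v2/{tenant_id}/servers so the
    instance is attached to an existing Neutron port."""
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
            # Note: the key is "port", not "port-id" as in the CLI flag.
            "networks": [{"port": port_id}],
        }
    }

# Placeholder values for illustration only.
body = build_server_request(
    "test-vm",
    "70a599e0-31e7-49b7-b260-868f441e862b",
    "1",
    "a2f1f29d-571b-4533-907f-5803ab96ead1",
)
print(json.dumps(body, indent=2))
```

Running the CLI with --debug shows the exact body it sends, which is handy for comparing against what your client builds.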

Thanks,
Parikshit Manur



Re: [openstack-dev] Boot an instance assigned to a particular Neutron Port.

2014-08-04 Thread tpiperat...@gmail.com
Hi,
I use the same argument as you and it works well. You can run the command
with --debug to get more information.



tpiperat...@gmail.com
 
From: Parikshit Manur
Date: 2014-08-04 14:22
To: 'openst...@lists.openstack.org'; openstack-dev@lists.openstack.org
Subject: [openstack-dev] Boot an instance assigned to a particular Neutron Port.
Hi All,
I am trying to boot an instance assigned to a particular
Neutron port. The CLI command nova boot with the argument "--nic port-id=port-uuid"
works as expected. But the same option in the API, passed as
"networks": [{"port": "port-uuid"}], fails to boot the instance with the port attached.
 
Am I missing some options in the API during the POST to /servers?

Can you suggest any changes in the API to make this work?
 
Thanks,
Parikshit Manur
 


[openstack-dev] [Nova] VMware update

2014-08-04 Thread Gary Kotton
Hi,
A few updates regarding the driver:

  1.  Spawn refactor. This is coming along very nicely. It would be great if
the following patches could get some TLC (this will enable us to start working
on the next phases: finishing the spawn treatment and upgrading the code to
work with oslo.vmware):
 *   https://review.openstack.org/#/c/104146/
 *   https://review.openstack.org/#/c/105454/
  2.  ESX driver deprecation:
 *   https://review.openstack.org/101744 - this will enable a setup using 
the ESX driver to upgrade to the VC driver
 *   https://review.openstack.org/#/c/108854/ - ESX driver deprecation - 
this needs a few additional eyes

Thanks
Gary


[openstack-dev] [Nova] Block-migration keeping migrating state.

2014-08-04 Thread Joe
Hi folks,


The VM keeps the "migrating" status in my OpenStack environment
when I run "nova live-migration --block-migrate vm_name dest_compute_node".
Then I checked the instance log on
dest_compute_node (/var/log/libvirt/qemu/instance-0331.log) and
found "...Completed 99 %^MDisabling I/O throttling on
'drive-virtio-disk0' due to synchronous I/O" in it.


Actually, I set an I/O rate limit on the disk by adjusting the extra_specs
property of the OpenStack flavor.
As I understand it, libvirt/qemu applies that rate limit through the
instance's XML definition.


Fragments [1]-[3] below are from my environment.


So, can someone please explain this situation to me? Your help is
highly appreciated.
Thanks.


Joe


--[1]: source compute node
# virsh list
 112   instance-0331  paused


--[2]: destination compute node
# virsh list
 170   instance-0331  paused



--[3]: destination compute node
# vim /var/log/libvirt/qemu/instance-0331.log

char device redirected to /dev/pts/22 (label charserial1)
Receiving block device images
Completed 0 %^MCompleted 1 %^MCompleted 2 %^M [...] ^MCompleted 98
%^MCompleted 99 %^MDisabling I/O throttling on 'drive-virtio-disk0' due
to synchronous I/O.



Re: [openstack-dev] [Nova] Nominating Jay Pipes for nova-core

2014-08-04 Thread Vui Chiap Lam
+1!


From: Michael Still mi...@stillhq.com
Sent: Wednesday, July 30, 2014 2:02 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Nova] Nominating Jay Pipes for nova-core

Greetings,

I would like to nominate Jay Pipes for the nova-core team.

Jay has been involved with nova for a long time now.  He's previously
been a nova core, as well as a glance core (and PTL). He's been around
so long that there are probably other types of core status I have
missed.

Please respond with +1s or any concerns.

References:

  https://review.openstack.org/#/q/owner:%22jay+pipes%22+status:open,n,z

  https://review.openstack.org/#/q/reviewer:%22jay+pipes%22,n,z

  
http://stackalytics.com/?module=nova-group&user_id=jaypipes

As a reminder, we use the voting process outlined at
https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
core team.

Thanks,
Michael

--
Rackspace Australia



Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-08-04 Thread Thierry Carrez
Alex Freedland wrote:
 [...]
 I can envision other tools building a community around them and they too
 should become part of OpenStack operations tooling. Maybe an "Operator
 Tools" program would be a better name?

+1: "Operator Tools" is less ambiguous than "SLA management" or even
"Operations" imho.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Openstack] VM bootting hang on OpenStack Icehouse + Centos 6.4 64bit + KVM

2014-08-04 Thread Anh Tu Nguyen
Thanks Dhanesh, I tried this before. It still hangs... EDD is just a notice,
not an issue, I think.

I'm not sure, but look:

[1.035496] init[1] trap invalid opcode ip:7fc17cab71b2
sp:7fffae373780 error:0 in libc.so.6[7fc17ca98000+17e000]
[1.044709] Kernel panic - not syncing: Attempted to kill init!
[1.045096] Pid: 1, comm: init Not tainted 2.6.32-64-server #128-Ubuntu
[1.045410] Call Trace:
[1.045815]  [815698c3] panic+0x78/0x139
[1.046049]  [8106ab9d] forget_original_parent+0x30d/0x320
[1.046349]  [81069fc4] ? put_files_struct+0xc4/0xf0
[1.046619]  [8106abcb] exit_notify+0x1b/0x1b0
[1.046851]  [8106c4e0] do_exit+0x1c0/0x390
[1.047072]  [8106c705] do_group_exit+0x55/0xd0
[1.047324]  [8107d227] get_signal_to_deliver+0x1d7/0x3d0
[1.047607]  [81012a05] do_signal+0x75/0x1c0
[1.047835]  [81014ea5] ? do_invalid_op+0x95/0xb0
[1.048077]  [81012bad] do_notify_resume+0x5d/0x80
[1.048567]  [81013bdc] retint_signal+0x48/0x8c

I guess that the problem comes from an init error.

Need more help.


2014-08-04 13:59 GMT+07:00 dhanesh1212121212 dhanesh1...@gmail.com:

 Hi ,

 please try this in kernel line. edd=off.

 refer the site below.

 https://access.redhat.com/solutions/47621

 Regards,
 Dhanesh M


 On Mon, Aug 4, 2014 at 11:23 AM, Anh Tu Nguyen ng.tun...@gmail.com
 wrote:

 Hi guys,

  I'm deploying OpenStack Icehouse with KVM on CentOS 6.4 64bit. Everything
  is good now. However, I can't boot VMs (both CentOS and Ubuntu). The Ubuntu
  images I downloaded from https://cloud-images.ubuntu.com; the CentOS one was
  built by myself.

  All VMs get stuck at the boot console; here are the startup logs I copied
  from the log tab:

 Centos 6.5: https://gist.github.com/ngtuna/2f3065b8d48e462e458c
 Ubuntu Lucid: https://gist.github.com/ngtuna/6b60f94ff4b768766c6b

  On the console, I see all VMs hang after this line: "Probing EDD (edd=off to
  disable)... OK"

 I found a post about adding nomodeset, but it was unsuccessful for me:

 http://blog.scottlowe.org/2014/07/18/fix-for-strange-issue-booting-kvm-guests/

 That's very strange. Please help,
 ---Tuna

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





-- 
---Tuna


Re: [openstack-dev] DevStack program change

2014-08-04 Thread Thierry Carrez
Matthew Treinish wrote:
 On Fri, Aug 01, 2014 at 03:50:53PM -0500, Anne Gentle wrote:
 On Fri, Aug 1, 2014 at 10:48 AM, Dean Troyer dtro...@gmail.com wrote:

 I propose we de-program DevStack and consolidate it into the QA program.
  Some of my concerns about doing this in the beginning have proven to be a
 non-issue in practice.  Also, I believe a program's focus can and should be
 wider than we have implemented up to now, and this is a step toward
 consolidating narrowly defined programs.


 Sounds like a good idea to me, as long as QA PTL Matt is good with it.
 Thanks Dean for your service!

 Anne
 
 Yes, I fully support this idea as well. The scopes of the two programs'
 missions are very much aligned, and the two teams are already working closely
 together. So I think that consolidating the DevStack program into the QA
 program is a good idea moving forward.

I'm all for it.

The only reason why we didn't do that in the first place was that the QA
program didn't want devstack (and considered its maintenance was not
part of its mission). If that position changed, merging the two sounds
good. The teams working on it are certainly mostly the same people anyway.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-04 Thread Thierry Carrez
Andreas Jaeger wrote:
 All OpenStack incubated projects and programs that use a -specs
 repository have now been set up to publish these to
 http://specs.openstack.org. With the next patch merged in a *-specs
 repository, the documentation will get published.
 
 The index page contains the published repos as of yesterday, and it will
 be enhanced as more are set up (current patch:
 https://review.openstack.org/111476).
 
 For now, you can reach a repo directly via
 http://specs.openstack.org/$ORGANIZATION/$project-specs, for example:
 http://specs.openstack.org/openstack/qa-specs/
 
 Thanks to Steve Martinelli and to the infra team (especially Clark,
 James, Jeremy and Sergey) for getting this done!

Nice work!

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [ceilometer][nova] extra Ceilometer Samples of the instance gauge

2014-08-04 Thread Qiming Teng

Regarding the intervals, they can be configured in your pipeline.yaml file,
e.g.:

sources:
    - name: meter_source
      interval: 600   # change this to a smaller value if you like
      meters:
          - "*"
      sinks:
          - meter_sink

Regards,
  - Qiming

On Wed, Jul 30, 2014 at 02:16:31PM -0400, Mike Spreitzer wrote:
 In a normal DevStack install, each Compute instance causes one Ceilometer 
 Sample every 10 minutes.  Except, there is an extra one every hour.  And a 
 lot of extra ones at the start.  What's going on here?
 
 For example:
 
 $ ceilometer sample-list -m instance -q resource=9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab
 +--------------------------------------+----------+-------+--------+----------+----------------------------+
 | Resource ID                          | Name     | Type  | Volume | Unit     | Timestamp                  |
 +--------------------------------------+----------+-------+--------+----------+----------------------------+
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T18:09:28        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T18:00:54.009877 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T17:59:28        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T17:49:27        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T17:39:27        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T17:29:27        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T17:19:27        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T17:09:27        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T17:00:07.002075 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T16:59:27        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T16:49:27        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T16:39:27        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T16:29:27        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T16:19:27        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T16:09:26        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T16:00:20.172520 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T15:59:27        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T15:49:26        |
 ...
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T05:19:21        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T05:09:21        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T05:00:41.909634 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:59:21        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:49:21        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:39:21        |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:32:55.049799 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:32:54.834377 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:32:51.905095 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:32:40.962977 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:32:40.201907 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:32:40.091348 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:32:39.858939 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:32:39.693631 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:32:39.523561 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:32:39.421295 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 2014-07-30T04:32:39.411968 |
 | 9108b64e-0e30-45fa-9fdf-ccc2b9cf8dab | instance | gauge | 1.0    | instance | 
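One way to see the extra samples is to compute the gaps between consecutive timestamps: regular polling shows up as roughly 600-second gaps, while the top-of-hour samples sit much closer to their neighbours. A standalone sketch using four of the timestamps from the sample-list output above:

```python
from datetime import datetime

# Four consecutive timestamps taken from the sample-list output above.
stamps = [
    "2014-07-30T17:49:27",
    "2014-07-30T17:59:28",
    "2014-07-30T18:00:54.009877",
    "2014-07-30T18:09:28",
]

def parse(ts):
    # Some samples carry fractional seconds, some do not.
    fmt = "%Y-%m-%dT%H:%M:%S.%f" if "." in ts else "%Y-%m-%dT%H:%M:%S"
    return datetime.strptime(ts, fmt)

times = sorted(parse(t) for t in stamps)
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
# Gaps near 600 s are the regular 10-minute polling; the short gap is
# the extra top-of-hour sample at 18:00:54.
extra = [g for g in gaps if g < 300]
print(gaps)        # [601.0, 86.009877, 513.990123]
print(len(extra))  # 1
```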

[openstack-dev] Fwd: [NFV][CI][third-party] The plan to bring up Snabb NFV CI for Juno-3

2014-08-04 Thread Luke Gorrie
Hi Steve,

Thanks for the continuing help!

On 29 July 2014 20:25, Steve Gordon sgor...@redhat.com wrote:

 It appears to me the expectation/desire from Mark and Maru here is to see
 a lot more justification of the use cases for this driver and the direction
 of the current implementation


I am attempting to satisfy this. I've posted more information on the code
review and also on the mailing list:
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041662.html

I hope this moves us a step towards satisfying the objections. I am
optimistic because the use case is an important one and to me it seems like
the design is in line with existing practice in Neutron.

Typically third party CI is only provided/required for Nova when
 adding/maintaining a new hypervisor driver - at least that seems to be the
 case so far.


We are not adding a new hypervisor driver, but the VIF_VHOSTUSER feature
does depend on a very recent QEMU (>= 2.1) and libvirt (>= 1.2.7).

I would like to understand what (if any) CI implications this version
requirement has on the Nova side.
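As an aside, one quick way to sanity-check a host against those minimums (QEMU >= 2.1, libvirt >= 1.2.7) is a numeric comparison of dotted version strings. This helper is my own sketch, not part of any OpenStack tooling:

```python
def meets_minimum(version, minimum):
    """Return True if a dotted version string satisfies a minimum,
    comparing components numerically (so "1.2.10" > "1.2.7")."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

# The minimums mentioned above: QEMU >= 2.1 and libvirt >= 1.2.7.
print(meets_minimum("2.1.0", "2.1"))    # True
print(meets_minimum("1.2.2", "1.2.7"))  # False
```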


 I know in your earlier email you mentioned also wanting to use this third
 party CI to also test a number of other scenarios, particularly:

  * Test with NFV-oriented features that are upstream in OpenStack.
  * Test with NFV-oriented changes that are not yet upstream e.g. Neutron
 QoS
 API.

 I am not sure how this would work - perhaps I misunderstand what you are
 proposing? As it stands the third-party CI jobs ideally run on each change
 *submitted* to gerrit so features that are not yet *merged* still receive
 this CI testing today both from the CI managed by infra and the existing
 third-party CI jobs? Or are you simply highlighting that you wish to test
 same with snabbswitch? Just not quite understanding why these were called
 out as separate cases.


Sorry, I didn't explain myself very clearly.

On reflection, the question is quite generic, and I will re-raise it under
the [3rd-party][neutron] label.

There seems to be a chicken-and-egg situation where the CI is supposed to
run tests with a new Neutron driver enabled, but that new Neutron driver
itself is not merged, so it will not be available with the Git refspec that
Gerrit supplies to the CI.

Likely there is already a standard practice for this that we can learn
about from the third-party gurus, e.g. cherry-pick the driver into all tests.


Re: [openstack-dev] [neutron] Spec exceptions are closed, FPF is August 21

2014-08-04 Thread Thierry Carrez
Carl Baldwin wrote:
 Armando's point #2 is a good one.  I see that we should have raised
 awareness of this more than we did.  The bulk of the discussion and
 the development work moved over to the oslo team and I focused energy
 on other things.  What I didn't realize was that the importance of
 this work to Neutron did not transfer along with it and that simply
 delivering the new functionality in oslo by Juno was not sufficient as
 Neutron would need time to incorporate it.
 
 I am at a point now where I have some time to work on this.  If
 reconsideration for Juno is still an option at this time then I think
 what we need to do is to resolve the concerns that are still
 outstanding.  I'll admit that I really don't understand what the
 concerns are.  I believe that the security concerns have been
 addressed.  If you still have concerns around the design of this
 feature please bring them up specifically.

At this point I think it's just timing issues. There are 85 Neutron
blueprints targeted to juno-3, which is a considerable amount,
especially when compared to the 10 merged in juno-2 and the 8 merged in
juno-1. Most of that work is already proposed for review, ready to test
and merge, and still won't make it purely due to reviewer bandwidth.
Establishing a review queue and excluding noise (reviews that have less
chance of making it) from it is critical to optimizing the number of
features we'll end up merging.

The daemon work hasn't merged in oslo.rootwrap yet, the last details are
still being ironed out. I take my share of responsibility for some of
the delays in reviews there, but the fact is we had to take the time to
do it right. The neutron-side code has been ready for some time, but
until oslo.rootwrap includes the feature, you can't really review/test
that -- which means the daemon stuff is not anywhere near the top of the
review pile: it's not even in the pile at this point. I'm not totally
convinced it should jump the queue just because that code was written a
long time ago.

So we can certainly /try/ to include it in Juno, if it's seen as a
critical performance-enhancing feature: hope that we can get it in
oslo.rootwrap very soon and then propose the neutron code for review and
have fast turnaround on it. But I wouldn't bet my money on it -- I think
it might make sense to focus on reviewing stuff that is more likely to
make it (stuff that is already proposed for review) and have general
adoption of a Juno-released rootwrap daemon mode throughout all projects
during Kilo.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-08-04 Thread Sylvain Bauza


On 02/08/2014 04:31, Alex Freedland wrote:

Angus,

Rally is designed as an operations tool. Its purpose is to run a 
production cloud and give an operator tools and data to profile a 
production cloud. It is intended as a first of many such tools.


There is strong support in the community for operations tools being
developed as part of OpenStack, and Rally is the first such successful
community effort.


I can envision other tools building a community around them and they
too should become part of OpenStack operations tooling. Maybe an
"Operator Tools" program would be a better name?





Some tooling already exists for development purposes: DevStack,
Grenade and Tempest are the best known. All of them are part of the QA
program, except DevStack, which will probably soon be integrated into
that QA program as well (see [1]).



IMHO, there are 2 distinct concerns:
 - either we consider that Rally is another great tool for qualifying
OpenStack releases, and then IMHO the QA Program mission statement can
cover this;
 - or we consider that Rally is for operations only (IMHO we would
lose some benefits then) and then possibly a new program could make
sense. That said, looking at the Deployment Program mission
statement, I'm thinking that Rally could be part of it, as it would be
a great addition for TripleO deployments.


-Sylvain



[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-August/041731.html





Alex Freedland
Co-Founder
Mirantis, Inc.




On Thu, Jul 31, 2014 at 3:55 AM, Angus Salkeld 
angus.salk...@rackspace.com mailto:angus.salk...@rackspace.com wrote:


On Sun, 2014-07-27 at 07:57 -0700, Sean Dague wrote:
 On 07/26/2014 05:51 PM, Hayes, Graham wrote:
  On Tue, 2014-07-22 at 12:18 -0400, Sean Dague wrote:
  On 07/22/2014 11:58 AM, David Kranz wrote:
  On 07/22/2014 10:44 AM, Sean Dague wrote:
  Honestly, I'm really not sure I see this as a different program,
  but is really something that should be folded into the QA program.
  I feel like a top level effort like this is going to lead to a lot
  of duplication in the data analysis that's currently going on, as
  well as functionality for better load driver UX.
 
   -Sean
  +1
  It will also lead to pointless discussions/arguments about which
  activities are part of QA and which are part of
  Performance and Scalability Testing.
 
  I think that those discussions will still take place, it will just
  be on a per-repository basis, instead of a per-program one.
 
  [snip]
 
 
  Right, 100% agreed. Rally would remain with its own repo + review
  team, just like grenade.
 
 -Sean
 
 
  Is the concept of a separate review team not the point of a program?

  In the thread from Designate's Incubation request Thierry said [1]:

  Programs just let us bless goals and teams and let them organize
  code however they want, with contribution to any code repo under
  that umbrella being considered official and ATC-status-granting.

  I do think that this is something that needs to be clarified by the
  TC - Rally could not get a PTL if they were part of the QA project,
  but every time we get a program request, the same discussion happens.

  I think that mission statements can be edited to fit new programs as
  they occur, and that it is more important to let teams that have
  been working closely together to stay as a distinct group.

 My big concern here is that many of the things that these efforts
 have been doing are things we actually want much closer to the base.
 For instance, metrics on Tempest runs.

 When Rally was first created it had its own load generator. It took
 a ton of effort to keep the team from duplicating that and instead
 just use some subset of Tempest. Then when measuring showed up, we
 actually said that is something that would be great in Tempest, so
 whoever ran it, be it for Testing, Monitoring, or Performance
 gathering, would have access to that data. But the Rally team went
 off in a corner and did it otherwise. That's caused the QA team to
 have to go and redo this work from scratch with subunit2sql, in a
 way that can be consumed by multiple efforts.

 So I'm generally -1 to this being a separate effort on the basis
 that so far the team has decided to stay in their own sandbox
 instead of participating actively where many of us think the
 functions should be added. I also think this isn't like Designate,
 because this isn't intended to be part of the integrated release.

From reading Boris's email it seems like Rally will provide a Horizon
panel and an API to back it (for the operator to kick off performance
runs and 
Re: [openstack-dev] [horizon] Support for Django 1.7: there's a bit of work, though it looks fixable to me...

2014-08-04 Thread Romain Hardouin
Hi,

Note that Django 1.7 requires Python 2.7 or above [1], while Juno still
needs to be compatible with Python 2.6 (SUSE ES 11 uses 2.6, if my
memory serves).

[1] https://docs.djangoproject.com/en/dev/releases/1.7/#python-compatibility

Best,

Romain


- Original Message -
From: Thomas Goirand z...@debian.org
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Sunday, August 3, 2014 12:55:19 PM
Subject: [openstack-dev] [horizon] Support for Django 1.7: there's a bit of 
work, though it looks fixable to me...

Hi,

The Debian maintainer of Django would like to upload Django 1.7 before
Jessie is frozen on the 5th of November. As for OpenStack, I would like
Icehouse to be in Jessie, since it will be supported by major companies
(RedHat and Canonical both will use Icehouse as LTS, and will work on
security for a longer time than previously planned in the OpenStack
community).

However, Horizon Icehouse doesn't currently work with Django 1.7. The
first thing to fix would be the TEMPLATE_DIRS issue:

./run_tests.sh -N -P || true
Running Horizon application tests
Traceback (most recent call last):
  File
/home/zigo/sources/openstack/icehouse/horizon/build-area/horizon-2014.1.1/manage.py,
line 25, in module
execute_from_command_line(sys.argv)
  File
/usr/lib/python2.7/dist-packages/django/core/management/__init__.py,
line 385, in execute_from_command_line
utility.execute()
[... not useful stack dump ...]
  File /usr/lib/python2.7/dist-packages/django/conf/__init__.py, line
42, in _setup
self._wrapped = Settings(settings_module)
  File /usr/lib/python2.7/dist-packages/django/conf/__init__.py, line
110, in __init__
Please fix your settings. % setting)
django.core.exceptions.ImproperlyConfigured: The TEMPLATE_DIRS setting
must be a tuple. Please fix your settings.
Running openstack_dashboard tests
WARNING:root:No local_settings file found.

Then of course, the rest of the tests are completely broken because
there's no local_settings. Adding a comma at the end of:

TEMPLATE_DIRS = (os.path.join(ROOT_PATH, 'tests', 'templates'))

in horizon/test/settings.py fixes the issue. Note that this works in
both Django 1.6 and 1.7. Some other TEMPLATE_DIRS declarations already
have the comma, so I guess it's fine to add it. Which is why I did this:
https://review.openstack.org/111561
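To illustrate why the comma matters (a standalone sketch; the ROOT_PATH value is a placeholder): without a trailing comma the parentheses are plain grouping, so TEMPLATE_DIRS ends up being a string instead of the tuple Django expects.

```python
import os

ROOT_PATH = "/tmp/horizon"  # placeholder path for illustration

# Parentheses alone do not make a tuple -- this is just a str:
broken = (os.path.join(ROOT_PATH, 'tests', 'templates'))
# The trailing comma is what creates a one-element tuple:
fixed = (os.path.join(ROOT_PATH, 'tests', 'templates'),)

print(type(broken).__name__)  # str
print(type(fixed).__name__)   # tuple
```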

FYI, there's this document that talks about it:
https://docs.djangoproject.com/en/1.7/releases/1.7/#backwards-incompatible-changes-in-1-7

Then, after fixing this, I get this error:
==
ERROR: Failure: TypeError (Error when calling the metaclass bases
function() argument 1 must be code, not str)
--
Traceback (most recent call last):
  File /usr/lib/python2.7/dist-packages/nose/loader.py, line 414, in
loadTestsFromName
addr.filename, addr.module)
  File /usr/lib/python2.7/dist-packages/nose/importer.py, line 47, in
importFromPath
return self.importFromDir(dir_path, fqname)
  File /usr/lib/python2.7/dist-packages/nose/importer.py, line 94, in
importFromDir
mod = load_module(part_fqname, fh, filename, desc)
  File
/home/zigo/sources/openstack/icehouse/horizon/build-area/horizon-2014.1.1/horizon/test/tests/tables.py,
line 28, in <module>
from horizon.test import helpers as test
  File
/home/zigo/sources/openstack/icehouse/horizon/build-area/horizon-2014.1.1/horizon/test/helpers.py,
line 184, in module
class JasmineTests(SeleniumTestCase):
TypeError: Error when calling the metaclass bases
function() argument 1 must be code, not str

There's the same issue in the definition of SeleniumTestCase() in
openstack_dashboard/test/helpers.py (line 365 in Icehouse).

Since I don't really care about selenium (it can't be tested in Debian
because it's non-free), I commented out the class
JasmineTests(SeleniumTestCase), then I get more errors. A few instances
of this one:

  File
"/home/zigo/sources/openstack/icehouse/horizon/build-area/horizon-2014.1.1/horizon/tables/base.py",
line 206, in <lambda>
    "average": lambda data: sum(data, 0.0) / len(data)
TypeError: unsupported operand type(s) for +: 'float' and 'str'
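That failure is easy to reproduce outside Horizon: the summary lambda divides a sum seeded with 0.0, so a single string in the column data breaks it (the data values below are invented for the demonstration):

```python
def average(data):
    # Same shape as the summation lambda in horizon/tables/base.py
    return sum(data, 0.0) / len(data)

ok = average([1, 2, 3])  # numeric data works fine: 2.0

try:
    average([1, '2', 3])  # one stray string and sum() raises
    failed = False
except TypeError:
    failed = True

print(ok, failed)  # 2.0 True
```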

I'm not a Django expert, so it'd be awesome to get help on this. Best
would be that:
1/ Support for Django 1.7 is added to Juno
2/ The changes are backported to Icehouse (even if this doesn't make it
into the stable branch because of "let's stay safe", I can add the
patches as Debian-specific).

Thoughts from the Horizon team would be welcome.

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-compute deadlock

2014-08-04 Thread Qin Zhao
Dear stackers,

FYI.  Eventually I reported this problem to libguestfs. A workaround has been
included in the libguestfs code to fix it. Thanks for your support!
https://bugzilla.redhat.com/show_bug.cgi?id=1123007


On Sat, Jun 7, 2014 at 3:27 AM, Qin Zhao chaoc...@gmail.com wrote:

 Yuriy,

 And I think if we use the proxy object of multiprocessing, the green thread
 will not switch while we call libguestfs.  Is that correct?


 On Fri, Jun 6, 2014 at 2:44 AM, Qin Zhao chaoc...@gmail.com wrote:

 Hi Yuriy,

 I read multiprocessing source code just now.  Now I feel it may not solve
 this problem very easily.  For example, let us assume that we will use the
 proxy object in Manager's process to call libguestfs.  In manager.py, I see
 it needs to create a pipe, before fork the child process. The write end of
 this pipe is required by child process.


 http://sourcecodebrowser.com/python-multiprocessing/2.6.2.1/classmultiprocessing_1_1managers_1_1_base_manager.html#a57fe9abe7a3d281286556c4bf3fbf4d5

 And in Process._bootstrap(), I think we will need to register a function
 to be called by _run_after_forkers(), in order to close the fds inherited
 from the Nova process.


 http://sourcecodebrowser.com/python-multiprocessing/2.6.2.1/classmultiprocessing_1_1process_1_1_process.html#ae594800e7bdef288d9bfbf8b79019d2e

 And we also can not close the write end fd created by Manager in
 _run_after_forkers(). One feasible way may be getting that fd from the 5th
 element of _args attribute of Process object, then skip to close this
 fd  I have not investigate if or not Manager need to use other fds,
 besides this pipe. Personally, I feel such an implementation will be a
 little tricky and risky, because it tightly depends on Manager code. If
 Manager opens other files, or change the argument order, our code will fail
 to run. Am I wrong?  Is there any other safer way?


 On Thu, Jun 5, 2014 at 11:40 PM, Yuriy Taraday yorik@gmail.com
 wrote:

 Please take a look at
 https://docs.python.org/2.7/library/multiprocessing.html#managers -
 everything is already implemented there.
 All you need is to start one manager that would serve all your requests
 to libguestfs. The implementation in stdlib will provide you with all
 exceptions and return values with minimum code changes on Nova side.
 Create a new Manager, register an libguestfs endpoint in it and call
 start(). It will spawn a separate process that will speak with calling
 process over very simple RPC.
 From the looks of it all you need to do is replace tpool.Proxy calls in
 VFSGuestFS.setup method to calls to this new Manager.
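A minimal sketch of that suggestion, with an invented GuestFSWorker standing in for the real libguestfs handle (the real code would register the actual libguestfs wrapper instead):

```python
from multiprocessing.managers import BaseManager


class GuestFSWorker(object):
    """Stand-in for the real libguestfs handle."""

    def launch(self):
        # In the real driver this would start the libguestfs appliance;
        # here it only proves the call executed in the server process.
        return "launched"


class GuestFSManager(BaseManager):
    """Manager whose server process owns all libguestfs objects."""


GuestFSManager.register("GuestFS", GuestFSWorker)

if __name__ == "__main__":
    manager = GuestFSManager()
    manager.start()          # spawns the separate server process
    fs = manager.GuestFS()   # proxy; method calls run in the server
    result = fs.launch()     # simple RPC under the hood
    manager.shutdown()
```

Exceptions raised in the server process are re-raised in the caller, which is what makes the proxy nearly transparent to the calling code.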


 On Thu, Jun 5, 2014 at 7:21 PM, Qin Zhao chaoc...@gmail.com wrote:

 Hi Yuriy,

 Thanks for reading my bug!  You are right. Python 3.3 or 3.4 should not
 have this issue, since they can secure the file descriptors. Before
 OpenStack moves to Python 3, we may still need a solution. Calling
 libguestfs in a separate process seems to be a way. This way, Nova code can
 close those fd by itself, not depending upon CLOEXEC. However, that will be
 an expensive solution, since it requires a lot of code change. At least we
 need to write code to pass the return value and exception between these two
 processes. That will make this solution very complex. Do you agree?


 On Thu, Jun 5, 2014 at 9:39 PM, Yuriy Taraday yorik@gmail.com
 wrote:

 This behavior of os.pipe() has changed in Python 3.x so it won't be an
 issue on newer Python (if only it was accessible for us).

 From the looks of it you can mitigate the problem by running
 libguestfs requests in a separate process (multiprocessing.managers comes
 to mind). This way the only descriptors child process could theoretically
 inherit would be long-lived pipes to main process although they won't leak
 because they should be marked with CLOEXEC before any libguestfs request 
 is
 run. The other benefit is that this separate process won't be busy opening
 and closing tons of fds so the problem with inheriting will be avoided.
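For reference, marking a pipe's descriptors close-on-exec looks like this (POSIX-only; on Python 3.4+ os.pipe() already returns non-inheritable descriptors, which is the behavior change discussed above):

```python
import fcntl
import os

r, w = os.pipe()


def set_cloexec(fd):
    # Set FD_CLOEXEC so an exec'd child -- e.g. an external command run
    # from another green thread -- cannot inherit this descriptor.
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)


for fd in (r, w):
    set_cloexec(fd)

cloexec_set = all(
    fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC for fd in (r, w)
)
print(cloexec_set)  # True
```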


 On Thu, Jun 5, 2014 at 2:17 PM, laserjetyang laserjety...@gmail.com
 wrote:

   Will this patch of Python fix your problem?
 http://bugs.python.org/issue7213


 On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao chaoc...@gmail.com wrote:

  Hi Zhu Zhu,

 Thank you for reading my diagram!   I need to clarify that this
 problem does not occur during data injection.  Before creating the ISO, 
 the
 driver code will extend the disk. Libguestfs is invoked in that time 
 frame.

 And now I think this problem may occur at any time, if the code uses
 tpool to invoke libguestfs and one external command is executed in
 another green thread simultaneously.  Please correct me if I am wrong.

 I think one simple solution for this issue is to call the libguestfs
 routine in a green thread, rather than in another native thread. But it
 will impact performance very much, so I do not think that is an
 acceptable solution.



  On Wed, Jun 4, 2014 at 12:00 PM, 

[openstack-dev] [nova] Why people don't close bugs?

2014-08-04 Thread Dmitry Guryanov
Hello!

I looked through the Launchpad bugs and it seems there are a lot of bugs 
which are already fixed but still open; here are 3 of them:

https://bugs.launchpad.net/nova/+bug/909096
https://bugs.launchpad.net/nova/+bug/1206762
https://bugs.launchpad.net/nova/+bug/1208743

I've posted comments on these bugs, but nobody replied. How is it 
possible to close them?

-- 
Dmitry Guryanov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Why people don't close bugs?

2014-08-04 Thread Tom Fifield

On 04/08/14 17:46, Dmitry Guryanov wrote:

Hello!

I looked through launchpad bugs and it seems there are a lot of bugs,
which are fixed already, but still open, here are 3 ones:

https://bugs.launchpad.net/nova/+bug/909096

https://bugs.launchpad.net/nova/+bug/1206762

https://bugs.launchpad.net/nova/+bug/1208743

I've posted comments on these bugs, but nobody replied. How is it
possible, to close them?


Hi Dmitry,

Thanks for looking into the bug tracker. We definitely always need more 
people helping with triage (https://wiki.openstack.org/BugTriage).


If you join the Nova Bug Team (https://launchpad.net/~nova-bugs) you 
will be able to change the bugs' status as appropriate.


Regards,


Tom

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance Store Future

2014-08-04 Thread Flavio Percoco
On 08/01/2014 10:41 PM, Jay Pipes wrote:
 cc'ing ML since it's an important discussion, IMO...
 
 On 07/31/2014 11:54 AM, Arnaud Legendre wrote:
 Hi Jay,

 I would be interested if you could share your point of view on this
 item: we want to make the glance stores a standalone library
 (glance.stores) which would be consumed directly by Nova and Cinder.
 
 Yes, I have been enthusiastic about this effort for a long time now :)
 In fact, I have been pushing a series of patches (most merged at this
 point) in Nova to clean up the (very) messy nova.image.glance module and
 standardize the image API in Nova.
 
 The messiest part of the current image API in Nova, by far, is the
 nova.image.glance.GlanceImageService.download() method, which you
 highlight below. The reason it is so messy is that the method does
 different things (and returns different things!) depending on how you
 call it and what arguments you provide. :(

+1

 
 I think it would be nice to get your pov since you worked a lot on
 the Nova image interface recently. To give you an example:

 Here
 https://github.com/openstack/nova/blob/master/nova/image/glance.py#L333,
  we would do:

 1. location = get_image_location(image_id),
 2. get(location) from the
 glance.stores library like for example rbd
 (https://github.com/openstack/glance/blob/master/glance/store/rbd.py#L206)

 
 Yup. Though I'd love for this code to live in olso, not glance...

I think both places make sense. The reason why we decided to keep it
under glance is because it's still an important piece of the project and
the team is willing to maintain it, plus the team is already familiar
with the code.

Not sure how strong those points are, but AFAIR those two are the reasons
why the lib lives under glance. Here's the link to the email thread where
this was discussed:

http://lists.openstack.org/pipermail/openstack-dev/2013-December/022907.html

 
 Plus, I'd almost prefer to see an interface that hides the location URIs
 entirely and makes the discovery of those location URIs entirely
 encapsulated within glance.store. So, for instance, instead of getting
 the image location using a call to glanceclient.show(), parsing the
 locations collection from the v2 API response, and passing that URI to
 the glance.store.get() function, I'd prefer to see an interface more
 like this:

FWIW, the API is not finished (probably not even started :D). The first
step we wanted to pursue was pulling the code out of Glance, and only
then working on an improved, more secure and more consistent API. Your
proposal looks neat; I'll propose a design session for the glance.store
API. :D
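Jay's snipped interface sketch isn't reproduced above; purely as an illustration of an API that encapsulates location discovery, something of this shape might work (all names here are hypothetical, not the real glance.store API):

```python
class DictStore(object):
    """Toy backend standing in for a real driver such as rbd or swift."""

    def __init__(self, blobs):
        self.blobs = blobs

    def get(self, location):
        return self.blobs[location]


class ImageDataAPI(object):
    """Hypothetical facade: callers pass image IDs, never location URIs."""

    def __init__(self, stores, locations):
        self._stores = stores        # scheme -> store driver
        self._locations = locations  # image_id -> location URI

    def get(self, image_id):
        # Resolve the backend location internally instead of making the
        # caller parse the v2 API's 'locations' collection.
        location = self._locations[image_id]
        scheme = location.split('://', 1)[0]
        return self._stores[scheme].get(location)


# The caller never sees the 'rbd://...' URI itself.
store = DictStore({'rbd://pool/img-1': b'image bytes'})
api = ImageDataAPI({'rbd': store}, {'img-1': 'rbd://pool/img-1'})
data = api.get('img-1')
```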


Thanks for your feedback,
Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-08-04 Thread Boris Pavlovic
Thierry,

I like the "Operations" name for this program. It totally makes sense, as we
can keep this name and just extend the mission of the program to include
projects like satori, rubick, logaas and others in the future.


Sylvain,

Thank you for your input.

In my opinion, having 2 separated programs doesn't mean that these 2
programs shouldn't collaborate and work together. Imho we are all one big
community and should help each other.

Actually, the goal of the Operations program is to organize centralized
work on an OpenStack API that will provide OpenStack operators with
everything required to analyze how a live production OpenStack cloud
performs, and, in case of issues, to detect and debug them quickly.

If we present this via an OpenStack API, it will make the life of QA
simpler, because they won't need to do all this via scripts in the gates
(which is a much harder task).


Best regards,
Boris Pavlovic




On Mon, Aug 4, 2014 at 1:04 PM, Sylvain Bauza sba...@redhat.com wrote:


 Le 02/08/2014 04:31, Alex Freedland a écrit :

 Angus,

  Rally is designed as an operations tool. Its purpose is to run a
 production cloud and give an operator tools and data to profile a
 production cloud. It is intended as a first of many such tools.

  There is a strong support in the community that operations tools should
 be developed as part of OpenStack and Rally is the first such successful
 community effort.

  I can envision other tools building a community around them and they too
 should become part of OpenStack operations tooling.  Maybe Operator Tools
 program would be a better name?




 Some tooling already exists for development purposes: Devstack, Grenade and
 Tempest being the most well known. All of them are part of the QA program,
 except Devstack, which is probably soon to be integrated as well in that QA
 program (see [1])


 IMHO, there are 2 distinct concerns :
  - either we consider that Rally is another great tool for qualifying
 OpenStack releases, and then IMHO, the QA Program mission statement can
 cover this.
  - or, we consider that Rally is for operations only (IMHO we would lose
 some benefits then) and then possibly a new program could make sense. That
 said, by looking at the Deployment Program mission statement, I'm thinking
 that Rally could be part of it, as it would be a great addition for TripleO
 deployments.

 -Sylvain



 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/041731.html




 Alex Freedland
 Co-Founder
 Mirantis, Inc.




 On Thu, Jul 31, 2014 at 3:55 AM, Angus Salkeld 
 angus.salk...@rackspace.com wrote:

  On Sun, 2014-07-27 at 07:57 -0700, Sean Dague wrote:
  On 07/26/2014 05:51 PM, Hayes, Graham wrote:
   On Tue, 2014-07-22 at 12:18 -0400, Sean Dague wrote:
   On 07/22/2014 11:58 AM, David Kranz wrote:
   On 07/22/2014 10:44 AM, Sean Dague wrote:
   Honestly, I'm really not sure I see this as a different program,
 but is
   really something that should be folded into the QA program. I feel
 like
   a top level effort like this is going to lead to a lot of
 duplication in
   the data analysis that's currently going on, as well as
 functionality
   for better load driver UX.
  
-Sean
   +1
   It will also lead to pointless discussions/arguments about which
   activities are part of QA and which are part of
   Performance and Scalability Testing.
  
   I think that those discussions will still take place, it will just be
 on
   a per repository basis, instead of a per program one.
  
   [snip]
  
  
   Right, 100% agreed. Rally would remain with it's own repo + review
 team,
   just like grenade.
  
  -Sean
  
  
   Is the concept of a separate review team not the point of a program?
  
   In the the thread from Designate's Incubation request Thierry said
 [1]:
  
   Programs just let us bless goals and teams and let them organize
   code however they want, with contribution to any code repo under that
   umbrella being considered official and ATC-status-granting.
  
   I do think that this is something that needs to be clarified by the
 TC -
   Rally could not get a PTL if they were part of the QA project, but
 every
   time we get a program request, the same discussion happens.
  
   I think that mission statements can be edited to fit new programs as
   they occur, and that it is more important to let teams that have been
   working closely together to stay as a distinct group.
 
  My big concern here is that many of the things that these efforts have
  been doing are things we actually want much closer to the base. For
  instance, metrics on Tempest runs.
 
  When Rally was first created it had it's own load generator. It took a
  ton of effort to keep the team from duplicating that and instead just
  use some subset of Tempest. Then when measuring showed up, we actually
  said that is something that would be great in Tempest, so whoever ran
  it, be it for Testing, Monitoring, or Performance gathering, would have
  access to that data. But the Rally 

[openstack-dev] [qa] In-tree functional test vision

2014-08-04 Thread Chris Dent


In the "Thoughts on the patch test failure rate and moving forward"
thread[1] there's discussion of moving some of the burden for
functional testing to the individual projects. This seems like a good
idea to me, but also seems like it could be a source of confusion so I
thought I'd start another thread to focus on the details of just this
topic, separate from the gate-oriented discussion in the other.

In a couple of messages[2] Sean mentions "the vision". Is there a wiki
page or spec or other kind of document where this nascent vision is
starting to form? Even if we can't quite get started just yet, it
would be good to have an opportunity to think about the constraints and
goals that we'll be working with.

Not just the goal of moving tests around, but what, for example, makes
a good functional test?

For constraints: Will tempest be available as a stable library? Is using
tempest (or other same library across all projects) a good or bad thing?
Seems there's some disagreement on both of these.

Personally I'm quite eager to to vastly increase the amount of testing
I can do on my own machine(s) before letting the gate touch my code.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-July/thread.html#41057
[2]
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041188.html
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041252.html

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-04 Thread Steve Gordon
- Original Message -
 From: Andreas Jaeger a...@suse.com
 To: openstack-dev@lists.openstack.org
 
 All OpenStack incubated projects and programs that use a -specs
 repository have now been setup to publish these to
 http://specs.openstack.org. With the next merged in patch of a *-specs
 repository, the documentation will get published.

Is there a way to manually trigger this for Nova and Neutron? As these projects 
are well past their self-enforced spec approval deadlines there may not be any 
merges to their respective -specs repositories for a while now.

Thanks,

-- 
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa][rally] Why we need subunit2sql?

2014-08-04 Thread Sean Dague
On 08/02/2014 01:05 PM, Boris Pavlovic wrote:
 Hi stackers,
 
 
 Could somebody explain to me why we need subunit2sql?
 
 It seems like a part of functionality that is already implemented in Rally:
 http://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
 
 With Rally we can run tempest tests, and Rally will parse and store the
 results in its DB.
 Having one instance of the Rally DB in the gates will allow us to collect
 all results of all tempest runs. Also, Rally is already integrated in the
 gates, so there won't be issues with getting this done.
 
 By the way, the Rally team is ready to help with this task.
 
 Thoughts? 
 
 Best regards,
 Boris Pavlovic 

This is actually the crux of why I think rally belongs in the QA
program. I consider Tempest result summary to be an essential but not
yet implemented part of Tempest itself. The Rally team has implemented
some of these features we want in base Tempest, but not in a way that
can be done without using the whole Rally toolchain, which shouldn't be
required. We should be building components that can be built into the
pipeline that people want to assemble.

The Rally team is arguing that Rally is not part of QA in one thread,
then saying that all upstream QA should be done with Rally in another
thread. So now I'm even more confused by the program proposal.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-04 Thread Andreas Jaeger
On 08/04/2014 01:52 PM, Steve Gordon wrote:
 - Original Message -
 From: Andreas Jaeger a...@suse.com
 To: openstack-dev@lists.openstack.org

 All OpenStack incubated projects and programs that use a -specs
 repository have now been setup to publish these to
 http://specs.openstack.org. With the next merged in patch of a *-specs
 repository, the documentation will get published.
 
 Is there a way to manually trigger this for Nova and Neutron? As these 
 projects are well past their self-enforced spec approval deadlines there may 
 not be any merges to their respective -specs repositories for a while now.

Just make a dummy commit. I'm sure you'll find a typo to fix or something like
that ;)

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Service configuration options modification

2014-08-04 Thread Denis Makogon
Hello, Stackers.



I'd like to propose a policy for configuration option deprecation, taking into
account the requirements related to production deployment.

As you all know, there are a lot of patches proposing modifications to the
existing configuration. And options are being modified without (not always)
any documentation that reflects the difference in behaviour between the old
and new options.

What we're doing right now doesn't seem to be the most valid way; we
shouldn't delete options without any signs (a DocImpact section, at least).

We should find a more appropriate way to handle such cases, and the most
appropriate one is to use oslo.config's abilities: mark the option as
"Deprecated".
Here's the proposed workflow for modifications. Once an option is being
modified:

   1. Leave the option that is going to be modified as is, and add the
      Deprecated flag. Clean up the code that uses the deprecated option.

   2. Add the new option, and use it in the existing code as a substitution
      for the previous option.

   3. Add documentation that reflects the differences between the new and
      the deprecated option.

   4. Add documentation that reflects the behaviour of the existing code
      with the new option.



Thoughts?


Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Why people don't close bugs?

2014-08-04 Thread Dmitry Guryanov
On Monday 04 August 2014 17:53:11 Tom Fifield wrote:
 On 04/08/14 17:46, Dmitry Guryanov wrote:
  Hello!
  
  I looked through launchpad bugs and it seems there are a lot of bugs,
  which are fixed already, but still open, here are 3 ones:
  
  https://bugs.launchpad.net/nova/+bug/909096
  
  https://bugs.launchpad.net/nova/+bug/1206762
  
  https://bugs.launchpad.net/nova/+bug/1208743
  
  I've posted comments on these bugs, but nobody replied. How is it
  possible, to close them?
 
 Hi Dmitry,
 
 Thanks for looking into the bug tracker. We definitely always need more
 people helping with triage (https://wiki.openstack.org/BugTriage).
 
 If you join the Nova Bug Team (https://launchpad.net/~nova-bugs) you
 will be able to change the bugs' status as appropriate.
 
 Regards,
 
 

Thanks, I've joined this team. I'll try to help with such outdated issues.


 Tom
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Dmitry Guryanov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-08-04 Thread Sean Dague
On 07/31/2014 06:55 AM, Angus Salkeld wrote:
 On Sun, 2014-07-27 at 07:57 -0700, Sean Dague wrote:
 On 07/26/2014 05:51 PM, Hayes, Graham wrote:
 On Tue, 2014-07-22 at 12:18 -0400, Sean Dague wrote:
 On 07/22/2014 11:58 AM, David Kranz wrote:
 On 07/22/2014 10:44 AM, Sean Dague wrote:
 Honestly, I'm really not sure I see this as a different program, but is
 really something that should be folded into the QA program. I feel like
 a top level effort like this is going to lead to a lot of duplication in
 the data analysis that's currently going on, as well as functionality
 for better load driver UX.

  -Sean
 +1
 It will also lead to pointless discussions/arguments about which
 activities are part of QA and which are part of
 Performance and Scalability Testing.

 I think that those discussions will still take place, it will just be on
 a per repository basis, instead of a per program one.

 [snip]


 Right, 100% agreed. Rally would remain with it's own repo + review team,
 just like grenade.

-Sean


 Is the concept of a separate review team not the point of a program?

 In the the thread from Designate's Incubation request Thierry said [1]:

 Programs just let us bless goals and teams and let them organize 
 code however they want, with contribution to any code repo under that
 umbrella being considered official and ATC-status-granting.
 
 I do think that this is something that needs to be clarified by the TC -
 Rally could not get a PTL if they were part of the QA project, but every
 time we get a program request, the same discussion happens.

 I think that mission statements can be edited to fit new programs as
 they occur, and that it is more important to let teams that have been
 working closely together to stay as a distinct group.

 My big concern here is that many of the things that these efforts have
 been doing are things we actually want much closer to the base. For
 instance, metrics on Tempest runs.

 When Rally was first created it had its own load generator. It took a
 ton of effort to keep the team from duplicating that and instead just
 use some subset of Tempest. Then when measuring showed up, we actually
 said that is something that would be great in Tempest, so whoever ran
 it, be it for Testing, Monitoring, or Performance gathering, would have
 access to that data. But the Rally team went off in a corner and did it
 otherwise. That's caused the QA team to have to go and redo this work
 from scratch with subunit2sql, in a way that can be consumed by multiple
 efforts.

 So I'm generally -1 to this being a separate effort on the basis that so
 far the team has decided to stay in their own sandbox instead of
 participating actively where many of us thing the functions should be
 added. I also think this isn't like Designate, because this isn't
 intended to be part of the integrated release.
 
 From reading Boris's email it seems like rally will provide a horizon
 panel and api to back it (for the operator to kick of performance runs
 and view stats). So this does seem like something that would be a
 part of the integrated release (if I am reading things correctly).
 
  Is the QA program happy to extend their scope to include that?
  QA could become "Quality Assurance of upstream code and running
  OpenStack installations". If not, we need to find some other program
  for rally.

I think that's realistically already the scope of the QA program, we
might just need to change the governance wording.

Tempest has always been intended to be run on production clouds (public
or private) to ensure proper function. Many operators are doing this
today as part of normal health management. And we continue to evolve it
to be something which works well in that environment.

All the statistics collection / analysis parts in Rally today I think
are basically things that should be part of any Tempest installation /
run. It's cool that Rally did a bunch of work there, but having that
code outside of Tempest is sort of problematic, especially as there are
huge issues with the collection of that data because of missing timing
information in subunit. So realistically to get accurate results there
needs to be additional events added into Tempest tests to build this
correctly. If you stare at the raw results here today they have such
huge accuracy problems (due to unaccounted for time in setupClass, which
is a known problem) to the point of being misleading, and possibly
actually harmful.

These are things that are fixable, but hard to do outside of the Tempest
project itself. Exporting accurate timing / stats should be a feature
close to the test load, not something that's done externally with
guessing and fudge factors.

So every time I look at the docs in Rally -
https://github.com/stackforge/rally I see largely features that should
be coming out of the test runners themselves.

-Sean

-- 
Sean Dague
http://dague.net




Re: [openstack-dev] Glance Store Future

2014-08-04 Thread Duncan Thomas
Duncan Thomas
On Aug 1, 2014 9:44 PM, Jay Pipes jaypi...@gmail.com wrote:

 Yup. Though I'd love for this code to live in olso, not glance...

Why Oslo? There seems to be a general obsession with getting things into
Oslo, but our (Cinder team) general experiences with the end result have
been highly variable, to the point where we've discussed just saying no to
Oslo code since the pain is more than the gain. In this case, the glance
team are the subject matter experts, and the glance interfaces and internals
are their core competency. I honestly can't see any value in putting the
project anywhere other than glance.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] 5 node Openstack Icehouse with Vagrant and Puppet on Virtualbox and Centos 6.5

2014-08-04 Thread Philip Cheong
Greetings to all!

So I'm learning Openstack and hope some day to add my name among the list
of contributors (maybe on something like Solum).

My first real foray into the OpenStack world was to build myself a
multi-node environment, which is available here:
https://github.com/phiche/vagrant-puppet-openstack

In this example I used Vagrant and Puppet enterprise and the puppet modules
from puppetlabs. My simple target was to have a single Vagrantfile that
could bring up a fully functional multi-node environment at the push of a
button (i.e. vagrant up). I discovered that goal was perhaps overly
ambitious.

But anyways, it works (with a bit of massaging). Hopefully it might be
useful to others, if nothing else but perhaps for some amusement.

Happy to hear feedback if you try it!

Phil

-- 
*Philip Cheong*
*Elastx *| Public and Private PaaS
email: philip.che...@elastx.se
office: +46 8 557 728 10
mobile: +46 702 8170 814
twitter: @Elastx https://twitter.com/Elastx
http://elastx.se
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance Store Future

2014-08-04 Thread Jay Pipes

On 08/04/2014 09:09 AM, Duncan Thomas wrote:

Duncan Thomas
On Aug 1, 2014 9:44 PM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:

  Yup. Though I'd love for this code to live in olso, not glance...

Why Oslo? There seems to be a general obsession with getting things into
Oslo, but our (cinder team) general experiences with the end result have
been highly variable, to the point where we've discussed just saying no
to Oslo code since the pain is more than the gain. In this case, the
glance team are the subject matter experts, the glance interfaces and
internals are their core competency, I honestly can't see any value in
putting the project anywhere other than glance


2 reasons.

1) This is code that will be utilized by more than one project, and is a library, 
not a service endpoint. That seems to be right up the Oslo alley.


2) The mission of the Glance program has changed to being an application 
catalog service, not an image streaming service.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance Store Future

2014-08-04 Thread Doug Hellmann

On Aug 4, 2014, at 9:27 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/04/2014 09:09 AM, Duncan Thomas wrote:
 Duncan Thomas
 On Aug 1, 2014 9:44 PM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:
 
  Yup. Though I'd love for this code to live in oslo, not glance...
 
 Why Oslo? There seems to be a general obsession with getting things into
 Oslo, but our (cinder team) general experiences with the end result have
 been highly variable, to the point where we've discussed just saying no
 to Oslo code since the pain is more than the gain. In this case, the
 glance team are the subject matter experts, the glance interfaces and
 internals are their core competency, I honestly can't see any value in
 putting the project anywhere other than glance
 
 2 reasons.
 
 1) This is code that will be utilized by more than one project, and is a library, not a 
 service endpoint. That seems to be right up the Oslo alley.
 
 2) The mission of the Glance program has changed to being an application 
 catalog service, not an image streaming service.
 
 Best,
 -jay

Oslo isn’t the only program that can produce reusable libraries, though. If the 
Glance team is going to manage this code anyway, it makes sense to leave it in 
the Glance program.

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] network_device_mtu is not applied to VMs

2014-08-04 Thread Elena Ezhova
Hi!

I feel I need a piece of advice regarding this bug [1].

The gist of the problem is that although there is an option,
network_device_mtu, that can be specified in neutron.conf, VMs are not
getting that MTU on their interfaces. The MTU can be specified by the DHCP
agent by adding the option to dnsmasq_config_file, like it's done here [2].
So one of the solutions might be to check whether network_device_mtu is
specified during spawning of the dnsmasq process and, if it is, add an
option to the dnsmasq options file.
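For illustration, the dnsmasq options file pointed to by dnsmasq_config_file could force the MTU via DHCP option 26 like this (the path and the value 1454 are just examples, mirroring the workaround in [2]):

```ini
# Hypothetical /etc/neutron/dnsmasq-neutron.conf, referenced by
# dnsmasq_config_file in the DHCP agent's configuration.
# DHCP option 26 is interface-mtu.
dhcp-option-force=26,1454
```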

But I am not sure whether this is something worth being fixed and if the
above-described option is the right way to fix the bug.

By the way, as I found out, the MTU cannot currently be set for instances
that are running CirrOS, because CirrOS does not handle DHCP options, as
described in the following bug [3].

Regards, Elena Ezhova

[1] https://bugs.launchpad.net/neutron/+bug/1348788

[2]
https://ask.openstack.org/en/question/12499/forcing-mtu-to-1400-via-etcneutrondnsmasq-neutronconf-per-daniels/

[3] https://bugs.launchpad.net/cirros/+bug/1301958
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] old cloud apt version of ubuntu at install-dependencies

2014-08-04 Thread LeslieWang
Dear all,
I found that the file "tripleo-incubator/scripts/install-dependencies" has the below clauses:

DEBIAN_FRONTEND=noninteractive sudo -E apt-get install --yes ubuntu-cloud-keyring
(grep -Eqs precise-updates/grizzly /etc/apt/sources.list.d/cloud-archive.list) || \
    echo 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main' \
    | sudo tee -a /etc/apt/sources.list.d/cloud-archive.list

# adding precise-backports universe repository for jq package
if ! command -v add-apt-repository; then
  DEBIAN_FRONTEND=noninteractive sudo -E apt-get install --yes python-software-properties
fi
sudo add-apt-repository "deb http://us.archive.ubuntu.com/ubuntu/ precise-backports universe"

It seems like the script still uses the old Grizzly repository. Can anyone suggest 
whether this is a bug that should be updated to the latest Icehouse repository, or not?
BTW, here is the Ubuntu Cloud Archive link: 
https://wiki.ubuntu.com/ServerTeam/CloudArchive
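For reference, an Icehouse equivalent of the Grizzly line in the script would look roughly like the following. This is an illustration only; the exact pocket name should be verified against the Cloud Archive wiki page above before use.

```shell
# Hedged illustration: the Icehouse counterpart of the Grizzly apt line.
# Verify the pocket name ("precise-updates/icehouse") against the Ubuntu
# Cloud Archive wiki before relying on it.
CA_LINE='deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/icehouse main'
# The script would append it with:
#   echo "$CA_LINE" | sudo tee -a /etc/apt/sources.list.d/cloud-archive.list
echo "$CA_LINE"
```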
Best Regards
Leslie
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance Store Future

2014-08-04 Thread Sean Dague
On 08/04/2014 09:46 AM, Doug Hellmann wrote:
 
 On Aug 4, 2014, at 9:27 AM, Jay Pipes jaypi...@gmail.com wrote:
 
 On 08/04/2014 09:09 AM, Duncan Thomas wrote:
 Duncan Thomas
 On Aug 1, 2014 9:44 PM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:

 Yup. Though I'd love for this code to live in oslo, not glance...

 Why Oslo? There seems to be a general obsession with getting things into
 Oslo, but our (cinder team) general experiences with the end result have
 been highly variable, to the point where we've discussed just saying no
 to Oslo code since the pain is more than the gain. In this case, the
 glance team are the subject matter experts, the glance interfaces and
 internals are their core competency, I honestly can't see any value in
 putting the project anywhere other than glance

 2 reasons.

 1) This is code that will be utilized by more than one project, and is a library, not a 
 service endpoint. That seems to be right up the Oslo alley.

 2) The mission of the Glance program has changed to being an application 
 catalog service, not an image streaming service.

 Best,
 -jay
 
 Oslo isn’t the only program that can produce reusable libraries, though. If 
 the Glance team is going to manage this code anyway, it makes sense to leave 
 it in the Glance program.

Agreed. Honestly it's better to keep the review teams close to the
expertise for the function at hand.

It needs to be ok that teams besides oslo create reusable components for
other parts of OpenStack. Oslo should be used for things where there
isn't a strong incumbent owner. I think we have a strong incumbent owner
here so living in Artifacts program makes sense to me.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: [Neutron] Auth token in context

2014-08-04 Thread Isaku Yamahata
ServiceVM wants the auth token. When creating an L3 router that runs inside
a VM, it launches the VM, so Neutron interacts with other projects like the
ServiceVM server or Nova.

Thanks,


On Sun, Jul 20, 2014 at 12:14:54AM -0700,
Kevin Benton blak...@gmail.com wrote:

 That makes sense. Shouldn't we wait for something to require it before
 adding it though?
 
 
 On Sat, Jul 19, 2014 at 11:41 PM, joehuang joehu...@huawei.com wrote:
 
   Hello, Kevin
 
 
 
   The leakage risk may be one of the design purposes. But Nova/Cinder has
   already stored the token into the context, because Nova needs to access
   Neutron, Cinder, and Glance, and Cinder interacts with Glance.
 
 
 
   For Neutron, I think the reason the token has not been passed to the context
   is that Neutron currently only reactively provides a service (specifically,
   PORT) to Nova, so Neutron does not call other services' APIs using the
   token.
 
 
 
  If the underlying agent or plugin wants to use the token, then the
  requirement will be asked by somebody.
 
 
 
  BR
 
 
 
  Joe
 
 
    --
   *From:* Kevin Benton [blak...@gmail.com]
   *Sent:* July 19, 2014 4:23

   *To:* OpenStack Development Mailing List (not for usage questions)
   *Subject:* Re: [openstack-dev] [Neutron] Auth token in context
 
I suspect it was just excluded since it is authenticating information
  and there wasn't a good use case to pass it around everywhere in the
  context where it might be leaked into logs or other network requests
  unexpectedly.
 
 
  On Fri, Jul 18, 2014 at 1:10 PM, Phillip Toohill 
  phillip.tooh...@rackspace.com wrote:
 
    It was for more of a potential use to query another service. Don't
   think we'll go this route though, but was curious why it was one of the only
   values not populated even though there's a field for it.
 
From: Kevin Benton blak...@gmail.com
  Reply-To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Date: Friday, July 18, 2014 2:16 PM
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Neutron] Auth token in context
 
What are you trying to use the token to do?
 
 
  On Fri, Jul 18, 2014 at 9:16 AM, Phillip Toohill 
  phillip.tooh...@rackspace.com wrote:
 
   Excellent! Thank you for the response. I figured it was possible; it just
   concerned me why everything else made it to the context except for the
   token.
  
   So to be clear, you agree that it should at least be passed to the context,
   and because it's not, this could be deemed a bug?
 
  Thank you
 
  On 7/18/14 2:03 AM, joehuang joehu...@huawei.com wrote:
 
  Hello, Phillip.
  
  Currently, Neutron did not pass the token to the context. But
  Nova/Cinder
  did that. It's easy to do that, just 'copy' from Nova/Cinder.
  
   1.  How Nova/Cinder did that:

   class NovaKeystoneContext(wsgi.Middleware):  # or CinderKeystoneContext for Cinder
       ...
       auth_token = req.headers.get('X_AUTH_TOKEN',
                                    req.headers.get('X_STORAGE_TOKEN'))
       ctx = context.RequestContext(user_id,
                                    project_id,
                                    user_name=user_name,
                                    project_name=project_name,
                                    roles=roles,
                                    auth_token=auth_token,
                                    remote_address=remote_address,
                                    service_catalog=service_catalog)
  
   2.  Neutron does not pass the token. This is also not good for third-party
   network infrastructure that wants to integrate authentication with Keystone.

   class NeutronKeystoneContext(wsgi.Middleware):
       ...
       # The token is not read from the header and not passed to the context.
       # Just change here like Nova/Cinder did.
       ctx = context.Context(user_id, tenant_id, roles=roles,
                             user_name=user_name,
                             tenant_name=tenant_name,
                             request_id=req_id)
       req.environ['neutron.context'] = ctx
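For reference, the token-extraction trick quoted above can be distilled into a standalone, runnable sketch. FakeRequest is invented purely for this example; the real middleware receives a webob request object.

```python
# Standalone illustration of how Nova's middleware pulls the auth token
# from request headers; FakeRequest stands in for a webob request here.
class FakeRequest:
    def __init__(self, headers):
        self.headers = headers

def extract_auth_token(req):
    # Fall back to X_STORAGE_TOKEN, as Nova does for Swift-style clients.
    return req.headers.get('X_AUTH_TOKEN',
                           req.headers.get('X_STORAGE_TOKEN'))

print(extract_auth_token(FakeRequest({'X_AUTH_TOKEN': 'abc123'})))    # abc123
print(extract_auth_token(FakeRequest({'X_STORAGE_TOKEN': 'xyz789'}))) # xyz789
```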
  
  I think I'd better to report a bug for your case.
  
  Best Regards
  Chaoyi Huang ( Joe Huang )
   -----Original Message-----
   From: Phillip Toohill [mailto:phillip.tooh...@rackspace.com]
   Sent: July 18, 2014 14:07
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: [openstack-dev] [Neutron] Auth token in context
  
  Hello all,
  
  I am wondering how to get the auth token from a user request passed down
  to the context so it can potentially be used by the plugin or driver?
  
  Thank you
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  

[openstack-dev] [oslo-incubator] rpc.cleanup method is not reachable due to wrong import of rpc module

2014-08-04 Thread Malawade, Abhijeet
Hi all,

The rpc module is not imported properly in the nova, cinder, and neutron
projects; it is imported from the wrong package.
In oslo-incubator, the 'rpc' module is used in the openstack/common/service.py
file, and it is present in the openstack/common package.
(https://github.com/openstack/oslo-incubator/blob/master/openstack/common/service.py#L48)

But this 'rpc' module lives in the base package in the 'nova' and 'cinder'
projects, while it lives in the neutron/common/ package in the neutron project.

Nova : https://github.com/openstack/nova/blob/master/nova/rpc.py
Cinder : https://github.com/openstack/cinder/blob/master/cinder/rpc.py
Neutron : https://github.com/openstack/neutron/blob/master/neutron/common/rpc.py

This openstack/common/service.py is synced from oslo-incubator in each project.
Because of this, if we make a change in a specific project, the change will be
removed after the oslo-incubator code is re-synced.
The same thing happened in the nova project. This patch
(https://review.openstack.org/#/c/81833/) was merged into the nova code, but it
was overwritten after syncing the oslo-incubator code. There is a comment on
this patch by Mark McLoughlin regarding the same.

I have filed a bug for this issue in Oslo:
https://bugs.launchpad.net/oslo/+bug/1334661
I have also pushed a patch for it, but that patch will fail for the 'Neutron'
project.

I think we either have to try importing the 'rpc' module from all possible
places until it gets imported properly, or we need to change the location of
the 'rpc' module in the projects for uniformity (i.e. put the 'rpc' module
in some common place).
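The first option (trying each known location in turn) could be sketched roughly as follows. This is a hypothetical illustration, not a proposed patch; the candidate module paths are simply the three locations listed above.

```python
import importlib

# Hypothetical sketch: try each known location of the application's rpc
# module until one imports successfully.
def import_rpc(candidates=('nova.rpc', 'cinder.rpc', 'neutron.common.rpc')):
    for path in candidates:
        try:
            return importlib.import_module(path)
        except ImportError:
            continue
    return None

rpc = import_rpc()
print(rpc)  # the first importable candidate module, or None if none exists
```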

Could you please give me your opinions on this?

Thanks,
Abhijeet

__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.  If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Status of A/A HA for neutron-metadata-agent?

2014-08-04 Thread mar...@redhat.com
On 02/08/14 02:22, Assaf Muller wrote:
 Hey Marios, comments inline.
 
 - Original Message -
 Hi all,

 I have been asked by a colleague about the status of A/A HA for
 neutron-* processes. From the 'HA guide' [1], l3-agent and
 metadata-agent are the only neutron components that can't be deployed in
 A/A HA (corosync/pacemaker for a/p is documented as available 'out of
 the box' for both).

 The l3-agent work is approved for J3 [4] but I am unaware of any work on
 the metadata-agent and can't see any mention in [2][3]. Is this someone
 has looked at, or is planning to (though ultimately K would be the
 earliest right?)?

 
 With L3 HA turned on you can run the metadata agent on all network nodes.
 The active instance of each router will have the proxy up in its namespace
 and it will forward it to the agent as expected.


Perfect, thanks Assaf, as well as for the follow-up clarifications on IRC;
I didn't realise this was the case.

all the best, marios


 
 thanks! marios

 [1] http://docs.openstack.org/high-availability-guide/content/index.html
 [2] https://wiki.openstack.org/wiki/NeutronJunoProjectPlan
 [3] https://launchpad.net/neutron/+milestone/juno-3
 [4]
 http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/juno/l3-high-availability.rst

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Status of A/A HA for neutron-metadata-agent?

2014-08-04 Thread mar...@redhat.com
On 03/08/14 13:07, Gary Kotton wrote:
 Hi,
 Happy you asked about this. This is an idea that we have:
 
  Below is a suggestion on how we can improve the metadata service. This can
  be done by leveraging a load balancer's support for X-Forwarded-For. The
  following link has two diagrams. The first is the existing support (I may be
  a little rusty here, so please feel free to correct) and the second is the
  proposal:
  https://docs.google.com/drawings/d/19JCirhj2NVVFZ0Vbnsxhyxrm1jjzEAS3ZAMzfBRC-0E/edit?usp=sharing
 
  Metadata proxy support: the proxy will receive the HTTP request. It will
  then perform a query to the Neutron service (1) to retrieve the tenant id
  and the instance id. A proxy request will then be sent to Nova for the
  metadata details (2).
 
 Proposed support:
 
  1. There will be a load balancer VIP - 169.254.169.254 (this can be
  reached either via the L3 agent or via the DG on the DHCP).
  2. The LB will have a server farm of all of the Nova APIs (this makes
  them highly available).
   1. Replace the destination IP and port with the Nova metadata IP and
  port.
   2. Replace the source IP with the interface IP.
   3. Insert the header X-Forwarded-For (this will have the original
  source IP of the VM).
 
 
 
  1. When the Nova metadata service receives the request, according to a
  configuration variable
  (https://github.com/openstack/nova/blob/master/nova/api/metadata/handler.py#L134),
  it will interface with the Neutron service to get the instance_id and
  the tenant id. This will be done by using a new extension. With the
  details provided by Neutron, Nova will provide the correct metadata for the
  instance.
  2. A new extension will be added to Neutron that will enable a port
  lookup. The port lookup will have two input values and will return the
  port - which has the instance id and the tenant id.
   1. LB source IP - this is the LB source IP that interfaces with the Nova
  API. When we create the edge router for the virtual network we will have a
  mapping of the edge LB IP - network id. This will enable us to get the
  virtual network for the port.
   2. Fixed port IP - this, with the virtual network, will enable us to get
  the specific port.
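The load balancer rewrite steps described above can be sketched as follows. Everything here is invented for illustration (the function name, the dict-based request model, the example addresses); it is not part of the actual proposal's code.

```python
# Hypothetical sketch of the LB rewrite steps: replace destination IP/port
# with the Nova metadata endpoint, replace the source IP with the LB
# interface IP, and record the VM's original IP in X-Forwarded-For.
def lb_rewrite(request, nova_ip, nova_port, lb_interface_ip):
    rewritten = dict(request)
    rewritten['dst_ip'] = nova_ip          # step 1: destination -> Nova metadata
    rewritten['dst_port'] = nova_port
    rewritten['src_ip'] = lb_interface_ip  # step 2: source -> LB interface IP
    headers = dict(request.get('headers', {}))
    headers['X-Forwarded-For'] = request['src_ip']  # step 3: original VM IP
    rewritten['headers'] = headers
    return rewritten

req = {'src_ip': '10.0.0.5', 'dst_ip': '169.254.169.254', 'dst_port': 80,
       'headers': {}}
out = lb_rewrite(req, '192.0.2.20', 8775, '192.0.2.1')
print(out['headers']['X-Forwarded-For'])  # 10.0.0.5
```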
 
 Hopefully in the coming days a spec will be posted that will provide more
 details
 

thanks for that info Gary, the diagram in particular forced me to go
read a bit about the metadata agent (I was mostly just proxying the
original question). I obviously miss a lot of the details (it will be
easier once the spec is out), but it seems like you're proposing an
addition (port lookup) that will change the way the metadata agent is
called; in fact, does it make the Neutron metadata proxy obsolete? I will
keep a look out for the spec,

thanks, marios



 Thanks
 Gary
 
 
 
 On 8/1/14, 6:11 PM, mar...@redhat.com mandr...@redhat.com wrote:
 
 Hi all,

 I have been asked by a colleague about the status of A/A HA for
 neutron-* processes. From the 'HA guide' [1], l3-agent and
 metadata-agent are the only neutron components that can't be deployed in
 A/A HA (corosync/pacemaker for a/p is documented as available 'out of
 the box' for both).

 The l3-agent work is approved for J3 [4] but I am unaware of any work on
 the metadata-agent and can't see any mention in [2][3]. Is this someone
 has looked at, or is planning to (though ultimately K would be the
 earliest right?)?

 thanks! marios

 [1] http://docs.openstack.org/high-availability-guide/content/index.html
 [2] https://wiki.openstack.org/wiki/NeutronJunoProjectPlan
 [3] https://launchpad.net/neutron/+milestone/juno-3
 [4]
 http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/juno/l3-h
 igh-availability.rst

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] devtest environment for virtual or true bare metal

2014-08-04 Thread LeslieWang
Dear all,
Looking at the devtest pages on the TripleO wiki
(http://docs.openstack.org/developer/tripleo-incubator/devtest.html), I thought
all variables and configurations of devtest were for true bare metal, because
the diskimage-builder options of both the overcloud and the undercloud don't
include the vm option. But I see this configuration in
tripleo-incubator/scripts/devtest_testenv.sh:
## #. Set the default bare metal power manager. By default devtest uses
##nova.virt.baremetal.virtual_power_driver.VirtualPowerManager to
##support a fully virtualized TripleO test environment. You may
##optionally customize this setting if you are using real baremetal
##hardware with the devtest scripts. This setting controls the
##power manager used in both the seed VM and undercloud for Nova Baremetal.
##::


POWER_MANAGER=${POWER_MANAGER:-'nova.virt.baremetal.virtual_power_driver.VirtualPowerManager'}
Thus, it seems like all the settings are for a virtual environment, not for
true bare metal, so I'm a little confused. Can anyone help clarify? And what
is the right configuration of POWER_MANAGER when using real bare metal hardware?
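As a hedged pointer (not an authoritative answer): for IPMI-capable bare metal hardware, the IPMI power driver from nova.virt.baremetal is the usual substitute for the virtual one, though the exact class path should be verified against the Nova release in use.

```shell
# Hedged example: IPMI power driver for real bare metal. Verify the class
# path against the Nova version you deploy before using it.
export POWER_MANAGER='nova.virt.baremetal.ipmi.IPMI'
echo "$POWER_MANAGER"
```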
Best Regards
Leslie
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-incubator] rpc.cleanup method is not reachable due to wrong import of rpc module

2014-08-04 Thread Doug Hellmann

On Aug 4, 2014, at 10:36 AM, Malawade, Abhijeet abhijeet.malaw...@nttdata.com 
wrote:

 Hi all,
  
 rpc module is not imported properly in nova, cinder, neutron projects. It is 
 imported from wrong package.
 In oslo-incubator  'rpc' module is used in openstack/common/service.py file 
 and it is present at openstack/common package.
 (https://github.com/openstack/oslo-incubator/blob/master/openstack/common/service.py#L48)
  
 But this 'rpc' module is present at base package in 'nova' and 'cinder' 
 project while it is present at neutron/common/ package in neutron project.
  
 Nova : https://github.com/openstack/nova/blob/master/nova/rpc.py
 Cinder : https://github.com/openstack/cinder/blob/master/cinder/rpc.py
 Neutron : 
 https://github.com/openstack/neutron/blob/master/neutron/common/rpc.py
  
 This openstack/common/service.py is synced form oslo-incubator in each 
 project. Because of this if we make change in specific project then these 
 changes will  get removed after re-synced  oslo-incubator code.
 The same thing happened in nova project. This patch 
 (https://review.openstack.org/#/c/81833/) has merged into nova code, but it 
 is overwritten after syncing oslo-incubator code. There is comment on this 
 patch by 'Mark McLoughlin' regarding the same.
  
 I have filed bug for this issue in oslo : 
 https://bugs.launchpad.net/oslo/+bug/1334661
 And also I have pushed patch for same. But this patch will fail for 'Neutron' 
 project.
  
 I think we have to try importing 'rpc' module from all possible places till 
 it gets imported properly
 OR we need to change location of 'rpc' module in projects for uniformity. (ie 
 to put 'rpc' module at some common place)
  
 Could you please give me your opinions on the same.

As we move oslo modules out of the incubator and into libraries, we need to 
decouple them from the applications that are using them. In this case, we have 
a library trying to invoke a global method from an application module. Rather 
than having the library try to guess where that module or function is, we need 
to change the API of the service module in Oslo so that it takes an explicit 
argument for the thing it needs. For example, in this case ServiceLauncher 
should take an argument with a sequence of cleanup methods to be invoked on 
shutdown, and the application should pass rpc.cleanup in that list when it 
creates the ServiceLauncher.
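A minimal sketch of that decoupled API might look like the following. The names and signature are illustrative assumptions, not the final Oslo interface.

```python
# Illustrative only: a launcher that accepts cleanup callables at
# construction time instead of importing the application's rpc module.
class ServiceLauncher:
    def __init__(self, cleanup_callbacks=None):
        self._cleanup = list(cleanup_callbacks or [])

    def wait(self):
        # ... normally runs services here until they are stopped ...
        # On shutdown, invoke the application-supplied cleanup hooks.
        for callback in self._cleanup:
            callback()

calls = []
launcher = ServiceLauncher(cleanup_callbacks=[lambda: calls.append('rpc')])
launcher.wait()
print(calls)  # ['rpc']
```

The application would then pass something like `rpc.cleanup` in that list, and the library never needs to know where the rpc module lives.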

Doug

  
 Thanks,
 Abhijeet
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting - 08/04/2014

2014-08-04 Thread Renat Akhmerov
This is a reminder about the team meeting today at 16.00 UTC at 
#openstack-meeting.

Agenda:
Review action items
Current status (progress, issues, roadblocks)
Further plans
Open discussion

(Can also be found at https://wiki.openstack.org/wiki/Meetings/MistralAgenda as 
well as links to the previous meetings).

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-incubator] rpc.cleanup method is not reachable due to wrong import of rpc module

2014-08-04 Thread Ihar Hrachyshka

On 04/08/14 16:36, Malawade, Abhijeet wrote:
 Hi all,
 
 
 
 rpc module is not imported properly in nova, cinder, neutron
 projects. It is imported from wrong package.
 
 In oslo-incubator  'rpc' module is used in
 openstack/common/service.py file and it is present at
 openstack/common package.
 
 (https://github.com/openstack/oslo-incubator/blob/master/openstack/common/service.py#L48

 
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/service.py%23L48)
 
 
 
 But this 'rpc' module is present at base package in 'nova' and
 'cinder' project while it is present at neutron/common/ package in
 neutron project.
 
 
 
 Nova : https://github.com/openstack/nova/blob/master/nova/rpc.py
 
 Cinder :
 https://github.com/openstack/cinder/blob/master/cinder/rpc.py
 
 Neutron : 
 https://github.com/openstack/neutron/blob/master/neutron/common/rpc.py

 
 
 
 This openstack/common/service.py is synced form oslo-incubator in
 each project. Because of this if we make change in specific project
 then these changes will  get removed after re-synced
 oslo-incubator code.
 
 The same thing happened in nova project. This patch 
 (https://review.openstack.org/#/c/81833/ 
 https://review.openstack.org/%23/c/81833/) has merged into nova
 code, but it is overwritten after syncing oslo-incubator code.
 There is comment on this patch by 'Mark McLoughlin' regarding the
 same.
 
 
 
 I have filed bug for this issue in oslo : 
 https://bugs.launchpad.net/oslo/+bug/1334661
 
 And also I have pushed patch for same. But this patch will fail
 for 'Neutron' project.
 
 
 
 I think we have to try importing 'rpc' module from all possible
 places till it gets imported properly

I wonder whether calling rpc.cleanup() is a part of any public
documentation for service.py. Looks like the module makes assumptions
about consuming projects that are incorrect. Why not leaving cleanup
to consumers instead (perhaps by allowing consumers to optionally pass
a cleanup method to Service.__init__() to call later in .wait(), if
needed)?

 
 OR we need to change location of 'rpc' module in projects for 
 uniformity. (ie to put 'rpc' module at some common place)
 
 
 
 Could you please give me your opinions on the same.
 
 
 
 Thanks,
 
 Abhijeet
 
 
 __

 
Disclaimer:This email and any attachments are sent in strictest
 confidence for the sole use of the addressee and may contain
 legally privileged, confidential, and proprietary data. If you are
 not the intended recipient, please advise the sender by replying
 promptly to this email and then delete and destroy this email and
 any attachments without any further use, copying or forwarding
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Jay Pipes for nova-core

2014-08-04 Thread Russell Bryant
On 08/01/2014 01:59 PM, Jay Pipes wrote:
 On 07/31/2014 10:49 PM, Sean Dague wrote:
 On 07/31/2014 06:26 PM, Michael Still wrote:
 On Thu, Jul 31, 2014 at 9:57 PM, Russell Bryant rbry...@redhat.com
 wrote:

 Further, I'd like to propose that we treat all of existing +1
 reviews as
 +2 (once he's officially added to the team).  Does anyone have a
 problem
 with doing that?  I think some folks would have done that anyway, but I
 wanted to clarify that it's OK.

 As a core I sometimes +1 something to indicate a weak acceptance of
 the code instead of a strong acceptance (perhaps its not my area of
 expertise). Do we think it would be better to ask Jay to scan through
 his recent +1s and promote those he is comfortable with to +2s? I
 don't think that would take very long, and would keep the intent of
 the reviews clear.

 Agreed. That's more typical and means that you don't need to parse
 intent on +1s. Let jay upgrade the votes he feels comfortable with
 holding a full +2 on.
 
 Of course. I have no problem doing that. Frankly, I kind of revisit
 reviews often just in the course of my review work each day, so it's not
 a big deal at all.

OK, sounds good to me!

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a small experiment with Ansible in TripleO

2014-08-04 Thread Clint Byrum
I've been fiddling on github. This repo is unfortunately named the same
but is not the same ancestry as yours. Anyway, the branch 'fiddling' has
a working Heat inventory plugin which should give you a hostvar of
'heat_metadata' per host in the given stack.

https://github.com/SpamapS/tripleo-ansible/blob/fiddling/plugins/inventory/heat.py

Note that in the root there is a 'heat-ansible-inventory.conf' that is
an example config (works w/ devstack) to query a heat stack and turn it
into an ansible inventory. That uses oslo.config so all of the usual
patterns for loading configs in openstack should apply.
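For anyone unfamiliar with dynamic inventories: such a plugin just has to print JSON in roughly the shape below. The `heat_metadata` hostvar name follows the plugin described above; the host, group, and stack values are invented for illustration.

```python
import json

# Minimal shape of the JSON an Ansible dynamic inventory script emits:
# groups mapping to hosts, plus a "_meta" section carrying per-host vars.
inventory = {
    "overcloud": {"hosts": ["192.0.2.10"]},
    "_meta": {
        "hostvars": {
            "192.0.2.10": {"heat_metadata": {"stack_name": "overcloud"}},
        },
    },
}
print(json.dumps(inventory, indent=2))
```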

Excerpts from Allison Randal's message of 2014-08-01 09:07:44 -0700:
 A few of us have been independently experimenting with Ansible as a
 backend for TripleO, and have just decided to try experimenting
 together. I've chatted with Robert, and he says that TripleO was always
 intended to have pluggable backends (CM layer), and just never had
 anyone interested in working on them. (I see it now, even in the early
 docs and talks, I guess I just couldn't see the forest for the trees.)
 So, the work is in line with the overall goals of the TripleO project.
 
 We're starting with a tiny scope, focused only on updating a running
 TripleO deployment, so our first work is in:
 
 - Create an Ansible Dynamic Inventory plugin to extract metadata from Heat
 - Improve/extend the Ansible nova_compute Cloud Module (or create a new
 one), for Nova rebuild
 - Develop a minimal handoff from Heat to Ansible, particularly focused
 on the interactions between os-collect-config and Ansible
 
 We're merging our work in this repo, until we figure out where it should
 live:
 
 https://github.com/allisonrandal/tripleo-ansible
 
 We've set ourselves one week as the first sanity-check to see whether
 this idea is going anywhere, and we may scrap it all at that point. But,
 it seems best to be totally transparent about the idea from the start,
 so no-one is surprised later.
 
 Cheers,
 Allison
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-04 Thread Russell Bryant
On 08/04/2014 08:13 AM, Andreas Jaeger wrote:
 On 08/04/2014 01:52 PM, Steve Gordon wrote:
 - Original Message -
 From: Andreas Jaeger a...@suse.com
 To: openstack-dev@lists.openstack.org

 All OpenStack incubated projects and programs that use a -specs
 repository have now been setup to publish these to
 http://specs.openstack.org. With the next merged in patch of a *-specs
 repository, the documentation will get published.

 Is there a way to manually trigger this for Nova and Neutron? As these 
 projects are well past their self-enforced spec approval deadlines there may 
 not be any merges to their respective -specs repositories for a while now.
 
 Just a dummy commit. I'm sure you find a typo to fix or something like
 that;)

https://review.openstack.org/#/c/111762/

-- 
Russell Bryant



Re: [openstack-dev] DevStack program change

2014-08-04 Thread Russell Bryant
On 08/04/2014 04:23 AM, Thierry Carrez wrote:
 Matthew Treinish wrote:
 On Fri, Aug 01, 2014 at 03:50:53PM -0500, Anne Gentle wrote:
 On Fri, Aug 1, 2014 at 10:48 AM, Dean Troyer dtro...@gmail.com wrote:

 I propose we de-program DevStack and consolidate it into the QA program.
  Some of my concerns about doing this in the beginning have proven to be a
 non-issue in practice.  Also, I believe a program's focus can and should be
 wider than we have implemented up to now and this a step toward
 consolidating narrowly defined programs.


 Sounds like a good idea to me, as long as QA PTL Matt is good with it.
 Thanks Dean for your service!

 Anne

 Yes, I fully support this idea as well. The scope of the 2 programs' missions
 are very much aligned, and the two teams are already working closely 
 together.
 So I think that consolidating the DevStack progam into the QA program is a 
 good
 idea moving forward.
 
 I'm all for it.
 
 The only reason why we didn't do that in the first place was that the QA
 program didn't want devstack (and considered its maintenance was not
 part of its mission). If that position changed, merging the two sounds
 good. The teams working on it are certainly mostly the same people anyway.
 

+1

-- 
Russell Bryant



Re: [openstack-dev] [Marconi] All-hands documentation day

2014-08-04 Thread Victoria Martínez de la Cruz
Next Thursday, August 7th then!

If someone else want to join us, please do.

Thanks all


2014-08-01 15:37 GMT-03:00 Kurt Griffiths kurt.griffi...@rackspace.com:

 I’m game for Thursday. Love to help out.

 On 8/1/14, 2:26 AM, Flavio Percoco fla...@redhat.com wrote:

 On 07/31/2014 09:57 PM, Victoria Martínez de la Cruz wrote:
  Hi everyone,
 
  Earlier today I went through the documentation requirements for
  graduation [0] and it looks like there is some work do to.
 
  The structure we should follow is detailed
  in https://etherpad.openstack.org/p/marconi-graduation.
 
  It would be nice to do an all-hands documentation day next week to make
  this happen.
 
  Can you join us? When is it better for you?
 
 Hey Vicky,
 
 Awesome work, thanks for putting this together.
 
 I'd propose doing it on Thursday since, hopefully, some other patches
 will land during that week that will require documentation too.
 
 Flavio,
 
 
  My best,
 
  Victoria
 
  [0]
 
 https://github.com/openstack/governance/blob/master/reference/incubation-
 integration-requirements.rst#documentation--user-support-1
 
 
 --
 @flaper87
 Flavio Percoco
 


[openstack-dev] [Neutron][DB] Strategy for collecting models for autogenerate

2014-08-04 Thread Jakub Libosvar
Hi all,

as a part of making db migrations unconditional we need to have all
models easily collectible [1]. Currently we have a head.py file
containing imports of all models. The disadvantage is that all imports need
to be collected manually and maintained whenever a new module with a model
is added to the tree.

Original suggested approach was to consolidate all models into one
directory providing simple import of all models. This approach wasn't
liked so I tried to implement alternatives.

1) os.walk the neutron directory, search for model_base.BASEV2 occurrences,
and import such modules.

2) Import all models in the tree excluding UT and neutron.db.migration
(since it contains frozen models).

I did simple benchmark of three mentioned approaches, could be found
here [2].

At the DB meeting today we agreed that the least painful way will be importing
all modules with pkgutil. Considering this will be used only in testing
and autogenerate, memory consumption and speed are IMHO acceptable.
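
For illustration, the pkgutil approach boils down to something like the sketch below. Since neutron itself isn't importable here, the demo walks a stdlib package instead; the `skip` filtering stands in for excluding the unit tests and neutron.db.migration:

```python
import importlib
import pkgutil

def import_all_modules(package, skip=()):
    """Recursively import every module under `package`, skipping any whose
    dotted name starts with an entry in `skip` (e.g. unit tests, or
    neutron.db.migration since it holds frozen models)."""
    modules = {}
    for _, name, _ in pkgutil.walk_packages(package.__path__,
                                            prefix=package.__name__ + "."):
        if any(name.startswith(s) for s in skip):
            continue
        modules[name] = importlib.import_module(name)
    return modules

# Demonstrated on a stdlib package here; for the use case above it would be
# roughly import_all_modules(neutron, skip=("neutron.tests",
# "neutron.db.migration")), after which every model_base.BASEV2 subclass is
# registered and collectible.
import json
mods = import_all_modules(json)
print(sorted(mods))
```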

We'd like to know your opinion, and especially Mark's opinion, on
mentioned approaches and if we can agree on the approach chosen on
today's meeting.

Thanks,
Kuba

[1] https://bugs.launchpad.net/neutron/+bug/1346658
[2] http://paste.openstack.org/show/89984/
- time tested with 100 iterations on Lenovo X230, 2 cores with
  hyperthreading Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz



Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-08-04 Thread Boris Pavlovic
Hi Sean,


I would consider  collaboration between different programs and their
missions as a 2 different topics.

To clear up the situation I made diagram that shows how new Operations
program is aligned with OpenStack Integrated release and OpenStack QA.

[image: Inline image 1]
This means that I agree that part of Rally belongs to QA, but on the other
hand it is going to present an OpenStack API, so it is also part of the
integrated release.
Rally is quite monolithic and can't be split; that is why I think Rally
should be a separate program (i.e. Rally's scope is just different from the
QA scope).
As well, it's not clear to me why collaboration is possible only within one
program. In my opinion, collaboration and programs are independent things.


About collaboration between the Rally & Tempest teams...
The major goal of integrating Tempest in Rally is to make it simpler to use
Tempest on production clouds via an OpenStack API.
This work requires a lot of collaboration between the teams; as you already
mentioned, we should work on improving duration measurement and
tempest.conf generation. I fully agree that this belongs to Tempest. By the
way, the Rally team is already helping with this part.

In my opinion, the end result should be something like: Rally just calls
Tempest (or a couple of scripts from Tempest) and stores the results in its
DB, presenting Tempest functionality to the end user via an OpenStack API.

To get this done, we should implement the next features in Tempest:
1) Automatic tempest.conf generation
2) Production-ready cleanup - Tempest should be absolutely safe to run
against a cloud
3) Improvements related to time measurement
4) Integration of OSprofiler & Tempest

So in any case I would prefer to continue the collaboration.

Thoughts?


Best regards,
Boris Pavlovic




On Mon, Aug 4, 2014 at 4:24 PM, Sean Dague s...@dague.net wrote:

 On 07/31/2014 06:55 AM, Angus Salkeld wrote:
  On Sun, 2014-07-27 at 07:57 -0700, Sean Dague wrote:
  On 07/26/2014 05:51 PM, Hayes, Graham wrote:
  On Tue, 2014-07-22 at 12:18 -0400, Sean Dague wrote:
  On 07/22/2014 11:58 AM, David Kranz wrote:
  On 07/22/2014 10:44 AM, Sean Dague wrote:
  Honestly, I'm really not sure I see this as a different program,
 but is
  really something that should be folded into the QA program. I feel
 like
  a top level effort like this is going to lead to a lot of
 duplication in
  the data analysis that's currently going on, as well as
 functionality
  for better load driver UX.
 
   -Sean
  +1
  It will also lead to pointless discussions/arguments about which
  activities are part of QA and which are part of
  Performance and Scalability Testing.
 
  I think that those discussions will still take place, it will just be
 on
  a per repository basis, instead of a per program one.
 
  [snip]
 
 
  Right, 100% agreed. Rally would remain with it's own repo + review
 team,
  just like grenade.
 
 -Sean
 
 
  Is the concept of a separate review team not the point of a program?
 
  In the the thread from Designate's Incubation request Thierry said [1]:
 
  Programs just let us bless goals and teams and let them organize
  code however they want, with contribution to any code repo under that
  umbrella being considered official and ATC-status-granting.
 
  I do think that this is something that needs to be clarified by the TC
 -
  Rally could not get a PTL if they were part of the QA project, but
 every
  time we get a program request, the same discussion happens.
 
  I think that mission statements can be edited to fit new programs as
  they occur, and that it is more important to let teams that have been
  working closely together to stay as a distinct group.
 
  My big concern here is that many of the things that these efforts have
  been doing are things we actually want much closer to the base. For
  instance, metrics on Tempest runs.
 
  When Rally was first created it had it's own load generator. It took a
  ton of effort to keep the team from duplicating that and instead just
  use some subset of Tempest. Then when measuring showed up, we actually
  said that is something that would be great in Tempest, so whoever ran
  it, be it for Testing, Monitoring, or Performance gathering, would have
  access to that data. But the Rally team went off in a corner and did it
  otherwise. That's caused the QA team to have to go and redo this work
  from scratch with subunit2sql, in a way that can be consumed by multiple
  efforts.
 
  So I'm generally -1 to this being a separate effort on the basis that so
  far the team has decided to stay in their own sandbox instead of
  participating actively where many of us thing the functions should be
  added. I also think this isn't like Designate, because this isn't
  intended to be part of the integrated release.
 
  From reading Boris's email it seems like rally will provide a horizon
  panel and api to back it (for the operator to kick of performance runs
  and view stats). So this does seem like something that would be a
  part of the 

Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-04 Thread Russell Bryant
On 08/04/2014 11:25 AM, Russell Bryant wrote:
 On 08/04/2014 08:13 AM, Andreas Jaeger wrote:
 On 08/04/2014 01:52 PM, Steve Gordon wrote:
 - Original Message -
 From: Andreas Jaeger a...@suse.com
 To: openstack-dev@lists.openstack.org

 All OpenStack incubated projects and programs that use a -specs
 repository have now been setup to publish these to
 http://specs.openstack.org. With the next merged in patch of a *-specs
 repository, the documentation will get published.

 Is there a way to manually trigger this for Nova and Neutron? As these 
 projects are well past their self-enforced spec approval deadlines there 
 may not be any merges to their respective -specs repositories for a while 
 now.

 Just a dummy commit. I'm sure you find a typo to fix or something like
 that;)
 
 https://review.openstack.org/#/c/111762/
 

And for Neutron: https://review.openstack.org/111765

-- 
Russell Bryant



Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-04 Thread Kyle Mestery
On Mon, Aug 4, 2014 at 10:46 AM, Russell Bryant rbry...@redhat.com wrote:
 On 08/04/2014 11:25 AM, Russell Bryant wrote:
 On 08/04/2014 08:13 AM, Andreas Jaeger wrote:
 On 08/04/2014 01:52 PM, Steve Gordon wrote:
 - Original Message -
 From: Andreas Jaeger a...@suse.com
 To: openstack-dev@lists.openstack.org

 All OpenStack incubated projects and programs that use a -specs
 repository have now been setup to publish these to
 http://specs.openstack.org. With the next merged in patch of a *-specs
 repository, the documentation will get published.

 Is there a way to manually trigger this for Nova and Neutron? As these 
 projects are well past their self-enforced spec approval deadlines there 
 may not be any merges to their respective -specs repositories for a while 
 now.

 Just a dummy commit. I'm sure you find a typo to fix or something like
 that;)

 https://review.openstack.org/#/c/111762/


 And for Neutron: https://review.openstack.org/111765

You beat me by a little bit, as I had this one:

https://review.openstack.org/111766

:)

 --
 Russell Bryant



Re: [openstack-dev] Glance Store Future

2014-08-04 Thread Jay Pipes

On 08/04/2014 10:29 AM, Sean Dague wrote:

On 08/04/2014 09:46 AM, Doug Hellmann wrote:

On Aug 4, 2014, at 9:27 AM, Jay Pipes jaypi...@gmail.com wrote:

On 08/04/2014 09:09 AM, Duncan Thomas wrote:

Duncan Thomas
On Aug 1, 2014 9:44 PM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:


Yup. Though I'd love for this code to live in olso, not glance...


Why Oslo? There seems to be a general obsession with getting things into
Oslo, but our (cinder team) general experiences with the end result have
been highly variable, to the point where we've discussed just saying no
to Oslo code since the pain is more than the gain. In this case, the
glance team are the subject matter experts, the glance interfaces and
internals are their core competency, I honestly can't see any value in
putting the project anywhere other than glance


2 reasons.

1) This is code that will be utilized by more than 1 project, and is a 
library, not a service endpoint. That seems to be right up the Oslo alley.

2) The mission of the Glance program has changed to being an application 
catalog service, not an image streaming service.

Best,
-jay


Oslo isn’t the only program that can produce reusable libraries, though. If the 
Glance team is going to manage this code anyway, it makes sense to leave it in 
the Glance program.


Agreed. Honestly it's better to keep the review teams close to the
expertise for the function at hand.

It needs to be ok that teams besides oslo create reusable components for
other parts of OpenStack. Oslo should be used for things where there
isn't a strong incumbent owner. I think we have a strong incumbent owner
here so living in Artifacts program makes sense to me.


Sure, fair points from all. If it can be imported/packaged without 
including all the legacy Glance code, then I'd be more behind keeping it 
in Glance...


-jay



[openstack-dev] [UX] Reminder: Bi-weekly meeting today

2014-08-04 Thread Liz Blanchard
Hi UXers,

I just wanted to send a quick reminder that we will be meeting about UX 
topics[1] in just under an hour on #openstack-meeting-3.

Hope to chat with you then!

Thanks,
Liz

[1] https://wiki.openstack.org/wiki/Meetings/UX#Previous_meetings


Re: [openstack-dev] [TripleO] devtest environment for virtual or true bare metal

2014-08-04 Thread Ben Nemec
On 08/04/2014 09:46 AM, LeslieWang wrote:
 Dear all,
 Looking at devtest pages at TripleO wiki 
 http://docs.openstack.org/developer/tripleo-incubator/devtest.html. I thought 
 all variables and configurations of devtest are for true bare-metal because I 
 see diskimage-builder options of both overcloud and undercloud doesn't 
 include vm option. But I see this configuration in 
 tripleo-incubator/scripts/devtest_testenv.sh, 
 ## #. Set the default bare metal power manager. By default devtest uses
 ##nova.virt.baremetal.virtual_power_driver.VirtualPowerManager to
 ##support a fully virtualized TripleO test environment. You may
 ##optionally customize this setting if you are using real baremetal
 ##hardware with the devtest scripts. This setting controls the
 ##power manager used in both the seed VM and undercloud for Nova 
 Baremetal.
 ##::
 
 
 POWER_MANAGER=${POWER_MANAGER:-'nova.virt.baremetal.virtual_power_driver.VirtualPowerManager'}
 Thus, it seems like all settings are for a virtual environment, not for true 
 bare metal, so I'm a little confused. Can anyone help clarify it? And what is 
 the right configuration of POWER_MANAGER when using real bare-metal hardware?
 Best Regards,
 Leslie

Devtest defaults to a virtual environment because that's what most
people working with it have available.  We try to stay as close to the
baremetal use case as possible though, which is why the vm element isn't
included in image builds.  The images are built to be deployed to
baremetal, and the virtual deployment just uses the VirtualPowerManager
to do that.

The correct power manager to use for real baremetal is going to depend
on the hardware you have available.  You'll have to look through the
available power managers and decide which is appropriate for you.

-Ben


 
 
 
 




Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-08-04 Thread Jay Pipes

On 08/04/2014 11:21 AM, Boris Pavlovic wrote:

Rally is quite monolithic and can't be split


I think this is one of the roots of the problem that folks like David
and Sean keep coming around to. If Rally were less monolithic, it
would be easier to say OK, bring this piece into Tempest, have this
piece be a separate library and live in the QA program, and have the 
service endpoint that allows operators to store and periodically measure 
SLA performance indicators against their cloud.


Incidentally, this is one of the problems that Scalr faced when applying 
for incubation, years ago, and one of the reasons the PPB at the time 
voted not to incubate Scalr: it had a monolithic design that crossed too 
many different lines in terms of duplicating functionality that already 
existed in a number of other projects.



, that is why I think Rally should be a separated program (i.e.
Rally scope is just different from QA scope). As well, It's not clear
for me, why collaboration is possible only in case of one program? In
my opinion collaboration & programs are irrelevant things.


Sure, it's certainly possible for collaboration to happen across
programs. I think what Sean is alluding to is the fact that the Tempest
and Rally communities have done little collaboration to date, and that
is worrying to him.


About collaboration between Rally & Tempest teams... Major goal of
integration Tempest in Rally is to make it simpler to use tempest on
production clouds via OpenStack API.


Plenty of folks run Tempest without Rally against production clouds as
an acceptance test platform. I see no real benefit to arguing that Rally
is for running against production clouds and Tempest is for
non-production clouds. There just isn't much of a difference there.

That said, an Operator Tools program is actually an entirely different
concept -- with a different audience and mission from the QA program. I
think you've seen here some initial support for such a proposed Operator
Tools program.

The problem I see is that Rally is not *yet* exposing the REST service
endpoint that would make it a full-fledged Operator Tool outside the
scope of its current QA focus. Once Rally does indeed publish a REST API
that exposes resource endpoints for an operator to store a set of KPIs
associated with an SLA, and allows the operator to store the run
schedule that Rally would use to go and test such metrics, *then* would
be the appropriate time to suggest that Rally be the pilot project in
this new Operator Tools program, IMO.
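
To make that concrete, here is a purely hypothetical sketch of the kind of resources such an API could store. None of these endpoint paths, metric names, or fields exist in Rally today; they are invented solely to illustrate the SLA/KPI/run-schedule idea described above:

```python
import json

# Hypothetical resource payloads -- every name below is invented for
# illustration; Rally exposes no such API today.
sla = {
    "name": "api-latency",
    "kpis": [
        {"metric": "nova.boot_server.duration_p95", "max_seconds": 10.0},
        {"metric": "keystone.authenticate.duration_p95", "max_seconds": 1.0},
    ],
}
schedule = {
    "sla": "api-latency",
    "cron": "0 */6 * * *",   # re-measure the KPIs every six hours
    "scenario": "boot-and-delete",
}

# e.g. POST /v1/slas and POST /v1/schedules with bodies like these
print(json.dumps({"sla": sla, "schedule": schedule}, indent=2))
```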


This work requires a lot of collaboration between teams, as you
already mention we should work on improving measuring durations and
tempest.conf generation. I fully agree that this belongs to Tempest.
By the way, Rally team is already helping with this part.

In my opinion, end result should be something like: Rally just calls
Tempest (or couple of scripts from tempest) and store results to its
DB, presenting to end user tempest functionality via OpenStack API.
To get this done, we should implement next features in tempest: 1)
Auto tempest.conf generation 2) Production ready cleanup - tempest
should be absolutely safe for run against cloud 3) Improvements
related to time measurement. 4) Integration of OSprofiler & Tempest.


I'm sure all of those things would be welcome additions to Tempest. At 
the same time, Rally contributors would do well to work on an initial 
REST API endpoint that would expose the resources I denoted above.


Best,
-jay


So in any case I would prefer to continue collaboration..

Thoughts?


Best regards, Boris Pavlovic




On Mon, Aug 4, 2014 at 4:24 PM, Sean Dague s...@dague.net
mailto:s...@dague.net wrote:

On 07/31/2014 06:55 AM, Angus Salkeld wrote:

On Sun, 2014-07-27 at 07:57 -0700, Sean Dague wrote:

On 07/26/2014 05:51 PM, Hayes, Graham wrote:

On Tue, 2014-07-22 at 12:18 -0400, Sean Dague wrote:

On 07/22/2014 11:58 AM, David Kranz wrote:

On 07/22/2014 10:44 AM, Sean Dague wrote:

Honestly, I'm really not sure I see this as a different program, but is
really something that should be folded into the QA program. I feel like
a top level effort like this is going to lead to a lot of duplication in
the data analysis that's currently going on, as well as functionality
for better load driver UX.

-Sean

+1 It will also lead to pointless discussions/arguments
about which activities are part of QA and which are part
of Performance and Scalability Testing.


I think that those discussions will still take place, it will just be on
a per repository basis, instead of a per program one.

[snip]



Right, 100% agreed. Rally would remain with it's own repo + review team,
just like grenade.

-Sean



Is the concept of a separate review team not the point of a program?


In the thread from Designate's Incubation request Thierry said [1]:



Programs just let us bless goals and teams and let them organize code
however they want, with contribution to any code repo under that
umbrella 

Re: [openstack-dev] Step by step OpenStack Icehouse Installation Guide

2014-08-04 Thread chayma ghribi
Hi !

Thank you for the comment Qiming !

The script stack.sh is used to configure DevStack and to assign the
heat_stack_owner role to users.
Also, I think that Heat is configured by default in DevStack for Icehouse.
http://docs.openstack.org/developer/heat/getting_started/on_devstack.html#configure-devstack-to-enable-heat

In our guide we are not installing using DevStack.
We are creating and managing stacks with Heat, and we have no errors!

If you have some examples of tests (or scenarios) that would help us identify
errors and improve the guide, please don't hesitate to contact us ;)
All your contributions are welcome :)

Regards,

Chaima Ghribi





2014-08-04 8:13 GMT+02:00 Qiming Teng teng...@linux.vnet.ibm.com:

 Thanks for the efforts.  Just want to add some comments on installing
 and configuring Heat, since an incomplete setup may cause bizarre
 problems later on when users start experiments.

 Please refer to devstack script below for proper configuration of Heat:

 https://github.com/openstack-dev/devstack/blob/master/lib/heat#L68

 and the function create_heat_accounts at the link below which helps
 create the required Heat accounts.

 https://github.com/openstack-dev/devstack/blob/master/lib/heat#L214

 Regards,
   Qiming

 On Sun, Aug 03, 2014 at 12:49:22PM +0200, chayma ghribi wrote:
  Dear All,
 
  I want to share with you our OpenStack Icehouse Installation Guide for
  Ubuntu 14.04.
 
 
 https://github.com/ChaimaGhribi/OpenStack-Icehouse-Installation/blob/master/OpenStack-Icehouse-Installation.rst
 
  An additional  guide for Heat service installation is also available ;)
 
 
 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst
 
  Hope this manuals will be helpful and simple !
  Your contributions are welcome, as are questions and suggestions :)
 
  Regards,
  Chaima Ghribi





Re: [openstack-dev] Glance Store Future

2014-08-04 Thread Flavio Percoco
On 08/04/2014 05:56 PM, Jay Pipes wrote:
 On 08/04/2014 10:29 AM, Sean Dague wrote:
 On 08/04/2014 09:46 AM, Doug Hellmann wrote:
 On Aug 4, 2014, at 9:27 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 08/04/2014 09:09 AM, Duncan Thomas wrote:
 Duncan Thomas
 On Aug 1, 2014 9:44 PM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:

 Yup. Though I'd love for this code to live in olso, not glance...

 Why Oslo? There seems to be a general obsession with getting things
 into
 Oslo, but our (cinder team) general experiences with the end result
 have
 been highly variable, to the point where we've discussed just
 saying no
 to Oslo code since the pain is more than the gain. In this case, the
 glance team are the subject matter experts, the glance interfaces and
 internals are their core competency, I honestly can't see any value in
 putting the project anywhere other than glance

 2 reasons.

 1) This is code that will be utilized by more than 1 project, and is a
 library, not a service endpoint. That seems to be right up the Oslo
 alley.

 2) The mission of the Glance program has changed to being an
 application catalog service, not an image streaming service.

 Best,
 -jay

 Oslo isn’t the only program that can produce reusable libraries,
 though. If the Glance team is going to manage this code anyway, it
 makes sense to leave it in the Glance program.

 Agreed. Honestly it's better to keep the review teams close to the
 expertise for the function at hand.

 It needs to be ok that teams besides oslo create reusable components for
 other parts of OpenStack. Oslo should be used for things where there
 isn't a strong incumbent owner. I think we have a strong incumbent owner
 here so living in Artifacts program makes sense to me.
 
 Sure, fair points from all. If it can be imported/packaged without
 including all the legacy Glance code, then I'd be more behind keeping it
 in Glance...
 

FWIW, it's already like that. I'm working on the Glance port now[0],
which passes locally but not in the gate due to glance.store not being
released yet.

[0] https://review.openstack.org/#/c/100636/

Cheers,
Flavio



-- 
@flaper87
Flavio Percoco



[openstack-dev] [Trove] Trove Blueprint Meeting on 4 Aug canceled

2014-08-04 Thread Nikhil Manchanda
Hey folks:

There's nothing to discuss on the BP Agenda for this week and most folks
are busy working on existing BPs and bugs, so I'd like to cancel the
Trove blueprint meeting for this week.

See you guys at the regular Trove meeting on Wednesday.

Thanks,
Nikhil



[openstack-dev] [mistral] Team meeting minutes/log - 08/04/2014

2014-08-04 Thread Renat Akhmerov
Thanks for joining us today. 

As usually,
Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-08-04-16.00.html
Log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-08-04-16.00.log.html

Meeting archive: https://wiki.openstack.org/wiki/Meetings/MistralAgenda

The next meeting will be held on Aug 11.

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-04 Thread Andreas Jaeger
Great, I've updated my patch to add neutron and nova to the index page.

For now read the specs using:
http://specs.openstack.org/openstack/neutron-specs/
http://specs.openstack.org/openstack/nova-specs/

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



Re: [openstack-dev] Re: [Neutron] Auth token in context

2014-08-04 Thread Kevin Benton
That makes sense. Is there a patch up for review to make this available in
the context?
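
For context, the change being discussed is small: read the token header in Neutron's keystone context middleware and hand it to the context, as Nova/Cinder do. The sketch below uses simplified stand-in classes to show the idea; it is not the actual Neutron code or any proposed patch:

```python
class RequestContext:
    """Minimal stand-in for neutron.context.Context."""
    def __init__(self, user_id, tenant_id, roles=None, auth_token=None):
        self.user_id = user_id
        self.tenant_id = tenant_id
        self.roles = roles or []
        self.auth_token = auth_token

def build_context(headers):
    """What the middleware would do with the headers keystone sets: read
    X-Auth-Token (as Nova/Cinder do) and pass it to the context instead of
    dropping it."""
    return RequestContext(
        user_id=headers.get("X-User-Id"),
        tenant_id=headers.get("X-Tenant-Id"),
        roles=[r.strip() for r in headers.get("X-Roles", "").split(",") if r],
        auth_token=headers.get("X-Auth-Token"),  # the previously missing piece
    )

ctx = build_context({"X-User-Id": "u1", "X-Tenant-Id": "t1",
                     "X-Roles": "admin", "X-Auth-Token": "tok-123"})
print(ctx.auth_token)  # -> tok-123
```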


On Mon, Aug 4, 2014 at 8:21 AM, Isaku Yamahata isaku.yamah...@gmail.com
wrote:

 ServiceVM wants the auth token.
 When creating an L3 router which runs inside a VM, it launches the VM.
 So Neutron interacts with other projects like the servicevm server or Nova.

 thanks,


 On Sun, Jul 20, 2014 at 12:14:54AM -0700,
 Kevin Benton blak...@gmail.com wrote:

  That makes sense. Shouldn't we wait for something to require it before
  adding it though?
 
 
  On Sat, Jul 19, 2014 at 11:41 PM, joehuang joehu...@huawei.com wrote:
 
Hello, Kevin
  
  
  
    The leakage risk may be one of the design purposes. But Nova/Cinder have
    already stored the token into the context, because Nova needs to access
    Neutron/Cinder/Glance, and Cinder interacts with Glance.
  
  
  
    For Neutron, I think the reason the token has not been passed to the
    context is that Neutron currently only reactively provides a service
    (exactly, PORT) to Nova, so Neutron has not called other services' APIs
    using the token.
  
  
  
    If the underlying agent or plugin wants to use the token, then somebody
    will raise the requirement.
  
  
  
   BR
  
  
  
   Joe
  
  
--
   *From:* Kevin Benton [blak...@gmail.com]
   *Sent:* July 19, 2014 4:23

   *To:* OpenStack Development Mailing List (not for usage questions)
   *Subject:* Re: [openstack-dev] [Neutron] Auth token in context
  
 I suspect it was just excluded since it is authenticating information
   and there wasn't a good use case to pass it around everywhere in the
   context where it might be leaked into logs or other network requests
   unexpectedly.
  
  
   On Fri, Jul 18, 2014 at 1:10 PM, Phillip Toohill 
   phillip.tooh...@rackspace.com wrote:
  
    It was for more of a potential use to query another service. Don't
    think we'll go this route though, but was curious why it was one of
    the only values not populated even though there's a field for it.
  
 From: Kevin Benton blak...@gmail.com
   Reply-To: OpenStack Development Mailing List (not for usage
 questions)
   openstack-dev@lists.openstack.org
   Date: Friday, July 18, 2014 2:16 PM
   To: OpenStack Development Mailing List (not for usage questions) 
   openstack-dev@lists.openstack.org
   Subject: Re: [openstack-dev] [Neutron] Auth token in context
  
 What are you trying to use the token to do?
  
  
   On Fri, Jul 18, 2014 at 9:16 AM, Phillip Toohill 
   phillip.tooh...@rackspace.com wrote:
  
   Excellent! Thank you for the response, I figured it was possible,
 just
    concerned me as to why everything else made it to context except for the
   token.
  
   So to be clear, you agree that it should at least be passed to
 context
   and
    because it's not, could be deemed a bug?
  
   Thank you
  
   On 7/18/14 2:03 AM, joehuang joehu...@huawei.com wrote:
  
   Hello, Phillip.
   
    Currently, Neutron does not pass the token to the context. But
    Nova/Cinder
    do. It's easy to fix that: just 'copy' from Nova/Cinder.
   
   1.  How Nova/Cinder did that
   class NovaKeystoneContext(wsgi.Middleware)
   ///or CinderKeystoneContext for cinder
   
        auth_token = req.headers.get('X_AUTH_TOKEN',
                                     req.headers.get('X_STORAGE_TOKEN'))
        ctx = context.RequestContext(user_id,
                                     project_id,
                                     user_name=user_name,
                                     project_name=project_name,
                                     roles=roles,
                                     auth_token=auth_token,
                                     remote_address=remote_address,
                                     service_catalog=service_catalog)
   
    2.  Neutron does not pass the token. This is also not good for third-party
    network infrastructure integrating authentication with Keystone.
    class NeutronKeystoneContext(wsgi.Middleware)
    ...
    # The token is not read from the header and not passed to the context.
    # Just change here like what Nova/Cinder did.
        ctx = context.Context(user_id, tenant_id, roles=roles,
                              user_name=user_name,
                              tenant_name=tenant_name,
                              request_id=req_id)
        req.environ['neutron.context'] = ctx
   
   I think I'd better to report a bug for your case.
   
   Best Regards
   Chaoyi Huang ( Joe Huang )
    -----Original Message-----
    From: Phillip Toohill [mailto:phillip.tooh...@rackspace.com]
    Sent: July 18, 2014 14:07
    To: OpenStack Development Mailing List (not for usage questions)
    Subject: [openstack-dev] [Neutron] Auth token in context
   
   Hello all,
   
   I am wondering how to get the auth token from a user request passed
 down
   to the context so it can potentially be used by the plugin or
 driver?
   
   Thank you
   
   
   ___
   OpenStack-dev mailing list
   

Re: [openstack-dev] Re: [Neutron] Auth token in context

2014-08-04 Thread Mohammad Banikazemi
Yes, Here: https://review.openstack.org/#/c/111756/



From:   Kevin Benton blak...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Cc: isaku.yamah...@gmail.com
Date:   08/04/2014 01:01 PM
Subject:Re: [openstack-dev] Re: [Neutron] Auth token in context



That makes sense. Is there a patch up for review to make this available in
the context?


On Mon, Aug 4, 2014 at 8:21 AM, Isaku Yamahata isaku.yamah...@gmail.com
wrote:
  ServiceVM wants the auth token.
  When creating an l3 router which runs inside a VM, it launches the VM.
  So neutron interacts with other projects like the servicevm server or nova.

  thanks,


  On Sun, Jul 20, 2014 at 12:14:54AM -0700,
  Kevin Benton blak...@gmail.com wrote:

   That makes sense. Shouldn't we wait for something to require it before
   adding it though?
  
  
   On Sat, Jul 19, 2014 at 11:41 PM, joehuang joehu...@huawei.com wrote:
  
 Hello, Kevin
   
   
   
The leakage risk may be one of the design purposes. But Nova/Cinder has
already stored the token into the context, because Nova needs to access
Neutron/Cinder/Glance, and Cinder interacts with Glance.
   
   
   
For Neutron, I think the token has not been passed to the context
because Neutron currently only reactively provides service (specifically,
ports) to Nova, so Neutron has not called other services' APIs using the
token.
   
   
   
If the underlying agent or plugin wants to use the token, then the
requirement will be asked by somebody.
   
   
   
BR
   
   
   
Joe
   
   
 --
*From:* Kevin Benton [blak...@gmail.com]
*Sent:* July 19, 2014 4:23

*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [Neutron] Auth token in context
   
  I suspect it was just excluded since it is authenticating
  information
and there wasn't a good use case to pass it around everywhere in the
context where it might be leaked into logs or other network requests
unexpectedly.
   
   
On Fri, Jul 18, 2014 at 1:10 PM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:
   
 It was for more of a potential use to query another service. Don't
think we'll go this route though, but was curious why it was one of the only
values not populated even though there's a field for it.
   
  From: Kevin Benton blak...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage
  questions)
openstack-dev@lists.openstack.org
Date: Friday, July 18, 2014 2:16 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Auth token in context
   
  What are you trying to use the token to do?
   
   
On Fri, Jul 18, 2014 at 9:16 AM, Phillip Toohill 
phillip.tooh...@rackspace.com wrote:
   
Excellent! Thank you for the response, I figured it was possible,
  just
concerned me as to why everything else made it to context except for
  the
token.
   
So to be clear, you agree that it should at least be passed to
  context
and
because it's not, could be deemed a bug?
   
Thank you
   
On 7/18/14 2:03 AM, joehuang joehu...@huawei.com wrote:
   
Hello, Phillip.

Currently, Neutron does not pass the token to the context. But
Nova/Cinder
do. It's easy to fix that: just 'copy' from Nova/Cinder.

1.  How Nova/Cinder did that
class NovaKeystoneContext(wsgi.Middleware)
///or CinderKeystoneContext for cinder

              auth_token = req.headers.get('X_AUTH_TOKEN',
                                           req.headers.get('X_STORAGE_TOKEN'))
              ctx = context.RequestContext(user_id,
                                           project_id,
                                           user_name=user_name,
                                           project_name=project_name,
                                           roles=roles,
                                           auth_token=auth_token,
                                           remote_address=remote_address,
                                           service_catalog=service_catalog)

2.  Neutron does not pass the token. This is also not good for third-party
network infrastructure integrating authentication with Keystone.
class NeutronKeystoneContext(wsgi.Middleware)
...
# The token is not read from the header and not passed to the context.
# Just change here like what Nova/Cinder did.
        ctx = context.Context(user_id, tenant_id, roles=roles,
                              user_name=user_name,
                              tenant_name=tenant_name,
                              request_id=req_id)
        req.environ['neutron.context'] = ctx

I think I'd better to report a bug for your case.

Best Regards
Chaoyi Huang ( Joe Huang )
-----Original Message-----
From: 
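Summarizing the fix discussed in this thread: a minimal, hypothetical sketch (not actual Neutron code) of reading the token in Neutron's keystone middleware and storing it on the context, mirroring the quoted Nova/Cinder snippets. The `make_context` helper and dict-based context are illustration-only stand-ins for `NeutronKeystoneContext` and `context.Context`:

```python
# Hypothetical sketch: mirror Nova's middleware by reading the token
# from the WSGI request headers and passing it into the context so
# plugins/drivers can use it.
def make_context(headers, user_id, tenant_id, **kwargs):
    auth_token = headers.get('X_AUTH_TOKEN',
                             headers.get('X_STORAGE_TOKEN'))
    # Stand-in for context.Context(...); the real change would add
    # auth_token=auth_token to the constructor call.
    ctx = {'user_id': user_id, 'tenant_id': tenant_id,
           'auth_token': auth_token}
    ctx.update(kwargs)
    return ctx

ctx = make_context({'X_AUTH_TOKEN': 'abc123'}, 'user-1', 'tenant-1',
                   roles=['member'])
# ctx['auth_token'] == 'abc123'
```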

[openstack-dev] [Infra] Meeting Tuesday August 5th at 19:00 UTC

2014-08-04 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday August 5th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

Meeting log and minutes from our meeting last week available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-07-29-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-07-29-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-07-29-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



[openstack-dev] [Neutron][policy] Long standing -2 on Group-based policy patch

2014-08-04 Thread Sumit Naiksatam
The first patch[1] of this high priority approved blueprint[2][3]
targeted for Juno-3 has been blocked by a core reviewer’s (Mark
McClain) -2 since July 2nd. This patch was at patch-set 13 then, and
has been repeatedly reviewed and updated to the current patch-set 22.
However, there has been no comment or reason forthcoming from the
reviewer on why the -2 still persists. The dependent patches have also
gone through numerous iterations since.

There is a team of several people working on this feature across
different OpenStack projects since Oct 2013. The feature has been
discussed and iterated over weekly IRC meetings[4], design sessions
spanning two summits, mailing list threads, a working PoC
implementation, and a code sprint. Blocking the first patch required
for this feature jeopardizes this year-long effort and the delivery of
this feature in Juno. For many, this may sound like an all too
familiar story[5], and hence this team would like to mitigate the
issue while there is still time.

Mark, can you please explain why your -2 still persists? If there are
no major outstanding issues, can you please remove the -2?

Neutron-PTL, can you please provide guidance on how we can make
progress on this feature?

For the benefit of anyone not familiar with this feature, please see [6].

Thanks,
Group-based Policy Team.

[1] https://review.openstack.org/#/c/95900/
[2] 
https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
[3] https://review.openstack.org/#/c/95900
[4] https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy
[5] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041651.html
[6] https://wiki.openstack.org/wiki/Neutron/GroupPolicy



Re: [openstack-dev] [TripleO] a small experiment with Ansible in TripleO

2014-08-04 Thread Allison Randal
On 08/04/2014 08:19 AM, Clint Byrum wrote:
 I've been fiddling on github. This repo is unfortunately named the same
 but is not the same ancestry as yours. Anyway, the branch 'fiddling' has
 a working Heat inventory plugin which should give you a hostvar of
 'heat_metadata' per host in the given stack.

Cool, Monty and I have that merged now. And, I've got a working Ansible
module for nova rebuild:

https://github.com/allisonrandal/tripleo-ansible/blob/master/library/cloud/nova_rebuild

 Note that in the root there is a 'heat-ansible-inventory.conf' that is
 an example config (works w/ devstack) to query a heat stack and turn it
 into an ansible inventory. That uses oslo.config so all of the usual
 patterns for loading configs in openstack should apply.

This is really elegant, I'll follow suit in the example playbook for
rebuild.

Allison
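For anyone unfamiliar with dynamic inventories: a plugin like the Heat one described above simply prints JSON mapping groups to hosts, with per-host variables (such as the 'heat_metadata' hostvar) under `_meta`. A toy sketch of that output shape, with entirely hypothetical stack/host data:

```python
import json

# Toy sketch of what an Ansible dynamic inventory script emits:
# group names map to host lists, and per-host variables live under
# _meta.hostvars (here, the 'heat_metadata' hostvar mentioned above).
inventory = {
    'overcloud': {'hosts': ['10.0.0.5']},
    '_meta': {
        'hostvars': {
            '10.0.0.5': {'heat_metadata': {'role': 'compute'}},
        },
    },
}
print(json.dumps(inventory, indent=2))
```

Ansible invokes such a script with `--list` and consumes the printed JSON directly.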



Re: [openstack-dev] [Tripleo] photos of whiteboards?

2014-08-04 Thread Dan Prince
On Sun, 2014-08-03 at 07:59 +1200, Robert Collins wrote:
 We had a few whiteboard photos taken during the sprint, but I can't
 find where they were posted :/
 
 Right now I'm looking for the one with the list of priority CI jobs,

I don't have the photo. But it looks like someone wrote them down on the
etherpad here (see line 95):

https://etherpad.openstack.org/p/juno-midcycle-meetup

 
 -Rob
 





[openstack-dev] [Neutron] Addressing unit tests broken by random PYTHONHASHSEED

2014-08-04 Thread Henry Gessau
Please see this bug:  https://launchpad.net/bugs/1348818

I innocently assigned myself to this bug for Neutron. However, there are a
very large number of Neutron unit tests that are broken by random hash seeds.
I think multiple people should work on fixing the tests.

We don't want to have multiple people doing the same fixes simultaneously, so
I have created an etherpad[1] to list the broken tests and allow people to
sign up for fixing them. Please read the instructions carefully.

This is not urgent work for Neutron for Juno. Please prioritize other work
first. However, there are several low-hanging-fruit fixes in the list, which
may be good for new developers. Some of them are not so trivial though, so be
careful.

[1] https://etherpad.openstack.org/p/neutron-random-hashseed
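For anyone unfamiliar with the failure mode: a test that asserts on dict/set iteration order can pass with PYTHONHASHSEED=0 yet fail under a random seed. A minimal illustration (hypothetical test data) of the fragile pattern and the order-insensitive fix:

```python
# Set/dict iteration order varies with PYTHONHASHSEED, so asserting
# on a particular ordering makes a test nondeterministic, e.g.:
#     self.assertEqual(['default', 'web', 'db'], list(security_groups))
security_groups = {'web', 'db', 'default'}

# Order-insensitive comparisons are stable under any hash seed:
assert sorted(security_groups) == ['db', 'default', 'web']
assert security_groups == {'default', 'web', 'db'}
```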



Re: [openstack-dev] [Neutron] Addressing unit tests broken by random PYTHONHASHSEED

2014-08-04 Thread Salvatore Orlando
Hi Henry,

Are the fixes pushed with patches [1] and [2], which amend tox.ini,
insufficient?

Salvatore

[1] https://review.openstack.org/#/c/109888/
[2] https://review.openstack.org/#/c/109729/


On 4 August 2014 20:42, Henry Gessau ges...@cisco.com wrote:

 Please see this bug:  https://launchpad.net/bugs/1348818

 I innocently assigned myself to this bug for Neutron. However, there are a
 very large number of Neutron unit tests that are broken by random hash
 seeds.
 I think multiple people should work on fixing the tests.

 We don't want to have multiple people doing the same fixes simultaneously,
 so
 I have created an etherpad[1] to list the broken tests and allow people to
 sign up for fixing them. Please read the instructions carefully.

 This is not urgent work for Neutron for Juno. Please prioritize other work
 first. However, there are several low-hanging-fruit fixes in the list,
 which
 may be good for new developers. Some of them are not so trivial though, so
 be
 careful.

 [1] https://etherpad.openstack.org/p/neutron-random-hashseed




Re: [openstack-dev] [Neutron] Addressing unit tests broken by random PYTHONHASHSEED

2014-08-04 Thread Henry Gessau
Salvatore Orlando sorla...@nicira.com wrote:
 Hi Henry,
 
 Are the fixes pushed with patches [1], and [2], which amend tox.ini, 
 insufficient?
 
 Salvatore
 
 [1] https://review.openstack.org/#/c/109888/

This is for functional tests, not unit tests. Subtle difference. :)

 [2] https://review.openstack.org/#/c/109729/

This masks the errors by brute-forcing the hash to zero, which preserves
current behaviour. If we want to (and we do, eventually) remove this
work-around, then we need to fix all the broken test cases. The etherpad has
all the details.

 
 
 On 4 August 2014 20:42, Henry Gessau ges...@cisco.com
 mailto:ges...@cisco.com wrote:
 
 Please see this bug:  https://launchpad.net/bugs/1348818
 
 I innocently assigned myself to this bug for Neutron. However, there are a
 very large number of Neutron unit tests that are broken by random hash 
 seeds.
 I think multiple people should work on fixing the tests.
 
 We don't want to have multiple people doing the same fixes 
 simultaneously, so
 I have created an etherpad[1] to list the broken tests and allow people to
 sign up for fixing them. Please read the instructions carefully.
 
 This is not urgent work for Neutron for Juno. Please prioritize other work
 first. However, there are several low-hanging-fruit fixes in the list, 
 which
 may be good for new developers. Some of them are not so trivial though, 
 so be
 careful.
 
 [1] https://etherpad.openstack.org/p/neutron-random-hashseed
 
 
 
 
 
 




Re: [openstack-dev] [Congress] data-source renovation

2014-08-04 Thread Alex Yip
Hi all,

I favor the first approach because it solves the usability problem of wide 
tables without limiting Congress' ability to use wide tables, or adding extra 
complexity.

There are legitimate uses for wide tables, so Congress should be able to 
support them.  For example, Congress will need to support very large data 
sources in the future (TB in size).  It is best if Congress uses those 
databases in place, without creating a local copy of the database, so 
supporting wide tables and making them easy to use in the policy language will 
be a win for the future.

For Con (i) (we will need to invert the preprocessor when showing 
rules/traces/etc. to the user), we can keep the translated policies hidden from 
the user.  The user should only see policies that he wrote.

For Con (ii) (a layer of translation makes debugging difficult), the 
translation layer would be akin to a C preprocessor.  It will be possible to 
match up items on both sides of the translation layer.

- Alex


 Option 2 looks like a better idea keeping in mind the data model
 consistency with Neutron/Nova.
 Could we write something similar to a view which becomes a layer on top of
 this data model?


From: Tim Hinrichs
Sent: Tuesday, July 29, 2014 3:03 PM
To: openstack-dev@lists.openstack.org
Cc: Alex Yip
Subject: [Congress] data-source renovation

Hi all,

As I mentioned in a previous IRC, when writing our first few policies I had 
trouble using the tables we currently use to represent external data sources 
like Nova/Neutron.

The main problem is that wide tables (those with many columns) are hard to use: 
(a) it is hard to remember what all the columns are, (b) it is easy to 
mistakenly use the same variable in two different tables in the body of the 
rule, i.e. to create an accidental join, and (c) changes to the datasource drivers 
can require tedious/error-prone modifications to policy.

I see several options.  Once we choose something, I’ll write up a spec and 
include the other options as alternatives.


1) Add a preprocessor to the policy engine that makes it easier to deal with 
large tables via named-argument references.

Instead of writing a rule like

p(port_id, name) :-
neutron:ports(port_id, addr_pairs, security_groups, extra_dhcp_opts, 
binding_cap, status, name, admin_state_up, network_id, tenant_id, binding_vif, 
device_owner, mac_address, fixed_ips, router_id, binding_host)

we would write

p(id, nme) :-
neutron:ports(port_id=id, name=nme)

The preprocessor would fill in all the missing variables and hand the original 
rule off to the Datalog engine.

Pros: (i) leveraging vanilla database technology under the hood
  (ii) policy is robust to changes in the fields of the original data b/c 
the Congress data model is different than the Nova/Neutron data models
Cons: (i) we will need to invert the preprocessor when showing 
rules/traces/etc. to the user
  (ii) a layer of translation makes debugging difficult
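To make option 1 concrete, here is a toy sketch (not Congress code) of how such a preprocessor could expand named-argument references into a full positional atom. The abbreviated `SCHEMA` and the `_v*` variable-naming scheme are illustrative assumptions:

```python
import itertools

# Hypothetical, abbreviated schema: table name -> ordered column list.
# (The real neutron:ports table has many more columns.)
SCHEMA = {'neutron:ports': ['port_id', 'addr_pairs', 'security_groups', 'name']}
_fresh = itertools.count()

def expand(table, named_args):
    """Expand named-argument references into a positional Datalog atom,
    filling unmentioned columns with fresh "don't care" variables."""
    args = []
    for col in SCHEMA[table]:
        if col in named_args:
            args.append(named_args[col])
        else:
            args.append('_v%d' % next(_fresh))
    return '%s(%s)' % (table, ', '.join(args))

# neutron:ports(port_id=id, name=nme) expands to a full 4-ary atom:
atom = expand('neutron:ports', {'port_id': 'id', 'name': 'nme'})
print(atom)  # neutron:ports(id, _v0, _v1, nme)
```

Inverting this mapping (remembering which fresh variables came from which named references) is exactly what con (i) refers to.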

2) Be disciplined about writing narrow tables and write 
tutorials/recommendations demonstrating how.

Instead of a table like...
neutron:ports(port_id, addr_pairs, security_groups, extra_dhcp_opts, 
binding_cap, status, name, admin_state_up, network_id, tenant_id, binding_vif, 
device_owner, mac_address, fixed_ips, router_id, binding_host)

we would have many tables...
neutron:ports(port_id)
neutron:ports.addr_pairs(port_id, addr_pairs)
neutron:ports.security_groups(port_id, security_groups)
neutron:ports.extra_dhcp_opts(port_id, extra_dhcp_opts)
neutron:ports.name(port_id, name)
...

People writing policy would write rules such as ...

p(x) :- neutron:ports.name(port, name), ...

[Here, the period e.g. in ports.name is not an operator--just a convenient way 
to spell the tablename.]

To do this, Congress would need to know which columns in a table are sufficient 
to uniquely identify a row, which in most cases is just the ID.

Pros: (i) this requires only changes in the datasource drivers; everything else 
remains the same
  (ii) still leveraging database technology under the hood
  (iii) policy is robust to changes in fields of original data
Cons: (i) datasource driver can force policy writer to use wide tables
  (ii) this data model is much different than the original data models
  (iii) we need primary-key information about tables
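Similarly, option 2 can be illustrated with a toy sketch (hypothetical, not the actual datasource-driver code) of decomposing one wide row into the narrow tables shown above, given the primary-key column:

```python
def narrow_tables(table, rows, pk='port_id'):
    """Split wide rows into a (pk,) table plus one (pk, value)
    table per remaining column, as in option 2."""
    out = {table: []}
    for row in rows:
        key = row[pk]
        out[table].append((key,))
        for col, val in row.items():
            if col != pk:
                out.setdefault('%s.%s' % (table, col), []).append((key, val))
    return out

port = {'port_id': 'p1', 'name': 'web', 'status': 'ACTIVE'}
tables = narrow_tables('neutron:ports', [port])
# tables['neutron:ports']      == [('p1',)]
# tables['neutron:ports.name'] == [('p1', 'web')]
```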

3) Enhance the Congress policy language to handle objects natively.

Instead of writing a rule like the following ...

p(port_id, name, group) :-
neutron:ports(port_id, addr_pairs, security_groups, extra_dhcp_opts, 
binding_cap, status, name, admin_state_up, network_id, tenant_id, binding_vif, 
device_owner, mac_address, fixed_ips, router_id, binding_host),
neutron:ports.security_groups(security_group, group)

we would write a rule such as
p(port_id, name) :-
neutron:ports(port),
port.name(name),
port.id(port_id),
port.security_groups(group)

The big difference here is that the period (.) 

[openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-04 Thread Mark McClain
All-

tl;dr

* Group Based Policy API is the kind of experimentation we should be attempting.
* Experiments should be able to fail fast.
* The master branch does not fail fast.
* StackForge is the proper home to conduct this experiment.


Why this email?
---
Our community has been discussing and working on Group Based Policy (GBP) for 
many months.  I think the discussion has reached a point where we need to 
openly discuss a few issues before moving forward.  I recognize that this 
discussion could create frustration for those who have invested significant 
time and energy, but the reality is we need to ensure we are making decisions that 
benefit all members of our community (users, operators, developers and vendors).

Experimentation

I like that as a community we are exploring alternate APIs.  The process of 
exploring via real user experimentation can produce valuable results.  A good 
experiment should be designed to fail fast to enable further trials via rapid 
iteration.

Merging large changes into the master branch is the exact opposite of failing 
fast.

The master branch deliberately favors small iterative changes over time.  
Releasing a new version of the proposed API every six months limits our ability 
to learn and make adjustments.

In the past, we’ve released LBaaS, FWaaS, and VPNaaS as experimental APIs.  The 
results have been very mixed as operators either shy away from testing/offering 
the API or embrace the API with the expectation that the community will provide 
full API support and migration.  In both cases, the experiment fails because we 
either could not get the data we need or are unable to make significant changes 
without accepting a non-trivial amount of technical debt via migrations or 
draft API support.

Next Steps
--
Previously, the GBP subteam used a Github account to host the development, but 
the workflows and tooling do not align with OpenStack's development model. I’d 
like to see us create a group based policy project in StackForge.  StackForge 
will host the code and enable us to follow the same open review and QA 
processes we use in the main project while we are developing and testing the 
API. The infrastructure there will benefit us as we will have a separate review 
velocity and can frequently publish libraries to PyPI.  From a technical 
perspective, the 13 new entities in GBP [1] do not require any changes to 
internal Neutron data structures.  The docs[2] also suggest that an external 
plugin or service would work to make it easier to speed development.

End State
-
APIs require time to fully bake and right now it is too early to know the final 
outcome.  Using StackForge will allow the team to retain all of its options 
including: merging the code into Neutron, adopting the repository as a 
sub-project of the Network Program, leaving the project in StackForge, 
or learning that users want something completely different.  I would expect 
that we'll revisit the status of the repo during the L or M cycles since the 
Kilo development cycle does not leave enough time to experiment and iterate.


mark

[1] 
http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/juno/group-based-policy-abstraction.rst#n370
[2] 
https://docs.google.com/presentation/d/1Nn1HjghAvk2RTPwvltSrnCUJkidWKWY2ckU7OYAVNpo/edit#slide=id.g12c5a79d7_4078
[3]


Re: [openstack-dev] [Neutron][policy] Long standing -2 on Group-based policy patch

2014-08-04 Thread Mark McClain

On Aug 4, 2014, at 1:16 PM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote:

 The first patch[1] of this high priority approved blueprint[2][3]
 targeted for Juno-3 has been blocked by a core reviewer’s (Mark
 McClain) -2 since July 2nd. This patch was at patch-set 13 then, and
 has been repeatedly reviewed and updated to the current patch-set 22.
 However, there has been no comment or reason forthcoming from the
 reviewer on why the -2 still persists. The dependent patches have also
 gone through numerous iterations since.
 
 There is a team of several people working on this feature across
 different OpenStack projects since Oct 2013. The feature has been
 discussed and iterated over weekly IRC meetings[4], design sessions
 spanning two summits, mailing list threads, a working PoC
 implementation, and a code sprint. Blocking the first patch required
 for this feature jeopardizes this year-long effort and the delivery of
 this feature in Juno. For many, this may sound like an all too
 familiar story[5], and hence this team would like to mitigate the
 issue while there is still time.
 
 Mark, can you please explain why your -2 still persists? If there are
 no major outstanding issues, can you please remove the -2?


There are issues outside of the code itself, I’ve created a thread here to 
discuss:

http://lists.openstack.org/pipermail/openstack-dev/2014-August/041863.html

mark


Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-08-04 Thread Boris Pavlovic
Jay,

Thanks for review of proposal. Some my comments below..


I think this is one of the roots of the problem that folks like David
 and Sean keep coming around to. If Rally were less monolithic, it
 would be easier to say OK, bring this piece into Tempest, have this
 piece be a separate library and live in the QA program, and have the
 service endpoint that allows operators to store and periodically measure
 SLA performance indicators against their cloud.



Actually Rally was designed to be a glue service (and cli tool) that will
bind everything together and present service endpoint for Operators. I
really do not understand what can be split? and put to tempest? and
actually why? Could you elaborate pointing on current Rally code, maybe
there is some misleading here. I think this should be discussed in more
details..

By the way I believe that monolithic architecture is the best one, like
Linus does.=)
http://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate


Incidentally, this is one of the problems that Scalr faced when applying
 for incubation, years ago, and one of the reasons the PPB at the time voted
 not to incubate Scalr: it had a monolithic design that crossed too many
 different lines in terms of duplicating functionality that already existed
 in a number of other projects.


I found the Scalr incubation discussion:
http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-06-14-20.03.log.html

The reasons for rejection were:
*) OpenStack shouldn't put PaaS in OpenStack core # rally is not PaaS
*) Duplication of functionality (actually dashboard)  # Rally doesn't
duplicate anything
*) Development is done behind closed doors
# Not about Rally
http://stackalytics.com/?release=juno&metric=commits&project_type=All&module=rally

Seems like Rally is quite a different case, and this comparison is misleading
and irrelevant to the current case.



 , that is why I think Rally should be a separated program (i.e.
 Rally scope is just different from QA scope). As well, It's not clear
 for me, why collaboration is possible only in case of one program? In
 my opinion collaboration and programs are independent things.


 Sure, it's certainly possible for collaboration to happen across
 programs. I think what Sean is alluding to is the fact that the Tempest
 and Rally communities have done little collaboration to date, and that
 is worrying to him.


Could you please explain this paragraph. What do you mean by "have done
little collaboration"?

We integrated Tempest in Rally:
http://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/

We are working on spec in Tempest about tempest conf generation:
https://review.openstack.org/#/c/94473/ # probably not so fast as we would
like

We had design session:
http://junodesignsummit.sched.org/event/2815ca60f70466197d3a81d62e1ee7e4#.U9_ugYCSz1g

I am going to work on integrating OSprofiler into Tempest, as soon as I get
it into the core projects.

By the way, I am really not sure how being one Program will help us to
collaborate. What does it actually change?



  About collaboration between Rally & Tempest teams... Major goal of
 integration Tempest in Rally is to make it simpler to use tempest on
 production clouds via OpenStack API.


Plenty of folks run Tempest without Rally against production clouds as
 an acceptance test platform. I see no real benefit to arguing that Rally
 is for running against production clouds and Tempest is for
 non-production clouds. There just isn't much of a difference there.


Hm, I didn't say anything about Tempest being for non-production clouds...
I said that the Rally team is working on making it simpler to use on production
clouds.



The problem I see is that Rally is not *yet* exposing the REST service
 endpoint that would make it a full-fledged Operator Tool outside the
 scope of its current QA focus. Once Rally does indeed publish a REST API
 that exposes resource endpoints for an operator to store a set of KPIs
 associated with an SLA, and allows the operator to store the run
 schedule that Rally would use to go and test such metrics, *then* would
 be the appropriate time to suggest that Rally be the pilot project in
 this new Operator Tools program, IMO.


It's really almost done.. It is all about 2 weeks of work...



I'm sure all of those things would be welcome additions to Tempest. At the
 same time, Rally contributors would do well to work on an initial REST API
 endpoint that would expose the resources I denoted above.


As I said before it's almost finished..


Best regards,
Boris Pavlovic



On Mon, Aug 4, 2014 at 8:25 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/04/2014 11:21 AM, Boris Pavlovic wrote:

 Rally is quite monolithic and can't be split


 I think this is one of the roots of the problem that folks like David
 and Sean keep coming around to. If Rally were less monolithic, it
 would be easier to say OK, bring this piece into Tempest, have this
 piece be a separate library and live in the QA 

Re: [openstack-dev] [Tripleo] photos of whiteboards?

2014-08-04 Thread Robert Collins
Thanks!

On 5 August 2014 06:40, Dan Prince dpri...@redhat.com wrote:
 On Sun, 2014-08-03 at 07:59 +1200, Robert Collins wrote:
 We had a few whiteboard photos taken during the sprint, but I can't
 find where they were posted :/

 Right now I'm looking for the one with the list of priority CI jobs,

 I don't have the photo. But it looks like someone wrote them down on the
 etherpad here (see line 95):

 https://etherpad.openstack.org/p/juno-midcycle-meetup


 -Rob




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-04 Thread John Dickinson
Can you please add Swift as well?

--John



On Aug 4, 2014, at 9:54 AM, Andreas Jaeger a...@suse.com wrote:

 Great, I've updated my patch to add neutron and nova to the index page.
 
 For now read the specs using:
 http://specs.openstack.org/openstack/neutron-specs/
 http://specs.openstack.org/openstack/nova-specs/
 
 Andreas
 -- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
 





Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-04 Thread Hemanth Ravi
Hi,

I believe that the API has been reviewed thoroughly, both for its use cases
and for correctness, and the blueprint was approved after sufficient exposure
of the API in the community. The best way to enable users to adopt GBP is
to introduce it in Juno rather than as a project in StackForge. As with
other APIs, any evolutionary changes can be incorporated going forward.

OS development processes are being followed in the implementation to make
sure that there is no negative impact on Neutron stability with the
inclusion of GBP.

Thanks,
-hemanth


On Mon, Aug 4, 2014 at 1:27 PM, Mark McClain mmccl...@yahoo-inc.com wrote:

  All-

 tl;dr

 * The Group Based Policy API is the kind of experimentation we should be
 attempting.
 * Experiments should be able to fail fast.
 * The master branch does not fail fast.
 * StackForge is the proper home to conduct this experiment.


 Why this email?
 ---
 Our community has been discussing and working on Group Based Policy (GBP)
 for many months.  I think the discussion has reached a point where we need
 to openly discuss a few issues before moving forward.  I recognize that
 this discussion could create frustration for those who have invested
 significant time and energy, but the reality is we need to ensure we are
 making decisions that benefit all members of our community (users,
 operators, developers and vendors).

 Experimentation
 
 I like that as a community we are exploring alternate APIs.  The process
 of exploring via real user experimentation can produce valuable results.  A
 good experiment should be designed to fail fast to enable further trials
 via rapid iteration.

 Merging large changes into the master branch is the exact opposite of
 failing fast.

 The master branch deliberately favors small iterative changes over time.
  Releasing a new version of the proposed API every six months limits our
 ability to learn and make adjustments.

 In the past, we’ve released LBaaS, FWaaS, and VPNaaS as experimental APIs.
  The results have been very mixed as operators either shy away from
 testing/offering the API or embrace the API with the expectation that the
 community will provide full API support and migration.  In both cases, the
 experiment fails because we either could not get the data we need or are
 unable to make significant changes without accepting a non-trivial amount
 of technical debt via migrations or draft API support.

 Next Steps
 --
 Previously, the GBP subteam used a GitHub account to host the development,
 but the workflows and tooling do not align with OpenStack's development
 model. I’d like to see us create a group based policy project in
 StackForge.  StackForge will host the code and enable us to follow the same
 open review and QA processes we use in the main project while we are
 developing and testing the API. The infrastructure there will benefit us as
 we will have a separate review velocity and can frequently publish
 libraries to PyPI.  From a technical perspective, the 13 new entities in
 GBP [1] do not require any changes to internal Neutron data structures.
  The docs[2] also suggest that an external plugin or service would work to
 make it easier to speed development.

 End State
 -
 APIs require time to fully bake and right now it is too early to know the
 final outcome.  Using StackForge will allow the team to retain all of its
 options including: merging the code into Neutron, adopting the repository
 as a sub-project of the Network Program, leaving the project in StackForge,
 or learning that users want something completely different.  I
 would expect that we'll revisit the status of the repo during the L or M
 cycles since the Kilo development cycle does not leave enough time to
 experiment and iterate.


 mark

 [1]
 http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/juno/group-based-policy-abstraction.rst#n370
 [2]
 https://docs.google.com/presentation/d/1Nn1HjghAvk2RTPwvltSrnCUJkidWKWY2ckU7OYAVNpo/edit#slide=id.g12c5a79d7_4078
 [3]





[openstack-dev] [nova][libvirt] Non-readonly connection to libvirt in unit tests

2014-08-04 Thread Solly Ross
Hi,
I was wondering if there was a way to get a non-readonly connection to libvirt 
when running the unit tests
on the CI.  If I call `LibvirtDriver._connect(LibvirtDriver.uri())`, it works 
fine locally, but the upstream
CI barfs with libvirtError: internal error Unable to locate libvirtd daemon in 
/usr/sbin (to override, set $LIBVIRTD_PATH to the name of the libvirtd binary).
If I try to connect by calling libvirt.open(None), it also barfs, saying I 
don't have permission to connect.  I could just set it to always use 
fakelibvirt,
but it would be nice to be able to run some of the tests against a real target. 
 The tests in question are part of https://review.openstack.org/#/c/111459/,
and involve manipulating directory-based libvirt storage pools.

Best Regards,
Solly Ross



[openstack-dev] [nova][libvirt][baremetal] Nova Baremetal's Usage of Components from Libvirt

2014-08-04 Thread Solly Ross
Hello All,

So, I'm working on https://review.openstack.org/#/c/111459/, and have 
encountered an issue.  It seems that the Nova Baremetal driver
uses the ImageCacheManager from the Libvirt driver.  For various reasons (see 
the commit), the ImageCacheManager has been refactored to
require a libvirt connection to function properly.  However, the Nova Baremetal 
driver cannot provide such a connection.  Bearing in mind that
Baremetal is deprecated and slated to be replaced by Ironic, the question is:
what should be done about the ImageCacheManager?

One option would be to make it so that the ImageCacheManager can function 
without a libvirt connection.  This might make sense if the Baremetal
driver were around to stay; there would be somewhat less duplication than a 
wholesale copying of the code.  However, in light of Baremetal's impending
removal, this seems to me a poor choice, since it would involve lots of duplicate
functionality, would complicate the ImageCacheManager code, and would
later need to be manually removed once the Baremetal driver is removed.

The second option would be to make a copy of the old ImageCacheManager in the 
Baremetal directory, and have the Baremetal driver
use that.  This seems to me to be the better option, since it means that when 
the Baremetal driver is removed, the old ImageCacheManager
code goes with it, without someone having to manually remove it.

What do you think?

Best Regards,
Solly Ross



Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-04 Thread Ivar Lazzaro
+1 Hemanth.


On Tue, Aug 5, 2014 at 12:24 AM, Hemanth Ravi hemanthrav...@gmail.com
wrote:





[openstack-dev] [git-review] Supporting development in local branches

2014-08-04 Thread Yuriy Taraday
Hello, git-review users!

I'd like to gather feedback on a feature I want to implement that might
turn out useful for you.

I like using Git for development. It allows me to keep track of current
development process, it remembers everything I ever did with the code (and
more).
I also really like using Gerrit for code review. It provides clean
interfaces, forces clean histories (who needs to know that I changed one
line of code in 3am on Monday?) and allows productive collaboration.
What I really hate is having to throw away my (local, precious for me)
history for all change requests because I need to upload a change to Gerrit.

That's why I want to propose making git-review support the workflow that
will make me happy. Imagine you could do something like this:

0. create new local branch;

master: M--
 \
feature:  *

1. start hacking, doing small local meaningful (to you) commits;

master: M--
 \
feature:  A-B-...-C

2. since hacking takes a tremendous amount of time (you're doing a Cool
Feature (tm), nothing less) you need to update some code from master, so
you just merge master into your branch (i.e. using Git as you'd use
it normally);

master: M---N-O-...
 \\\
feature:  A-B-...-C-D-...

3. and now you get the first version that deserves to be seen by the
community, so you run 'git review'; it asks you for the desired commit
message, and poof, magic-magic, all changes from your branch are uploaded
to Gerrit as _one_ change request;

master: M---N-O-...
 \\\E* = uploaded
feature:  A-B-...-C-D-...-E

4. you repeat steps 1 and 2 as much as you like;
5. and all subsequent calls to 'git review' will show you the last commit
message you used for upload and use it to upload the new state of your local
branch to Gerrit, as one change request.

Note that during this process git-review will never run rebase or merge
operations. All such operations are done by user in local branch instead.

Now, to the dirty implementations details.

- Since suggested feature changes default behavior of git-review, it'll
have to be explicitly turned on in config (review.shadow_branches?
review.local_branches?). It should also be implicitly disabled on master
branch (or whatever is in .gitreview config).
- Last uploaded commit for branch branch-name will be kept in
refs/review-branches/branch-name.
- For every call of 'git review' it will find the latest commit in
gerrit/master (or the remote and branch from .gitreview) and create a new one
that has that commit as its parent and the tree of the current commit from
the local branch as its tree.
- While creating the new commit, it'll open an editor to fix the commit
message for that new commit, taking its initial contents from
refs/review-branches/branch-name if it exists.
- Creating this new commit might involve generating a temporary bare repo
(maybe even with a shared objects dir) to prevent changes to the current
index and HEAD, while using a bare 'git commit' to do most of the work
instead of loads of plumbing commands.
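To make the plumbing concrete, here is a minimal, self-contained sketch (my own illustration, not part of git-review; the throwaway repo, branch names, and the "Cool Feature" message are all made up for the demo) of how a single review commit could be built from a messy local branch with 'git commit-tree':

```shell
# Throwaway repo: "master" stands in for gerrit/master,
# "feature" is the messy local branch with commits A and B.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email dev@example.com
git config user.name Dev
echo base > f.txt; git add f.txt; git commit -qm "M"

git checkout -qb feature
echo a >> f.txt; git commit -qam "A: messy local commit"
echo b >> f.txt; git commit -qam "B: another one"

# Build the single review commit: its parent is the tip of master,
# its tree is the tree of the feature tip -- no rebase, no merge.
base=$(git rev-parse master)
tree=$(git rev-parse "feature^{tree}")
review=$(git commit-tree -p "$base" -m "Cool Feature (tm)" "$tree")

# Remember what was uploaded, as the proposal suggests.
git update-ref refs/review-branches/feature "$review"
git log -1 --format='%s %P' "$review"
```

The resulting commit carries the full tree of the feature branch but has a single parent on master, which is the one-commit shape Gerrit expects for a change request, while the local history on 'feature' stays untouched.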

Note that such an approach won't work for uploading multiple change requests
without some complex tweaks, but I imagine later we can improve it and
support uploading several interdependent change requests from several local
branches. We can resolve dependencies between them by tracking latest
merges (if branch myfeature-a has been merged to myfeature-b then change
request from myfeature-b will depend on change request from myfeature-a):

master:M---N-O-...
\\\-E*
myfeature-a: A-B-...-C-D-...-E   \
  \   \   J* = uploaded
myfeature-b:   F-...-G-I-J

This improvement would be implemented later if needed.
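For what it's worth, the "myfeature-a was merged into myfeature-b" check could be done with 'git merge-base --is-ancestor', which exits 0 exactly when the first ref is an ancestor of the second. A small sketch (the repo layout and branch names are made up for the demo):

```shell
# Throwaway repo reproducing the diagram: myfeature-a is merged
# into myfeature-b, so b's change request should depend on a's.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email dev@example.com
git config user.name Dev
echo m > f.txt; git add f.txt; git commit -qm "M"

git checkout -qb myfeature-a
echo a > a.txt; git add a.txt; git commit -qm "A"

git checkout -qb myfeature-b master
echo b > b.txt; git add b.txt; git commit -qm "F"
git merge -q --no-edit myfeature-a   # myfeature-b now contains myfeature-a

# Exit status 0 means "ancestor", i.e. the dependency exists.
if git merge-base --is-ancestor myfeature-a myfeature-b; then
    echo "myfeature-b depends on myfeature-a"
fi
```

The same test in the other direction fails, so the direction of the dependency falls out of the ancestry check for free.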

I hope such a feature seems useful not just to me, and I'm looking
forward to your comments on it.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-04 Thread Armando M.
Hi,

When I think about Group-Based Policy I cannot help but think about
the variety of sentiments (for lack of a better word) that this
subject has raised over the past few months on the mailing list and/or
other venues.

I speak for myself when I say that when I look at the end-to-end
Group-Based Policy functionality I am not entirely sold on the following
points:

- The abstraction being proposed, its relationship with the Neutron API and
ODL;
- The way the reference implementation has been introduced into the
OpenStack world, and Neutron in particular;
- What an evolution of Group-Based Policy means going forward if we use the
proposed approach as a foundation for a more application-friendly and
intent-driven API abstraction.
- The way we used development tools for bringing Neutron developers
(reviewers and committers), application developers, operators, and users
together around these new concepts.

Can I speak for everybody when I say that we do not have a consensus across
the board on all/some/other points being touched in this thread or other
threads? I think I can: I have witnessed that there is *NOT* such a
consensus. If I am asked where I stand, my position is that I wouldn't mind
seeing Group-Based Policy as we know it kick the tires; would I love to
see it do that in a way that's not disruptive to the Neutron project? YES,
I would love to.

So, where do we go from here? Do we need a consensus on such a delicate
area? I think we do.

I think Mark's intent, and that of anyone who has the interest of the
Neutron community as a whole at heart, is to make sure that we find a
compromise which everyone is comfortable with.

Do we vote about what we do next? Do we leave just cores to vote? I am not
sure. But one thing is certain, we cannot keep procrastinating as the Juno
window is about to expire.

I am sure that there are people itching to get their hands on Group-Based
Policy; however, the vehicle whereby this gets released should be irrelevant
to them. At the same time I appreciate that some people perceive Stackforge
projects as less established and mature than other OpenStack projects; that
said, wouldn't it be fair to say that Group-Based Policy is exactly that? If
this means that other immature abstractions would need to follow suit, I
would be all in for this more decentralized approach. Can we do that now,
or do we postpone this discussion for the Kilo Summit? I don't know.

I realize that I have asked more questions than the answers I tried to
give, but I hope we can all engage in a constructive discussion.

Cheers,
Armando

PS: Salvatore I expressly stayed away from the GBP acronym you love so
much, so please read the thread and comment on it :)

On 4 August 2014 15:54, Ivar Lazzaro ivarlazz...@gmail.com wrote:

 +1 Hemanth.


 On Tue, Aug 5, 2014 at 12:24 AM, Hemanth Ravi hemanthrav...@gmail.com
 wrote:


Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-04 Thread Steve Martinelli
Done - https://review.openstack.org/#/c/111865/
But it would be great if the swift team could push this one too:
https://review.openstack.org/#/c/111869/
(since it's the only *-specs on specs.openstack.org that doesn't look the same)


Regards,

Steve Martinelli
Software Developer - OpenStack
Keystone Core Member
Phone: 1-905-413-2851
E-mail: steve...@ca.ibm.com
8200 Warden Ave
Markham, ON L6G 1C7
Canada

From: John Dickinson m...@not.mn
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 08/04/2014 05:39 PM
Subject: Re: [openstack-dev] [all] specs.openstack.org is live










Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-04 Thread loy wolfe
+1 mark


On Tue, Aug 5, 2014 at 4:27 AM, Mark McClain mmccl...@yahoo-inc.com wrote:





Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-04 Thread Angus Salkeld
On Tue, 2014-08-05 at 03:18 +0400, Yuriy Taraday wrote:
 Hello, git-review users!
 
 
 I'd like to gather feedback on a feature I want to implement that
 might turn out useful for you.
 
 
 I like using Git for development. It allows me to keep track of
 current development process, it remembers everything I ever did with
 the code (and more).
 I also really like using Gerrit for code review. It provides clean
 interfaces, forces clean histories (who needs to know that I changed
 one line of code in 3am on Monday?) and allows productive
 collaboration.
 What I really hate is having to throw away my (local, precious for me)
 history for all change requests because I need to upload a change to
 Gerrit.

I just create a short-term branch to record this.

 
 

Hi Yuriy,

I like my local history matching what is up for review, and I
don't value the interim messy commits (I make a short-term branch to
save the history so I can go back to it if I mess up a merge).

Tho' others might love this idea.

-Angus

 
 
 -- 
 
 Kind regards, Yuriy.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: This is a digitally signed message part
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-04 Thread joehuang
Hi,

Good job!

I would like to know how to submit a cross-project spec. Is there a repository 
for cross-project specs?

Best Regards
Chaoyi Huang ( Joe Huang )


-----Original Message-----
From: Andreas Jaeger [mailto:a...@suse.com]
Sent: August 3, 2014 1:17
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [all] specs.openstack.org is live

All OpenStack incubated projects and programs that use a -specs repository have 
now been setup to publish these to http://specs.openstack.org. With the next 
merged in patch of a *-specs repository, the documentation will get published.

The index page contains the published repos as of yesterday and it will be 
enhanced as more are setup (current patch:
https://review.openstack.org/111476).

For now, you can reach a repo directly via 
http://specs.openstack.org/$ORGANIZATION/$project-specs, for example:
http://specs.openstack.org/openstack/qa-specs/

Thanks to Steve Martinelli and to the infra team (especially Clark, James, 
Jeremy and Sergey) for getting this done!

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Step by step OpenStack Icehouse Installation Guide

2014-08-04 Thread Shake Chen
Hi

maybe you can consider using a script to create the databases and endpoints, like
https://github.com/EmilienM/openstack-folsom-guide/tree/master/scripts

this would be easier for users.


On Tue, Aug 5, 2014 at 12:38 AM, chayma ghribi chaym...@gmail.com wrote:

 Hi !

 Thank you for the comment Qiming !

 The script stack.sh is used to configure Devstack and to assign the
 heat_stack_owner role to users.
 Also, I think that Heat is configured by default in Devstack for Icehouse.

 http://docs.openstack.org/developer/heat/getting_started/on_devstack.html#configure-devstack-to-enable-heat

 In our guide we are not installing with devstack.
 We are creating and managing stacks with Heat, and we have no errors!

 If you have some examples of tests (or scenarios) that would help us
 identify errors and improve the guide, please don't hesitate to contact us
 ;)
 All your contributions are welcome :)

 Regards,

 Chaima Ghribi





 2014-08-04 8:13 GMT+02:00 Qiming Teng teng...@linux.vnet.ibm.com:

 Thanks for the efforts.  Just want to add some comments on installing
 and configuring Heat, since an incomplete setup may cause bizarre
 problems later on when users start experiments.

 Please refer to devstack script below for proper configuration of Heat:

 https://github.com/openstack-dev/devstack/blob/master/lib/heat#L68

 and the function create_heat_accounts at the link below which helps
 create the required Heat accounts.

 https://github.com/openstack-dev/devstack/blob/master/lib/heat#L214

 Regards,
   Qiming

 On Sun, Aug 03, 2014 at 12:49:22PM +0200, chayma ghribi wrote:
  Dear All,
 
  I want to share with you our OpenStack Icehouse Installation Guide for
  Ubuntu 14.04.
 
 
 https://github.com/ChaimaGhribi/OpenStack-Icehouse-Installation/blob/master/OpenStack-Icehouse-Installation.rst
 
  An additional  guide for Heat service installation is also available ;)
 
 
 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst
 
   Hope these manuals will be helpful and simple!
  Your contributions are welcome, as are questions and suggestions :)
 
  Regards,
  Chaima Ghribi



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Shake Chen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt][baremetal] Nova Baremetal's Usage of Components from Libvirt

2014-08-04 Thread Monty Taylor

On 08/04/2014 03:54 PM, Solly Ross wrote:

Hello All,

So, I'm working on https://review.openstack.org/#/c/111459/, and have 
encountered an issue.  It seems that the Nova Baremetal driver
uses the ImageCacheManager from the Libvirt driver.  For various reasons (see 
the commit), the ImageCacheManager has been refactored to
require a libvirt connection to function properly.  However, the Nova Baremetal 
driver cannot provide such a connection.  Bearing in mind that
Baremetal is deprecated and slated to be replaced by Ironic, the question is
this: what to do about the ImageCacheManager?

One option would be to make it so that the ImageCacheManager can function 
without a libvirt connection.  This might make sense if the Baremetal
driver were around to stay; there would be somewhat less duplication than a 
wholesale copying of the code. However, in light of Baremetal's impending
removal, this seems to me to be a poor choice, since it would involve lots of
duplicate functionality, would complicate the ImageCacheManager code, and would
later need to be manually removed once the Baremetal driver is removed.

The second option would be to make a copy of the old ImageCacheManager in the 
Baremetal directory, and have the Baremetal driver
use that.  This seems to me to be the better option, since it means that when 
the Baremetal driver is removed, the old ImageCacheManager
code goes with it, without someone having to manually remove it.


I might get shot in the head, but I think option 2 makes the most sense. 
There is no need to do _new_ work in support of a dead codebase.


I am not, however, the ruler of the universe...

Monty


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] New Salt Openstack Formula

2014-08-04 Thread Lei Zhang
Hi Guys,

I have written a new set of Salt formulas to install OpenStack, for those
who like SaltStack.
These formulas are under development and tested on RedHat/CentOS 6 and Ubuntu
12.04.
Pull requests are welcome.
Here are the related repos:

  - https://github.com/saltstack-formulas/mysql-formula.git
  - https://github.com/saltstack-formulas/memcached-formula.git
  - https://github.com/saltstack-formulas/rabbitmq-formula.git
  - https://github.com/jeffrey4l/keystone-formula.git
  - https://github.com/jeffrey4l/glance-formula.git
  - https://github.com/jeffrey4l/cinder-formula.git
  - https://github.com/jeffrey4l/nova-formula.git
  - https://github.com/jeffrey4l/openstack-formula.git
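
A minimal top.sls wiring these formulas together might look like the
following (purely illustrative: the minion matchers, the role split, and
the assumption that the repos above are cloned into the salt master's
file_roots are all mine, not from the formulas themselves):

```yaml
# /srv/salt/top.sls -- illustrative wiring only
base:
  'controller*':        # assumed minion naming scheme
    - mysql
    - rabbitmq
    - memcached
    - keystone
    - glance
    - cinder
  'compute*':
    - nova
```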

-- 
Lei Zhang
Blog: http://xcodest.me
twitter/weibo: @jeffrey4l
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] specs.openstack.org is live

2014-08-04 Thread Jeremy Stanley
On 2014-08-05 01:26:49 + (+), joehuang wrote:
 I would like to know how to submit a cross-project spec. Is there a
 repository for cross-project specs?

Specs repositories are about formalizing/streamlining the design
process within a program, and generally the core reviewers of those
programs decide when a spec is in a suitable condition for approval.
In the case of a cross-program spec (which I assume is what you mean
by cross-project), who would decide what needs to be in the spec
proposal and who would approve it? What sort of design proposal do
you have in mind which you think would need to be a single spec
applying to projects in more than one program?
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Networking Docs Swarm - Brisbane 9 August

2014-08-04 Thread Lana Brindley

Hi everyone,

I just wanted to let you all know about the OpenStack Networking Docs 
Swarm being held in Brisbane on 9 August.


Currently, there is no OpenStack Networking Guide, so the focus of this 
swarm is to combine the existing networking content into a single doc so 
that it can be updated, reviewed, and hopefully completed for the Juno 
release.


We need both tech writers and OpenStack admins for the event to be a 
success. Even if you can only make it for half an hour, your presence 
would be greatly appreciated!


RSVP here: 
http://www.meetup.com/Australian-OpenStack-User-Group/events/198867972/?gj=rcs.ba=co2.b_grprv=rcs.b


More information here: http://openstack-swarm.rhcloud.com/

See you on Saturday!

Lana

--
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] scheduler subgroup meeting agenda 8/5

2014-08-04 Thread Dugger, Donald D
1) Mid-cycle meetup results
2) Forklift status
3) Opens

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Networking Docs Swarm - Brisbane 9 August

2014-08-04 Thread Stuart Fox
Can't make it to Brisbane but this doc is so needed. Any chance you could
put round a questionnaire or something similar to get input from those who
can't make it?



--

BR,

Stuart



On 14-08-04 8:05 PM Lana Brindley wrote:
Hi everyone,

 I just wanted to let you all know about the OpenStack Networking Docs
Swarm being held in Brisbane on 9 August.

 Currently, there is no OpenStack Networking Guide, so the focus of this
swarm is to combine the existing networking content into a single doc so
that it can be updated, reviewed, and hopefully completed for the Juno
release.

 We need both tech writers and OpenStack admins for the event to be a
success. Even if you can only make it for half an hour, your presence
would be greatly appreciated!

 RSVP here:

http://www.meetup.com/Australian-OpenStack-User-Group/events/198867972/?gj=rcs.ba=co2.b_grprv=rcs.b

 More information here:
http://openstack-swarm.rhcloud.com/

 See you on Saturday!

 Lana

-- 
 Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia

http://lanabrindley.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Networking Docs Swarm - Brisbane 9 August

2014-08-04 Thread Tom Fifield

How about writing up something in a bug report:

https://bugs.launchpad.net/openstack-manuals/+filebug

or a mailing list post about what you'd like to see?


Regards,


Tom

On 05/08/14 12:22, Stuart Fox wrote:

Can't make it to Brisbane but this doc is so needed. Any chance you could
put round a questionnaire or something similar to get input from those who
can't make it?

--

BR,

Stuart



On 14-08-04 8:05 PM Lana Brindley wrote:

Hi everyone,

I just wanted to let you all know about the OpenStack Networking Docs
Swarm being held in Brisbane on 9 August.

Currently, there is no OpenStack Networking Guide, so the focus of this
swarm is to combine the existing networking content into a single doc so
that it can be updated, reviewed, and hopefully completed for the Juno
release.

We need both tech writers and OpenStack admins for the event to be a
success. Even if you can only make it for half an hour, your presence
would be greatly appreciated!

RSVP here:
http://www.meetup.com/Australian-OpenStack-User-Group/events/198867972/?gj=rcs.ba=co2.b_grprv=rcs.b

More information here:
http://openstack-swarm.rhcloud.com/

See you on Saturday!

Lana

--
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Networking Docs Swarm - Brisbane 9 August

2014-08-04 Thread Mike Spreitzer
Lana Brindley openst...@lanabrindley.com wrote on 08/04/2014 11:05:24 
PM:
 I just wanted to let you all know about the OpenStack Networking Docs 
 Swarm being held in Brisbane on 9 August.
 ...

+++ on this.

I can not contribute answers, but have lots of questions.

Let me suggest that documentation is needed both for cloud providers doing 
general deployment and also for developers using DevStack.  Not all of us 
developers are Neutron experts, so we need decent documentation.  And 
developers sometimes need to use host machines with fewer than the ideal 
number of NICs.  Sometimes those host machines are virtual, leading to 
nested virtualization (of network as well as compute).

Thanks!
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat][TripleO] Heat can't retrieve stack list

2014-08-04 Thread Peeyush Gupta
Hi all,

I have been trying to set up tripleo using instack.
When I try to deploy overcloud, I get a heat related 
error. Here it is:

[stack@localhost ~]$ heat stack-list
ERROR: Timeout while waiting on RPC response - topic: engine, RPC method: 
list_stacks info: unknown

Now, heat-engine is running:


[stack@localhost ~]$ ps ax | grep heat-engine
15765 pts/0    S+     0:00 grep --color=auto heat-engine
25671 ?        Ss     0:27 /usr/bin/python /usr/bin/heat-engine --logfile 
/var/log/heat/engine.log

Here is the heat-engine log:

2014-08-04 07:57:26.321 25671 ERROR heat.engine.resource [-] CREATE : Server 
SwiftStorage0 [b78e4c74-f446-4941-8402-56cf46401013] Stack overcloud 
[9bdc71f5-ce31-4a9c-8d72-3adda0a2c66e]
2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource Traceback (most recent 
call last):
2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource   File 
/usr/lib/python2.7/site-packages/heat/engine/resource.py, line 420, in 
_do_action
2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource     while not 
check(handle_data):
2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource   File 
/usr/lib/python2.7/site-packages/heat/engine/resources/server.py, line 545, 
in check_create_complete
2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource     return 
self._check_active(server)
2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource   File 
/usr/lib/python2.7/site-packages/heat/engine/resources/server.py, line 561, 
in _check_active
2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource     raise exc
2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource Error: Creation of 
server overcloud-SwiftStorage0-fnl43ebtcsom failed.
2014-08-04 07:57:26.321 25671 TRACE heat.engine.resource 
2014-08-04 07:57:27.152 25671 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-08-04 07:57:27.494 25671 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-08-04 07:57:27.998 25671 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-08-04 07:57:28.312 25671 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-08-04 07:57:28.799 25671 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-08-04 07:57:29.452 25671 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-08-04 07:57:30.106 25671 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-08-04 07:57:30.516 25671 WARNING heat.common.keystoneclient [-] 
stack_user_domain ID not set in heat.conf falling back to using default
2014-08-04 07:57:31.499 25671 WARNING heat.engine.service [-] Stack create 
failed, status FAILED
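
For what it's worth, the repeated "stack_user_domain ID not set in
heat.conf falling back to using default" warnings mean Heat never got a
dedicated Keystone domain for the users it creates. On Icehouse that is
configured roughly as below (a hedged sketch: the option names are the
Icehouse-era ones, the placeholder values are mine, and the domain plus
domain-admin user must first be created in Keystone v3):

```ini
# /etc/heat/heat.conf -- illustrative values, not taken from this thread
[DEFAULT]
# ID of a Keystone v3 domain created to hold the users Heat provisions
stack_user_domain = <heat-domain-id>
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = <password>
```

Note these warnings may be unrelated to the SwiftStorage0 creation
failure itself, which looks like a nova server build error.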

Any idea how to figure this error out?
 
Thanks,
Peeyush Gupta
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Tempest] LBaaS API Tempest testing status update

2014-08-04 Thread Brandon Logan
Hey Miguel,
I was able to reproduce the issue here and luckily it was an error in
the driver.  So that means I don't need to update the plugin.  I fixed
the issue in the driver, and pushed the change up.  Everything should be
working now.

Thanks,
Brandon

On Sun, 2014-08-03 at 20:42 -0500, Miguel Lavalle wrote:
 Hi,
 
 
 Today I have confirmed with a Tempest api test script that all the
 operations for loadbalancers, listeners, healthmonitors and pools for
 the new LBaaS v2.0 work correctly. 
 
 
 As for members, POST, PUT and GET operations also work correctly. The
 only exception is the DELETE operation. The test script failed with
 it. I will investigate the cause tomorrow
 
 
 Cheers
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev