Re: [openstack-dev] [Infra][Neutron] Request voting for Tail-f CI account

2014-07-25 Thread Luke Gorrie
Thanks everybody!

Onward :-)


On 24 July 2014 19:41, Anita Kuno ante...@anteaya.info wrote:

 On 07/24/2014 01:18 PM, Kyle Mestery wrote:
  On Thu, Jul 24, 2014 at 12:03 PM, Collins, Sean
  sean_colli...@cable.comcast.com wrote:
  On Wed, Jul 23, 2014 at 11:19:13AM EDT, Luke Gorrie wrote:
  Tail-f NCS: I want to keep this feature well maintained and compliant
 with
  all the rules. I am the person who wrote this driver originally, I have
  been the responsible person for 90% of its lifetime, I am the person
 who
  setup the current CI, and I am the one responsible for smooth
 operation of
  that CI. I am reviewing its results with my morning coffee and have
 been
  doing so for the past 6 weeks. I would like to have it start voting
 and I
  believe that it and I are ready for that. I am responsive to email, I
 am
  usually on IRC (lukego), and in case of emergency you can SMS/call my
  mobile on +41 79 244 32 17.
 
  So... Let's be friends again? (and do ever cooler stuff in Kilo?)
 
 
 
  Luke was kind enough to reach out to me, and we had a discussion in
  order to bury the hatchet. Posting his contact details and being
  available to discuss things has put my mind at ease, I am ready to move
  forward.
 
  +1
 
  He also reached out to me, so I'm also happy to add this back and move
  forward with burying the hatchet. I'm all for second chances in
  general, and Luke's gone out of his way to work with people upstream
  in a much more efficient and effective manner.
 
  Thanks,
  Kyle
 
 Well done, Luke. It takes a lot of work to dig oneself out of a hole and
 create good relationships where there need to be some. It is a tough job
 and not everyone chooses to do it.

 You chose to and you succeeded. I commend your work.

 I'm glad we have a good resolution in this space.

 Thanks to all involved for their persistence and hard work. Well done,
 Anita.

  --
  Sean M. Collins


Re: [openstack-dev] [nova]resize

2014-07-25 Thread Li Tianqing
I tested after changing the nova-resize code.


Configuration:
controller: 1
compute nodes: 3
storage: NFS


Before the nova-resize code was changed:

Task fdc67dfe-b182-4450-8e85-10085626acab is finished.



test scenario NovaServers.resize_server
args position 0
args values:
{u'args': {u'confirm': True,
           u'flavor': u'5a180f8a-51e9-4621-a9f9-7332253e0b32',
           u'image': u'95be464a-ddac-4332-9c14-3c9bc4156c86',
           u'to_flavor': u'90ff2ce2-e116-4295-b33a-13f54a26d495'},
 u'context': {u'users': {u'concurrent': 30,
                         u'tenants': 1,
                         u'users_per_tenant': 1}},
 u'runner': {u'concurrency': 5,
             u'timeout': 2000,
             u'times': 10,
             u'type': u'constant'}}


+---------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action              | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+---------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| nova.boot_server    | 11.052    | 94.3      | 167.309   | 167.108       | 167.208       | 100.0%  | 10    |
| nova.delete_server  | 8.884     | 23.542    | 96.072    | 31.738        | 63.905        | 100.0%  | 10    |
| nova.resize         | 301.313   | 353.936   | 448.779   | 447.586       | 448.183       | 100.0%  | 10    |
| nova.resize_confirm | 2.425     | 20.242    | 145.639   | 26.322        | 85.981        | 100.0%  | 10    |
| total               | 466.888   | 492.02    | 512.294   | 508.277       | 510.285       | 100.0%  | 10    |
+---------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+


After the nova-resize code was changed:

Task cd7a3d71-ccae-481b-aaa3-49902ad0da08 is finished.



test scenario NovaServers.resize_server
args position 0
args values:
{u'args': {u'confirm': True,
           u'flavor': u'5a180f8a-51e9-4621-a9f9-7332253e0b32',
           u'image': u'95be464a-ddac-4332-9c14-3c9bc4156c86',
           u'to_flavor': u'90ff2ce2-e116-4295-b33a-13f54a26d495'},
 u'context': {u'users': {u'concurrent': 30,
                         u'tenants': 1,
                         u'users_per_tenant': 1}},
 u'runner': {u'concurrency': 5,
             u'timeout': 2000,
             u'times': 10,
             u'type': u'constant'}}
+---------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action              | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+---------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| nova.boot_server    | 13.997    | 16.314    | 18.858    | 17.82         | 18.339        | 100.0%  | 10    |
| nova.delete_server  | 6.593     | 8.877     | 10.85     | 10.777        | 10.814        | 100.0%  | 10    |
| nova.resize         | 16.072    | 19.147    | 22.807    | 22.798        | 22.803        | 100.0%  | 10    |
| nova.resize_confirm | 2.459     | 5.368     | 10.618    | 6.949         | 8.783         | 100.0%  | 10    |
| total               | 43.034    | 49.707    | 55.565    | 53.537        | 54.551        | 100.0%  | 10    |
+---------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+




But this is just a rough change; some points still need consideration.
What I did: use no backing file when on shared storage, and
use qcow2 to resize directly rather than converting the disk to raw and then back to qcow2.
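For illustration, the direct path could look roughly like this (a hedged
sketch, not the actual patch; the helper name is made up, but qemu-img can
grow a qcow2 image in place without a format conversion):

import subprocess

def grow_qcow2_in_place(disk_path, new_size_gb):
    # Grow the qcow2 image directly instead of doing the
    # qcow2 -> raw -> qcow2 round trip the current resize path performs.
    subprocess.check_call(
        ['qemu-img', 'resize', disk_path, '%dG' % new_size_gb])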


At 2014-07-25 10:09:09, Tian, Shuangtai shuangtai.t...@intel.com wrote:
Agree with you now

-Original Message-
From: fdsafdsafd [mailto:jaze...@163.com] 
Sent: Thursday, July 24, 2014 5:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova]resize


No.
Before L5156, we convert it from qcow2 to qcow2, which strips the backing
file.
I think here, we should write it like this:
 
if info['type'] == 'qcow2' and info['backing_file']:
    if shared_storage:
        utils.execute('cp', from_path, img_path)
    else:
        tmp_path = from_path + '_rbase'
        # merge the backing file into a flat copy
        utils.execute('qemu-img', 'convert', '-f', 'qcow2',
                      '-O', 'qcow2', from_path, tmp_path)
        libvirt_utils.copy_image(tmp_path, img_path, host=dest)
        utils.execute('rm', '-f', tmp_path)
else:  # raw or qcow2 with no backing file
    libvirt_utils.copy_image(from_path, img_path, host=dest)



At 2014-07-24 05:02:39, Tian, Shuangtai 

Re: [openstack-dev] [nova]resize

2014-07-25 Thread Jesse Pretorius
On 24 July 2014 20:38, Vishvananda Ishaya vishvana...@gmail.com wrote:

 The resize code as written originally did the simplest possible thing. It
 converts and copies the whole file so that it doesn’t have to figure out
 how
 to sync backing files etc. This could definitely be improved, especially
 now that
 there is code in _create_images_and_backing that can ensure that backing
 files are
 downloaded/created if they are not there.

 Additionally the resize code should be using something other than
 ssh/rsync. I’m
 a fan of using glance to store the file during transfer, but others have
 suggested
 using the live migrate code or libvirt to transfer the disks.


I'd like to suggest an additional improvement - if the resize is only a
CPU/Memory resize, then if the host can handle it the entire disk
flattening/conversion/migration should be skipped and the resize of the
CPU/Memory spec should be done in-place on the host.


[openstack-dev] [neutron][DVR]: Will DVR support external gateway with snat disabled?

2014-07-25 Thread Wuhongning
Now in the L3 API, we can create a router with an external gateway while
setting enable_snat to false.
In the current DVR code, all central NS processing is tied to SNAT, so does it
still support a central NS gateway without SNAT?

Also, is it possible to separate the scheduling of the central NS gateway from
SNAT (or other central services)? This would make sense if we want to have a
Linux router as the NS gateway to terminate all internal traffic, but leave
all L4 services (NAT/FW/VPN/LB) to some hardware device.


Re: [openstack-dev] [nova]resize

2014-07-25 Thread Li Tianqing



But right now, we trigger resize by a flavor change. Do you mean we can split
the resize by CPU, by memory, or by disk?


On 2014-07-25 03:25:46, Jesse Pretorius jesse.pretor...@gmail.com wrote:

On 24 July 2014 20:38, Vishvananda Ishaya vishvana...@gmail.com wrote:
The resize code as written originally did the simplest possible thing. It
converts and copies the whole file so that it doesn’t have to figure out how
to sync backing files etc. This could definitely be improved, especially now 
that
there is code in _create_images_and_backing that can ensure that backing files 
are
downloaded/created if they are not there.

Additionally the resize code should be using something other than ssh/rsync. I’m
a fan of using glance to store the file during transfer, but others have 
suggested
using the live migrate code or libvirt to transfer the disks.



I'd like to suggest an additional improvement - if the resize is only a 
CPU/Memory resize, then if the host can handle it the entire disk 
flattening/conversion/migration should be skipped and the resize of the 
CPU/Memory spec should be done in-place on the host.


Re: [openstack-dev] vhost-scsi support in Nova

2014-07-25 Thread Nicholas A. Bellinger
On Thu, 2014-07-24 at 11:06 +0100, Daniel P. Berrange wrote:
 On Wed, Jul 23, 2014 at 10:32:44PM -0700, Nicholas A. Bellinger wrote:
  *) vhost-scsi doesn't support migration
  
  Since its initial merge in QEMU v1.5, vhost-scsi has a migration blocker
  set.  This is primarily due to requiring some external orchestration in
  order to setup the necessary vhost-scsi endpoints on the migration
  destination to match what's running on the migration source.
  
  Here are a couple of points that Stefan detailed some time ago about what's
  involved for properly supporting live migration with vhost-scsi:
  
  (1) vhost-scsi needs to tell QEMU when it dirties memory pages, either by
  DMAing to guest memory buffers or by modifying the virtio vring (which also
  lives in guest memory).  This should be straightforward since the
  infrastructure is already present in vhost (it's called the log) and used
  by drivers/vhost/net.c.
  
  (2) The harder part is seamless target handover to the destination host.
  vhost-scsi needs to serialize any SCSI target state from the source machine
  and load it on the destination machine.  We could be in the middle of
  emulating a SCSI command.
  
  An obvious solution is to only support active-passive or active-active HA
  setups where tcm already knows how to fail over.  This typically requires
  shared storage and maybe some communication for the clustering mechanism.
  There are more sophisticated approaches, so this straightforward one is just
  an example.
  
  That said, we do intend to support live migration for vhost-scsi using
  iSCSI/iSER/FC shared storage.
  
  *) vhost-scsi doesn't support qcow2
  
  Given all other cinder drivers do not use QEMU qcow2 to access storage
  blocks, with the exception of the Netapp and Gluster driver, this argument
  is not particularly relevant here.
  
  However, this doesn't mean that vhost-scsi (and target-core itself) cannot
  support qcow2 images.  There is currently an effort to add a userspace
  backend driver for the upstream target (tcm_core_user [3]), that will allow
  for supporting various disk formats in userspace.
  
  The important part for vhost-scsi is that regardless of what type of target
  backend driver is put behind the fabric LUNs (raw block devices using
  IBLOCK, qcow2 images using target_core_user, etc) the changes required in
  Nova and libvirt to support vhost-scsi remain the same.  They do not change
  based on the backend driver.
  
  *) vhost-scsi is not intended for production
  
  vhost-scsi has been included in the upstream kernel since the v3.6 release, and
  included in QEMU since v1.5.  vhost-scsi runs unmodified out of the box on a
  number of popular distributions including Fedora, Ubuntu, and OpenSuse.  It
  also works as a QEMU boot device with Seabios, and even with the Windows
  virtio-scsi mini-port driver.
  
  There is at least one vendor who has already posted libvirt patches to
  support vhost-scsi, so vhost-scsi is already being pushed beyond a debugging
  and development tool.
  
  For instance, here are a few specific use cases where vhost-scsi is
  currently the only option for virtio-scsi guests:
  
- Low (sub 100 usec) latencies for AIO reads/writes with small iodepth
  workloads
- 1M+ small block IOPs workloads at low CPU utilization with large
  iodepth workloads.
- End-to-end data integrity using T10 protection information (DIF)
 
 IIUC, there is also missing support for block jobs like drive-mirror
 which is needed by Nova.
 

This limitation can be considered an acceptable trade-off in initial
support by some users, given the already considerable performance +
efficiency gains that vhost logic provides to KVM/virtio guests.

Others would like to utilize virtio-scsi to access features like SPC-3
Persistent Reservations, ALUA Explicit/Implicit Multipath, and
EXTENDED_COPY offload provided by the LIO target subsystem.

Note these three SCSI features are enabled (by default) on all LIO
target fabric LUNs, starting with v3.12 kernel code.

 From a functionality POV, migration & drive-mirror support are the two
 core roadblocks to including vhost-scsi in Nova (as well as libvirt
 support for it of course). Realistically it doesn't sound like these
 are likely to be solved soon enough to give us confidence in taking
 this for the Juno release cycle.
 

The spec is for initial support of vhost-scsi controller endpoints
during Juno and, as mentioned earlier by Vish, should be considered an
experimental feature given the caveats you've highlighted above.

We also understand that code will ultimately have to pass Nova + libvirt
upstream review in order to be merged, and that the approval of any
vhost-scsi spec now is not a guarantee the feature will actually make it
into any official Juno release.

Thanks,

--nab


Re: [openstack-dev] [TripleO] Strategy for recovering crashed nodes in the Overcloud?

2014-07-25 Thread Ladislav Smola

Hi,

I believe you are looking for stack convergence in Heat. It's not fully 
implemented yet AFAIK.
You can check it out here 
https://blueprints.launchpad.net/heat/+spec/convergence


Hope it will help you.

Ladislav

On 07/23/2014 12:31 PM, Howley, Tom wrote:


(Resending to properly start new thread.)

Hi,

I'm running a HA overcloud configuration and as far as I'm aware, 
there is currently no mechanism in place for restarting failed nodes 
in the cluster. Originally, I had been wondering if we would use a 
corosync/pacemaker cluster across the control plane with STONITH 
resources configured for each node (a STONITH plugin for Ironic could 
be written). This might be fine if a corosync/pacemaker stack is 
already being used for HA of some components, but it seems overkill 
otherwise. The undercloud heat could be in a good position to restart 
the overcloud nodes -- is that the plan or are there other options 
being considered?


Thanks,

Tom





Re: [openstack-dev] [qa] Test Ceilometer polling in tempest

2014-07-25 Thread Ildikó Váncsa
Hi Matt,

Thanks for the reply, please see my comments inline.

Best Regards,
Ildiko

-Original Message-
From: Matthew Treinish [mailto:mtrein...@kortar.org] 
Sent: Thursday, July 24, 2014 6:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] Test Ceilometer polling in tempest

On Wed, Jul 16, 2014 at 07:44:38PM +0400, Dina Belova wrote:
 Ildiko, thanks for starting this discussion.
 
 Really, that is quite a painful problem for the Ceilometer and QA teams.
 As far as I know, there is currently a tendency towards making the
 integration Tempest tests quicker and less resource consuming - that's
 quite logical IMHO. Polling as a way of collecting information from
 different services and projects is quite heavy in terms of load
 on the Nova API, etc. - that's why I completely understand the wish of the
 QA team to get rid of it. However, polling still does a lot of work inside
 Ceilometer, and that's why integration testing for this feature is
 really important for me as a Ceilometer contributor - without pollster
 testing we have no way to check that it works.
 
 That's why I'll be really glad if Ildiko's (or any other) solution
 that allows testing polling in the gate is found and
 accepted.
 
 The problem is that the solution described above requires some change in
 how we prepare the environment for integration testing -
 and we really need the QA crew's help here. AFAIR, deprecating polling
 (in favour of notifications-only usage) was suggested in some of the IRC
 discussions, but that's not a solution that can just be used right now
 - we need a way to verify that Ceilometer works right now in order to
 continue improving it.
 
 So any suggestions and comments are welcome here :)
 
 Thanks!
 Dina
 
 
 On Wed, Jul 16, 2014 at 7:06 PM, Ildikó Váncsa 
 ildiko.van...@ericsson.com
 wrote:
 
   Hi Folks,
 
 
 
  We’ve faced some problems while running Ceilometer integration
  tests on the gate. The main issue is that we cannot test the polling
  mechanism: if we use a small polling interval, like 1 min,
  it puts high pressure on the Nova API. If we use a longer interval,
  like 10 mins, then we will not be able to execute any tests
  successfully, because they would run too long.
 
 
 
  The idea to solve this issue is to reconfigure Ceilometer when
  polling is tested. That would mean changing the polling
  interval from the default 10 mins to 1 min at the beginning of the
  test, restarting the service, and, when the test is finished, changing
  the polling interval back to 10 mins, which requires one
  more service restart. The downside of this idea is that it needs a
  service restart today. It is on the list of plans to support dynamic
  re-configuration of Ceilometer, which would mean the ability to change the
  polling interval without restarting the service.
 
 
 
  I know that this idea isn’t ideal, in that the system
  configuration is changed while the tests are running, but this is an
  expected scenario even in a production environment. We would change
  a parameter that a user can change at any time, in the same way users
  do it too. Later on, when we can reconfigure the polling interval
  without restarting the service, this approach will be even simpler.

So you're saying that you expect users to manually reconfigure Ceilometer on
the fly to be able to use polling? That seems far from ideal.

ildikov: Sorry, maybe I wasn't 100% clear in my original mail. Polling will
work out of the box after you install Ceilometer with the default
configuration. But it can happen that someone is not satisfied with the default
values, e.g. he/she needs samples from polling every minute instead of every
10 minutes. In this case the polling interval should be modified in one of the
configuration files (pipeline.yaml) and then the affected services should be
restarted. But if someone is happy with the 10 min polling interval, then
he/she does not have to do anything to make it work. Later on we plan to add
some automation, so that the polling interval can be configured without
restarting the services, which will make the user's life a bit easier.
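For illustration, a minimal sketch of what the test-time reconfiguration
could look like (the file path, service name, and restart mechanism here are
assumptions; pipeline.yaml does carry the interval setting):

import subprocess
import yaml

PIPELINE = '/etc/ceilometer/pipeline.yaml'  # assumed path

def set_polling_interval(seconds):
    # Rewrite the interval of every pipeline source, then restart the
    # agent so the new interval takes effect (no dynamic reload yet).
    with open(PIPELINE) as f:
        pipeline = yaml.safe_load(f)
    for source in pipeline.get('sources', []):
        source['interval'] = seconds
    with open(PIPELINE, 'w') as f:
        yaml.safe_dump(pipeline, f)
    subprocess.check_call(
        ['service', 'ceilometer-agent-central', 'restart'])

A test would call set_polling_interval(60) in its setup and
set_polling_interval(600) again in its teardown.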

 
 
 
  This idea would make it possible to test the polling mechanism of 
  Ceilometer without any radical change in the ordering of test cases 
  or any other things that would be strange in integration tests. We 
  couldn’t find any better way to solve the issue of the load on the APIs 
  caused by polling.
 
 
 
  What’s your opinion about this scenario? Do you think it could be a 
  viable solution to the above described problem?
 
 
 

Umm, so frankly this approach seems kind of crazy to me - aside from the
project-level implications of saying that, as a user, you can't use polling
data reliably unless you adjust the polling frequency of Ceilometer. The bigger
issue is that you're

Re: [openstack-dev] [neutron] [not-only-neutron] How to Contribute upstream in OpenStack Neutron

2014-07-25 Thread Luke Gorrie
On 24 July 2014 17:09, Kyle Mestery mest...@mestery.com wrote:

 I've received a lot of emails lately, mostly private, from people who
 feel they are being left out of the Neutron process. I'm unsure if
 other projects have people who feel this way, thus the uniquely worded
 subject above. I wanted to broadly address these concerns with this
 email.


I have one idea for low-hanging fruit to put new contributors more at ease:
to explain a little about both when and why the Merge button is finally
pressed on a change.

I mean so that new contributors won't have doubts like "is it bad that my
change isn't merged yet?", "am I being too meek?", "am I being too pushy?",
"have I missed a step somewhere?", "how often should I skip dinner with my
family to attend more/different IRC meetings?", and so on.

I have had a good experience with this but that is many thanks to friendly
people giving me informal feedback and reassurance outside of the Gerrit
workflow and official docs.

Cheers,
-Luke


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-25 Thread Daniel P. Berrange
On Thu, Jul 24, 2014 at 04:01:39PM -0400, Sean Dague wrote:
 On 07/24/2014 12:40 PM, Daniel P. Berrange wrote:
  On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:
  
  ==Future changes==
  
  ===Fixing Faster===
 
  We introduce bugs to OpenStack at some constant rate, which piles up
  over time. Our systems currently treat all changes as equally risky and
  important to the health of the system, which makes landing code changes
  to fix key bugs slow when we're at a high reset rate. We've got a manual
  process of promoting changes today to get around this, but that's
  actually quite costly in people time, and takes getting all the right
  people together at once to promote changes. You can see a number of the
  changes we promoted during the gate storm in June [3], and it was no
  small number of fixes to get us back to a reasonably passing gate. We
  think that optimizing this system will help us land fixes to critical
  bugs faster.
 
  [3] https://etherpad.openstack.org/p/gatetriage-june2014
 
  The basic idea is to use the data from elastic recheck to identify that
  a patch is fixing a critical gate related bug. When one of these is
  found in the queues it will be given higher priority, including bubbling
  up to the top of the gate queue automatically. The manual promote
  process should no longer be needed, and instead bugs fixing elastic
  recheck tracked issues will be promoted automatically.
 
  At the same time we'll also promote review on critical gate bugs through
  making them visible in a number of different channels (like on elastic
  recheck pages, review day, and in the gerrit dashboards). The idea here
  again is to make the reviews that fix key bugs pop to the top of
  everyone's views.
  
  In some of the harder gate bugs I've looked at (especially the infamous
  'live snapshot' timeout bug), it has been damn hard to actually figure
  out what's wrong. AFAIK, no one has ever been able to reproduce it
  outside of the gate infrastructure. I've even gone as far as setting up
  identical Ubuntu VMs to the ones used in the gate on a local cloud, and
  running the tempest tests multiple times, but still can't reproduce what
  happens on the gate machines themselves :-( As such we're relying on
  code inspection and the collected log messages to try and figure out
  what might be wrong.
  
  The gate collects alot of info and publishes it, but in this case I
  have found the published logs to be insufficient - I needed to get
  the more verbose libvirtd.log file. devstack has the ability to turn
  this on via an environment variable, but it is disabled by default
  because it would add 3% to the total size of logs collected per gate
  job.
 
 Right now we're at 95% full on 14 TB (which is the max # of volumes you
 can attach to a single system in RAX), so every gig is sacred. There has
 been a big push, which included the sprint last week in Darmstadt, to
 get log data into swift, at which point our available storage goes way up.
 
 So for right now, we're a little squashed. Hopefully within a month
 we'll have the full solution.

 As soon as we get those kinks out, I'd say we're in a position to flip
 on that logging in devstack by default.

I don't particularly mind not having verbose libvirtd.log debugging
enabled by default, if there were a way to turn it on for the individual
reviews we're debugging.

  There's no way for me to get that environment variable for devstack
  turned on for a specific review I want to test with. In the end I
  uploaded a change to nova which abused rootwrap to elevate privileges,
  install extra deb packages, reconfigure libvirtd logging and restart
  the libvirtd daemon.
  

  https://review.openstack.org/#/c/103066/11/etc/nova/rootwrap.d/compute.filters
https://review.openstack.org/#/c/103066/11/nova/virt/libvirt/driver.py
  
  This let me get further, but still not resolve it. My next attack is
  to build a custom QEMU binary and hack nova further so that it can
  download my custom QEMU binary from a website onto the gate machine
  and run the test with it. Failing that I'm going to be hacking things
  to try to attach to QEMU in the gate with GDB and get stack traces.
  Anything is doable thanks to rootwrap giving us a way to elevate
  privileges from Nova, but it is a somewhat tedious approach.
  
  I'd like us to think about whether there is anything we can do to make
  life easier in these kinds of hard debugging scenarios where the regular
  logs are not sufficient.
 
 Agreed. Honestly, though we do also need to figure out first fail
 detection on our logs as well. Because realistically if we can't debug
 failures from those, then I really don't understand how we're ever going
 to expect large users to.

Ultimately there's always going to be classes of bugs that are hard
or impossible for users to debug, which is why they'll engage vendors
for support of their OpenStack deployments. We should do as much as
possible 

[openstack-dev] [nova] [volume-delete-failure]

2014-07-25 Thread zhangtralon
Hi,

There is a problem I'd like to discuss with you.

Problem: a volume may be left over when we delete an instance.

Description: two scenarios can leave a volume behind when we delete an
instance whose task_state is block_device_mapping. The first scenario is
creating an instance from a boot volume created from an image; the other is
creating an instance from an image together with a volume created from an
image.

Reason: through analysis, we found that the volume id is not written to the
block_device_mapping table in the DB until a volume created from an image
(via parameters in Block Device Mapping v2) is completely attached to the
instance. If we delete the instance before the volume id is written to the
block_device_mapping table, the problem mentioned above occurs.

Although the cause of the problem has been found, I want to discuss the
solution with you.
Two examples to reproduce the problem on latest icehousce:
1. the first scene
(1) root@devstack:~# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
(2) root@devstack:~# nova boot --flavor m1.tiny --block-device id=61ebee75-5883-49a3-bf85-ad6f6c29fc1b,source=image,dest=volume,device=vda,size=1,shutdown=removed,bootindex=0 --nic net-id=354ba9ac-e6a7-4fd6-a49f-6ae18a815e95 tralon_test
root@devstack:~# nova list
+--------------------------------------+-------------+--------+----------------------+-------------+-------------------+
| ID                                   | Name        | Status | Task State           | Power State | Networks          |
+--------------------------------------+-------------+--------+----------------------+-------------+-------------------+
| 57cbb39d-c93f-44eb-afda-9ce00110950d | tralon_test | BUILD  | block_device_mapping | NOSTATE     | private=10.0.0.20 |
+--------------------------------------+-------------+--------+----------------------+-------------+-------------------+
(3) root@devstack:~# nova delete tralon_test
root@devstack:~# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
(4) root@devstack:~# cinder list
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| 3e5579a9-5aac-42b6-9885-441e861f6cc0 | available | None | 1    | None        | false    |                                      |
| a4121322-529b-4223-ac26-0f569dc7821e | available |      | 1    | None        | true     |                                      |
| a7ad846b-8638-40c1-be42-f2816638a917 | in-use    |      | 1    | None        | true     | 57cbb39d-c93f-44eb-afda-9ce00110950d |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
We can see that the instance 57cbb39d-c93f-44eb-afda-9ce00110950d was deleted
while the volume still exists in the in-use status.

2. The second scenario
(1) root@devstack:~# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
(2) root@devstack:~# nova boot --flavor m1.tiny --image 61ebee75-5883-49a3-bf85-ad6f6c29fc1b --nic net-id=354ba9ac-e6a7-4fd6-a49f-6ae18a815e95 --block-device id=61ebee75-5883-49a3-bf85-ad6f6c29fc1b,source=image,dest=volume,device=vdb,size=1,shutdown=removed tralon_image_instance
root@devstack:~# nova list
+--------------------------------------+-----------------------+--------+----------------------+-------------+-------------------+
| ID                                   | Name                  | Status | Task State           | Power State | Networks          |
+--------------------------------------+-----------------------+--------+----------------------+-------------+-------------------+
| 25bcfe84-0c3f-40d3-a917-4791e092fa06 | tralon_image_instance | BUILD  | block_device_mapping | NOSTATE     | private=10.0.0.26 |
+--------------------------------------+-----------------------+--------+----------------------+-------------+-------------------+
(3) root@devstack:~# nova delete 25bcfe84-0c3f-40d3-a917-4791e092fa06
(4) root@devstack:~# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
(5) root@devstack:~# cinder list
| ID | Status | Name | Size | Volume Type | 

Re: [openstack-dev] vhost-scsi support in Nova

2014-07-25 Thread Daniel P. Berrange
On Fri, Jul 25, 2014 at 01:18:33AM -0700, Nicholas A. Bellinger wrote:
 On Thu, 2014-07-24 at 11:06 +0100, Daniel P. Berrange wrote:
  On Wed, Jul 23, 2014 at 10:32:44PM -0700, Nicholas A. Bellinger wrote:
   *) vhost-scsi doesn't support migration
   
   Since its initial merge in QEMU v1.5, vhost-scsi has a migration blocker
   set.  This is primarily due to requiring some external orchestration in
   order to setup the necessary vhost-scsi endpoints on the migration
   destination to match what's running on the migration source.
   
   Here are a couple of points that Stefan detailed some time ago about 
   what's
   involved for properly supporting live migration with vhost-scsi:
   
   (1) vhost-scsi needs to tell QEMU when it dirties memory pages, either by
   DMAing to guest memory buffers or by modifying the virtio vring (which 
   also
   lives in guest memory).  This should be straightforward since the
   infrastructure is already present in vhost (it's called the log) and 
   used
   by drivers/vhost/net.c.
   
   (2) The harder part is seamless target handover to the destination host.
   vhost-scsi needs to serialize any SCSI target state from the source 
   machine
   and load it on the destination machine.  We could be in the middle of
   emulating a SCSI command.
   
   An obvious solution is to only support active-passive or active-active HA
   setups where tcm already knows how to fail over.  This typically requires
   shared storage and maybe some communication for the clustering mechanism.
   There are more sophisticated approaches, so this straightforward one is 
   just
   an example.
   
   That said, we do intend to support live migration for vhost-scsi using
   iSCSI/iSER/FC shared storage.
   
   *) vhost-scsi doesn't support qcow2
   
   Given all other cinder drivers do not use QEMU qcow2 to access storage
   blocks, with the exception of the Netapp and Gluster driver, this argument
   is not particularly relevant here.
   
   However, this doesn't mean that vhost-scsi (and target-core itself) cannot
   support qcow2 images.  There is currently an effort to add a userspace
   backend driver for the upstream target (tcm_core_user [3]), that will 
   allow
   for supporting various disk formats in userspace.
   
   The important part for vhost-scsi is that regardless of what type of 
   target
   backend driver is put behind the fabric LUNs (raw block devices using
   IBLOCK, qcow2 images using target_core_user, etc) the changes required in
   Nova and libvirt to support vhost-scsi remain the same.  They do not 
   change
   based on the backend driver.
   
   *) vhost-scsi is not intended for production
   
   vhost-scsi has been included in the upstream kernel since the v3.6 release, 
   and
   included in QEMU since v1.5.  vhost-scsi runs unmodified out of the box 
   on a
   number of popular distributions including Fedora, Ubuntu, and OpenSuse.  
   It
   also works as a QEMU boot device with Seabios, and even with the Windows
   virtio-scsi mini-port driver.
   
   There is at least one vendor who has already posted libvirt patches to
   support vhost-scsi, so vhost-scsi is already being pushed beyond a 
   debugging
   and development tool.
   
   For instance, here are a few specific use cases where vhost-scsi is
   currently the only option for virtio-scsi guests:
   
 - Low (sub 100 usec) latencies for AIO reads/writes with small iodepth
   workloads
 - 1M+ small block IOPs workloads at low CPU utilization with large
   iodepth workloads.
 - End-to-end data integrity using T10 protection information (DIF)
  
  IIUC, there is also missing support for block jobs like drive-mirror
  which is needed by Nova.
  
 
 This limitation can be considered an acceptable trade-off in initial
 support by some users, given the already considerable performance +
 efficiency gains that vhost logic provides to KVM/virtio guests.
 
 Others would like to utilize virtio-scsi to access features like SPC-3
 Persistent Reservations, ALUA Explicit/Implicit Multipath, and
 EXTENDED_COPY offload provided by the LIO target subsystem.
 
 Note these three SCSI features are enabled (by default) on all LIO
 target fabric LUNs, starting with v3.12 kernel code.
 
  From a functionality POV, migration & drive-mirror support are the two
  core roadblocks to including vhost-scsi in Nova (as well as libvirt
  support for it of course). Realistically it doesn't sound like these
  are likely to be solved soon enough to give us confidence in taking
  this for the Juno release cycle.
  
 
 The spec is for initial support of vhost-scsi controller endpoints
 during Juno and, as mentioned earlier by Vish, should be considered an
 experimental feature given the caveats you've highlighted above.

To accept something as a feature marked experimental, IMHO we need to
have some confidence that it will be able to be marked non-experimental
in the future. The vhost-scsi code has been around in QEMU for over a

Re: [openstack-dev] [nova]resize

2014-07-25 Thread Jesse Pretorius
On 25 July 2014 09:50, Li Tianqing jaze...@163.com wrote:

 But right now, we trigged resize by flavor changed. Do you mean we can
 split the resize by cpu, by memory, or by disk?


No, what I mean is that if the source and target flavor have the same root
and ephemeral disk size, then check whether the current host can handle the
resize operation on itself and do the resize without involving any of the
disk operations.
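A minimal sketch of the check being proposed (the attribute names follow
nova's flavor fields; the helper itself is hypothetical):

def can_resize_in_place(old_flavor, new_flavor):
    # Disk geometry must be untouched; only vCPUs and RAM may differ.
    if (old_flavor['root_gb'] != new_flavor['root_gb'] or
            old_flavor['ephemeral_gb'] != new_flavor['ephemeral_gb']):
        return False
    return (old_flavor['vcpus'] != new_flavor['vcpus'] or
            old_flavor['memory_mb'] != new_flavor['memory_mb'])

If it returns True, the driver could adjust the instance definition on the
current host and skip the disk copy entirely; whether the host has headroom
for the larger flavor would still need a scheduler check.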


Re: [openstack-dev] Virtio-scsi settings nova-specs exception

2014-07-25 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 09:09:52AM +1000, Michael Still wrote:
 Ok, I am going to take Daniel and Dan's comments as agreement that
 this spec freeze exception should go ahead, so the exception is
 approved. The exception is in the form of another week to get the spec
 merged, so quick iterations are the key.

I'm withdrawing my sponsorship of this. I thought it was going to be
a trivial addition, but it is turning into a huge can of worms. I
think it is best to push it to Kilo to allow sufficient time to
address the question marks over it.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] vhost-scsi support in Nova

2014-07-25 Thread Nicholas A. Bellinger
Hey Stefan,

On Thu, 2014-07-24 at 21:50 +0100, Stefan Hajnoczi wrote:
 On Thu, Jul 24, 2014 at 7:45 PM, Vishvananda Ishaya
 vishvana...@gmail.com wrote:
  As I understand this work, vhost-scsi provides massive perf improvements
  over virtio, which makes it seem like a very valuable addition. I’m ok
  with telling customers that it means that migration and snapshotting are
  not supported as long as the feature is protected by a flavor type or
  image metadata (i.e. not on by default). I know plenty of customers that
  would gladly trade some of the friendly management features for better
  i/o performance.
 
  Therefore I think it is acceptable to take it with some documentation that
  it is experimental. Maybe I’m unique but I deal with people pushing for
  better performance all the time.
 
 Work to make userspace virtio-scsi scale well on multicore hosts has
 begun.  I'm not sure there will be a large IOPS scalability difference
 between the two going forward.  I have CCed Fam Zheng who is doing
 this.

The latency and efficiency gains with existing vhost-scsi vs.
virtio-scsi (minus data-plane) are pretty significant, even when a
single queue per virtio controller is used.

Note the sub 100 usec latencies we've observed with fio random 4k
iodepth=1 workloads are with vhost exposing guest I/O buffer memory as a
zero-copy direct data placement sink for remote RDMA WRITEs. 

Also, average I/O latency is especially low when the guest is capable of
utilizing a blk-mq based virtio guest driver.  For the best possible
results in a KVM guest, virtio-scsi will want to utilize the upcoming
scsi-mq support in Linux, which will greatly benefit both QEMU data-plane
and vhost-scsi style approaches to SCSI target I/O submission.

 
 In virtio-blk vs vhost-blk a clear performance difference was never
 observed.  At the end of the day, the difference is whether a kernel
 thread or a userspace thread submits the aio request.  virtio-blk
 efforts remain focussed on userspace where ease of migration,
 management, and lower security risks are favorable.
 

All valid points, no disagreement here.

 I guess virtio-scsi will play out the same way, which is why I stopped
 working on vhost-scsi.  If others want to do the work to integrate
 vhost-scsi (aka tcm_vhost), that's great.  Just don't expect that
 performance will make the effort worthwhile.  The real difference
 between the two is that the in-kernel target is a powerful and
 configurable SCSI target, whereas the userspace QEMU target is
 focussed on emulating SCSI commands without all the configuration
 goodies.

Understood.

As mentioned, we'd like the Nova folks to consider vhost-scsi support as
an experimental feature for the Juno release of OpenStack, given the
known caveats.

Thanks for your comments!

--nab




Re: [openstack-dev] [infra] recheck no bug and comment

2014-07-25 Thread Sean Dague
On 07/25/2014 01:18 AM, Ian Wienand wrote:
 On 07/16/2014 11:15 PM, Alexis Lee wrote:
 What do you think about allowing some text after the words recheck no
 bug?
 
 I think this is a good idea; I am often away from a change for a bit,
 something happens in-between and Jenkins fails it, but chasing it down
 days later is fairly pointless given how fast things move.
 
 It would be nice if I could indicate I thought about this.  In fact,
 there might be an argument for *requiring* a reason
 
 I proposed [1] to allow this
 
 -i
 
 [1] https://review.openstack.org/#/c/109492/

At the QA / Infra meetup we actually talked about the recheck syntax,
and about changing the way elastic recheck interacts with the user.

https://review.openstack.org/#/q/status:open+project:openstack-infra/elastic-recheck+branch:master+topic:erchanges,n,z

and

https://review.openstack.org/#/q/status:open+project:openstack-infra/config+branch:master+topic:er,n,z

Are the result of that. Basically going forward we'll just support

'recheck.*'

If you want to provide us with info after the recheck, great, we can
mine it later. However we aren't using that a ton at this point, so
we'll make it easier on people.
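As a rough illustration of that matching (this regex is an assumption; the
actual Zuul trigger configuration is not shown here):

import re

RECHECK = re.compile(r'^recheck( .*)?$')

assert RECHECK.match('recheck')
assert RECHECK.match('recheck saw a network blip in the console log')
assert not RECHECK.match('please recheck')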

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all] Treating notifications as a contract

2014-07-25 Thread Chris Dent



There's a review in progress for a generic event format for
PaaS-services which is a move in the right spirit: allow various
services to join the notification party without needing special
handlers.

See: https://review.openstack.org/#/c/101967/
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [infra] recheck no bug and comment

2014-07-25 Thread Daniel P. Berrange
On Fri, Jul 25, 2014 at 06:38:29AM -0400, Sean Dague wrote:
 On 07/25/2014 01:18 AM, Ian Wienand wrote:
  On 07/16/2014 11:15 PM, Alexis Lee wrote:
  What do you think about allowing some text after the words recheck no
  bug?
  
  I think this is a good idea; I am often away from a change for a bit,
  something happens in-between and Jenkins fails it, but chasing it down
  days later is fairly pointless given how fast things move.
  
  It would be nice if I could indicate I thought about this.  In fact,
  there might be an argument for *requiring* a reason
  
  I proposed [1] to allow this
  
  -i
  
  [1] https://review.openstack.org/#/c/109492/
 
 At the QA / Infra meetup we actually talked about the recheck syntax,
 and about changing the way elastic recheck interacts with the user.
 
 https://review.openstack.org/#/q/status:open+project:openstack-infra/elastic-recheck+branch:master+topic:erchanges,n,z
 
 and
 
 https://review.openstack.org/#/q/status:open+project:openstack-infra/config+branch:master+topic:er,n,z
 
 Are the result of that. Basically going forward we'll just support
 
 'recheck.*'

I'm not sure I understand what you mean by that ? Are we going to
use the literal string 'recheck.*' or do you mean we'll use 'recheck'
and the user can put arbitrary text after it ?

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [infra] recheck no bug and comment

2014-07-25 Thread Sean Dague
On 07/25/2014 06:53 AM, Daniel P. Berrange wrote:
 On Fri, Jul 25, 2014 at 06:38:29AM -0400, Sean Dague wrote:
 On 07/25/2014 01:18 AM, Ian Wienand wrote:
 On 07/16/2014 11:15 PM, Alexis Lee wrote:
 What do you think about allowing some text after the words recheck no
 bug?

 I think this is a good idea; I am often away from a change for a bit,
 something happens in-between and Jenkins fails it, but chasing it down
 days later is fairly pointless given how fast things move.

 It would be nice if I could indicate I thought about this.  In fact,
 there might be an argument for *requiring* a reason

 I proposed [1] to allow this

 -i

 [1] https://review.openstack.org/#/c/109492/

 At the QA / Infra meetup we actually talked about the recheck syntax,
 and about changing the way elastic recheck interacts with the user.

 https://review.openstack.org/#/q/status:open+project:openstack-infra/elastic-recheck+branch:master+topic:erchanges,n,z

 and

 https://review.openstack.org/#/q/status:open+project:openstack-infra/config+branch:master+topic:er,n,z

 Are the result of that. Basically going forward we'll just support

 'recheck.*'
 
 I'm not sure I understand what you mean by that ? Are we going to
 use the literal string 'recheck.*' or do you mean we'll use 'recheck'
 and the user can put arbitrary text after it ?

Sorry, I think in regex. recheck + arbitrary string.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [infra] recheck no bug and comment

2014-07-25 Thread Daniel P. Berrange
On Fri, Jul 25, 2014 at 07:09:56AM -0400, Sean Dague wrote:
 On 07/25/2014 06:53 AM, Daniel P. Berrange wrote:
  On Fri, Jul 25, 2014 at 06:38:29AM -0400, Sean Dague wrote:
  On 07/25/2014 01:18 AM, Ian Wienand wrote:
  On 07/16/2014 11:15 PM, Alexis Lee wrote:
  What do you think about allowing some text after the words recheck no
  bug?
 
  I think this is a good idea; I am often away from a change for a bit,
  something happens in-between and Jenkins fails it, but chasing it down
  days later is fairly pointless given how fast things move.
 
  It would be nice if I could indicate I thought about this.  In fact,
  there might be an argument for *requiring* a reason
 
  I proposed [1] to allow this
 
  -i
 
  [1] https://review.openstack.org/#/c/109492/
 
  At the QA / Infra meetup we actually talked about the recheck syntax,
  and about changing the way elastic recheck interacts with the user.
 
  https://review.openstack.org/#/q/status:open+project:openstack-infra/elastic-recheck+branch:master+topic:erchanges,n,z
 
  and
 
  https://review.openstack.org/#/q/status:open+project:openstack-infra/config+branch:master+topic:er,n,z
 
  Are the result of that. Basically going forward we'll just support
 
  'recheck.*'
  
  I'm not sure I understand what you mean by that ? Are we going to
  use the literal string 'recheck.*' or do you mean we'll use 'recheck'
  and the user can put arbitrary text after it ?
 
 Sorry, I think in regex. recheck + arbitrary string.

Would that still allow us to only trigger 3rd party CI ? eg if we do
'recheck xenserver' I don't want to trigger the main CI, only the Xen
CI.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Fuel] Authentication is turned on - Fuel API and UI

2014-07-25 Thread Evgeniy L
Hi,

I have several concerns about password changing.

 The default password can be changed via the UI or via fuel-cli. When the
 password is changed via the UI or fuel-cli it is not stored in any file,
 only in Keystone

It's important to also change the password in /etc/fuel/astute.yaml,
otherwise it will be impossible for the user to run an upgrade:

1. the upgrade system uses credentials from /etc/fuel/astute.yaml
to authenticate in nailgun
2. the upgrade system runs puppet to upgrade dockerctl/fuelclient
on the host system; puppet uses credentials from /etc/fuel/astute.yaml
to update the config /etc/fuel/client/config.yaml [1], so even if the user
changed the password in the fuelclient config, it will be overwritten
after the upgrade

If we don't want to change the credentials in /etc/fuel/astute.yaml,
let's at least add a warning in the documentation.

[1]
https://github.com/stackforge/fuel-library/blob/705dc089037757ed8c5a25c4cf78df71f9bd33b0/deployment/puppet/nailgun/examples/host-only.pp#L51-L55



On Thu, Jul 24, 2014 at 6:17 PM, Lukasz Oles lo...@mirantis.com wrote:

 Hi all,

 one more thing. You do not need to install keystone in your development
 environment. By default it runs there in fake mode. Keystone mode is
 enabled only on iso. If you want to test it locally you have to install
 keystone and configure nailgun as Kamil explained.

 Regards,


 On Thu, Jul 24, 2014 at 3:57 PM, Mike Scherbakov mscherba...@mirantis.com
  wrote:

 Kamil,
 thank you for the detailed information.

 Meg, do we have anything documented about authx yet? I think Kamil's
 email can be used as a source to prepare user and operation guides for Fuel
 5.1.

 Thanks,


 On Thu, Jul 24, 2014 at 5:45 PM, Kamil Sambor ksam...@mirantis.com
 wrote:

 Hi folks,

  All parts of the code related to stages I and II from the blueprint
  http://docs-draft.openstack.org/29/96429/11/gate/gate-fuel-specs-docs/2807f30/doc/build/html/specs/5.1/access-control-master-node.html
  are merged. As a result, Fuel (API and UI) now has authentication via
  Keystone, and it is required by default. Keystone is installed in a new
  container during master node installation. The password can be configured
  via fuelmenu during installation (the default user:password is
  admin:admin). The password is saved in astute.yaml; the admin_token is
  also stored there.
  Almost all endpoints in Fuel are protected and require an authentication
  token. We made an exception for a few endpoints, which are defined in
  nailgun/middleware/keystone.py in public_url.
  The default password can be changed via the UI or via fuel-cli. When the
  password is changed via the UI or fuel-cli it is not stored in any file,
  only in Keystone, so if you forget the password you can change it using
  the keystone client from the master node and the admin_token from
  astute.yaml, with the command: keystone
  --os-endpoint=http://10.20.0.2:35357/v2.0 --os-token=admin_token
  password-update .
  The Fuel client now authenticates with the user and password stored in
  /etc/fuel/client/config.yaml. The password in this file is not updated
  when the password is changed via fuel-cli or the UI; the user must change
  it manually. If the user doesn't want to use the config file, the
  credentials can be passed to fuel-cli via flags: --os-username=admin
  --os-password=test. We also added the ability to change the password via
  fuel-cli; to do this, execute: fuel user --change-password --new-pass=new .
  To enable or disable authentication, change /etc/nailgun/settings.yaml
  (AUTHENTICATION_METHOD) in the nailgun container.

 Best regards,
 Kamil S.





 --
 Mike Scherbakov
 #mihgen






 --
 Łukasz Oleś



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] recheck no bug and comment

2014-07-25 Thread Sean Dague
On 07/25/2014 07:17 AM, Daniel P. Berrange wrote:
 On Fri, Jul 25, 2014 at 07:09:56AM -0400, Sean Dague wrote:
 On 07/25/2014 06:53 AM, Daniel P. Berrange wrote:
 On Fri, Jul 25, 2014 at 06:38:29AM -0400, Sean Dague wrote:
 On 07/25/2014 01:18 AM, Ian Wienand wrote:
 On 07/16/2014 11:15 PM, Alexis Lee wrote:
 What do you think about allowing some text after the words recheck no
 bug?

 I think this is a good idea; I am often away from a change for a bit,
 something happens in-between and Jenkins fails it, but chasing it down
 days later is fairly pointless given how fast things move.

 It would be nice if I could indicate I thought about this.  In fact,
 there might be an argument for *requiring* a reason

 I proposed [1] to allow this

 -i

 [1] https://review.openstack.org/#/c/109492/

 At the QA / Infra meetup we actually talked about the recheck syntax
 and about changing the way elastic recheck interacts with the user.

 https://review.openstack.org/#/q/status:open+project:openstack-infra/elastic-recheck+branch:master+topic:erchanges,n,z

 and

 https://review.openstack.org/#/q/status:open+project:openstack-infra/config+branch:master+topic:er,n,z

 Are the result of that. Basically going forward we'll just support

 'recheck.*'

 I'm not sure I understand what you mean by that ? Are we going to
 use the literal string 'recheck.*' or do you mean we'll use 'recheck'
 and the user can put arbitrary text after it ?

 Sorry, I think in regex. recheck + arbitrary string.
 
 Would that still allow us to only trigger 3rd party CI ? eg if we do
 'recheck xenserver' I don't want to trigger the main CI, only the Xen
 CI.

No, the 3rd party folks went off and created a grammar without
discussing it with the infra team (also against specific objections to
doing so). Such it is.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] recheck no bug and comment

2014-07-25 Thread Bob Ball
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 25 July 2014 12:36
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [infra] recheck no bug and comment
 
  Would that still allow us to only trigger 3rd party CI ? eg if we do
  'recheck xenserver' I don't want to trigger the main CI, only the Xen
  CI.
 
 No, the 3rd party folks went off and created a grammar without
 discussing it with the infra team (also against specific objections to
 doing so). Such it is.

When setting up the XenServer CI the recheck syntax I added was requested by 
reviewers and I certainly wasn't aware of these specific objections.

Do you have a proposal for the grammar you'd like 3rd party CIs to follow?

Thanks,

Bob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] recheck no bug and comment

2014-07-25 Thread Daniel P. Berrange
On Fri, Jul 25, 2014 at 07:35:52AM -0400, Sean Dague wrote:
 On 07/25/2014 07:17 AM, Daniel P. Berrange wrote:
  On Fri, Jul 25, 2014 at 07:09:56AM -0400, Sean Dague wrote:
  On 07/25/2014 06:53 AM, Daniel P. Berrange wrote:
  On Fri, Jul 25, 2014 at 06:38:29AM -0400, Sean Dague wrote:
  On 07/25/2014 01:18 AM, Ian Wienand wrote:
  On 07/16/2014 11:15 PM, Alexis Lee wrote:
  What do you think about allowing some text after the words recheck no
  bug?
 
  I think this is a good idea; I am often away from a change for a bit,
  something happens in-between and Jenkins fails it, but chasing it down
  days later is fairly pointless given how fast things move.
 
  It would be nice if I could indicate I thought about this.  In fact,
  there might be an argument for *requiring* a reason
 
  I proposed [1] to allow this
 
  -i
 
  [1] https://review.openstack.org/#/c/109492/
 
  At the QA / Infra meetup we actually talked about the recheck syntax,
  and to change the way elastic recheck is interacting with the user.
 
  https://review.openstack.org/#/q/status:open+project:openstack-infra/elastic-recheck+branch:master+topic:erchanges,n,z
 
  and
 
  https://review.openstack.org/#/q/status:open+project:openstack-infra/config+branch:master+topic:er,n,z
 
  Are the result of that. Basically going forward we'll just support
 
  'recheck.*'
 
  I'm not sure I understand what you mean by that ? Are we going to
  use the literal string 'recheck.*' or do you mean we'll use 'recheck'
  and the user can put arbitrary text after it ?
 
  Sorry, I think in regex. recheck + arbitrary string.
  
  Would that still allow us to only trigger 3rd party CI ? eg if we do
  'recheck xenserver' I don't want to trigger the main CI, only the Xen
  CI.
 
 No, the 3rd party folks went off and created a grammar without
 discussing it with the infra team (also against specific objections to
 doing so). Such it is.

Whether or not we agree with the current syntax, it is *critical* to
maintain this ability to trigger only 3rd party CI systems, otherwise
the odds of being able to get a pass from all CI go down the toilet
even further than they already are. 

We must resolve this before introducing the new syntax.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] - question about statsd messages and 404 errors

2014-07-25 Thread Seger, Mark (Cloud Services)
I'm trying to track object server GET errors using statsd and I'm not seeing 
them.  The test I'm doing is to simply do a GET on a non-existent object.  As 
expected, a 404 is returned and the object server log records it.  However, 
statsd implies it succeeded because there were no errors reported.  The admin 
guide does clearly say the GET timing includes failed GETs, but my question 
then becomes: how is one to tell there was a failure?  Should there be 
another type of message that DOES report errors?  Or how about including these 
in the 'object-server.GET.errors.timing' message?

Since the server I'm testing on is running all services, you get to see them 
together, but if I were looking at a standalone object server I'd never know:

account-server.HEAD.timing:1.85513496399|ms
proxy-server.account.HEAD.204.timing:21.3139057159|ms
proxy-server.account.HEAD.204.xfer:0|c
proxy-server.container.HEAD.204.timing:6.98900222778|ms
proxy-server.container.HEAD.204.xfer:0|c
account-server.HEAD.timing:1.72400474548|ms
proxy-server.account.HEAD.204.timing:19.4480419159|ms
proxy-server.account.HEAD.204.xfer:0|c
object-server.GET.timing:0.359058380127|ms
object-server.GET.timing:0.255107879639|ms
proxy-server.object.GET.404.first-byte.timing:7.84802436829|ms
proxy-server.object.GET.404.timing:8.13698768616|ms
proxy-server.object.GET.404.xfer:70|c
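(For anyone who wants to catch these today: the status code does appear
in the proxy-server metric names, so a small UDP listener can flag error
samples. A minimal sketch; the bind address and port are assumptions to
adjust for your statsd setup:)

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('127.0.0.1', 8125))  # wherever your statsd traffic goes

    while True:
        payload = sock.recv(4096).decode()
        for line in payload.splitlines():
            # e.g. "proxy-server.object.GET.404.timing:8.14|ms"
            name = line.split(':', 1)[0]
            parts = name.split('.')
            if 'proxy-server' in parts and any(
                    p.isdigit() and p[0] in '45' for p in parts):
                print('error sample:', line)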

-mark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] recheck no bug and comment

2014-07-25 Thread Sean Dague
On 07/25/2014 07:48 AM, Bob Ball wrote:
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 25 July 2014 12:36
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [infra] recheck no bug and comment

 Would that still allow us to only trigger 3rd party CI ? eg if we do
 'recheck xenserver' I don't want to trigger the main CI, only the Xen
 CI.

 No, the 3rd party folks went off and created a grammar without
 discussing it with the infra team (also against specific objections to
 doing so). Such it is.
 
 When setting up the XenServer CI the recheck syntax I added was requested by 
 reviewers and I certainly wasn't aware of these specific objections.
 
 Do you have a proposal for the grammar you'd like 3rd party CIs to follow?

Consider ^(recheck|check|reverify) an off-limits namespace.

If you want a namespace for commands specific to a 3rd party CI, that
should start with the 3rd party CI's name:

^<3rd party CI name>: <command>

It should be the official short name in the system so there is no
future collision issue.
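As an illustration only (a sketch, not the gating code; the exact
pattern for third-party names below is an assumption):

    import re

    # Upstream CI reacts to recheck/check/reverify plus arbitrary text.
    UPSTREAM = re.compile(r'^(recheck|check|reverify)\b')
    # Comments reserved for a third-party system, e.g. "XenServer CI: recheck".
    THIRD_PARTY = re.compile(r'^(?P<name>[\w -]+ CI): (?P<command>.+)$')

    for comment in ('recheck no bug', 'recheck bug 1234567',
                    'XenServer CI: recheck'):
        if THIRD_PARTY.match(comment):
            print(comment, '-> third-party CI only')
        elif UPSTREAM.match(comment):
            print(comment, '-> upstream CI')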

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mentor program?

2014-07-25 Thread Anne Gentle
On Thu, Jul 24, 2014 at 11:36 PM, Jay S. Bryant 
jsbry...@electronicjungle.net wrote:

 Josh,

 I agree that the mailing list is a hard place to provide such guidance
 and direction.  I find mailing lists intimidating.  It took me some
 time to be comfortable submitting here.

 After my response to this note the other day I was pinged internally via
 our messaging client asking for some mentoring.  They seemed more
 comfortable introducing themselves that way and talking.  I hate to
 suggest an IRC channel for this, as IRC drives me nuts and also doesn't
 seem the best fit.  Maybe instead a wiki page with the IRC names of people
 who are willing to be individually contacted with requests for
 guidance.  I could be listed as a contact for Cinder or i18n as well as
 a general process contact.  You for taskflow, etc.


We started #openstack-101 for the purpose of onboarding new contributors
(not for learning OpenStack necessarily). So we could use that as well as the
wiki page, which is a great idea.



 I guess a way to communicate what we feel we can help with and provide
 that as a resource for those who may need help in those areas.  Also, on
 the same wiki maybe try to cull together some of the 'getting started'
 info.

 What do you think?

 Jay

 On Wed, 2014-07-23 at 15:42 -0700, Joshua Harlow wrote:
  Awesome,
 
 
  When I start to see emails on the ML that say "anyone need any help
  for XYZ ..." (which is great btw) it makes me feel like there should
  be a more appropriate avenue for those inspirational folks looking to
  get involved (a ML isn't really the best place for this kind of
  guidance and direction).
 
 
  And in general mentoring will help all involved if we all do more of
  it :-)
 
 
  Let me know if any thing is needed that I can possible help with to
  get more of it going.
 
 
  -Josh
 
  On Jul 23, 2014, at 2:44 PM, Jay Bryant
  jsbry...@electronicjungle.net wrote:
 
   Great question Josh!
  
   Have been doing a lot of mentoring within IBM for OpenStack and have
   now been asked to formalize some of that work.  Not surprised there
   is an external need as well.
  
    Anne and Stefano: let me know if there is anything I can do to
    help.
  
   Jay
  
   Hi all,
  
    I was reading over an IMHO insightful hacker news thread last night:
  
   https://news.ycombinator.com/item?id=8068547
  
   Labeled/titled: 'I made a patch for Mozilla, and you can do it too'
  
    It made me wonder what kind of mentoring support we as a
    community are offering to newbies (a random google search for 'openstack
    mentoring' shows mentors for GSoC, mentors for interns, outreach for
    women... but no mention of mentors as a way for everyone to get
    involved)?
  
    Looking at the comments in that hacker news thread and the article
    itself, it seems like mentoring is stressed over and over as the way
    to get involved.
  
    Have there been ongoing efforts to establish such a program? (I know
    there is training work that has been worked on, but that's not
    exactly the same.)
  
   Thoughts, comments...?
  
   -Josh
 
 




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mentor program?

2014-07-25 Thread Abhishek L

Anne Gentle writes:

 On Thu, Jul 24, 2014 at 11:36 PM, Jay S. Bryant 
 jsbry...@electronicjungle.net wrote:

 Josh,

 I agree that the mailing list is a hard place to provide such guidance
 and direction.  I find mailing lists intimidating.  It took me some
 time to be comfortable submitting here.

 After my response to this note the other day I was pinged internally via
 our messaging client asking for some mentoring.  They seemed more
 comfortable introducing themselves that way and talking.  I hate to
 suggest an IRC channel for this, as IRC drives me nuts and also doesn't
 seem the best fit.  Maybe instead a wiki page with the IRC names of people
 who are willing to be individually contacted with requests for
 guidance.  I could be listed as a contact for Cinder or i18n as well as
 a general process contact.  You for taskflow, etc.


 We started #openstack-101 for the purpose of onboarding new contributors
 (not for learning OpenStack necessarily). So we could use that as well as the
 wiki page, which is a great idea.

Dev mailing lists can be a difficult place for a newbie to find some
footing. Python's core-mentorship list is a great place for people who
want to start contributing to Python; low-hanging bugs are also
regularly announced there to help start the contributing process. Maybe
having a list like that could be helpful?

#openstack-101 is a great starting place to help solve the initial
blocks etc.; where to contribute, IMO, could be better suited for
announcements on the mailing lists.

-- 
Abhishek


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano]

2014-07-25 Thread McLellan, Steven
Hi Alexander,

Thanks for your 5c.

Some clarification – I don’t want to change the networking classes so much as 
how they’re invoked. Currently, Instance.deploy() creates and pushes the heat 
fragments necessary to create a network (if it’s the first instance to be 
deployed). This means that even in a derived class of Instance, I can’t add 
heat elements that refer to the server before deploy() is called, because the 
server won’t exist as far as Heat is concerned.

It seems strange that Instance.deploy is responsible for creating network 
elements, and in current usage, it feels like it is not necessary for the heat 
stack to be pushed first to create the network and then create the instance. 
One shorter term option might be that that NeutronNetwork by default calls 
stack.push, but when created through Instance.deploy() does not (because it 
will be called to instantiate the server). I will also see if it’s possible for 
the instance to ask its applications for their softwareconfig elements before 
it deploys itself, though I’m not sure yet if I like that usage pattern (that 
an Instance starts to expect things about Applications).

Steve

From: Alexander Tivelkov [mailto:ativel...@mirantis.com]
Sent: Thursday, July 24, 2014 5:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Murano]

Hi Steve,

Sorry I've missed this discussion for a while, but it looks like I have to add 
my 5 cents here now.

Initially our intention was to make each Murano component self-deployable, 
i.e. to encapsulate within its deploy method all the actions necessary to 
create the component, including generation of the Heat snippet, merging it 
into the environment's template, pushing this template to Heat, and doing any 
post-Heat configuration if needed via the Murano Agent.

That's why the deploy method of the NeutronNetwork class does 
$.environment.stack.push() - to make sure that the network is created when this 
method is called, regardless of the usage of this network in other components 
of the Environment. If you remove it from there, the call to network.deploy() 
will simply update the template in the environment.stack, but the actual update 
will not happen. So, the deploy method will not actually deploy anything - it 
will just prepare a snippet for future pushing.

I understand your concerns though. But probably the solution should be more 
complex - and I like the idea of having an event-based workflow, as proposed by 
Stan above.
I don't even think that we really need the deploy() methods in the Apps or 
Components.
Instead, I suggest having more fine-grained workflow steps which are executed 
by a higher-level entity, such as the Environment.

For example, Heat-based components may have createHeatSnippet() methods which 
just return the part of the heat template corresponding to the component. The 
deploy method of the environment may iteratively process all its components 
(and their nested components as well, of course), call these createHeatSnippet 
methods, merge the results into a single template - and then push this template 
to Heat in a single call. Then a post-Heat config phase may be executed, if 
needed, to run something with the Murano Agent (as Heat Software Config is now 
the recommended way to deploy software, there should not be too many such 
needs - only for Windows-based deployments and other legacy stuff).
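To make the shape of that workflow concrete (Murano components are
written in MuranoPL, so this is only a language-neutral sketch in
Python, with all names illustrative):

    class Component(object):
        """One deployable component; subclasses emit Heat snippets."""
        def __init__(self, nested=()):
            self.nested = list(nested)

        def create_heat_snippet(self):
            return {'resources': {}}  # fragment of a Heat template

    class Environment(object):
        def __init__(self, components, stack):
            self.components = components
            self.stack = stack  # assumed to expose a push(template) call

        def _walk(self, comps):
            for comp in comps:
                yield comp
                for sub in self._walk(comp.nested):
                    yield sub

        def deploy(self):
            template = {'resources': {}}
            # Phase 1: collect snippets from every (nested) component.
            for comp in self._walk(self.components):
                snippet = comp.create_heat_snippet()
                template['resources'].update(snippet.get('resources', {}))
            # Phase 2: a single push to Heat, instead of one per component.
            self.stack.push(template)
            # Phase 3: post-Heat configuration (Murano Agent) would go here.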


--
Regards,
Alexander Tivelkov

On Tue, Jul 22, 2014 at 2:59 PM, Lee Calcote (lecalcot) 
lecal...@cisco.com wrote:
Gents,

For what it’s worth - We’ve long accounting for “extension points” within our 
VM and physical server provisioning flows, where developers may drop in code to 
augment OOTB behavior with customer/solution-specific needs.  While there are 
many extension points laced throughout different points in the provisioning 
flow, we pervasively injected “pre” and “post” provisioning extension points to 
allow for easy customization (like the one being attempted by Steve).

The notions of prepareDeploy and finishDeploy resonant well.

Regards,
Lee
Lee Calcote
Sr. Software Engineering Manager
Cloud and Virtualization Group

Phone: 512-378-8835
Mail/Jabber/Video: lecal...@cisco.com

United States
www.cisco.com

From: Stan Lagun sla...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, July 22, 2014 at 4:37 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Murano]

Hi Steve,

1. There are no objections whatsoever if you know how to do it without breaking 
the entire concept
2. I think that the deployment workflow needs to be broken 

Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-07-25 Thread Kerrin, Michael

Coming back to this.

I have updated the review https://review.openstack.org/#/c/90134/ so 
that it passes CI for ubuntu (obviously failing on fedora) and I am 
happy with it. In order to close this off, my plan is to get 
feedback on the mysql element in this review. Any changes that people 
request in the next few days I will make and test via the CI and 
internally. Next I will rename mysql -> percona and restore the old 
mysql in this review. At that point the percona code will not be 
tested via CI, so I don't want to make any more changes then, and 
I hope it will get approved. So this review will move to adding a 
percona element.


Then, following the mariadb integration, I would like to get this 
https://review.openstack.org/#/c/109415/ change to tripleo-incubator 
through; it will include the new percona element in ubuntu images. So 
in CI, fedora will use mariadb and ubuntu will use percona.


Looking forward to any feedback,

Michael

On 09 July 2014 14:44:15, Sullivan, Jon Paul wrote:

-Original Message-
From: Giulio Fidente [mailto:gfide...@redhat.com]
Sent: 04 July 2014 14:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

On 07/01/2014 05:47 PM, Michael Kerrin wrote:

I propose making mysql an abstract element and the user must choose either
the percona or mariadb-rpm element. CI must be set up correctly


+1

seems a cleaner and more sustainable approach


There was some concern from lifeless around recreating package-style 
dependencies in dib with element-provides/element-deps, in particular a 
suggestion that meta-elements are not desirable[1] (I hope I am paraphrasing 
you correctly Rob).

That said, this is exactly the reason that element-provides was brought in, so that the 
definition of the image could have mysql as an element, but that the 
DIB_*_EXTRA_ARGS variable would provide the correct one, which would then list itself as 
providing mysql.

This would not prevent the sharing of common code through a differently-named 
element, such as mysql-common.


[1] see comments on April 10th in https://review.openstack.org/#/c/85776/


--
Giulio Fidente
GPG KEY: 08D733BA



Thanks,
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud

Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park, Galway.
Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John Rogerson's 
Quay, Dublin 2.
Registered Number: 361933

The contents of this message and any attachments to it are confidential and may 
be legally privileged. If you have received this message in error you should 
delete it from your system immediately and advise the sender.

To any recipient of this message within HP, unless otherwise stated, you should consider 
this message and attachments as HP CONFIDENTIAL.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-07-25 Thread John Griffith
On Fri, Jul 25, 2014 at 7:38 AM, Kerrin, Michael michael.ker...@hp.com
wrote:

 Coming back to this.

 I have updated the review https://review.openstack.org/#/c/90134/ so that
 it passes CI for ubuntu (obviously failing on fedora) and I am happy with
 it. In order to close this off, my plan is to get feedback on the mysql
 element in this review. Any changes that people request in the next few
 days I will make and test via the CI and internally. Next I will rename
 mysql -> percona and restore the old mysql in this review. At that point
 the percona code will not be tested via CI, so I don't want to make any
 more changes then, and I hope it will get approved. So this review will
 move to adding a percona element.

 Then, following the mariadb integration, I would like to get this
 https://review.openstack.org/#/c/109415/ change to tripleo-incubator
 through; it will include the new percona element in ubuntu images. So in
 CI, fedora will use mariadb and ubuntu will use percona.

 Looking forward to any feedback,

 Michael


 On 09 July 2014 14:44:15, Sullivan, Jon Paul wrote:

 -Original Message-
 From: Giulio Fidente [mailto:gfide...@redhat.com]
 Sent: 04 July 2014 14:37
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

 On 07/01/2014 05:47 PM, Michael Kerrin wrote:

 I propose making mysql an abstract element and the user must choose either
 the percona or mariadb-rpm element. CI must be set up correctly


 +1

 seems a cleaner and more sustainable approach


 There was some concern from lifeless around recreating package-style
 dependencies in dib with element-provides/element-deps, in particular a
 suggestion that meta-elements are not desirable[1] (I hope I am
 paraphrasing you correctly Rob).

 That said, this is exactly the reason that element-provides was brought
 in, so that the definition of the image could have mysql as an element,
 but that the DIB_*_EXTRA_ARGS variable would provide the correct one, which
 would then list itself as providing mysql.

 This would not prevent the sharing of common code through a
 differently-named element, such as mysql-common.


 [1] see comments on April 10th in https://review.openstack.org/#/c/85776/

  --
 Giulio Fidente
 GPG KEY: 08D733BA



 Thanks,
 Jon-Paul Sullivan ☺ Cloud Services - @hpcloud

 Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park,
 Galway.
 Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John
 Rogerson's Quay, Dublin 2.
 Registered Number: 361933

 The contents of this message and any attachments to it are confidential
 and may be legally privileged. If you have received this message in error
 you should delete it from your system immediately and advise the sender.

 To any recipient of this message within HP, unless otherwise stated, you
 should consider this message and attachments as HP CONFIDENTIAL.




So this all sounds like an interesting mess.  I'm not even really sure I
follow all that's going on in the database area, with the exception of the
design, which seems to take no account of testing or
commonality across platforms (pretty bad IMO), but I don't have any insight
there so I'll butt out.

The LIO versus Tgt thing however is a bit troubling.  Is there a reason
that TripleO decided to do the exact opposite of what the defaults are in
the rest of OpenStack here?  Also, if there was a valid
justification for this, any reason it didn't seem worthwhile to work
with the rest of the OpenStack community and share what they considered to
be the better solution here?

Sorry, I just haven't swallowed the TripleO pill.  We seem to have taken
the problem of how to make it easier to install OpenStack and turned it
into as complex and difficult a thing as possible.  Hey... it's hard to
deploy and manage a cloud; have two!  By the way, we did everything
differently here than anywhere else, so everything you thought you knew:
you still need it, but it won't help you here... best of luck to you.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-25 Thread Doug Hellmann

On Jul 25, 2014, at 1:19 AM, Yuriy Taraday yorik@gmail.com wrote:

 
 
 
 On Fri, Jul 25, 2014 at 2:35 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 On Jul 24, 2014, at 5:43 PM, Yuriy Taraday yorik@gmail.com wrote:
 
 
 
 
 On Fri, Jul 25, 2014 at 12:05 AM, Doug Hellmann d...@doughellmann.com 
 wrote:
 
 On Jul 24, 2014, at 3:08 PM, Yuriy Taraday yorik@gmail.com wrote:
 
 
 
 
 On Thu, Jul 24, 2014 at 10:31 PM, Doug Hellmann d...@doughellmann.com 
 wrote:
 
 On Jul 24, 2014, at 1:58 PM, Yuriy Taraday yorik@gmail.com wrote:
 
 
 
 
 On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com 
 wrote:
 
 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:
 
 Hi, all
   The current oslo.cfg module provides an easy way to load known 
  options/groups from the configuration files.
   I am wondering if there's a possible solution to dynamically load 
 them?
 
   For example, I do not know the group names (section name in the 
 configuration file), but we read the configuration file and detect the 
 definitions inside it.
 
 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2
 
   Then I want to automatically load group1.key1 and group1.key2,
  without knowing the name of group1 first.
 
 If you don’t know the group name, how would you know where to look in the 
 parsed configuration for the resulting options?
 
 I can imagine something like this:
 1. iterate over undefined groups in config;
 2. select groups of interest (e.g. by prefix or some regular expression);
 3. register options in them;
 4. use those options.
 
 Registered group can be passed to a plugin/library that would register its 
 options in it.
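  (A sketch of that discovery flow, using ConfigParser only to find the
  section names; the file path, prefix, and option types here are all
  illustrative assumptions:)

      import ConfigParser

      from oslo.config import cfg

      CONF = cfg.CONF

      parser = ConfigParser.SafeConfigParser()
      parser.read(['/etc/myapp/myapp.conf'])

      # 1-2. iterate over sections, select groups of interest by prefix
      for section in parser.sections():
          if not section.startswith('group'):
              continue
          # 3. register the options found in that group
          for key, _value in parser.items(section):
              CONF.register_opt(cfg.StrOpt(key), group=section)

      CONF(['--config-file', '/etc/myapp/myapp.conf'])
      # 4. use them, e.g. CONF.group1.key1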
 
 If the options are related to the plugin, could the plugin just register 
 them before it tries to use them?
 
 Plugin would have to register its options under a fixed group. But what if 
 we want a number of plugin instances? 
 
 Presumably something would know a name associated with each instance and 
 could pass it to the plugin to use when registering its options.
 
  
 
 I guess it’s not clear what problem you’re actually trying to solve by 
 proposing this change to the way the config files are parsed. That doesn’t 
 mean your idea is wrong, just that I can’t evaluate it or point out another 
 solution. So what is it that you’re trying to do that has led to this 
 suggestion?
 
  I don't exactly know what the original author's intention is, but I don't 
  generally like the fact that all libraries and plugins wanting to use 
  config have to influence the global CONF instance.
 
 That is a common misconception. The use of a global configuration option is 
 an application developer choice. The config library does not require it. 
 Some of the other modules in the oslo incubator expect a global config 
 object because they started life in applications with that pattern, but as 
 we move them to libraries we are updating the APIs to take a ConfigObj as 
 argument (see oslo.messaging and oslo.db for examples).
 
 What I mean is that instead of passing ConfigObj and a section name in 
 arguments for some plugin/lib it would be cleaner to receive an object that 
 represents one section of config, not the whole config at once.
 
 The new ConfigFilter class lets you do something like what you want [1]. The 
 options are visible only in the filtered view created by the plugin, so the 
 application can’t see them. That provides better data separation, and 
 prevents the options used by the plugin or library from becoming part of its 
 API.
 
 Doug
 
 [1] http://docs.openstack.org/developer/oslo.config/cfgfilter.html
 
 Yes, it looks like it. Didn't know about that, thanks!
 I wonder who should wrap the CONF object into ConfigFilter - core or plugin.

If the plugin wraps the object then the plugin can register new options that 
other parts of the application can’t see (because their values are only known 
to the wrapper, which the application doesn’t have).
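A minimal sketch of that pattern (the option name and default here are
made up purely for illustration):

    from oslo.config import cfg
    from oslo.config import cfgfilter

    CONF = cfg.CONF

    class Plugin(object):
        def __init__(self, conf):
            # The plugin registers options on its own filtered view.
            self.conf = cfgfilter.ConfigFilter(conf)
            self.conf.register_opt(
                cfg.StrOpt('widget_name', default='default-widget'))

        def widget(self):
            return self.conf.widget_name

    print(Plugin(CONF).widget())  # -> default-widget
    # CONF.widget_name would raise NoSuchOptError here: the option is
    # visible only through the plugin's filtered view.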

Doug

 
 -- 
 
 Kind regards, Yuriy.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-07-25 Thread John Griffith
On Fri, Jul 25, 2014 at 7:59 AM, John Griffith john.griff...@solidfire.com
wrote:




 On Fri, Jul 25, 2014 at 7:38 AM, Kerrin, Michael michael.ker...@hp.com
 wrote:

 Coming back to this.

 I have updated the review https://review.openstack.org/#/c/90134/ so
 that it passes CI for ubuntu (obviously failing on fedora) and I am happy
 with it. In order to close this off, my plan is to get feedback on the
 mysql element in this review. Any changes that people request in the next
 few days I will make and test via the CI and internally. Next I will rename
 mysql -> percona and restore the old mysql in this review. At that point
 the percona code will not be tested via CI, so I don't want to make any
 more changes then, and I hope it will get approved. So this review will
 move to adding a percona element.

 Then, following the mariadb integration, I would like to get this
 https://review.openstack.org/#/c/109415/ change to tripleo-incubator
 through; it will include the new percona element in ubuntu images. So in
 CI, fedora will use mariadb and ubuntu will use percona.

 Looking forward to any feedback,

 Michael


 On 09 July 2014 14:44:15, Sullivan, Jon Paul wrote:

 -Original Message-
 From: Giulio Fidente [mailto:gfide...@redhat.com]
 Sent: 04 July 2014 14:37
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

 On 07/01/2014 05:47 PM, Michael Kerrin wrote:

 I propose making mysql an abstract element and the user must choose either
 the percona or mariadb-rpm element. CI must be set up correctly


 +1

 seems a cleaner and more sustainable approach


 There was some concern from lifeless around recreating package-style
 dependencies in dib with element-provides/element-deps, in particular a
 suggestion that meta-elements are not desirable[1] (I hope I am
 paraphrasing you correctly Rob).

 That said, this is exactly the reason that element-provides was brought
 in, so that the definition of the image could have mysql as an element,
 but that the DIB_*_EXTRA_ARGS variable would provide the correct one, which
 would then list itself as providing mysql.

 This would not prevent the sharing of common code through a
 differently-named element, such as mysql-common.


 [1] see comments on April 10th in https://review.openstack.org/#
 /c/85776/

  --
 Giulio Fidente
 GPG KEY: 08D733BA



 Thanks,
 Jon-Paul Sullivan ☺ Cloud Services - @hpcloud

 Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park,
 Galway.
 Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John
 Rogerson's Quay, Dublin 2.
 Registered Number: 361933

 The contents of this message and any attachments to it are confidential
 and may be legally privileged. If you have received this message in error
 you should delete it from your system immediately and advise the sender.

 To any recipient of this message within HP, unless otherwise stated, you
 should consider this message and attachments as HP CONFIDENTIAL.




 So this all sounds like an interesting mess.  I'm not even really sure I
 follow all that's going on in the database area, with the exception of the
 design, which seems to take no account of testing or
 commonality across platforms (pretty bad IMO), but I don't have any insight
 there so I'll butt out.

 The LIO versus Tgt thing however is a bit troubling.  Is there a reason
 that TripleO decided to do the exact opposite of what the defaults are in
 the rest of OpenStack here?  Also, if there was a valid
 justification for this, any reason it didn't seem worthwhile to work
 with the rest of the OpenStack community and share what they considered to
 be the better solution here?

 Sorry, I just haven't swallowed the TripleO pill.  We seem to have taken
 the problem of how to make it easier to install OpenStack and turned it
 into as complex and difficult a thing as possible.  Hey... it's hard to
 deploy and manage a cloud; have two!  By the way, we did everything
 differently here than anywhere else, so everything you thought you knew:
 you still need it, but it won't help you here... best of luck to you.

Oh... before the CD flames come my way: yes, I know that's something
somebody was interested in.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [qa] The role of an abstract client in tempest

2014-07-25 Thread David Kranz
Even for someone who has been a core contributor for several years, it has 
never been clear what the scope of the tempest api tests should be.
As we move forward with the necessity of moving functional testing to 
projects, we need to answer this question for real, understanding that 
part of the mission for these tests now is validation of clouds.  Doing 
so is made difficult by the fact that the tempest api tests take a very 
opinionated view of how services are invoked. In particular, the tempest 
client is very low-level, and at present the way a functional test is 
written depends on how and where it is going to run.


In an ideal world, functional tests could execute in a variety of 
environments ranging from those that completely bypass wsgi layers and 
make project api calls directly, to running in a fully integrated real 
environment as the tempest tests currently do. The challenge is that 
there are mismatches between how the tempest client looks to test code 
and how doing object-model api calls looks. Most of this discrepancy is 
because many pieces of invoking a service are hard-coded into the tests 
rather than being abstracted in a client. Some examples are:


1. Response validation
2. json serialization/deserialization
3. environment description (tempest.conf)
4. Forced usage of addCleanup

Maru Newby and I have proposed changing the test code to use a more 
abstract client by defining the expected signature and functionality
of methods on the client. Roughly, the methods would take positional 
arguments for pieces that go in the url part of a REST call, and kwargs 
for the json payload. The client would take care of these enumerated 
issues (if necessary) and return an attribute dict. The test code itself 
would then just be service calls and checks of returned data. Returned 
data would be inspected as resource.id instead of resource['id']. There 
is a strawman example of this for a few neutron apis here: 
https://review.openstack.org/#/c/106916/
Doing this would have the twin advantages of eliminating the need for 
boilerplate code in tests and making it possible to run the tests in 
different environments. It would also allow the inclusion of project 
functional tests in more general validation scenarios.
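As a concrete illustration of the proposed shape (a sketch only; the
class and transport names below are invented here, not taken from the
strawman review):

    class AttrDict(dict):
        """Allow returned JSON to be read as resource.id as well as
        resource['id']."""
        def __getattr__(self, name):
            try:
                return self[name]
            except KeyError:
                raise AttributeError(name)

    class NetworksClient(object):
        def __init__(self, transport):
            # transport hides serialization, auth, response validation
            # and environment description behind one object.
            self.transport = transport

        def show_network(self, network_id):
            # positional args become url pieces ...
            body = self.transport.get('/v2.0/networks/%s' % network_id)
            return AttrDict(body['network'])

        def create_network(self, **kwargs):
            # ... and kwargs become the json payload.
            body = self.transport.post('/v2.0/networks',
                                       {'network': kwargs})
            return AttrDict(body['network'])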


Since we are proposing to move parts of tempest into a stable library 
https://review.openstack.org/108858, we need to define the client in a 
way that meets all the needs outlined here before doing so. The actual 
work of defining the client in tempest and changing the code that uses 
it could largely be done one service at a time, in the tempest code, 
before being split out.


What do folks think about this idea?

 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-25 Thread Steven Hardy
On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:
snip
   * Put the burden for a bunch of these tests back on the projects as
 functional tests. Basically a custom devstack environment that a
 project can create with a set of services that they minimally need
 to do their job. These functional tests will live in the project
 tree, not in Tempest, so can be atomically landed as part of the
 project normal development process.

+1 - FWIW I don't think the current process where we require tempest
cores to review our project test cases is working well, so allowing
projects to own their own tests will be a major improvement.

In terms of how this works in practice, will the in-tree tests still be run
via tempest, e.g will there be a (relatively) stable tempest api we can
develop the tests against, as Angus has already mentioned?

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-25 Thread David Kranz

On 07/25/2014 10:01 AM, Steven Hardy wrote:

On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:
snip

   * Put the burden for a bunch of these tests back on the projects as
 functional tests. Basically a custom devstack environment that a
 project can create with a set of services that they minimally need
 to do their job. These functional tests will live in the project
 tree, not in Tempest, so can be atomically landed as part of the
 project normal development process.

+1 - FWIW I don't think the current process where we require tempest
cores to review our project test cases is working well, so allowing
projects to own their own tests will be a major improvement.

++
We will still need some way to make sure it is difficult to break api 
compatibility by submitting a change to both code and its tests, which
currently requires a tempest two-step. Also, tempest will still need 
to retain integration testing of apis that use apis from other projects.


In terms of how this works in practice, will the in-tree tests still be run
via tempest, e.g will there be a (relatively) stable tempest api we can
develop the tests against, as Angus has already mentioned?
That is a really good question. I hope the answer is that they can still 
be run by tempest, but don't have to be. I tried to address this in a 
message within the last hour: 
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041244.html


 -David


Steve




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Looking for Coraid cinder contact

2014-07-25 Thread Yacine Kheddache

Le 23/07/2014 22:31, Duncan Thomas a écrit :

Hi

I'm looking for a maintainer email address for the cinder coraid
driver. http://stackalytics.com/report/driverlog?project_id=openstack%2Fcinder
just lists it as Alyseo team with no contact details.


Thanks


Hi,

Alyseo team details are here :
https://launchpad.net/~alyseo

Email is openst...@alyseo.com
But I will check with Coraid regarding who is going to follow up on the Juno 
requirements: I guess that is the goal of your request?


Thank you
Regards
Yacine KHEDDACHE
Directeur Technique - CTO
Mobile: +33 6 60 66 81 53 | Fixe: +33 1 83 35 10 11 | skype: yacinealyseo

ALYSEO - 44 rue Armand Carrel | 93100 MONTREUIL - FRANCE

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Spec freeze exception] Support Stateful and Stateless DHCPv6 by dnsmasq

2014-07-25 Thread Kyle Mestery
On Thu, Jul 24, 2014 at 8:46 PM, CARVER, PAUL pc2...@att.com wrote:
 Collins, Sean wrote:

 On Wed, Jul 23, 2014 at 12:06:06AM EDT, Xu Han Peng wrote:
 I would like to request one Juno Spec freeze exception for Support Stateful
 and Stateless DHCPv6 by dnsmasq BP.

 The spec is under review:
 https://review.openstack.org/#/c/102411/

 Code change for this BP is submitted as well for a while:
 https://review.openstack.org/#/c/106299/

I'd like to +1 this request. If this work landed in Juno, it would
mean Neutron would have 100% support for all IPv6 subnet attribute
settings, since slaac support landed in J-2.

 +1 on this from me too. It's getting more and more difficult to keep
 claiming OpenStack will have IPv6 any day now and not having it
 in Juno will hurt credibility a lot.

 IPv4 address space is basically gone. AT&T has a fair amount of it
 but even we're feeling the pinch. A lot of companies have it worse.
 NAT is a mediocre stop-gap at best.
 We've been running IPv6 in production for well over a year.

 Our pre-OpenStack environments support IPv6 where needed
 even though we have a lot of IPv4 running where we aren't feeling
 immediate pressure. We're having to turn internal applications away
 from our OpenStack based cloud because they require IPv6 and we
 can't provide it.

 We're actively searching for workarounds but none of them are
 attractive.

I've given a spec exception to this, and targeted this work at Juno-3
as medium priority. Given that at the start of Juno we agreed to get
to IPV6 parity in Juno, this one falls into the community work area
for exceptions.

Thanks,
Kyle


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-25 Thread Sean Dague
On 07/25/2014 10:01 AM, Steven Hardy wrote:
 On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:
 snip
   * Put the burden for a bunch of these tests back on the projects as
 functional tests. Basically a custom devstack environment that a
 project can create with a set of services that they minimally need
 to do their job. These functional tests will live in the project
 tree, not in Tempest, so can be atomically landed as part of the
 project normal development process.
 
 +1 - FWIW I don't think the current process where we require tempest
 cores to review our project test cases is working well, so allowing
 projects to own their own tests will be a major improvement.
 
 In terms of how this works in practice, will the in-tree tests still be run
 via tempest, e.g will there be a (relatively) stable tempest api we can
 develop the tests against, as Angus has already mentioned?

No, not run by tempest, not using tempest code.

The vision is that you'd have:

heat/tests/functional/

And tox -e functional would run them. It would require some config for
endpoints. But the point is that it would be fully owned by the project
team. It could do both blackbox/whitebox testing (and, because it's
in the project tree, it would know things like the data model and could
poke behind the scenes).
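A minimal sketch of what such a test might look like (the file layout
and environment variable names here are assumptions, not an agreed
interface):

    # heat/tests/functional/test_stacks.py
    import os
    import unittest

    from heatclient import client as heat_client


    class StackSmokeTest(unittest.TestCase):
        def setUp(self):
            # Endpoint and token come from whatever config the
            # functional job sets up.
            self.heat = heat_client.Client(
                '1',
                endpoint=os.environ['HEAT_ENDPOINT'],
                token=os.environ['OS_AUTH_TOKEN'])

        def test_list_stacks(self):
            # Blackbox check against the real service.
            self.assertIsNotNone(list(self.heat.stacks.list()))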

The tight coupling of everything is part of what's gotten us into these
deadlocks, decoupling here is really required in order to reduce the
fragility of the system.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Looking for Coraid cinder contact

2014-07-25 Thread Duncan Thomas
That is indeed my aim, and thanks for following up

On 25 July 2014 15:29, Yacine Kheddache yac...@alyseo.com wrote:
 Le 23/07/2014 22:31, Duncan Thomas a écrit :

 Hi

 I'm looking for a maintainer email address for the cinder coraid
 driver.
 http://stackalytics.com/report/driverlog?project_id=openstack%2Fcinder
 just lists it as Alyseo team with no contact details.


 Thanks

 Hi,

 Alyseo team details are here :
 https://launchpad.net/~alyseo

 Email is openst...@alyseo.com
 But will check with Coraid regarding who gonna follow for Juno requirements
 : guess it is the goal of your request ?

 Thank you
 Regards
 Yacine KHEDDACHE
 Directeur Technique - CTO
 Mobile: +33 6 60 66 81 53 | Fixe: +33 1 83 35 10 11 | skype: yacinealyseo

 ALYSEO - 44 rue Armand Carrel   |  93100 MONTREUIL -  FRANCE






-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] dvr router modes?

2014-07-25 Thread Robert Collins
Excuse my ignorance here, but I'm hearing that dvr l3 agent will run
in two different modes - and that a typical deployment will want some
running in each: in a scaled all-in-one setup - say 3 nodes, galera,
rabbit, all our APIs, and nova-compute on each - would we then want to
run *two* l3 agents (one in dvr-snat mode, one in dvr mode) on each
node?

Isn't it possible to avoid this and have just one l3 dvr mode?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-25 Thread Steve Baker
On 25/07/14 11:18, Sean Dague wrote:
 On 07/25/2014 10:01 AM, Steven Hardy wrote:
 On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:
 snip
   * Put the burden for a bunch of these tests back on the projects as
 functional tests. Basically a custom devstack environment that a
 project can create with a set of services that they minimally need
 to do their job. These functional tests will live in the project
 tree, not in Tempest, so can be atomically landed as part of the
 project normal development process.
 +1 - FWIW I don't think the current process where we require tempest
 cores to review our project test cases is working well, so allowing
 projects to own their own tests will be a major improvement.

 In terms of how this works in practice, will the in-tree tests still be run
 via tempest, e.g will there be a (relatively) stable tempest api we can
 develop the tests against, as Angus has already mentioned?
 No, not run by tempest, not using tempest code.

 The vision is that you'd have:

 heat/tests/functional/

 And tox -e functional would run them. It would require some config for
 end points. But the point being it would be fully owned by the project
 team. That it could do both blackbox/whitebox testing (and because it's
 in the project tree would know things like the data model and could poke
 behind the scenes).

 The tight coupling of everything is part of what's gotten us into these
 deadlocks, decoupling here is really required in order to reduce the
 fragility of the system.


Since the tempest scenario orchestration tests use heatclient,
hopefully it wouldn't be too much effort to forklift them into
heat/tests/functional without any tempest dependencies.

We can leave the orchestration api tests where they are until the
tempest-lib process results in something ready to use.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-07-25 Thread Clint Byrum
Excerpts from John Griffith's message of 2014-07-25 06:59:38 -0700:
 On Fri, Jul 25, 2014 at 7:38 AM, Kerrin, Michael michael.ker...@hp.com
 wrote:
 
  Coming back to this.
 
  I have updated the review https://review.openstack.org/#/c/90134/ so that
  it passes CI for ubuntu (obviously failing on fedora) and I am happy with
  it. In order to close this off, my plan is to get feedback on the mysql
  element in this review. Any changes that people request in the next few
  days I will make and test via the CI and internally. Next I will rename
  mysql -> percona and restore the old mysql in this review. At that point
  the percona code will not be tested via CI, so I don't want to make any
  more changes then, and I hope it will get approved. So this review will
  move to adding a percona element.
 
  Then, following the mariadb integration, I would like to get this
  https://review.openstack.org/#/c/109415/ change to tripleo-incubator
  through; it will include the new percona element in ubuntu images. So in
  CI, fedora will use mariadb and ubuntu will use percona.
 
  Looking forward to any feedback,
 
  Michael
 
 
  On 09 July 2014 14:44:15, Sullivan, Jon Paul wrote:
 
  -Original Message-
  From: Giulio Fidente [mailto:gfide...@redhat.com]
  Sent: 04 July 2014 14:37
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora
 
  On 07/01/2014 05:47 PM, Michael Kerrin wrote:
 
  I propose making mysql an abstract element and the user must choose either
  the percona or mariadb-rpm element. CI must be set up correctly
 
 
  +1
 
  seems a cleaner and more sustainable approach
 
 
  There was some concern from lifeless around recreating package-style
  dependencies in dib with element-provides/element-deps, in particular a
  suggestion that meta-elements are not desirable[1] (I hope I am
  paraphrasing you correctly Rob).
 
  That said, this is exactly the reason that element-provides was brought
  in, so that the definition of the image could have mysql as an element,
  but that the DIB_*_EXTRA_ARGS variable would provide the correct one, which
  would then list itself as providing mysql.
 
  This would not prevent the sharing of common code through a
  differently-named element, such as mysql-common.
 
 
  [1] see comments on April 10th in https://review.openstack.org/#/c/85776/
 
   --
  Giulio Fidente
  GPG KEY: 08D733BA
 
 
 
  Thanks,
  Jon-Paul Sullivan ☺ Cloud Services - @hpcloud
 
  Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park,
  Galway.
  Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John
  Rogerson's Quay, Dublin 2.
  Registered Number: 361933
 
  The contents of this message and any attachments to it are confidential
  and may be legally privileged. If you have received this message in error
  you should delete it from your system immediately and advise the sender.
 
  To any recipient of this message within HP, unless otherwise stated, you
  should consider this message and attachments as HP CONFIDENTIAL.
 
 
 
 
 So this all sounds like an interesting mess.  I'm not even really sure I
 follow all that's going on in the database area, with the exception of the
 design, which seems to take no account of testing or
 commonality across platforms (pretty bad IMO), but I don't have any insight
 there so I'll butt out.
 
 The LIO versus Tgt thing however is a bit troubling.  Is there a reason
 that TripleO decided to do the exact opposite of what the defaults are in
 the rest of OpenStack here?  Also, if there was a valid
 justification for this, any reason it didn't seem worthwhile to work
 with the rest of the OpenStack community and share what they considered to
 be the better solution here?
 

John, please be specific when you say "the defaults in the rest of
OpenStack". We have a stated goal to deploy _with the defaults_. The
default iscsi_helper is tgtadm. We deploy with that unless another is
selected. As you can see below, nothing is asserted there unless a value
is set:

https://git.openstack.org/cgit/openstack/tripleo-image-elements/tree/elements/cinder/os-apply-config/etc/cinder/cinder.conf#n41
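(Illustratively, templates under os-apply-config follow the Mustache
section pattern, so the relevant fragment is shaped roughly like the
sketch below; this is a sketch of the pattern, not a verbatim quote of
that file:)

    {{#cinder.iscsi_helper}}
    iscsi_helper={{cinder.iscsi_helper}}
    {{/cinder.iscsi_helper}}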

And the default in the Heat templates that will set that value matches
cinder's current default:


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-07-25 Thread James Slagle
On Fri, Jul 25, 2014 at 9:59 AM, John Griffith
john.griff...@solidfire.com wrote:

 The LIO versus Tgt thing however is a bit troubling.  Is there a reason that
 TripleO decided to do the exact opposite of what the defaults are in the
 rest of OpenStack here?  Also any reason why if there was a valid
 justification for this it didn't seem like it might be worthwhile to work
 with the rest of the OpenStack community and share what they considered to
 be the better solution here?

Not really following what you find troubling. Cinder allows you to
configure it to use Tgt or LIO. Are you objecting to the fact that
TripleO allows people to *choose* to use LIO?

As was explained in the review[1], Tgt is the default for TripleO. If
you want to use LIO, TripleO offers that choice, just like Cinder
does.

[1] https://review.openstack.org/#/c/78463/


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] recheck no bug and comment

2014-07-25 Thread Jay Pipes

On 07/25/2014 08:27 AM, Sean Dague wrote:

On 07/25/2014 07:48 AM, Bob Ball wrote:

-Original Message-
From: Sean Dague [mailto:s...@dague.net]
Sent: 25 July 2014 12:36
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [infra] recheck no bug and comment


Would that still allow us to only trigger 3rd party CI? E.g. if we do
'recheck xenserver' I don't want to trigger the main CI, only the Xen
CI.


No, the 3rd party folks went off and created a grammar without
discussing it with the infra team (also against specific objections to
doing so). Such it is.


When setting up the XenServer CI the recheck syntax I added was requested by 
reviewers and I certainly wasn't aware of these specific objections.

Do you have a proposal for the grammar you'd like 3rd party CIs to follow?


Consider ^(recheck|check|reverify) an off-limits namespace.

If you want a namespace for commands specific to a 3rd party CI, that
should start with the 3rd party CI name.

^<3rd party CI name>: <command>

It should be the official short name in the system so there are no future
collision issues.
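
For concreteness, a small sketch of how a third-party CI system might honor
that convention (illustrative only; the exact grammar and names here are
assumptions, not infra's implementation):

  import re

  # Reserved for the official CI; third-party systems must not react to these.
  RESERVED = re.compile(r'^(recheck|check|reverify)\b')
  # Third-party trigger: the CI's official short name, a colon, a command,
  # e.g. "xenserver-ci: recheck".
  THIRD_PARTY = re.compile(r'^(?P<ci>[\w-]+):\s*(?P<command>.+)$')

  def should_trigger(comment, my_ci_name):
      line = comment.strip().lower()
      if RESERVED.match(line):
          return False  # leave these runs to the official CI
      m = THIRD_PARTY.match(line)
      return bool(m and m.group('ci') == my_ci_name.lower())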


Well, I apologize if I furthered the idea that 3rd party CI systems 
should implement a recheck $VENDOR trigger. Sorry, I never knew they 
were supposed to be off limits :(


-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Off Topic] Women Who Code, San Francisco, looking for mentors

2014-07-25 Thread Stefano Maffulli
In the lead up to the next round of the GNOME Outreach Program for Women
(application deadline 10/22/2014), the group is hosting a series of
meetups around getting involved in free and open source software
development.

Given OpenStack's commitment to the Outreach Program for Women, we're
looking for OpenStack contributors who would be interested/available to
attend as mentors for these possible new contributors.

http://www.meetup.com/Women-Who-Code-SF/events/195850392/

The meetups are 10am - 4pm, and breakfast & lunch are provided for all.

If you're interested in being a mentor in San Francisco in any of those
dates, please reach out to me in private.

Regards,
Stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Proposed Changes to Tempest Core

2014-07-25 Thread Attila Fazekas
+1


- Original Message -
 From: Matthew Treinish mtrein...@kortar.org
 To: openstack-dev@lists.openstack.org
 Sent: Tuesday, July 22, 2014 12:34:28 AM
 Subject: [openstack-dev] [QA] Proposed Changes to Tempest Core
 
 
 Hi Everyone,
 
 I would like to propose 2 changes to the Tempest core team:
 
 First, I'd like to nominate Andrea Frittoli to the Tempest core team. Over
 the past cycle Andrea has steadily become more actively engaged in the
 Tempest community. Besides his code contributions around refactoring
 Tempest's authentication and credentials code, he has been providing reviews
 of consistently high quality that show insight into both the project
 internals and its future direction. In addition he has been active in the
 qa-specs repo, both providing reviews and spec proposals, which has been
 very helpful as we've been adjusting to using the new process. Keeping in
 mind that becoming a member of the core team is about earning the trust of
 the current core team members through communication and quality reviews, not
 simply a matter of review numbers, I feel that Andrea will make an excellent
 addition to the team.
 
 As per the usual, if the current Tempest core team members would please
 vote +1 or -1 (veto) to the nomination when you get a chance. We'll keep
 the polls open for 5 days or until everyone has voted.
 
 References:
 
 https://review.openstack.org/#/q/reviewer:%22Andrea+Frittoli+%22,n,z
 
 http://stackalytics.com/?user_id=andrea-frittolimetric=marksmodule=qa-group
 
 
 The second change that I'm proposing today is to remove Giulio Fidente from
 the core team. He asked to be removed from the core team a few weeks back
 because he is no longer able to dedicate the required time to Tempest
 reviews. So if there are no objections to this I will remove him from the
 core team in a few days. Sorry to see you leave the team Giulio...
 
 
 Thanks,
 
 Matt Treinish
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-07-25 Thread John Griffith
On Fri, Jul 25, 2014 at 10:43 AM, James Slagle james.sla...@gmail.com
wrote:

 On Fri, Jul 25, 2014 at 9:59 AM, John Griffith
 john.griff...@solidfire.com wrote:

  The LIO versus Tgt thing however is a bit troubling.  Is there a reason
 that
  TripleO decided to do the exact opposite of what the defaults are in the
  rest of OpenStack here?  Also any reason why if there was a valid
  justification for this it didn't seem like it might be worthwhile to work
  with the rest of the OpenStack community and share what they considered
 to
  be the better solution here?

 Not really following what you find troubling. Cinder allows you to
 configure it to use Tgt or LIO. Are you objecting to the fact that
 TripleO allows people to *choose* to use LIO?

Nope, that's fine... what I'm uneasy about is that there appears to be a
suggestion to have different defaults for things based on distribution. If
this turns out to be the only way to make things work, then sure... we do
what we have to do.

Maybe I'm missing some points in the thread; if it's strictly about whether
we allow options, then sure, I'm not saying that's a bad thing, particularly
if those options are the same ones we already offer in other projects.

 LIO is used on RHEL, because RHEL doesn't offer TGT. So what is the LIO
 versus Tgt thing?

What version of RHEL doesn't offer tgtadm? It's in 6.5. Also, I responded to
a post on this subject a while back about having consistent defaults
whether you're dealing with an over-cloud or an under-cloud; IMO uniqueness
between the two in terms of defaults is something that should be carefully
weighed. My suggestion was and is that if, for example, LIO were the only
option, then we should work together to make that a consistent default
across the board. Sounds like maybe I misunderstood what was being done
here, so perhaps it's not an issue.

 TripleO is a program aimed at installing, upgrading and operating
 OpenStack clouds using OpenStack's own cloud facilities as the
foundations
 - building on nova, neutron and heat to automate fleet management at
 datacentre scale (and scaling down to as few as 2 machines).

Sure, but for some crazy reason I had always assumed that there would be
some 'value add' here, whether that be efficiency, ease of deployment or
whatever; not just 'I can use OpenStack to deploy OpenStack' while throwing
away the large number of tools designed to do this sort of thing and
reinventing them because it's neat.

I'm not saying that's how I see it, but I am saying maybe the stated
purpose could use a bit of detail.

 Nobody expects datacenter scale to be easy. We do, however, have a
 reasonable expectation that OpenStack can be used to deploy large,
 complex applications. If it cannot, OpenStack is in trouble.

Sure... but I think that by adding (more to the point, exposing) as much
complexity as possible and reinventing tools, OpenStack will be in trouble
as well (likely more trouble).

Granted, Heat is great for the deployment of applications in an OpenStack
cloud, so you're saying that since TripleO uses Heat it falls into the same
category? OpenStack on OpenStack is just another application we're
deploying? Then maybe all of this should be folded into Heat, perhaps with a
different design, or maybe it already is for the most part. I can't really
tell for sure, as I've never been able to get anything really working with
TripleO as of yet, which goes back to my earlier comment about complexity
and what the actual goal is.


 As was explained in the review[1], Tgt is the default for TripleO. If
 you want to use LIO, TripleO offers that choice, just like Cinder
 does.


Cool... you can disregard any comments I made on that subject then.


 [1] https://review.openstack.org/#/c/78463/


 --
 -- James Slagle
 --

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [volume-delete-failure]

2014-07-25 Thread Joe Gordon
On Fri, Jul 25, 2014 at 2:03 AM, zhangtralon ztra...@hotmail.com wrote:

 Hi

 Here,there is a problem to discuss with you.


This sounds like a bug. We use https://bugs.launchpad.net/nova/ to track
bugs and discuss possible solutions etc.


 *Problem:* a volume may be left over when we delete an instance



 *Description*: Two scenarios may leave a volume behind when we delete an
 instance whose task_state is block_device_mapping. The first is booting an
 instance from a boot volume created from an image; the other is booting an
 instance from an image while also creating an extra volume from an image.


 *Reason:* Through analysis, we find that the volume id is not written to
 the block_device_mapping table in the DB until a volume created from an
 image (via the Block Device Mapping v2 parameters) has been completely
 attached to the instance. If we delete the instance before the volume id
 is written to the block_device_mapping table, the problem mentioned above
 will occur.
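
To make the window concrete, here is a runnable toy of the ordering described
(illustrative only; the names below do not match Nova's real code):

  # The volume id reaches the BDM row only after the attach finishes, so a
  # delete processed inside that window sees volume_id=None and never asks
  # Cinder to remove the volume.
  bdm_table = [{'instance': 'tralon_test', 'volume_id': None}]

  def finish_block_device_mapping(volume_id):
      bdm_table[0]['volume_id'] = volume_id   # happens last in the real flow

  def delete_instance():
      for bdm in bdm_table:
          if bdm['volume_id']:                # None here -> volume is leaked
              print('deleting volume', bdm['volume_id'])

  delete_instance()                           # runs first: nothing deleted
  finish_block_device_mapping('a7ad846b')     # volume now orphaned, 'in-use'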


 Although the cause of the problem has been found, I want to discuss
 possible solutions with you.


 Two examples to reproduce the problem on latest icehouse:
 1. the first scenario
 (1)root@devstack:~# nova list
 ++--+++-+--+
 | ID | Name | Status | Task State | Power State | Networks |
 ++--+++-+--+
 ++--+++-+--+
 (2) root@devstack:~# nova boot --flavor m1.tiny --block-device \
   id=61ebee75-5883-49a3-bf85-ad6f6c29fc1b,source=image,dest=volume,device=vda,size=1,shutdown=removed,bootindex=0 \
   --nic net-id=354ba9ac-e6a7-4fd6-a49f-6ae18a815e95 tralon_test
 root@devstack:~# nova list
 +--------------------------------------+-------------+--------+----------------------+-------------+-------------------+
 | ID                                   | Name        | Status | Task State           | Power State | Networks          |
 +--------------------------------------+-------------+--------+----------------------+-------------+-------------------+
 | 57cbb39d-c93f-44eb-afda-9ce00110950d | tralon_test | BUILD  | block_device_mapping | NOSTATE     | private=10.0.0.20 |
 +--------------------------------------+-------------+--------+----------------------+-------------+-------------------+
 (3)root@devstack:~# nova delete tralon_test
 root@devstack:~# nova list
 ++--+++-+--+
 | ID | Name | Status | Task State | Power State | Networks |
 ++--+++-+--+
 ++--+++-+--+
 (4) root@devstack:~# cinder list
 +--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
 | ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to                          |
 +--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
 | 3e5579a9-5aac-42b6-9885-441e861f6cc0 | available | None | 1    | None        | false    |                                      |
 | a4121322-529b-4223-ac26-0f569dc7821e | available |      | 1    | None        | true     |                                      |
 | a7ad846b-8638-40c1-be42-f2816638a917 | in-use    |      | 1    | None        | true     | 57cbb39d-c93f-44eb-afda-9ce00110950d |
 +--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
 we can see that the instance 57cbb39d-c93f-44eb-afda-9ce00110950d was
 deleted while the volume still exists with in-use status

 2. the second scenario
  (1)root@devstack:~# nova list
 ++--+++-+--+
 | ID | Name | Status | Task State | Power State | Networks |
 ++--+++-+--+
 ++--+++-+--+
 (2) root@devstack:~# nova boot --flavor m1.tiny --image 61ebee75-5883-49a3-bf85-ad6f6c29fc1b \
   --nic net-id=354ba9ac-e6a7-4fd6-a49f-6ae18a815e95 \
   --block-device id=61ebee75-5883-49a3-bf85-ad6f6c29fc1b,source=image,dest=volume,device=vdb,size=1,shutdown=removed \
   tralon_image_instance
 root@devstack:~# nova list
 +--------------------------------------+-----------------------+--------+----------------------+-------------+-------------------+
 | ID                                   | Name                  | Status | Task State           | Power State | Networks          |
 +--------------------------------------+-----------------------+--------+----------------------+-------------+-------------------+
 | 25bcfe84-0c3f-40d3-a917-4791e092fa06 | tralon_image_instance | BUILD  | block_device_mapping | NOSTATE     | private=10.0.0.26 |
 +--------------------------------------+-----------------------+--------+----------------------+-------------+-------------------+
 (3)root@devstack:~# nova delete 25bcfe84-0c3f-40d3-a917-4791e092fa06
 (4) root@devstack:~# nova list
 ++--+++-+--+
 | ID | Name | Status | Task State | Power State | Networks |
 

Re: [openstack-dev] [QA] Proposed Changes to Tempest Core

2014-07-25 Thread Matthew Treinish
So all of the current core team members have voted unanimously in favor of
adding Andrea to the team.

Welcome to the team Andrea.

-Matt Treinish

On Fri, Jul 25, 2014 at 01:32:27PM -0400, Attila Fazekas wrote:
 +1
 
 
 - Original Message -
  From: Matthew Treinish mtrein...@kortar.org
  To: openstack-dev@lists.openstack.org
  Sent: Tuesday, July 22, 2014 12:34:28 AM
  Subject: [openstack-dev] [QA] Proposed Changes to Tempest Core
  
  
  Hi Everyone,
  
  I would like to propose 2 changes to the Tempest core team:
  
  First, I'd like to nominate Andrea Frittoli to the Tempest core team. Over
  the past cycle Andrea has steadily become more actively engaged in the
  Tempest community. Besides his code contributions around refactoring
  Tempest's authentication and credentials code, he has been providing
  reviews of consistently high quality that show insight into both the
  project internals and its future direction. In addition he has been active
  in the qa-specs repo, both providing reviews and spec proposals, which has
  been very helpful as we've been adjusting to using the new process. Keeping
  in mind that becoming a member of the core team is about earning the trust
  of the current core team members through communication and quality reviews,
  not simply a matter of review numbers, I feel that Andrea will make an
  excellent addition to the team.
  
  As per the usual, if the current Tempest core team members would please
  vote +1 or -1 (veto) to the nomination when you get a chance. We'll keep
  the polls open for 5 days or until everyone has voted.
  
  References:
  
  https://review.openstack.org/#/q/reviewer:%22Andrea+Frittoli+%22,n,z
  
  http://stackalytics.com/?user_id=andrea-frittolimetric=marksmodule=qa-group
  
  
  The second change that I'm proposing today is to remove Giulio Fidente
  from the core team. He asked to be removed from the core team a few weeks
  back because he is no longer able to dedicate the required time to Tempest
  reviews. So if there are no objections to this I will remove him from the
  core team in a few days. Sorry to see you leave the team Giulio...
  
  
  Thanks,
  
  Matt Treinish
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Proposed Changes to Tempest Core

2014-07-25 Thread Ricardo Carrillo Cruz
Congrats Andrea, well deserved!


2014-07-25 19:37 GMT+02:00 Matthew Treinish mtrein...@kortar.org:

 So all of the current core team members have voted unanimously in favor of
 adding Andrea to the team.

 Welcome to the team Andrea.

 -Matt Treinish

 On Fri, Jul 25, 2014 at 01:32:27PM -0400, Attila Fazekas wrote:
  +1
 
 
  - Original Message -
   From: Matthew Treinish mtrein...@kortar.org
   To: openstack-dev@lists.openstack.org
   Sent: Tuesday, July 22, 2014 12:34:28 AM
   Subject: [openstack-dev] [QA] Proposed Changes to Tempest Core
  
  
   Hi Everyone,
  
   I would like to propose 2 changes to the Tempest core team:
  
   First, I'd like to nominate Andrea Frittoli to the Tempest core team.
   Over the past cycle Andrea has steadily become more actively engaged in
   the Tempest community. Besides his code contributions around refactoring
   Tempest's authentication and credentials code, he has been providing
   reviews of consistently high quality that show insight into both the
   project internals and its future direction. In addition he has been
   active in the qa-specs repo, both providing reviews and spec proposals,
   which has been very helpful as we've been adjusting to using the new
   process. Keeping in mind that becoming a member of the core team is about
   earning the trust of the current core team members through communication
   and quality reviews, not simply a matter of review numbers, I feel that
   Andrea will make an excellent addition to the team.
  
   As per the usual, if the current Tempest core team members would please
   vote +1 or -1 (veto) to the nomination when you get a chance. We'll keep
   the polls open for 5 days or until everyone has voted.
  
   References:
  
   https://review.openstack.org/#/q/reviewer:%22Andrea+Frittoli+%22,n,z
  
  
 http://stackalytics.com/?user_id=andrea-frittolimetric=marksmodule=qa-group
  
  
   The second change that I'm proposing today is to remove Giulio Fidente
   from the core team. He asked to be removed from the core team a few weeks
   back because he is no longer able to dedicate the required time to Tempest
   reviews. So if there are no objections to this I will remove him from the
   core team in a few days. Sorry to see you leave the team Giulio...
  
  
   Thanks,
  
   Matt Treinish
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] dvr router modes?

2014-07-25 Thread Smith, Michael (HPN RD)
It all depends on how you want services scheduled.  If you want to isolate 
(centralize) services you can.  But if you don't care what services run on 
which nodes, you can have all your l3-agents run in dvr_snat mode.  The 
dvr_snat mode l3-agent will be capable of hosting centralized/legacy routers, 
dvr routers, and centralized snat services.

Does this help?

Yours,

Michael Smith
Hewlett-Packard Company
HP Networking RD
8000 Foothills Blvd. M/S 5557
Roseville, CA 95747
PC Phone: 916 540-1884
Ph: 916 785-0918
Fax: 916 785-1199  

-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net] 
Sent: Friday, July 25, 2014 8:23 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron] dvr router modes?

Excuse my ignorance here, but I'm hearing that dvr l3 agent will run in two 
different modes - and that a typical deployment will want some running in each: 
in a scaled all-in-one setup - say 3 nodes, galera, rabbit, all our APIs, and 
nova-compute on each - would we then want to run *two* l3 agents (one in 
dvr-snat mode, one in dvr mode) on each node?

Isn't it possible to avoid this and have just one l3 dvr mode?

-Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] Splitting Content of Blueprint

2014-07-25 Thread Stefano Maffulli
moving the thread to the more appropriate openstack-dev mailing list,
since this conversation is about the 'future'. Get familiar with the
topics of the mailing lists:

https://wiki.openstack.org/wiki/MailingLists#Future_Development

On Fri 25 Jul 2014 06:29:52 AM PDT, Andreas Scheuring wrote:
 Hi,
 I'm currently preparing a blueprint that should add another network
 virtualization option to OpenStack. This would require code changes to
 nova's libvirt driver and to the neutron linuxbridge agent. More to
 come soon...
 So what do you think: should I split the content up into two
 blueprints (as both nova and neutron are affected) and cross-reference
 them, or is it fine to go with a single one?

 If two is the way to go, should there also be two specs, or would one
 common spec be sufficient, given both are directly related to each other?

 Thanks & Regards, Andreas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [not-only-neutron] How to Contribute upstream in OpenStack Neutron

2014-07-25 Thread Stefano Maffulli
On Fri 25 Jul 2014 01:25:16 AM PDT, Luke Gorrie wrote:
 I have one idea for low-hanging fruit to put new contributors more at
 ease: to explain a little about both when and why the Merge button
 is finally pressed on a change.

Indeed, communication is key. I'm not sure how you envision
implementing this though. We do send a message to first-time
contributors[1] to explain to them how the review process works and give
them very basic suggestions on how to react to comments (including what
to do if things seem stuck). The main issue here though is that few
people read emails; it's a basic fact of life.

Can you explain more what you have in mind?

thanks,
stef

[1] 
http://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/welcome_message.py

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Dynamic extension loading using stevedore -- BP ready for review

2014-07-25 Thread boden

Gents,
As we discussed at the BP meeting on July 14 - I've created a new BP and 
BP wiki to outline the dynamic extension loading using stevedore.


BP: https://blueprints.launchpad.net/trove/+spec/dynamic-extension-loading
Wiki: https://wiki.openstack.org/wiki/Trove/DynamicExtensionLoading
PoC code: 
https://github.com/bodenr/trove/commit/fa06e1d96e6a49a2a54057e8feb8e624edeaf728
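
For anyone who hasn't used stevedore, a minimal sketch of the entry-point
loading it provides (the namespace string below is made up for illustration;
see the BP/wiki above for the actual design):

  from stevedore import extension

  # Load every plugin registered under an entry-point namespace and
  # instantiate each one; 'trove.api.extensions' is an assumed name.
  mgr = extension.ExtensionManager(
      namespace='trove.api.extensions',
      invoke_on_load=True)

  for ext in mgr:
      print(ext.name, ext.obj)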


I've also added this to the agenda for the next BP meeting: 
https://wiki.openstack.org/wiki/Meetings/TroveBPMeeting


Please feel free to add comments to wiki or via email / IRC (@boden); 
otherwise we can sync-up on Monday's BP meeting.


Thank you


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-25 Thread Joe Gordon
On Thu, Jul 24, 2014 at 3:54 PM, Sean Dague s...@dague.net wrote:

 On 07/24/2014 05:57 PM, Matthew Treinish wrote:
  On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:
  OpenStack has a substantial CI system that is core to its development
  process.  The goals of the system are to facilitate merging good code,
  prevent regressions, and ensure that there is at least one configuration
  of upstream OpenStack that we know works as a whole.  The project
  gating technique that we use is effective at preventing many kinds of
  regressions from landing, however more subtle, non-deterministic bugs
  can still get through, and these are the bugs that are currently
  plaguing developers with seemingly random test failures.
 
  Most of these bugs are not failures of the test system; they are real
  bugs.  Many of them have even been in OpenStack for a long time, but are
  only becoming visible now due to improvements in our tests.  That's not
  much help to developers whose patches are being hit with negative test
  results from unrelated failures.  We need to find a way to address the
  non-deterministic bugs that are lurking in OpenStack without making it
  easier for new bugs to creep in.
 
  The CI system and project infrastructure are not static.  They have
  evolved with the project to get to where they are today, and the
  challenge now is to continue to evolve them to address the problems
  we're seeing now.  The QA and Infrastructure teams recently hosted a
  sprint where we discussed some of these issues in depth.  This post from
  Sean Dague goes into a bit of the background: [1].  The rest of this
  email outlines the medium and long-term changes we would like to make to
  address these problems.
 
  [1] https://dague.net/2014/07/22/openstack-failures/
 
  ==Things we're already doing==
 
  The elastic-recheck tool[2] is used to identify random failures in
  test runs.  It tries to match failures to known bugs using signatures
  created from log messages.  It helps developers prioritize bugs by how
  frequently they manifest as test failures.  It also collects information
  on unclassified errors -- we can see how many (and which) test runs
  failed for an unknown reason and our overall progress on finding
  fingerprints for random failures.
 
  [2] http://status.openstack.org/elastic-recheck/
 
  We added a feature to Zuul that lets us manually promote changes to
  the top of the Gate pipeline.  When the QA team identifies a change that
  fixes a bug that is affecting overall gate stability, we can move that
  change to the top of the queue so that it may merge more quickly.
 
  We added the clean check facility in reaction to the January gate break
  down. While it does mean that any individual patch might see more tests
  run on it, it's now largely kept the gate queue at a countable number of
  hours, instead of regularly growing to more than a work day in
  length. It also means that a developer can Approve a code merge before
  tests have returned, and not ruin it for everyone else if there turned
  out to be a bug that the tests could catch.
 
  ==Future changes==
 
  ===Communication===
  We used to be better at communicating about the CI system.  As it and
  the project grew, we incrementally added to our institutional knowledge,
  but we haven't been good about maintaining that information in a form
  that new or existing contributors can consume to understand what's going
  on and why.
 
  We have started on a major effort in that direction that we call the
  infra-manual project -- it's designed to be a comprehensive user
  manual for the project infrastructure, including the CI process.  Even
  before that project is complete, we will write a document that
  summarizes the CI system and ensure it is included in new developer
  documentation and linked to from test results.
 
  There are also a number of ways for people to get involved in the CI
  system, whether focused on Infrastructure or QA, but it is not always
  clear how to do so.  We will improve our documentation to highlight how
  to contribute.
 
  ===Fixing Faster===
 
  We introduce bugs to OpenStack at some constant rate, which piles up
  over time. Our systems currently treat all changes as equally risky and
  important to the health of the system, which makes landing code changes
  to fix key bugs slow when we're at a high reset rate. We've got a manual
  process of promoting changes today to get around this, but that's
  actually quite costly in people time, and takes getting all the right
  people together at once to promote changes. You can see a number of the
  changes we promoted during the gate storm in June [3], and it was no
  small number of fixes to get us back to a reasonably passing gate. We
  think that optimizing this system will help us land fixes to critical
  bugs faster.
 
  [3] https://etherpad.openstack.org/p/gatetriage-june2014
 
  The basic idea is to use the data from elastic recheck to 

[openstack-dev] [keystone][oslo][ceilometer] Moving PyCADF from the Oslo program to Identity (Keystone)

2014-07-25 Thread Dolph Mathews
Hello everyone,

This is me waving my arms around trying to gather feedback on a change in
scope that seems agreeable to a smaller audience. Most recently, we've
discussed this in both the Keystone [1] and Oslo [2] weekly meetings.

tl;dr it seems to make sense to move the PyCADF library from the oslo
program to the Identity program, and increase the scope of the Identity
program's mission statement accordingly.

I've included ceilometer on this thread since I believe it was originally
proposed that PyCADF be included in that program, but was subsequently
rejected.

Expand scope of the Identity program to include auditing:
https://review.openstack.org/#/c/109664/

As a closely related but subsequent (and dependent) change, I've proposed
renaming the Identity program to better reflect the new scope:
https://review.openstack.org/#/c/108739/

The commit messages on these two changes hopefully explain the reasoning
behind each change, if it's not already self-explanatory. Although only the
TC has voting power in this repo, your review comments and replies on this
thread are equally welcome.

As an important consequence, Doug suggested maintaining pycadf-core [3] as
a discrete core group focused on the library during today's oslo meeting.
If any other program/project has gone through a similar process, I'd be
interested in hearing about the experience if there's anything we can learn
from. Otherwise, Doug's suggestion sounds like a reasonable approach to me.

[1]
http://eavesdrop.openstack.org/meetings/keystone/2014/keystone.2014-07-22-18.02.html

[2]
http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-25-16.00.html

[3] https://review.openstack.org/#/admin/groups/192,members

Thanks!

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mentor program?

2014-07-25 Thread Stefano Maffulli
On 07/23/2014 11:02 AM, Anne Gentle wrote:
 I'll let Stefano answer further, but yes, we've discussed a centralized
 mentoring program for a year or so. I'm not sure we have enough mentors
 available, there are certainly plenty of people seeking and needing
 mentoring. So he can elaborate more on our current thinking of how we'd
 overcome the imbalance and get more centralized coordination in this area.

We've been talking about this for a while; we set up OPW and Google
Summer of Code as two opportunities to introduce new developers, mostly
young and at the beginning of their careers. Upstream Training is
another effort (BTW, we'll have it in Paris again, tripled in size to
~75 new students --full announcement next week).

We're lacking a more formal path to onboard new developers during the year.

I'd like to capitalize on this surge of enthusiasm and start collecting
volunteer mentors on a wiki page, like
https://wiki.openstack.org/wiki/Mentors: start it and add:

 - name
 - area of expertise (neutron, nova, etc)
 - availability (timezone, # of hours available per week)

I'll be back on Tuesday from a brief holiday, we may dedicate some time
next week for more discussions.

Thanks,
Stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Heat Db Model updates

2014-07-25 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-07-24 12:09:39 -0700:
 On 17/07/14 07:51, Ryan Brown wrote:
  On 07/17/2014 03:33 AM, Steven Hardy wrote:
  On Thu, Jul 17, 2014 at 12:31:05AM -0400, Zane Bitter wrote:
  On 16/07/14 23:48, Manickam, Kanagaraj wrote:
  SNIP
  *Resource*
 
  Status  action should be enum of predefined status
 
  +1
 
  Rsrc_metadata - make full name resource_metadata
 
  -0. I don't see any benefit here.
 
  Agreed
 
 
  I'd actually be in favor of the change from rsrc-resource, I feel like
  rsrc is a pretty opaque abbreviation.
 
 I'd just like to remind everyone that these changes are not free. 
 Database migrations are a pain to manage, and every new one slows down 
 our unit tests.
 
 We now support multiple heat-engines connected to the same database and 
 people want to upgrade their installations, so that means we have to be 
 able to handle different versions talking to the same database. Unless 
 somebody has a bright idea I haven't thought of, I assume that means 
 carrying code to handle both versions for 6 months before actually being 
 able to implement the migration. Or are we saying that you have to 
 completely shut down all instances of Heat to do an upgrade?
 
 The name of the nova_instance column is so egregiously misleading that 
 it's probably worth the pain. Using an enumeration for the states will 
 save a lot of space in the database (though it would be a much more 
 obvious win if we were querying on those columns). Changing a random 
 prefix that was added to avoid a namespace conflict to a slightly 
 different random prefix is well below the cost-benefit line IMO.

In past lives managing apps like Heat, We've always kept supporting the
previous schema in new code versions. So the process is:

* Upgrade all code
* Restart all services
* Upgrade database schema
* Wait a bit for reverts
* Remove backward compatibility

Now this was always in more of a continuous delivery environment, so
there was not more than a few weeks of waiting for reverts. In OpenStack
we'd have a single release to wait.

We're not special though, doesn't Nova have some sort of object versioning
code that helps them manage the versions of each type of data for this
very purpose?
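
As a concrete, hedged illustration of the rename-in-place approach discussed
above for the nova_instance column, a sqlalchemy-migrate sketch (the table
name and the new column name are assumptions, not Heat's actual migration):

  import migrate.changeset  # noqa: adds alter() support to SQLAlchemy columns
  from sqlalchemy import MetaData, Table

  def upgrade(migrate_engine):
      meta = MetaData(bind=migrate_engine)
      resource = Table('resource', meta, autoload=True)
      # Rename in place so existing rows survive; older code still needs a
      # compatibility shim until it can be retired, per the process above.
      resource.c.nova_instance.alter(name='physical_resource_id')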

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Heat Db Model updates

2014-07-25 Thread Zane Bitter

On 25/07/14 13:50, Clint Byrum wrote:

Excerpts from Zane Bitter's message of 2014-07-24 12:09:39 -0700:

On 17/07/14 07:51, Ryan Brown wrote:

On 07/17/2014 03:33 AM, Steven Hardy wrote:

On Thu, Jul 17, 2014 at 12:31:05AM -0400, Zane Bitter wrote:

On 16/07/14 23:48, Manickam, Kanagaraj wrote:

SNIP
*Resource*

Status  action should be enum of predefined status


+1


Rsrc_metadata - make full name resource_metadata


-0. I don't see any benefit here.


Agreed



I'd actually be in favor of the change from rsrc-resource, I feel like
rsrc is a pretty opaque abbreviation.


I'd just like to remind everyone that these changes are not free.
Database migrations are a pain to manage, and every new one slows down
our unit tests.

We now support multiple heat-engines connected to the same database and
people want to upgrade their installations, so that means we have to be
able to handle different versions talking to the same database. Unless
somebody has a bright idea I haven't thought of, I assume that means
carrying code to handle both versions for 6 months before actually being
able to implement the migration. Or are we saying that you have to
completely shut down all instances of Heat to do an upgrade?

The name of the nova_instance column is so egregiously misleading that
it's probably worth the pain. Using an enumeration for the states will
save a lot of space in the database (though it would be a much more
obvious win if we were querying on those columns). Changing a random
prefix that was added to avoid a namespace conflict to a slightly
different random prefix is well below the cost-benefit line IMO.


In past lives managing apps like Heat, We've always kept supporting the
previous schema in new code versions. So the process is:

* Upgrade all code
* Restart all services
* Upgrade database schema
* Wait a bit for reverts
* Remove backward compatibility


OK, so we can put the migration in now but we still have to support the 
old DB format for 6 months. I'm fine with that; I think my point that 
trivial cosmetic changes don't justify the extra cruft stands.


I'm curious now if this is the upgrade process that we're documenting, 
though? I can think of a bunch of places where we added e.g. new tables 
or columns to the DB and AFAIK the code assumes that they'll be there 
(i.e. that the schema upgrade happened before trying to use that feature).


cheers,
Zane.


Now this was always in more of a continuous delivery environment, so
there was not more than a few weeks of waiting for reverts. In OpenStack
we'd have a single release to wait.

We're not special though, doesn't Nova have some sort of object versioning
code that helps them manage the versions of each type of data for this
very purpose?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][ceilometer] Moving PyCADF from the Oslo program to Identity (Keystone)

2014-07-25 Thread Doug Hellmann

On Jul 25, 2014, at 2:09 PM, Dolph Mathews dolph.math...@gmail.com wrote:

 Hello everyone,
 
 This is me waving my arms around trying to gather feedback on a change in 
 scope that seems agreeable to a smaller audience. Most recently, we've 
 discussed this in both the Keystone [1] and Oslo [2] weekly meetings.
 
 tl;dr it seems to make sense to move the PyCADF library from the oslo program 
 to the Identity program, and increase the scope of the Identity program's 
 mission statement accordingly.
 
 I've included ceilometer on this thread since I believe it was originally 
 proposed that PyCADF be included in that program, but was subsequently 
 rejected.
 
 Expand scope of the Identity program to include auditing: 
 https://review.openstack.org/#/c/109664/

I think this move makes sense. It provides a good home with interested 
contributors for PyCADF, and if it makes it easier for the Identity team to 
manage cross-repository changes then that’s a bonus.

Before we move ahead, I would like to hear from the other current pycadf and 
oslo team members, especially Gordon since he is the primary maintainer.

Doug

 
 As a closely related but subsequent (and dependent) change, I've proposed 
 renaming the Identity program to better reflect the new scope: 
 https://review.openstack.org/#/c/108739/
 
 The commit messages on these two changes hopefully explain the reasoning 
 behind each change, if it's not already self-explanatory. Although only the 
 TC has voting power in this repo, your review comments and replies on this 
 thread are equally welcome.
 
 As an important consequence, Doug suggested maintaining pycadf-core [3] as a 
 discrete core group focused on the library during today's oslo meeting. If 
 any other program/project has gone through a similar process, I'd be 
 interested in hearing about the experience if there's anything we can learn 
 from. Otherwise, Doug's suggestion sounds like a reasonable approach to me.
 
 [1] 
 http://eavesdrop.openstack.org/meetings/keystone/2014/keystone.2014-07-22-18.02.html
 
 [2] 
 http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-25-16.00.html
 
 [3] https://review.openstack.org/#/admin/groups/192,members
 
 Thanks!
 
 -Dolph
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Add support for hardware transports when using open-iscsi

2014-07-25 Thread Anish Bhatt
Pinging again since I was hoping for some community feedback. I also messed
up a bit in my original mail: the currently supported hardware transports
are cxgb3i, cxgb4i (Chelsio), bnx2i (Broadcom/QLogic), qla4xxx (QLogic) and
be2iscsi & ocs (Emulex).
-Anish
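
For readers skimming, a hedged sketch of what the change described below
amounts to (the helper and its plumbing are illustrative, not the proposed
patch):

  def iscsiadm_login_cmd(iqn, portal, iface='tcp'):
      # 'tcp' is open-iscsi's software transport and is equivalent to not
      # passing -I at all, so only hardware ifaces alter the command line.
      cmd = ['iscsiadm', '-m', 'node', '-T', iqn, '-p', portal]
      if iface != 'tcp':
          cmd += ['-I', iface]
      return cmd + ['--login']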

 -Original Message-
 From: Anish Bhatt [mailto:an...@chelsio.com]
 Sent: Wednesday, July 23, 2014 9:53 PM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [nova] Add support for hardware transports when
 using open-iscsi
 
 Hi,
 
 Currently, the implementation that uses open-iscsi to log in to iscsi targets
  does not support the use of hardware transports (currently bnx2i, cxgb3i &
  cxgb4i are supported by open-iscsi)
 
 The only change would be adding a -I transport_iface_file parameter to
 the standard login/discovery command when the requisite hardware is
 available. The transport iface files can be generated via iscsiadm itself. No
 other commands would change at all. The default value is -I tcp, which as the
 same as not giving the -I parameter.
 As far as I can see, all changes would be localized to the following nova 
 files :
 
 nova/virt/libvirt/volume.py
 nova/cmd/baremetal_deploy_helper.py
 nova/tests/virt/libvirt/test_volume.py
 
 Would this be a useful addition to openstack ?
 
 Thanks,
 Anish
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][ceilometer] Moving PyCADF from the Oslo program to Identity (Keystone)

2014-07-25 Thread Davanum Srinivas
+1 to move pycadf to Identity program.

-- dims

On Fri, Jul 25, 2014 at 3:18 PM, Doug Hellmann d...@doughellmann.com wrote:

 On Jul 25, 2014, at 2:09 PM, Dolph Mathews dolph.math...@gmail.com wrote:

 Hello everyone,

 This is me waving my arms around trying to gather feedback on a change in
 scope that seems agreeable to a smaller audience. Most recently, we've
 discussed this in both the Keystone [1] and Oslo [2] weekly meetings.

 tl;dr it seems to make sense to move the PyCADF library from the oslo
 program to the Identity program, and increase the scope of the Identity
 program's mission statement accordingly.

 I've included ceilometer on this thread since I believe it was originally
 proposed that PyCADF be included in that program, but was subsequently
 rejected.

 Expand scope of the Identity program to include auditing:
 https://review.openstack.org/#/c/109664/


 I think this move makes sense. It provides a good home with interested
 contributors for PyCADF, and if it makes it easier for the Identity team to
 manage cross-repository changes then that’s a bonus.

 Before we move ahead, I would like to hear from the other current pycadf and
 oslo team members, especially Gordon since he is the primary maintainer.

 Doug


 As a closely related but subsequent (and dependent) change, I've proposed
 renaming the Identity program to better reflect the new scope:
 https://review.openstack.org/#/c/108739/


 The commit messages on these two changes hopefully explain the reasoning
 behind each change, if it's not already self-explanatory. Although only the
 TC has voting power in this repo, your review comments and replies on this
 thread are equally welcome.

 As an important consequence, Doug suggested maintaining pycadf-core [3] as a
 discrete core group focused on the library during today's oslo meeting. If
 any other program/project has gone through a similar process, I'd be
 interested in hearing about the experience if there's anything we can learn
 from. Otherwise, Doug's suggestion sounds like a reasonable approach to me.

 [1]
 http://eavesdrop.openstack.org/meetings/keystone/2014/keystone.2014-07-22-18.02.html

 [2]
 http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-25-16.00.html

 [3] https://review.openstack.org/#/admin/groups/192,members

 Thanks!

 -Dolph
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-25 Thread Steve Gordon
- Original Message -
 From: Jay Pipes jaypi...@gmail.com
 To: openstack-dev@lists.openstack.org
 
 On 07/24/2014 10:05 AM, CARVER, PAUL wrote:
  Alan Kavanagh wrote:
 
  If we have more work being put on the table, then more Core
  members would definitely go a long way with assisting this; we can't
  wait for folks to be reviewing stuff as an excuse to not get
  features landed in a given release.
 
 We absolutely can and should wait for folks to be reviewing stuff
 properly. A large number of problems in OpenStack code and flawed design
 can be attributed to impatience and pushing through code that wasn't ready.
 
 I've said this many times, but the best way to get core reviews on
 patches that you submit is to put the effort into reviewing others'
 code. Core reviewers are more willing to do reviews for someone who is
 clearly trying to help the project in more ways than just pushing their
 own code. Note that, Alan, I'm not trying to imply that you are guilty
 of the above! :) I'm just recommending techniques for the general
 contributor community who are not on a core team (including myself!).

I agree with all of the above, I do think however there is another un-addressed 
area where there *may* be room for optimization - which is how we use the 
earlier milestones. I apologize in advance because this is somewhat tangential 
to Alan's points but I think it is relevant to the general frustration around 
what did/didn't get approved in time for the deadline and ultimately what will 
or wont get reviewed in time to make the release versus being punted to Kilo or 
even further down the road.

We land very, very little in terms of feature work in the *-1 and *-2
milestones of each release (and this is not just a Neutron thing). Even though
we know without a doubt that the amount of work currently approved for J-3 is
not realistic, we also know that we will land significantly more features in
this milestone than in the other two that have already been and gone, which to
my way of thinking is actually kind of backwards from the ideal situation.

What is unclear to me, however, is how much of this is a result of difficulty
identifying and approving less controversial/more straightforward
specifications quickly following summit (keeping in mind this time around
there was arguably some additional delay as the *-specs repository approach
was bedded down), how much is an unavoidable result of human nature being to
*really* push when there is a *hard* deadline to beat, and how much is that
these earlier milestones are somewhat impacted by fatigue from the summit (I
know a lot of people also try to take some well earned time off around this
period + of course many are still concentrated on stabilization of the
previous release). As a result it's unclear whether there is anything concrete
that can be done to change this, but I thought I would bring it up in case
anyone else has any bright ideas!

 [SNIP]

  We ought to (in my personal opinion) be supplying core reviewers to
  at least a couple of OpenStack projects. But one way or another we
  need to get more capabilities reviewed and merged. My personal top
  disappointments are with the current state of IPv6, HA, and QoS, but
  I'm sure other folks can list lots of other capabilities that
  they're really going to be frustrated to find lacking in Juno.
 
 I agree with you. It's not something that is fixable overnight, or by a
 small group of people, IMO. It's something that needs to be addressed by
 the core project teams, acting as a group in order to reduce review wait
 times and ensure that there is responsiveness, transparency and
 thoroughness to the review (code as well as spec) process.
 
 I put together some slides recently that have some insights and
 (hopefully) some helpful suggestions for both doing and receiving code
 reviews, as well as staying sane in the era of corporate agendas.
 Perhaps folks will find it useful:
 
 http://bit.ly/navigating-openstack-community

As an aside this is a very well put together deck, thanks for sharing!

-Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-25 Thread Matthew Treinish
On Thu, Jul 24, 2014 at 06:54:38PM -0400, Sean Dague wrote:
 On 07/24/2014 05:57 PM, Matthew Treinish wrote:
  On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:
  OpenStack has a substantial CI system that is core to its development
  process.  The goals of the system are to facilitate merging good code,
  prevent regressions, and ensure that there is at least one configuration
  of upstream OpenStack that we know works as a whole.  The project
  gating technique that we use is effective at preventing many kinds of
  regressions from landing, however more subtle, non-deterministic bugs
  can still get through, and these are the bugs that are currently
  plaguing developers with seemingly random test failures.
 
  Most of these bugs are not failures of the test system; they are real
  bugs.  Many of them have even been in OpenStack for a long time, but are
  only becoming visible now due to improvements in our tests.  That's not
  much help to developers whose patches are being hit with negative test
  results from unrelated failures.  We need to find a way to address the
  non-deterministic bugs that are lurking in OpenStack without making it
  easier for new bugs to creep in.
 
  The CI system and project infrastructure are not static.  They have
  evolved with the project to get to where they are today, and the
  challenge now is to continue to evolve them to address the problems
  we're seeing now.  The QA and Infrastructure teams recently hosted a
  sprint where we discussed some of these issues in depth.  This post from
  Sean Dague goes into a bit of the background: [1].  The rest of this
  email outlines the medium and long-term changes we would like to make to
  address these problems.
 
  [1] https://dague.net/2014/07/22/openstack-failures/
 
  ==Things we're already doing==
 
  The elastic-recheck tool[2] is used to identify random failures in
  test runs.  It tries to match failures to known bugs using signatures
  created from log messages.  It helps developers prioritize bugs by how
  frequently they manifest as test failures.  It also collects information
  on unclassified errors -- we can see how many (and which) test runs
  failed for an unknown reason and our overall progress on finding
  fingerprints for random failures.
 
  [2] http://status.openstack.org/elastic-recheck/
 
  We added a feature to Zuul that lets us manually promote changes to
  the top of the Gate pipeline.  When the QA team identifies a change that
  fixes a bug that is affecting overall gate stability, we can move that
  change to the top of the queue so that it may merge more quickly.
 
  We added the clean check facility in reaction to the January gate break
  down. While it does mean that any individual patch might see more tests
  run on it, it's now largely kept the gate queue at a countable number of
  hours, instead of regularly growing to more than a work day in
  length. It also means that a developer can Approve a code merge before
  tests have returned, and not ruin it for everyone else if there turned
  out to be a bug that the tests could catch.
 
  ==Future changes==
 
  ===Communication===
  We used to be better at communicating about the CI system.  As it and
  the project grew, we incrementally added to our institutional knowledge,
  but we haven't been good about maintaining that information in a form
  that new or existing contributors can consume to understand what's going
  on and why.
 
  We have started on a major effort in that direction that we call the
  infra-manual project -- it's designed to be a comprehensive user
  manual for the project infrastructure, including the CI process.  Even
  before that project is complete, we will write a document that
  summarizes the CI system and ensure it is included in new developer
  documentation and linked to from test results.
 
  There are also a number of ways for people to get involved in the CI
  system, whether focused on Infrastructure or QA, but it is not always
  clear how to do so.  We will improve our documentation to highlight how
  to contribute.
 
  ===Fixing Faster===
 
  We introduce bugs to OpenStack at some constant rate, which piles up
  over time. Our systems currently treat all changes as equally risky and
  important to the health of the system, which makes landing code changes
  to fix key bugs slow when we're at a high reset rate. We've got a manual
  process of promoting changes today to get around this, but that's
  actually quite costly in people time, and takes getting all the right
  people together at once to promote changes. You can see a number of the
  changes we promoted during the gate storm in June [3], and it was no
  small number of fixes to get us back to a reasonably passing gate. We
  think that optimizing this system will help us land fixes to critical
  bugs faster.
 
  [3] https://etherpad.openstack.org/p/gatetriage-june2014
 
  The basic idea is to use the data from elastic recheck to 

Re: [openstack-dev] [infra] recheck no bug and comment

2014-07-25 Thread Jay Bryant
Sean,

Thanks for making this change!

Jay
On Jul 25, 2014 5:41 AM, Sean Dague s...@dague.net wrote:

 On 07/25/2014 01:18 AM, Ian Wienand wrote:
  On 07/16/2014 11:15 PM, Alexis Lee wrote:
  What do you think about allowing some text after the words recheck no
  bug?
 
  I think this is a good idea; I am often away from a change for a bit,
  something happens in-between and Jenkins fails it, but chasing it down
  days later is fairly pointless given how fast things move.
 
  It would be nice if I could indicate I thought about this.  In fact,
  there might be an argument for *requiring* a reason
 
  I proposed [1] to allow this
 
  -i
 
  [1] https://review.openstack.org/#/c/109492/

 At the QA / Infra meetup we actually talked about the recheck syntax,
 and about changing the way elastic recheck interacts with the user.


 https://review.openstack.org/#/q/status:open+project:openstack-infra/elastic-recheck+branch:master+topic:erchanges,n,z

 and


 https://review.openstack.org/#/q/status:open+project:openstack-infra/config+branch:master+topic:er,n,z

 Are the result of that. Basically going forward we'll just support

 'recheck.*'

 If you want to provide us with info after the recheck, great, we can
 mine it later. However we aren't using that a ton at this point, so
 we'll make it easier on people.
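
Concretely, a matcher along these lines (illustrative only, not the actual
Zuul configuration):

  import re

  RECHECK = re.compile(r'^recheck.*$')
  assert RECHECK.match('recheck')                       # bare recheck
  assert RECHECK.match('recheck bug 123456')            # with a bug number
  assert RECHECK.match('recheck flaky volume timeout')  # free-form info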

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][ceilometer] Moving PyCADF from the Oslo program to Identity (Keystone)

2014-07-25 Thread gordon chung
  Before we move ahead, I would like to hear from the other current pycadf and
  oslo team members, especially Gordon since he is the primary maintainer.
this move makes sense to me. auditing and identity have a strong link and all 
of the pyCADF work done so far has been connected to Keystone in some form so 
it makes sense to have it fall under Keystone's expanded scope.
as a sidebar... glad to have more help on pyCADF.
cheers,
gord  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][ceilometer] Moving PyCADF from the Oslo program to Identity (Keystone)

2014-07-25 Thread Brad Topol
+1 Makes a lot of sense to me as well!!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   gordon chung g...@live.ca
To: Davanum Srinivas dava...@gmail.com, OpenStack Development 
Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   07/25/2014 04:43 PM
Subject:Re: [openstack-dev] [keystone][oslo][ceilometer] Moving 
PyCADF from the Oslo program to Identity (Keystone)



  Before we move ahead, I would like to hear from the other current 
pycadf and
  oslo team members, especially Gordon since he is the primary 
maintainer.

this move makes sense to me. auditing and identity have a strong link and 
all of the pyCADF work done so far has been connected to Keystone in some 
form so it makes sense to have it fall under Keystone's expanded scope.

as a sidebar... glad to have more help on pyCADF.

cheers,
gord
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSN 0021] Owners of compromised accounts should verify Keystone trusts

2014-07-25 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Owners of compromised accounts should verify Keystone trusts
-------------------------------------------------------------

### Summary ###
The Keystone 'trusts' API allows for delegation of privileges to one
user on behalf of another. This API can allow an attacker who has
compromised an account to set up backdoor access into that account. This
backdoor may not be easily detected, even if the account compromise is
detected.

### Affected Services / Software ###
Keystone, Grizzly, Havana, Icehouse

### Discussion ###
The Keystone trusts system allows for delegation of roles to Keystone
users without disclosing the main token, or sharing the account secret
key with those users. That means that, after an account is compromised,
changing the secret key and invalidating existing tokens may not be
enough to prevent future access by an attacker.

If an attacker obtains access to the account (via stolen credentials or
service exploitation), they can create a new Keystone trust. This new
trust may grant access that does not depend on knowledge of the compromised
user's secret key, and it can also be set to never expire. In this case, the
trust has to be manually found and removed by the account owner.

Information about using trusts can be found at:

https://wiki.openstack.org/wiki/Keystone/Trusts

### Recommended Actions ###
If the account has been compromised, or is being audited, the owner
should check the list of active trusts and verify that:

- all the active trusts are needed
- all the active trusts have the expected roles and delegation depth
- all the active trusts have appropriate expiration lifetimes

At the time of writing this OSSN, trusts can be listed by using the
Keystone API directly:

---- begin CLI example ----
# get ENDPOINT from the last field of the output
keystone endpoint-get --service identity --attr versionId \
  --value 3.0
# get TOKEN from the last field of the output
keystone token-get
# list the trusts by running:
curl -i -X GET ENDPOINT/trusts/ -H "X-Auth-Token: TOKEN" \
  -H "Content-Type: application/json" -H "Accept: application/json"
---- end CLI example ----

If some trust (with id TRUST_ID) is identified as invalid, it can be
deleted using:

---- begin CLI example ----
curl -i -X DELETE ENDPOINT/trusts/TRUST_ID \
  -H "X-Auth-Token: TOKEN" -H "Content-Type: application/json" \
  -H "Accept: application/json"
---- end CLI example ----
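
The same checks can be scripted. A minimal python-requests equivalent of
the curl calls above - a sketch only; ENDPOINT, TOKEN and TRUST_ID are the
same placeholders used in the CLI examples:

    import requests

    # fill these in from the keystone CLI steps above
    ENDPOINT = '...'
    TOKEN = '...'
    TRUST_ID = '...'

    headers = {'X-Auth-Token': TOKEN,
               'Content-Type': 'application/json',
               'Accept': 'application/json'}

    # list the active trusts for review
    resp = requests.get(ENDPOINT + '/trusts/', headers=headers)
    resp.raise_for_status()
    for trust in resp.json().get('trusts', []):
        print(trust['id'], trust.get('expires_at'), trust.get('roles'))

    # delete a trust identified as invalid
    requests.delete(ENDPOINT + '/trusts/' + TRUST_ID,
                    headers=headers).raise_for_status()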

In the future, operators will be able to use keystoneclient for a more
convenient method of accessing and updating this information.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0021
Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1341849
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJT0sTLAAoJEJa+6E7Ri+EVQpYIAJHSUsW4V1h6xD3Uvi+8sYVU
rc5+rDuOqoNwWmRw19qf0fuLPsBmoB/HvG/hfgdgazcrcBK6I/hR74bdH3CLE7Ew
dCFabstGUexNBDp84RchqDyu6vjB6oNGI3325fwgZcTq9WFTr5Jbc6gw1xov3gPC
0BForhceXpwVj3y7im2xtkId23wQwwB/AYerRnuZ8DsvFy9xPWiFub7w6WmzwpHj
BM38MTLS4GJZ3cDCXchp9u+z7rh6Jb34PHMKeXWzka+LasK0A+RqamvfC8OYB2rv
9Tmrt0GxbfSb/ereB3EEpu6LPkMtepjJtBxE+cv6PekfDLdri7+wHZUDXVYTtZ4=
=l08k
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Soft Code Freeze in action

2014-07-25 Thread Mike Scherbakov
Hi Fuelers,
it is time to freeze fixing of Low and Medium bugs. Let's postpone those
still open to the 6.0 release.

As promised, let's make an exception for those patches which already have
at least one +1 from someone. If a core dev thinks such a patch can be
landed, let's merge it. Otherwise, please move the bug to 6.0.

Now, let's focus on Critical and High bugs. We have just 2 weeks to squash
them, so let's work collaboratively and regroup when needed in order to
make Fuel stable!

Thanks all for the hard work!
-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ALL] Removing the tox==1.6.1 pin

2014-07-25 Thread Clark Boylan
Hello,

The recent release of tox 1.7.2 has fixed the {posargs} interpolation
issues we had with newer tox, which had forced us to pin to tox==1.6.1.
Before we can remove the pin and start telling people to use the latest
tox, we need to address a new default behavior in tox.

New tox sets a random PYTHONHASHSEED value by default. Arguably this is
a good thing as it forces you to write code that handles unknown hash
seeds, but unfortunately many projects' unittests don't currently deal
with this very well. A workaround is to hard-set a PYTHONHASHSEED of 0
in tox.ini files. I have begun to propose these changes to the projects
that I have tested and found to not handle random seeds. It would be
great if we could get these reviewed and merged so that infra can update
the version of tox used on our side.

I probably won't be able to test every single project and propose fixes
with backports to stable branches for everything. It would be a massive
help if individual projects tested and proposed fixes as necessary too
(these changes will need to be backported to stable branches). You can
test by running `tox -epy27` in your project with tox version 1.7.2. If
that fails add PYTHONHASHSEED=0 as in
https://review.openstack.org/#/c/109700/ and rerun `tox -epy27` to
confirm that succeeds.
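
For projects that need the workaround, the change is a one-line addition to
the testenv section of tox.ini, along the lines of the review above (the
exact section and any existing setenv entries vary per project):

    [testenv]
    setenv = PYTHONHASHSEED=0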

This will get us over the immediate hump of the tox upgrade, but we
should also start work to make our tests run with random hashes. This
shouldn't be too hard to do as it will be a self gating change once
infra is able to update the version of tox used in the gate. Most of the
issues appear related to dict entry ordering. I have gone ahead and
created https://bugs.launchpad.net/cinder/+bug/1348818 to track this
work.

Thank you,
Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-25 Thread Kyle Mestery
On Fri, Jul 25, 2014 at 2:50 PM, Steve Gordon sgor...@redhat.com wrote:
 - Original Message -
 From: Jay Pipes jaypi...@gmail.com
 To: openstack-dev@lists.openstack.org

 On 07/24/2014 10:05 AM, CARVER, PAUL wrote:
  Alan Kavanagh wrote:
 
  If we have more work being put on the table, then more Core
  members would definitely go a long way with assisting this, we can't
  wait for folks to be reviewing stuff as an excuse to not get
  features landed in a given release.

 We absolutely can and should wait for folks to be reviewing stuff
 properly. A large number of problems in OpenStack code and flawed design
 can be attributed to impatience and pushing through code that wasn't ready.

 I've said this many times, but the best way to get core reviews on
 patches that you submit is to put the effort into reviewing others'
 code. Core reviewers are more willing to do reviews for someone who is
 clearly trying to help the project in more ways than just pushing their
 own code. Note that, Alan, I'm not trying to imply that you are guilty
 of the above! :) I'm just recommending techniques for the general
 contributor community who are not on a core team (including myself!).

 I agree with all of the above, I do think however there is another 
 un-addressed area where there *may* be room for optimization - which is how 
 we use the earlier milestones. I apologize in advance because this is 
 somewhat tangential to Alan's points but I think it is relevant to the 
 general frustration around what did/didn't get approved in time for the 
 deadline and ultimately what will or won't get reviewed in time to make the
 release versus being punted to Kilo or even further down the road.

 We land very, very little in terms of feature work in the *-1 and *-2
 milestones in each release (and this is not just a Neutron thing). Even 
 though we know without a doubt that the amount of work currently approved for 
 J-3 is not realistic we also know that we will land significantly more 
 features in this milestone than the other two that have already been and 
 gone, which to my way of thinking is actually kind of backwards to the ideal 
 situation.

This is how it always is, yes.

 What is unclear to me however is how much of this is a result of difficulty 
 identifying and approving less controversial/more straightforward 
 specifications quickly following summit (keeping in mind this time around 
 there was arguably some additional delay as the *-specs repository approach 
 was bedded down), an unavoidable result of human nature being to *really* 
 push when there is a *hard* deadline to beat, or just that these earlier 
 milestones are somewhat impacted by fatigue from the summit (I know a lot
 of people also try to take some well earned time off around this period + of 
 course many are still concentrated on stabilization of the previous release). 
 As a result it's unclear whether there is anything concrete that can be done 
 to change this but I thought I would bring it up in case anyone else has any 
 bright ideas!

I think it's a little bit of human nature, and a little bit of
stalling. The final milestone for a release is the *final* milestone
for that release. So, a rush is always going to happen. I also find
that cores focus on reviews more easily near the end. I've tried
experimenting with review assignments near the end of Juno-2 (didn't
work out well), and I'm going to try it again in Juno-3 to see if it
works better there.

The bottom line is that I agree with you, and I'm open to ideas on how
to solve the final milestone issue.

Thanks,
Kyle

 [SNIP]

  We ought to (in my personal opinion) be supplying core reviewers to
  at least a couple of OpenStack projects. But one way or another we
  need to get more capabilities reviewed and merged. My personal top
  disappointments are with the current state of IPv6, HA, and QoS, but
  I'm sure other folks can list lots of other capabilities that
  they're really going to be frustrated to find lacking in Juno.

 I agree with you. It's not something that is fixable overnight, or by a
 small group of people, IMO. It's something that needs to be addressed by
 the core project teams, acting as a group in order to reduce review wait
 times and ensure that there is responsiveness, transparency and
 thoroughness to the review (code as well as spec) process.

 I put together some slides recently that have some insights and
 (hopefully) some helpful suggestions for both doing and receiving code
 reviews, as well as staying sane in the era of corporate agendas.
 Perhaps folks will find it useful:

 http://bit.ly/navigating-openstack-community

 As an aside this is a very well put together deck, thanks for sharing!

 -Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev 

Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-25 Thread Mandeep Dhami
Thanks for the deck Jay, that is very helpful.

Also, would it help the process to have some clear
guidelines/expectations around review time as well? In particular, if you
have put a -1 or -2, and the issues that you have identified have been
addressed by an update (or at least the original author thinks that he has
addressed your concern), is it reasonable to expect that you will re-review
in a reasonable time? This way, the updates can either proceed, or be
rejected, as they are being developed instead of accumulating in a backlog
that we then try to get approved on the last day of the cut-off?

Regards,
Mandeep



On Fri, Jul 25, 2014 at 12:50 PM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
  From: Jay Pipes jaypi...@gmail.com
  To: openstack-dev@lists.openstack.org
 
  On 07/24/2014 10:05 AM, CARVER, PAUL wrote:
   Alan Kavanagh wrote:
  
   If we have more work being put on the table, then more Core
   members would definitely go a long way with assisting this, we can't
   wait for folks to be reviewing stuff as an excuse to not get
   features landed in a given release.
 
  We absolutely can and should wait for folks to be reviewing stuff
  properly. A large number of problems in OpenStack code and flawed design
  can be attributed to impatience and pushing through code that wasn't
 ready.
 
  I've said this many times, but the best way to get core reviews on
  patches that you submit is to put the effort into reviewing others'
  code. Core reviewers are more willing to do reviews for someone who is
  clearly trying to help the project in more ways than just pushing their
  own code. Note that, Alan, I'm not trying to imply that you are guilty
  of the above! :) I'm just recommending techniques for the general
  contributor community who are not on a core team (including myself!).

 I agree with all of the above, I do think however there is another
 un-addressed area where there *may* be room for optimization - which is how
 we use the earlier milestones. I apologize in advance because this is
 somewhat tangential to Alan's points but I think it is relevant to the
 general frustration around what did/didn't get approved in time for the
 deadline and ultimately what will or won't get reviewed in time to make the
 release versus being punted to Kilo or even further down the road.

 We land very, very little in terms of feature work in the *-1 and *-2
 milestones in each release (and this is not just a Neutron thing). Even
 though we know without a doubt that the amount of work currently approved
 for J-3 is not realistic we also know that we will land significantly more
 features in this milestone than the other two that have already been and
 gone, which to my way of thinking is actually kind of backwards to the
 ideal situation.

 What is unclear to me however is how much of this is a result of
 difficulty identifying and approving less controversial/more
 straightforward specifications quickly following summit (keeping in mind
 this time around there was arguably some additional delay as the *-specs
 repository approach was bedded down), an unavoidable result of human nature
 being to *really* push when there is a *hard* deadline to beat, or just
 that these earlier milestones are somewhat impacted by fatigue from the
 summit (I know a lot of people also try to take some well earned time off
 around this period + of course many are still concentrated on stabilization
 of the previous release). As a result it's unclear whether there is
 anything concrete that can be done to change this but I thought I would
 bring it up in case anyone else has any bright ideas!

  [SNIP]

   We ought to (in my personal opinion) be supplying core reviewers to
   at least a couple of OpenStack projects. But one way or another we
   need to get more capabilities reviewed and merged. My personal top
   disappointments are with the current state of IPv6, HA, and QoS, but
   I'm sure other folks can list lots of other capabilities that
   they're really going to be frustrated to find lacking in Juno.
 
  I agree with you. It's not something that is fixable overnight, or by a
  small group of people, IMO. It's something that needs to be addressed by
  the core project teams, acting as a group in order to reduce review wait
  times and ensure that there is responsiveness, transparency and
  thoroughness to the review (code as well as spec) process.
 
  I put together some slides recently that have some insights and
  (hopefully) some helpful suggestions for both doing and receiving code
  reviews, as well as staying sane in the era of corporate agendas.
  Perhaps folks will find it useful:
 
  http://bit.ly/navigating-openstack-community

 As an aside this is a very well put together deck, thanks for sharing!

 -Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-25 Thread Robert Collins
On 26 July 2014 08:20, Matthew Treinish mtrein...@kortar.org wrote:

 This is also more of a pragmatic organic approach to figuring out the
 interfaces we need to lock down. When one projects breaks depending on
 an interface in another project, that should trigger this kind of
 contract growth, which hopefully formally turns into a document later
 for a stable interface.

 So notifications are a good example of this, but I think how we handled this
 is also an example of what not to do. The order was backwards: there should
 have been a stability guarantee upfront, with a versioning mechanism on
 notifications when another project started relying on using them. The fact
 that there are at least 2 ML threads on how to fix and test this at this
 point in ceilometer's life seems like a poor way to handle it. I don't want
 to see us repeat this by allowing cross-project interactions to depend on
 unstable interfaces.

+1

 I agree that there is a scaling issue; our variable testing quality and
 coverage between all the projects in the tempest tree is proof enough of
 this. I just don't want to see us lose the protection we have against
 inadvertent changes. Having the friction of something like the tempest
 two-step is important; we've blocked a lot of breaking api changes because
 of this.

 The other thing to consider is that when we adopted branchless tempest, part
 of the goal there was to ensure consistency across release boundaries. If
 we're really advocating dropping most of the API coverage out of tempest,
 part of the story needs to be around how we prevent things from slipping
 between release boundaries too.

I'm also worried about the impact on TripleO - we run everything
together functionally, and we've been aiming at the gate since
forever: we need more stability, and I'm worried that this may lead to
less. I don't think more lock-down and a bigger matrix is needed - and
I support doing an experiment to see if we end up in a better place.
Still worried :).


 But, having worked on this stuff for ~2 years, I can say from personal
 experience that every project slips when it comes to API stability, despite
 the best intentions, unless there is test coverage for it. I don't want to
 see us open the floodgates on this just because we've gotten ourselves into
 a bad situation with the state of the gate.

+1


 Our current model leans far too much on the idea of the only time we
 ever try to test things for real is when we throw all 1 million lines of
 source code into one pot and stir. It really shouldn't be surprising how
 many bugs shake out there. And this is the wrong layer to debug from, so
 I firmly believe we need to change this back to something we can
 actually manage to shake the bugs out with. Because right now we're
 finding them, but our infrastructure isn't optimized for fixing them,
 and we need to change that.


 I agree a layered approach is best, I'm not disagreeing on that point. I
 just am not sure how much we really should be decreasing the scope of
 Tempest as the top layer around the api tests. I don't think we should cut
 it too much just because we're beefing up the middle with improved
 functional testing. In my view having some duplication between the layers
 is fine and desirable actually.

 Anyway, I feel like I'm diverging this thread off into a different area, so
 I'll shoot off a separate thread on the topic of scale and scope of Tempest
 and the new in-tree project-specific functional tests. But to summarize,
 what I think we should be clear about at the high level for this thread is
 that, for the short term, we aren't changing the scope of Tempest. Instead
 we should just be vigilant in managing tempest's growth (which we've been
 trying to do already). We can revisit the discussion of decreasing Tempest's
 size once everyone's figured out the per-project functional testing. This
 will also give us time to collect longer-term data about test stability in
 the gate so we can figure out which things are actually valuable to have in
 tempest. I think this is what probably got lost in the noise here but has
 been discussed elsewhere.

I'm pretty interested in having contract tests within each project; I
think that's the right responsibility for them. My specific concern is
the recovery process / time to recovery when a regression does get
through.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] debug logs and defaults was (Thoughts on the patch test failure rate and moving forward)

2014-07-25 Thread Robert Collins
On 25 July 2014 10:44, Doug Hellmann d...@doughellmann.com wrote:


  So - I know that's brief - what we'd like to do is to poll a slightly
 wider set of deployers - e.g. via a spec, perhaps some help from Tom

 This one would be a good place for that conversation to start: 
 https://review.openstack.org/#/c/91446/

Sorry, no - you've missed my point: that review is the *new standards
for logging*. I'm talking about capturing what is *in production now*,
and changing the *defaults* - a one-line patch to each project, plus
removing the overrides from devstack.

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] debug logs and defaults was (Thoughts on the patch test failure rate and moving forward)

2014-07-25 Thread Robert Collins
On 25 July 2014 11:03, Sean Dague s...@dague.net wrote:
 On 07/24/2014 06:44 PM, Doug Hellmann wrote:

  So - I know that's brief - what we'd like to do is to poll a slightly
 wider set of deployers - e.g. via a spec, perhaps some help from Tom

 This one would be a good place for that conversation to start: 
 https://review.openstack.org/#/c/91446/

 Right, kind of already been doing that for the last few months. :)

 Assistance moving the ball forward appreciated. I think we really need
 to just land this stuff in phases, as even getting through the minor
 adjustments in that spec (like the AUDIT change) is going to take a while. A
 bunch of people have been going preemptively on it which is good.

So as I said to Doug, this is a different but very important thing.

I totally support having better logging standards, but it's going to
take time to align them with what deployers are actually running - and
 we may have it wrong. I want to *add into the mix* a commitment to
run what our users are running, so that we can:
 - tell if we got it wrong
 - get developers seeing what deployers see

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Windows] New Windows Disk Image Builder Project

2014-07-25 Thread Robert Collins
On 10 July 2014 16:19, Kumar, Om (Cloud OS RD) om.ku...@hp.com wrote:
 Hi Rob,

  So far, no dissenting opinions. Can we get started with a merge proposal
  to -infra to set up repositories etc.?

Please do.

  On a side note, we are okay with releasing the code under the Apache V2
  license. But I could not find any coding practices guide for PowerShell on
  https://wiki.openstack.org/wiki/CodingStandards. Is there a separate link
  that I should look at?

You may need to make some up.

-Rob
-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-25 Thread Kyle Mestery
On Fri, Jul 25, 2014 at 4:48 PM, Mandeep Dhami dh...@noironetworks.com wrote:

 Thanks for the deck Jay, that is very helpful.

 Also, would it help the process by having some clear guidelines/expectations
 around review time as well? In particular, if you have put a -1 or -2, and
 the issues that you have identified have been addressed by an update (or at
 least the original author thinks that he has addressed your concern), is it
 reasonable to expect that you will re-review in a reasonable time? This
 way, the updates can either proceed, or be rejected, as they are being
 developed instead of accumulating in a backlog that we then try to get
 approved on the last day of the cut-off?

I agree: if someone puts a -2 on a patch flagging an issue and the
committer has resolved that issue, the -2 should also be removed in
a timely manner. If the issue can't be resolved in the review itself,
as this wiki page [1] indicates, the issue should be moved to the
mailing list.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/CodeReviewGuidelines

 Regards,
 Mandeep



 On Fri, Jul 25, 2014 at 12:50 PM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
  From: Jay Pipes jaypi...@gmail.com
  To: openstack-dev@lists.openstack.org
 
  On 07/24/2014 10:05 AM, CARVER, PAUL wrote:
   Alan Kavanagh wrote:
  
   If we have more work being put on the table, then more Core
    members would definitely go a long way with assisting this, we can't
   wait for folks to be reviewing stuff as an excuse to not get
   features landed in a given release.
 
  We absolutely can and should wait for folks to be reviewing stuff
  properly. A large number of problems in OpenStack code and flawed design
  can be attributed to impatience and pushing through code that wasn't
  ready.
 
  I've said this many times, but the best way to get core reviews on
  patches that you submit is to put the effort into reviewing others'
  code. Core reviewers are more willing to do reviews for someone who is
  clearly trying to help the project in more ways than just pushing their
  own code. Note that, Alan, I'm not trying to imply that you are guilty
  of the above! :) I'm just recommending techniques for the general
  contributor community who are not on a core team (including myself!).

 I agree with all of the above, I do think however there is another
 un-addressed area where there *may* be room for optimization - which is how
 we use the earlier milestones. I apologize in advance because this is
 somewhat tangential to Alan's points but I think it is relevant to the
 general frustration around what did/didn't get approved in time for the
 deadline and ultimately what will or won't get reviewed in time to make the
 release versus being punted to Kilo or even further down the road.

 We land very, very little in terms of feature work in the *-1 and *-2
 milestones in each release (and this is not just a Neutron thing). Even
 though we know without a doubt that the amount of work currently approved
 for J-3 is not realistic we also know that we will land significantly more
 features in this milestone than the other two that have already been and
 gone, which to my way of thinking is actually kind of backwards to the ideal
 situation.

 What is unclear to me however is how much of this is a result of
 difficulty identifying and approving less controversial/more straightforward
 specifications quickly following summit (keeping in mind this time around
 there was arguably some additional delay as the *-specs repository approach
 was bedded down), an unavoidable result of human nature being to *really*
 push when there is a *hard* deadline to beat, or just that these earlier
 milestones are somewhat impacted by fatigue from the summit (I know a lot
 of people also try to take some well earned time off around this period + of
 course many are still concentrated on stabilization of the previous
 release). As a result it's unclear whether there is anything concrete that
 can be done to change this but I thought I would bring it up in case anyone
 else has any bright ideas!

  [SNIP]

   We ought to (in my personal opinion) be supplying core reviewers to
   at least a couple of OpenStack projects. But one way or another we
   need to get more capabilities reviewed and merged. My personal top
   disappointments are with the current state of IPv6, HA, and QoS, but
   I'm sure other folks can list lots of other capabilities that
   they're really going to be frustrated to find lacking in Juno.
 
  I agree with you. It's not something that is fixable overnight, or by a
  small group of people, IMO. It's something that needs to be addressed by
  the core project teams, acting as a group in order to reduce review wait
  times and ensure that there is responsiveness, transparency and
  thoroughness to the review (code as well as spec) process.
 
  I put together some slides recently that have some insights and
  (hopefully) some helpful 

Re: [openstack-dev] [swift] - question about statsd messages and 404 errors

2014-07-25 Thread Samuel Merritt

On 7/25/14, 4:58 AM, Seger, Mark (Cloud Services) wrote:

I’m trying to track object server GET errors using statsd and I’m not
seeing them.  The test I’m doing is to simply do a GET on an
non-existent object.  As expected, a 404 is returned and the object
server log records it.  However, statsd implies it succeeded because
there were no errors reported.  A read of the admin guide does clearly
say the GET timing includes failed GETs, but my question then becomes
how is one to tell there was a failure?  Should there be another type of
message that DOES report errors?  Or how about including these in the
‘object-server.GET.errors.timing’ message?


What "error" means with respect to Swift's backend-server timing metrics
is pretty fuzzy at the moment, and could probably use some work.


The idea is that object-server.GET.timing has timing data for everything
that Swift handled successfully, and object-server.GET.errors.timing has
timing data for things where Swift failed.


Some things are pretty easy to divide up. For example, a 200-series status
code always counts as success, and a 500-series status code always counts
as error.


It gets tricky in the 400-series status codes. For example, a 404 means 
that a client asked for an object that doesn't exist. That's not Swift's 
fault, so that goes into the success bucket (object-server.GET.timing). 
Similarly, a 412 means that a client set an unsatisfiable precondition 
in the If-Match, If-None-Match, If-Modified-Since, or 
If-Unmodified-Since headers, and Swift correctly determined that the 
requested object can't fulfill the precondition, so that one goes in the 
success bucket too.


However, there are other status codes that are more ambiguous. Consider 
409; the object server responds with 409 if the request's X-Timestamp is 
less than the object's X-Timestamp (on PUT/POST/DELETE). You can get 
this with two near-simultaneous POSTs:


  1. request A hits proxy; proxy assigns X-Timestamp: 1406316223.851131
  2. request B hits proxy; proxy assigns X-Timestamp: 1406316223.851132
  3. request B hits object server and gets 202
  4. request A hits object server and gets 409

Does that error count as Swift's fault? If the client requests were 
nearly simultaneous, then I think not; there's always going to be *some* 
delay between accept() and gettimeofday(). On the other hand, if one 
proxy server's time is significantly behind another's, then it is 
Swift's fault.


It's even worse with 400; sometimes it's for bad paths (like asking an 
object server for /partition/account/container; this can happen if 
the administrator misconfigures their rings), and sometimes it's for bad 
X-Delete-At / X-Delete-After values (which are set by the client).


I'm not sure what the best way to fix this is, but if you just want to 
see some error metrics, unmount a disk to get some 507s.
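
To make the bucketing concrete, here is a toy sketch of the classification
described above - not Swift's actual code, and the routing of the ambiguous
400/409 cases below is exactly the judgment call under discussion:

    def timing_metric(method, status):
        # 500-series: always Swift's fault -> error bucket
        if 500 <= status < 600:
            return 'object-server.%s.errors.timing' % method
        # ambiguous codes discussed above; sending them to the error
        # bucket is one debatable choice, not necessarily what Swift does
        if status in (400, 409):
            return 'object-server.%s.errors.timing' % method
        # 2xx plus well-understood 4xx (404, 412, ...): success bucket
        return 'object-server.%s.timing' % method

    assert timing_metric('GET', 404) == 'object-server.GET.timing'
    assert timing_metric('GET', 507) == 'object-server.GET.errors.timing'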


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] team meeting for 1 Aug cancelled

2014-07-25 Thread Doug Hellmann
As we discussed during today’s meeting, we will not hold a meeting next week (1 
Aug 2014). A lot of the team is either taking vacation or traveling for other 
reasons.

We will resume meeting on 8 Aug.

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing the tox==1.6.1 pin

2014-07-25 Thread Matthew Oliver
Just tested tox 1.7.2 with swift on my Dev box and tox -epy27 runs fine.

So it seems Swift isn't affected by this.

Matt
 On Jul 26, 2014 7:44 AM, Clark Boylan cboy...@sapwetik.org wrote:

 Hello,

 The recent release of tox 1.7.2 has fixed the {posargs} interpolation
 issues we had with newer tox which forced us to be pinned to tox==1.6.1.
 Before we can remove the pin and start telling people to use latest tox
 we need to address a new default behavior in tox.

 New tox sets a random PYTHONHASHSEED value by default. Arguably this is
 a good thing as it forces you to write code that handles unknown hash
 seeds, but unfortunately many projects' unittests don't currently deal
 with this very well. A workaround is to hard-set a PYTHONHASHSEED of 0
 in tox.ini files. I have begun to propose these changes to the projects
 that I have tested and found to not handle random seeds. It would be
 great if we could get these reviewed and merged so that infra can update
 the version of tox used on our side.

 I probably won't be able to test every single project and propose fixes
 with backports to stable branches for everything. It would be a massive
 help if individual projects tested and proposed fixes as necessary too
 (these changes will need to be backported to stable branches). You can
 test by running `tox -epy27` in your project with tox version 1.7.2. If
 that fails add PYTHONHASHSEED=0 as in
 https://review.openstack.org/#/c/109700/ and rerun `tox -epy27` to
 confirm that succeeds.

 This will get us over the immediate hump of the tox upgrade, but we
 should also start work to make our tests run with random hashes. This
 shouldn't be too hard to do as it will be a self-gating change once
 infra is able to update the version of tox used in the gate. Most of the
 issues appear related to dict entry ordering. I have gone ahead and
 created https://bugs.launchpad.net/cinder/+bug/1348818 to track this
 work.

 Thank you,
 Clark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Boot from ISO feature status

2014-07-25 Thread Maksym Lobur
Hi Vish!

Appreciate your feedback! Are there some significant pitfalls that forced
the Nova team to decide that?

Currently I'm testing my local nova modifications to get real boot-from-ISO
functionality as described in the spec. I'm fetching the ISO image from
glance into a separate file under the instances/uuid/ dir, attaching it
as a CDROM, and booting from it. I also create a blank root drive 'disk'
to install the OS to.

Are there any things that require extra attention? Any pitfalls in such an
approach?
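
For comparison, the block device mapping route Vish mentions below might be
expressed with the nova CLI along these lines - an untested sketch (as Vish
notes, he hasn't tried it either), with placeholder IDs and key names
following the --block-device syntax of the current nova client:

    nova boot --flavor FLAVOR_ID \
      --block-device id=ISO_IMAGE_ID,source=image,dest=local,bus=ide,device=/dev/hda,type=cdrom,bootindex=0 \
      --block-device source=blank,dest=local,bus=virtio,device=vda,size=80,type=disk,bootindex=1 \
      windows-test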

Best regards,
Max Lobur,
OpenStack Developer, Mirantis, Inc.

Mobile: +38 (093) 665 14 28
Skype: max_lobur

38, Lenina ave. Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru


On Tue, Jul 22, 2014 at 8:57 AM, Vishvananda Ishaya vishvana...@gmail.com
wrote:

 This is somewhat confusing, but long ago the decision was made that
 booting from an ISO image should use the ISO as a root drive. This means
 that it is only really useful for things like live cds. I believe you could
 use the new block device mapping code to create an instance that boots from
 an iso and has an ephemeral drive as well but I haven’t tested this.

 Vish

 On Jul 22, 2014, at 7:57 AM, Maksym Lobur mlo...@mirantis.com wrote:

 Hi Folks,

  Could someone please share their experience with the Nova Boot from ISO
  feature [1]?

  We tested it on Havana + KVM, uploading the image with DISK_FORMAT set to
  'iso'. Windows deployment does not happen. The VM has two volumes: one is
  config-2 (CDFS, ~400Kb, don't know what that is); and the second one is
  our flavor volume (80Gb). The windows ISO contents (about 500Mb) for some
  reason are inside the flavor volume instead of a separate CD drive.

 So far I found only two patches for nova: vmware [2] and Xen [3].
 Does it work with KVM? Maybe some specific nova configuration required for
 KVM.

 [1] https://wiki.openstack.org/wiki/BootFromISO
 [2] https://review.openstack.org/#/c/63084/
 [3] https://review.openstack.org/#/c/38650/


 Thanks beforehand!

 Max Lobur,
 OpenStack Developer, Mirantis, Inc.

 Mobile: +38 (093) 665 14 28
 Skype: max_lobur

 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-25 Thread Mandeep Dhami
What would be a good guideline for "timely manner"? I would recommend
something like 2-3 days unless the reviewer is on vacation or is
otherwise indisposed. Is it possible to update gerrit/jenkins to send
reminders to reviewers in such a scenario?
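
Gerrit can at least surface the candidates today; a small sketch against the
review.openstack.org REST API (query syntax is from memory and may need
adjusting, and a cron job would still have to turn the output into reminder
mail):

    import json
    import requests

    # open changes carrying a -2 that haven't been touched in 3 days
    resp = requests.get('https://review.openstack.org/changes/',
                        params={'q': 'status:open label:Code-Review=-2 age:3d'})
    changes = json.loads(resp.text[5:])   # strip gerrit's )]}' XSSI prefix
    for change in changes:
        print(change['_number'], change['subject'])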

Regards,
Mandeep
-




On Fri, Jul 25, 2014 at 3:14 PM, Kyle Mestery mest...@mestery.com wrote:

 On Fri, Jul 25, 2014 at 4:48 PM, Mandeep Dhami dh...@noironetworks.com
 wrote:
 
  Thanks for the deck Jay, that is very helpful.
 
  Also, would it help the process by having some clear
 guidelines/expectations
  around review time as well? In particular, if you have put a -1 or -2,
 and
  the issues that you have identified have been addressed by an update (or
 at
  least the original author thinks that he has addressed your concern), is
 it
  reasonable to expect that you will re-review in a reasonable time? This
  way, the updates can either proceed, or be rejected, as they are being
  developed instead of accumulating in a backlog that we then try to get
  approved on the last day of the cut-off?
 
 I agree, if someone puts a -2 on a patch stressing an issue and the
 committer has resolved those issues, the -2 should also be resolved in
 a timely manner. If the issue can't be resolved in the review itself,
 as this wiki page [1] indicates, the issue should be moved to the
 mailing list.

 Thanks,
 Kyle

 [1] https://wiki.openstack.org/wiki/CodeReviewGuidelines

  Regards,
  Mandeep
 
 
 
  On Fri, Jul 25, 2014 at 12:50 PM, Steve Gordon sgor...@redhat.com
 wrote:
 
  - Original Message -
   From: Jay Pipes jaypi...@gmail.com
   To: openstack-dev@lists.openstack.org
  
   On 07/24/2014 10:05 AM, CARVER, PAUL wrote:
Alan Kavanagh wrote:
   
If we have more work being put on the table, then more Core
    members would definitely go a long way with assisting this, we can't
wait for folks to be reviewing stuff as an excuse to not get
features landed in a given release.
  
   We absolutely can and should wait for folks to be reviewing stuff
   properly. A large number of problems in OpenStack code and flawed
 design
   can be attributed to impatience and pushing through code that wasn't
   ready.
  
   I've said this many times, but the best way to get core reviews on
   patches that you submit is to put the effort into reviewing others'
   code. Core reviewers are more willing to do reviews for someone who is
   clearly trying to help the project in more ways than just pushing
 their
   own code. Note that, Alan, I'm not trying to imply that you are guilty
   of the above! :) I'm just recommending techniques for the general
   contributor community who are not on a core team (including myself!).
 
  I agree with all of the above, I do think however there is another
  un-addressed area where there *may* be room for optimization - which is
 how
  we use the earlier milestones. I apologize in advance because this is
  somewhat tangential to Alan's points but I think it is relevant to the
  general frustration around what did/didn't get approved in time for the
  deadline and ultimately what will or won't get reviewed in time to make
 the
  release versus being punted to Kilo or even further down the road.
 
  We land very, very little in terms of feature work in the *-1 and *-2
  milestones in each release (and this is not just a Neutron thing). Even
  though we know without a doubt that the amount of work currently
 approved
  for J-3 is not realistic we also know that we will land significantly
 more
  features in this milestone than the other two that have already been and
  gone, which to my way of thinking is actually kind of backwards to the
 ideal
  situation.
 
  What is unclear to me however is how much of this is a result of
  difficulty identifying and approving less controversial/more
 straightforward
  specifications quickly following summit (keeping in mind this time
 around
  there was arguably some additional delay as the *-specs repository
 approach
  was bedded down), an unavoidable result of human nature being to
 *really*
  push when there is a *hard* deadline to beat, or just that these earlier
  milestones are somewhat impacted by fatigue from the summit (I know a
 lot
  of people also try to take some well earned time off around this period
 + of
  course many are still concentrated on stabilization of the previous
  release). As a result it's unclear whether there is anything concrete
 that
  can be done to change this but I thought I would bring it up in case
 anyone
  else has any bright ideas!
 
   [SNIP]
 
We ought to (in my personal opinion) be supplying core reviewers to
at least a couple of OpenStack projects. But one way or another we
need to get more capabilities reviewed and merged. My personal top
disappointments are with the current state of IPv6, HA, and QoS, but
I'm sure other folks can list lots of other capabilities that
they're really going to be frustrated to find lacking in Juno.
  
   

Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-25 Thread Ivar Lazzaro
I agree that it's important to set a guideline for this topic.
What if the said reviewer is on vacation or indisposed? Should a fallback
strategy exist for that case? A reviewer could indicate a delegate core
to review their -2s whenever they have no chance to do it themselves.

Thanks,
Ivar.


On Fri, Jul 25, 2014 at 5:35 PM, Mandeep Dhami dh...@noironetworks.com
wrote:


 What would be a good guideline for timely manner? I would recommend
 something like 2-3 days unless the reviewer is on vacation or is
 indisposed. Is it possible to update gerrit/jenkins to send reminders to
 reviewers in such a scenario?

 Regards,
 Mandeep
 -




 On Fri, Jul 25, 2014 at 3:14 PM, Kyle Mestery mest...@mestery.com wrote:

 On Fri, Jul 25, 2014 at 4:48 PM, Mandeep Dhami dh...@noironetworks.com
 wrote:
 
  Thanks for the deck Jay, that is very helpful.
 
  Also, would it help the process by having some clear
 guidelines/expectations
  around review time as well? In particular, if you have put a -1 or -2,
 and
  the issues that you have identified have been addressed by an update
 (or at
  least the original author thinks that he has addressed your concern),
 is it
  reasonable to expect that you will re-review in a reasonable time?
 This
  way, the updates can either proceed, or be rejected, as they are being
  developed instead of accumulating in a backlog that we then try to get
  approved on the last day of the cut-off?
 
 I agree, if someone puts a -2 on a patch stressing an issue and the
 committer has resolved those issues, the -2 should also be resolved in
 a timely manner. If the issue can't be resolved in the review itself,
 as this wiki page [1] indicates, the issue should be moved to the
 mailing list.

 Thanks,
 Kyle

 [1] https://wiki.openstack.org/wiki/CodeReviewGuidelines

  Regards,
  Mandeep
 
 
 
  On Fri, Jul 25, 2014 at 12:50 PM, Steve Gordon sgor...@redhat.com
 wrote:
 
  - Original Message -
   From: Jay Pipes jaypi...@gmail.com
   To: openstack-dev@lists.openstack.org
  
   On 07/24/2014 10:05 AM, CARVER, PAUL wrote:
Alan Kavanagh wrote:
   
If we have more work being put on the table, then more Core
    members would definitely go a long way with assisting this, we can't
wait for folks to be reviewing stuff as an excuse to not get
features landed in a given release.
  
   We absolutely can and should wait for folks to be reviewing stuff
   properly. A large number of problems in OpenStack code and flawed
 design
   can be attributed to impatience and pushing through code that wasn't
   ready.
  
   I've said this many times, but the best way to get core reviews on
   patches that you submit is to put the effort into reviewing others'
   code. Core reviewers are more willing to do reviews for someone who
 is
   clearly trying to help the project in more ways than just pushing
 their
   own code. Note that, Alan, I'm not trying to imply that you are
 guilty
   of the above! :) I'm just recommending techniques for the general
   contributor community who are not on a core team (including myself!).
 
  I agree with all of the above, I do think however there is another
  un-addressed area where there *may* be room for optimization - which
 is how
  we use the earlier milestones. I apologize in advance because this is
  somewhat tangential to Alan's points but I think it is relevant to the
  general frustration around what did/didn't get approved in time for the
  deadline and ultimately what will or won't get reviewed in time to make
 the
  release versus being punted to Kilo or even further down the road.
 
  We land very, very little in terms of feature work in the *-1 and *-2
  milestones in each release (and this is not just a Neutron thing). Even
  though we know without a doubt that the amount of work currently
 approved
  for J-3 is not realistic we also know that we will land significantly
 more
  features in this milestone than the other two that have already been
 and
  gone, which to my way of thinking is actually kind of backwards to the
 ideal
  situation.
 
  What is unclear to me however is how much of this is a result of
  difficulty identifying and approving less controversial/more
 straightforward
  specifications quickly following summit (keeping in mind this time
 around
  there was arguably some additional delay as the *-specs repository
 approach
  was bedded down), an unavoidable result of human nature being to
 *really*
  push when there is a *hard* deadline to beat, or just that these
 earlier
  milestones are somewhat impacted by fatigue from the summit (I know
 a lot
  of people also try to take some well earned time off around this
 period + of
  course many are still concentrated on stabilization of the previous
  release). As a result it's unclear whether there is anything concrete
 that
  can be done to change this but I thought I would bring it up in case
 anyone
  else has any bright ideas!
 
   [SNIP]
 
We ought to (in my personal opinion) be 

[openstack-dev] [heat] Stack update and raw_template backup

2014-07-25 Thread Anant Patil
Hi,

When we do a stack update, I see that there are 2 copies of raw_template
stored in the database for each update. For n updates there are 2n + 1
entries of raw_template in the database. Is this expected or is it a bug?
When I dug into it, I saw that the deep copy of the template does not copy
template.id, so a new row is stored each time we call backup_stack.store()
and new_stack.store().
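
The mechanics can be reproduced in isolation: if the deep copy resets id to
None, an ORM-style store() has no row to update and inserts a fresh one every
time. A toy sketch of that behaviour (not Heat's actual code):

    import copy

    class RawTemplate(object):
        def __init__(self, body):
            self.id = None               # None until stored
            self.body = body

        def store(self, db):
            if self.id is None:          # no row yet -> INSERT
                self.id = len(db) + 1
            db[self.id] = self.body      # otherwise -> UPDATE
            return self.id

        def __deepcopy__(self, memo):
            # id deliberately not copied, mirroring the behaviour observed
            return RawTemplate(copy.deepcopy(self.body, memo))

    db = {}
    tmpl = RawTemplate({'resources': {}})
    tmpl.store(db)
    backup = copy.deepcopy(tmpl)
    backup.store(db)
    assert len(db) == 2   # two rows for what is logically one template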

Ideally, we should keep one copy of the old template when the stack is
updated. With this we would have the history of templates over time. On
each update, a diff of the updated and current templates could be stored
to optimize the database. And perhaps Heat should have an API to retrieve
this template history for inspection when the stack admin needs it.

Please share your thoughts!

- Anant



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev