Re: [openstack-dev] Medium Availability VMs

2013-09-20 Thread Mike Spreitzer
 From: Tim Bell tim.b...@cern.ch
 ...
 Is this something that will be added into OpenStack or made 
 available as open source through something like stackforge ?

I and some others think that the OpenStack architecture should have a 
place for holistic infrastructure scheduling.  I also think this is an 
area where vendors will want to compete; I think my company has some 
pretty good technology for this and will want to sell it for money.  
https://wiki.openstack.org/wiki/Open requires that the free OpenStack 
includes a pretty good implementation of this function too, and I think 
others have some they want to contribute.

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Client and Policy

2013-09-20 Thread Jamie Lennox
(Resending this as I realized it didn't get to the list)


I just want to clarify where some of this discussion came from. I
actually think that oslo does a great job at keeping so many projects up
to date with common code, without the restrictions of going to a
library straight away. The problem is I don't think it will work for
code that should exist client side.

The most obvious example we have of this is apiclient/exceptions.py,
currently in oslo. This is a really good set of exceptions; however,
if novaclient imports it as novaclient.openstack.common.apiclient.exceptions
and keystoneclient imports it as
keystoneclient.openstack.common.apiclient.exceptions, then these are
NOT the same objects. It will be a fairly common situation that someone
is using a number of clients together, and so this is a bad situation.
So for client-side code we may not be able to go through the standard
oslo process to achieve a common base library.
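
As a minimal, self-contained illustration of the problem (the Unauthorized
class is hypothetical and the two module copies are simulated in-process
rather than installed):

    # Two verbatim copies of the same incubator exceptions module yield
    # two distinct classes, so an except clause written against one copy
    # does not catch exceptions raised from the other copy.
    import types

    SOURCE = "class Unauthorized(Exception):\n    pass\n"

    nova_exc = types.ModuleType(
        "novaclient.openstack.common.apiclient.exceptions")
    exec(SOURCE, nova_exc.__dict__)

    ks_exc = types.ModuleType(
        "keystoneclient.openstack.common.apiclient.exceptions")
    exec(SOURCE, ks_exc.__dict__)

    print(nova_exc.Unauthorized is ks_exc.Unauthorized)  # False

    try:
        raise nova_exc.Unauthorized("token expired")
    except ks_exc.Unauthorized:
        print("caught via keystoneclient's copy")  # never reached
    except Exception as exc:
        print("not caught as ks_exc.Unauthorized: %s" % exc)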

This brought up the issue of auth_token and whether it should eventually
live in keystoneclient. The problem with having it live in
keystoneclient is that if there is a CVE in auth_token, or, for example,
we wish to push the recent changes to allow CA validation on HTTPS
connections in auth_token, the only way to do this is to push the most
cutting-edge version of keystoneclient into the global requirements.txt
and push it throughout OpenStack. Now I'm still confident that this
wouldn't cause any API issues regarding the use of keystoneclient;
however, the vast majority of keystoneclient does not need to be on this
schedule, and I'm sure it would be a problem for packagers.

Policy is not my main focus here - though I agree it should be more
related to keystone, we should be able to effect these changes in
oslo. The thought was that if we were to spin auth_token out of
keystoneclient into a new package (and I haven't really heard a name for
it I like yet - I'll call it python-openstackauth), then this package
would make a good resting place for oslo policy. This would also become
a good place for the auth_plugins that are currently in nova and that we
are trying to push through to other clients.

So we would split two libraries out of oslo/keystoneclient, baseclient
and openstackauth: openstackauth would depend on baseclient,
keystoneclient would depend on baseclient, and openstackauth may require
a dependency on keystoneclient for populating auth_token fields.


Jamie 


On Thu, 2013-09-19 at 13:30 -0700, Mark McLoughlin wrote:
 On Thu, 2013-09-19 at 15:22 -0500, Dolph Mathews wrote:
  
  On Thu, Sep 19, 2013 at 2:59 PM, Adam Young ayo...@redhat.com wrote:
  I can submit a summit proposal.  I was thinking of making it
  more general than just the Policy piece.  Here is my proposed
  session.  Let me know if it rings true:
  
  
  Title: Extracting Shared Libraries from incubator
  
  Some of the security-sensitive code in OpenStack is copied into
  various projects from Oslo-Incubator.  If there is a CVE
  identified in one of these pieces, there is no rapid way to
  update them short of syncing code to all projects.  This
  meeting is to identify the pieces of Oslo-incubator that
  should be extracted into stand alone libraries.
  
  
  
  I believe the goal of oslo-incubator IS to spin out common code into
  standalone libraries in the long run, as appropriate.
 
 Indeed.
 
 https://wiki.openstack.org/wiki/Oslo
 
   Mission Statement:
 
 To produce a set of python libraries containing code shared by 
 OpenStack projects
 
 https://wiki.openstack.org/wiki/Oslo#Incubation
 
   Incubation shouldn't be seen as a long term option for any API - it 
   is merely a stepping stone to inclusion into a published Oslo
   library. 
 
  Some of the code would be best reviewed by members of other
  projects:  Network specific code by Neutron, Policy by
  Keystone, and so forth.  As part of the discussion, we will
  identify a code review process that gets the right reviewers
  for those subprojects.
  
  
  It sounds like the real goal is how do we get relevant/interested
  reviewers in front of oslo reviews without overloading them with
  noise? I'm sure that's a topic that Mark already has an opinion on,
  so I've opened this thread up to openstack-dev.
 
 To take the specific example of the policy API, anyone who actively
 wants to help the process of moving it into a standalone library should
 volunteer to help Flavio out as a maintainer:
 
   https://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS
 
   == policy ==
 
   M: Flavio Percoco fla...@redhat.com
   S: Maintained
   F: policy.py
 
 
 Another aspect is how someone would go about helping do reviews on a
 specific API in oslo-incubator. That's a common need - e.g. for
 maintainers of virt drivers in Nova - and AIUI, these folks just
 subscribe to all gerrit 

Re: [openstack-dev] configapplier licensing

2013-09-20 Thread Roman Podolyaka
Hi Thomas,

I believe all OpenStack projects (including diskimage-builder [1] and
os-apply-config [2]) are distributed under the Apache license.

Thanks,
Roman

[1] https://github.com/openstack/diskimage-builder/blob/master/LICENSE
[2] https://github.com/openstack/os-apply-config/blob/master/LICENSE


On Fri, Sep 20, 2013 at 8:02 AM, Thomas Goirand z...@debian.org wrote:

 Hi,

 While trying to package diskimage-builder for Debian, I saw that in some
 files, it's written "this file is released under the same license as
 configapplier". However, I haven't been able to find the license of
 configapplier anywhere.

 So, under which license is configapplier released? I need this
 information to populate the debian/copyright file before uploading to
 Sid (to pass the NEW queue).

 Cheers,

 Thomas Goirand (zigo)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-20 Thread Mike Spreitzer
I have written a new outline of my thoughts, you can find it at 
https://docs.google.com/document/d/1RV_kN2Io4dotxZREGEks9DM0Ih_trFZ-PipVDdzxq_E

It is intended to stand up better to independent study.  However, it is 
still just an outline.  I am still learning about stuff going on in 
OpenStack, and am learning and thinking faster than I can write.  Trying 
to figure out how to cope.

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Grizzly] Quantum-L3-Agent quite slow on initialization

2013-09-20 Thread Chu Duc Minh
I use Quantum-OVS with IP network namespaces.
I only have 1 quantum router with 30 networks.
Each network has exactly 1 subnet (like 10.0.x.0/24).
Currently, I only use 10 floating IPs for some VM instances.

But when I (re-)start quantum-l3-agent, it took more than 5 minutes to complete
initialization:
(complete initialization means finishing building the iptables rules and IP
namespaces, adding the floating IPs, ...)
I rechecked the quantum-l3-agent log and I am sure about this.

*Do you think 5 minutes is too much for a network configuration like that?*

Best Regards,
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Client and Policy

2013-09-20 Thread Flavio Percoco

On 19/09/13 17:10 -0400, Adam Young wrote:

On 09/19/2013 04:30 PM, Mark McLoughlin wrote:

To take the specific example of the policy API, anyone who actively
wants to help the process of moving it into a standalone library should
volunteer to help Flavio out as a maintainer:

  https://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS

  == policy ==

  M: Flavio Percoco fla...@redhat.com
  S: Maintained
  F: policy.py


Would it make sense to explicitly add Keystone developers, or can we 
include the launchpad keystone-core group for this module?
If we want to keep it per user, I'm willing to do so, and I think we 
have a couple of other likely candidates from Keystone: I'll let 
them speak up for themselves.



I don't think it is possible to have per-file core reviewers. Not sure
what your plans are w.r.t policy.py but there's something still
missing, which is porting other modules to the latest release. 


I'm not saying that Oslo module maintainers are expected to port other OS
modules to the latest version but, since the maintainer knows what's
changed and when it changed, I think it should be the maintainer
taking the first step to align all OS modules.

That being said, I admit I didn't do a great job w.r.t aligning other
OS modules' policy code; no excuses there.

If someone from keystone's team wants to become the policy.py
maintainer, I'm happy to give it away, I think keystone guys have more
context than me there. :D

One last thing: there's this blueprint [0] I created that plans to add
some kind of persistence to policy.py, besides policy.json. IIRC,
there's something already going on in Keystone related to this.
Please feel free to take the blueprint, add comments, work items and
whatnot.

[0] https://blueprints.launchpad.net/oslo/+spec/policy-persistence

Should we submit the names as review requests against the MAINTAINERS 
file in that repo?




Yup

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VM Ensembles

2013-09-20 Thread Simon Pasquier

On 20/09/2013 11:06, Rodrigo Alejandre Prada wrote:

Hello experts,

Is anybody aware if 'VM Ensembles' feature
(https://blueprints.launchpad.net/nova/+spec/vm-ensembles) will be
finally included in the Havana release? According to the project information it's
in Approved state but with no milestone-related info.


VM ensembles depends on the instance grouping API [1] which didn't make 
it for Havana [2].


[1] https://wiki.openstack.org/wiki/InstanceGroupApiExtension
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2013-September/014732.html


Cheers,



Thanks in advance for your feedback.

Cheers,
Rodrigo A.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [energy] Kwapi Ceilometer plugin

2013-09-20 Thread François Rossigneux

Hello,
I fixed the issue.
Thanks.


On 20/09/2013 10:09, Elton Kevani wrote:

Hello,
 I'm trying to install Kwapi from source and make it work with 
ceilometer.  The kwapi-driver, kwapi-forwarder and kwapi-rrd are 
working fine, but when I try starting kwapi-api I have these errors:


2013-09-20 10:04:37.231 19656 INFO kwapi.plugins.api.app [-] Starting API
2013-09-20 10:04:37.234 19656 INFO kwapi.plugins.api.collector [-] 
Starting Collector
2013-09-20 10:04:37.234 19656 INFO kwapi.plugins.api.collector [-] 
Cleaning collector
2013-09-20 10:04:37.235 19656 INFO kwapi.plugins.api.collector [-] API 
listening to ['ipc:///tmp/kwapi-forwarder']
2013-09-20 10:04:37.236 19656 INFO 
keystoneclient.middleware.auth_token [-] Starting keystone auth_token 
middleware
2013-09-20 10:04:37.237 19656 INFO 
keystoneclient.middleware.auth_token [-] Using 
/tmp/keystone-signing-xuW4AL as cache directory for signing certificate
2013-09-20 10:04:37.240 19656 INFO werkzeug [-]  * Running on 
http://0.0.0.0:5000/
2013-09-20 10:05:25.427 19656 INFO 
keystoneclient.middleware.auth_token [-] Auth Token proceeding with 
requested v2.0 apis
2013-09-20 10:05:25.620 19656 ERROR kwapi.plugins.api.app [-] 
Exception on /v1/probes/ [GET]
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app Traceback 
(most recent call last):
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app   File 
/usr/local/lib/python2.7/dist-packages/flask/app.py, line 1817, in 
wsgi_app
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app response 
= self.full_dispatch_request()
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app   File 
/usr/local/lib/python2.7/dist-packages/flask/app.py, line 1477, in 
full_dispatch_request
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app rv = 
self.handle_user_exception(e)
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app   File 
/usr/local/lib/python2.7/dist-packages/flask/app.py, line 1381, in 
handle_user_exception
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app 
reraise(exc_type, exc_value, tb)
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app   File 
/usr/local/lib/python2.7/dist-packages/flask/app.py, line 1473, in 
full_dispatch_request
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app rv = 
self.preprocess_request()
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app   File 
/usr/local/lib/python2.7/dist-packages/flask/app.py, line 1666, in 
preprocess_request

2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app rv = func()
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app   File 
/root/kwapi/kwapi/plugins/api/acl.py, line 49, in check
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app if not 
policy.check_is_admin(headers.get('X-Roles', '').split(',')):
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app   File 
/root/kwapi/kwapi/policy.py, line 53, in check_is_admin

2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app init()
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app   File 
/root/kwapi/kwapi/policy.py, line 41, in init
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app 
reload_func=_set_rules)
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app   File 
/root/kwapi/kwapi/utils.py, line 41, in read_cached_file
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app 
reload_func(cache_info['data'])
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app   File 
/root/kwapi/kwapi/policy.py, line 46, in _set_rules
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app 
policy.set_rules(policy.Rules.load_json(data, default_rule))
2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app 
AttributeError: 'module' object has no attribute 'set_rules'

2013-09-20 10:05:25.620 19656 TRACE kwapi.plugins.api.app
2013-09-20 10:05:25.631 19656 INFO werkzeug [-] 10.10.10.101 - - 
[20/Sep/2013 10:05:25] GET /v1/probes/ HTTP/1.1 500 -



My api.conf for kwapi is :


# Kwapi config file

[DEFAULT]

# Communication
api_port = 5000
probes_endpoint = ipc:///tmp/kwapi-forwarder

# Signature
signature_checking = true
driver_metering_secret = test

# ACL
acl_enabled = true
#acl_auth_url = http://10.10.10.121:5000/v2.0
policy_file = /etc/kwapi/policy.json

# Timers
cleaning_interval = 300

# Log files
log_file = /var/log/kwapi/kwapi-api.log
verbose = true

[keystone_authtoken]
auth_uri = http://10.10.10.101:5000/v2.0
auth_host = 10.10.10.101
auth_port = 35357
auth_protocol = http
auth_version = v2.0
admin_user = kwapi
admin_password = test
admin_tenant_name = service


Any suggestions?




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Metrics] Activity_Board now in infra

2013-09-20 Thread Jesus M. Gonzalez-Barahona
Hi all,

The OpenStack Development Dashboard [1] is now in infra [2]. If you want
to browse details about how to get the data (JSON and SQL), how to clone
 & deploy the dashboard elsewhere, or how to reproduce the data retrieval
and analysis process, you can refer to the README [3] or the wiki [4].

[1] http://activity.openstack.org/dash/
[2] http://git.openstack.org/cgit/openstack-infra/activity-board/
[3]
http://git.openstack.org/cgit/openstack-infra/activity-board/tree/README.md
[4] https://wiki.openstack.org/wiki/Activity_Board

Bug reports and patches are welcome. For reports, please use the
OpenStack_Community tracker [5] (tag: activityboard). For patches,
please use the usual code review process.

[5] https://launchpad.net/openstack-community

Any feedback is welcome.

Saludos,

Jesus.

-- 
-- 
Bitergia: http://bitergia.com http://blog.bitergia.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VM Ensembles

2013-09-20 Thread Rodrigo Alejandre Prada
Thanks very much Simon.


On 20 September 2013 12:33, Simon Pasquier simon.pasqu...@bull.net wrote:

 On 20/09/2013 11:06, Rodrigo Alejandre Prada wrote:

  Hello experts,

 Is anybody aware if 'VM Ensembles' feature
 (https://blueprints.launchpad.net/nova/+spec/vm-ensembles) will be
 finally included in the Havana release? According to the project information it's
 in Approved state but with no milestone-related info.


 VM ensembles depends on the instance grouping API [1] which didn't make it
 for Havana [2].

 [1] https://wiki.openstack.org/wiki/InstanceGroupApiExtension
 [2] http://lists.openstack.org/pipermail/openstack-dev/2013-September/014732.html

 Cheers,


 Thanks in advance for your feedback.

 Cheers,
 Rodrigo A.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Simon Pasquier
 Software Engineer
 Bull, Architect of an Open World
 Phone: + 33 4 76 29 71 49
 http://www.bull.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [energy] Kwapi Ceilometer plugin

2013-09-20 Thread Elton Kevani

Thanks for the quick fix.
Now kwapi-api is working just fine :-D. Now I have errors on the ceilometer side:
2013-09-20 12:05:41     INFO [urllib3.connectionpool] Starting new HTTP connection (1): 10.10.10.121
2013-09-20 12:05:41    DEBUG [urllib3.connectionpool] GET /v1/probes/ HTTP/1.1 200 344
2013-09-20 12:05:41  WARNING [ceilometer.central.manager] Continue after error from kwapi: 'instancemethod' object has no attribute '__getitem__'
2013-09-20 12:05:41    ERROR [ceilometer.central.manager] 'instancemethod' object has no attribute '__getitem__'
Traceback (most recent call last):
  File /usr/lib/python2.7/dist-packages/ceilometer/central/manager.py, line 50, in poll_and_publish
    self.manager)))
  File /usr/lib/python2.7/dist-packages/ceilometer/energy/kwapi.py, line 82, in get_counters
    for probe in self.iter_probes(manager.keystone):
  File /usr/lib/python2.7/dist-packages/ceilometer/energy/kwapi.py, line 45, in iter_probes
    probes = message['probes']
TypeError: 'instancemethod' object has no attribute '__getitem__'

Do you know something about this error?
Date: Fri, 20 Sep 2013 11:38:36 +0200
From: francois.rossign...@inria.fr
To: openstack-dev@lists.openstack.org
CC: eltonkev...@hotmail.com
Subject: Re: [openstack-dev] [energy] Kwapi Ceilometer plugin


  

  
  
Hello,

I fixed the issue.

Thanks.





On 20/09/2013 10:09, Elton Kevani wrote:



  
  Hello,
 I'm trying to install Kwapi from source and make it work
  with ceilometer.  The kwapi-driver,kwapi-forwarder and
  kwapi-rrd are working fine but
when i try starting kwapi-api i have these errors:


  

  2013-09-20 10:04:37.231 19656
INFO kwapi.plugins.api.app [-] Starting API
  2013-09-20 10:04:37.234 19656
INFO kwapi.plugins.api.collector [-] Starting Collector
  2013-09-20 10:04:37.234 19656
INFO kwapi.plugins.api.collector [-] Cleaning collector
  2013-09-20 10:04:37.235 19656
INFO kwapi.plugins.api.collector [-] API listening to
['ipc:///tmp/kwapi-forwarder']
  2013-09-20 10:04:37.236 19656
INFO keystoneclient.middleware.auth_token [-] Starting
keystone auth_token middleware
  2013-09-20 10:04:37.237 19656
INFO keystoneclient.middleware.auth_token [-] Using
/tmp/keystone-signing-xuW4AL as cache directory for signing
certificate
  2013-09-20 10:04:37.240 19656
INFO werkzeug [-]  * Running on http://0.0.0.0:5000/
  2013-09-20 10:05:25.427 19656
INFO keystoneclient.middleware.auth_token [-] Auth Token
proceeding with requested v2.0 apis
  2013-09-20 10:05:25.620 19656
ERROR kwapi.plugins.api.app [-] Exception on /v1/probes/
[GET]
  2013-09-20 10:05:25.620 19656
TRACE kwapi.plugins.api.app Traceback (most recent call
last):
  2013-09-20 10:05:25.620 19656
TRACE kwapi.plugins.api.app   File
/usr/local/lib/python2.7/dist-packages/flask/app.py, line
1817, in wsgi_app
  2013-09-20 10:05:25.620 19656
TRACE kwapi.plugins.api.app response =
self.full_dispatch_request()
  2013-09-20 10:05:25.620 19656
TRACE kwapi.plugins.api.app   File
/usr/local/lib/python2.7/dist-packages/flask/app.py, line
1477, in full_dispatch_request
  2013-09-20 10:05:25.620 19656
TRACE kwapi.plugins.api.app rv =
self.handle_user_exception(e)
  2013-09-20 10:05:25.620 19656
TRACE kwapi.plugins.api.app   File
/usr/local/lib/python2.7/dist-packages/flask/app.py, line
1381, in handle_user_exception
  2013-09-20 10:05:25.620 19656
TRACE kwapi.plugins.api.app reraise(exc_type, exc_value,
tb)
  2013-09-20 10:05:25.620 19656
TRACE kwapi.plugins.api.app   File
/usr/local/lib/python2.7/dist-packages/flask/app.py, line
1473, in full_dispatch_request
  2013-09-20 10:05:25.620 19656
TRACE kwapi.plugins.api.app rv =
self.preprocess_request()
  2013-09-20 10:05:25.620 19656
TRACE kwapi.plugins.api.app   File
/usr/local/lib/python2.7/dist-packages/flask/app.py, line
1666, in preprocess_request
  2013-09-20 10:05:25.620 19656
TRACE kwapi.plugins.api.app rv = func()
  2013-09-20 10:05:25.620 19656
TRACE kwapi.plugins.api.app   File
/root/kwapi/kwapi/plugins/api/acl.py, line 49, in check
  2013-09-20 10:05:25.620 19656
TRACE kwapi.plugins.api.app if not
policy.check_is_admin(headers.get('X-Roles',
).split(,)):
  2013-09-20 

Re: [openstack-dev] [energy] Kwapi Ceilometer plugin

2013-09-20 Thread François Rossigneux

Yes, you should update Ceilometer.

Or modify ceilometer/energy/kwapi.py like this (line 46):
- message = request.json
+ message = request.json()
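
As a hedged aside: assuming the root cause is the python-requests change
where Response.json went from a property to a method (around requests 1.0),
a version-tolerant variant could look like the sketch below. The get_probes
helper and URL handling are illustrative, not the actual ceilometer pollster.

    # Sketch only: older python-requests exposed Response.json as a property
    # (already-parsed data); newer versions expose it as a method. Accessing
    # it as a plain attribute on a newer version returns the bound method
    # itself, hence "'instancemethod' object has no attribute '__getitem__'"
    # when the code then does message['probes'].
    import requests

    def get_probes(url):
        resp = requests.get(url)
        json_attr = resp.json
        # Tolerate both APIs: call it if it is a method, use it directly otherwise.
        message = json_attr() if callable(json_attr) else json_attr
        return message['probes']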


On 20/09/2013 12:13, Elton Kevani wrote:


Thanks for the quick fix.

Now kwapi-api is working just fine :-D . Now i have errors in 
ceilometer side:


2013-09-20 12:05:41 INFO [urllib3.connectionpool] Starting new 
HTTP connection (1): 10.10.10.121
2013-09-20 12:05:41DEBUG [urllib3.connectionpool] GET /v1/probes/ 
HTTP/1.1 200 344
2013-09-20 12:05:41  WARNING [ceilometer.central.manager] Continue 
after error from kwapi: 'instancemethod' object has no attribute 
'__getitem__'
2013-09-20 12:05:41ERROR [ceilometer.central.manager] 
'instancemethod' object has no attribute '__getitem__'

Traceback (most recent call last):
  File 
/usr/lib/python2.7/dist-packages/ceilometer/central/manager.py, line 
50, in poll_and_publish

self.manager)))
  File /usr/lib/python2.7/dist-packages/ceilometer/energy/kwapi.py, 
line 82, in get_counters

for probe in self.iter_probes(manager.keystone):
  File /usr/lib/python2.7/dist-packages/ceilometer/energy/kwapi.py, 
line 45, in iter_probes

probes = message['probes']
TypeError: 'instancemethod' object has no attribute '__getitem__'


Do you know something about this error?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] [Ceilometer] Adding Alarms gives error

2013-09-20 Thread Somanchi Trinath-B39208
Hi Julien-

With respect to Ceilometer's implementation and code base, I have a query about 
the Pecan framework:

[*] Is Ceilometer completely integrated with the Pecan framework?
[*] As I see it, Ceilometer's implementation is different from Nova- and Neutron-type 
implementations. Is this due to the new WSGI framework?

Kindly, help me understand the same.

Thanking you.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048


-Original Message-
From: Julien Danjou [mailto:jul...@danjou.info] 
Sent: Thursday, September 19, 2013 6:31 PM
To: Somanchi Trinath-B39208
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Openstack-dev] [Ceilometer] Adding Alarms gives 
error

On Thu, Sep 19 2013, Somanchi Trinath-B39208 wrote:

 I get the following error when I create an alarm.

I think this is bug #1227264 that got fixed a few hours ago, you may want to 
update your copy of Ceilometer.

--
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] [Ceilometer] Adding Alarms gives error

2013-09-20 Thread Mehdi Abaakouk
Hi,

On Fri, Sep 20, 2013 at 10:44:14AM +, Somanchi Trinath-B39208 wrote:
 I still have the following issue:
 
 2013-09-20 15:12:58.116 21869 ERROR wsme.api [-] Server-side error: 'Alarm' 
 object has no attribute 'None_rule'. Detail: 
 Traceback (most recent call last):
 
   File /usr/local/lib/python2.7/dist-packages/wsmeext/pecan.py, line 72, in 
 callfunction
 result = f(self, *args, **kwargs)
 
   File 
 /usr/local/lib/python2.7/dist-packages/ceilometer/api/controllers/v2.py, 
 line 1286, in post
 change = data.as_dict(storage.models.Alarm)
 
   File 
 /usr/local/lib/python2.7/dist-packages/ceilometer/api/controllers/v2.py, 
 line 1080, in as_dict
  d['rule'] = getattr(self, '%s_rule' % self.type).as_dict()
 
 AttributeError: 'Alarm' object has no attribute 'None_rule'
 
 10.10.10.100 - - [20/Sep/2013 15:13:06] POST /v2/alarms HTTP/1.1 500 135
 
 
 
 From the Ceilometer client I'm not sending any None_rule attribute. But then, 
 is the API appending anything ... ?

The alarm JSON representation has been changed a bit. The ceilometerclient 
will be
backward compatible, but the review that offers the backward
compatibility is not yet merged: https://review.openstack.org/#/c/46707/

About the error message itself, an issue in wsme means the error message
is not the expected one. I'm currently writing a workaround to ensure the
correct error message and return code are returned.
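
To illustrate the mechanism behind the 'None_rule' name (a sketch with a
hypothetical stand-in class, not the real WSME Alarm model): when the posted
alarm carries no recognized type, self.type is None, so "%s_rule" % self.type
formats to "None_rule" and the getattr lookup fails.

    # Hedged sketch, hypothetical Alarm stand-in:
    class Alarm(object):
        def __init__(self, type=None, threshold_rule=None):
            self.type = type
            self.threshold_rule = threshold_rule

    old_style = Alarm()                      # e.g. an older client omits "type"
    attr_name = "%s_rule" % old_style.type   # -> "None_rule"
    try:
        getattr(old_style, attr_name)
    except AttributeError as exc:
        print(exc)  # 'Alarm' object has no attribute 'None_rule'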


Regards, 

-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Openstack-devel] PGP key signing party during the HK summit

2013-09-20 Thread Jeremy Stanley
On 2013-09-20 14:33:47 +0800 (+0800), Thomas Goirand wrote:
 Has anyone thought about having a PGP key signing party during the
 summit?
[...]

I'm preparing some documents to help socialize an OpenPGP web of
trust amongst our Release Cycle Management team members, with a hope
of getting a strong set of validated signatures between each of us
while we're in Hong Kong. This documentation will be similar to
(essentially a superset of) the current key signing
recommendations/consensus within the Debian developer community as
well as from some other relevant sources. There are improvements I'm
eager to make to our release processes and automation which will
hinge on a solid web of trust, initially amongst those participating
in release processes (signing git tags, attesting to tarballs and so
on) but ultimately strengthened by extending that trust throughout
the contributor base and our downstream consumers.

My current goal is to organize an official key-signing party for the
entire community at the J summit--but I expect it to be a fairly
large event and would want a time slot for it which didn't overlap
with any design sessions--so we'll need to plan it fairly far in
advance. I still intend to have key management and key signing
recommendations published for the benefit of the OpenStack developer
community in the coming weeks (in time for the Icehouse summit in
Hong Kong), and encourage people to validate and sign each other's
keys at any opportunity. I personally will be happy to make time
between sessions and at evening events to exchange key fingerprints
and show/check passports with anyone who is interested, and hope
others will do the same.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][vmware] VMwareAPI sub-team reviews update 2013-09-19

2013-09-20 Thread Gary Kotton
Hi,
The following two patches are really important (they are really simple and
have been around since the beginning of August - they are rebased every couple
of days):-
- https://review.openstack.org/#/c/40298/ - Tempest snapshot fails
- https://review.openstack.org/#/c/43994/ - Disk copy fails
There are a number of patches that we need to base on top of these that
are high/critical (depending on how one looks at it). They are namely:
- https://review.openstack.org/#/c/46730/ - flavor root disk sizes are not
honored
- https://review.openstack.org/#/c/47503/ - disabling linked clone and
cacheing of images
- https://review.openstack.org/#/c/46231/ - VM resize
Thanks
Gary

On 9/19/13 11:39 PM, Shawn Hartsock hartso...@vmware.com wrote:

Greetings stackers!

A quick mid-week update on the patches we're tracking for Havana-rc1.
There was a bug in my vote counting code that I use to query votes. Some
of the older patches were getting their votes counted wrong. Tracking the
age of a submitted patchset (number of days since a patchset was
posted) and the revision number helps spot these problems. I try to
validate these reports by hand, but I do miss things on occasion. Let me
know if I need to add or edit something.

Ordered by priority:
* High/Critical https://bugs.launchpad.net/bugs/1223709
https://review.openstack.org/46027 readiness:ready for core
* High/Critical https://bugs.launchpad.net/bugs/1216510
https://review.openstack.org/43616 readiness:needs one more +2/approval
* High/Critical https://bugs.launchpad.net/bugs/1226211
https://review.openstack.org/46789 readiness:ready for core
* High/Critical https://bugs.launchpad.net/bugs/1217541
https://review.openstack.org/43621 readiness:needs review
* High/High https://bugs.launchpad.net/bugs/1187853
https://review.openstack.org/45349 readiness:ready for core
* Medium/High https://bugs.launchpad.net/bugs/1190515
https://review.openstack.org/33100 readiness:ready for core
* High https://bugs.launchpad.net/bugs/1184807
https://review.openstack.org/40298 readiness:ready for core
* High https://bugs.launchpad.net/bugs/1214850
https://review.openstack.org/43270 readiness:needs review
* High https://bugs.launchpad.net/bugs/1226052
https://review.openstack.org/46730 readiness:needs review
* High https://bugs.launchpad.net/bugs/1226826
https://review.openstack.org/47030 readiness:needs review
* High https://bugs.launchpad.net/bugs/1225002
https://review.openstack.org/41977 readiness:ready for core
* High https://bugs.launchpad.net/bugs/1194018
https://review.openstack.org/43641 readiness:ready for core
* High https://bugs.launchpad.net/bugs/1171226
https://review.openstack.org/43994 readiness:ready for core
* Medium https://bugs.launchpad.net/bugs/1183654
https://review.openstack.org/45203 readiness:needs revision
* Medium https://bugs.launchpad.net/bugs/1223074
https://review.openstack.org/45864 readiness:needs review
* Medium https://bugs.launchpad.net/bugs/1199954
https://review.openstack.org/46231 readiness:needs review
* Medium https://bugs.launchpad.net/bugs/1222349
https://review.openstack.org/45570 readiness:needs one more +2/approval
* Medium https://bugs.launchpad.net/bugs/1216961
https://review.openstack.org/43721 readiness:needs one more +2/approval
* Medium https://bugs.launchpad.net/bugs/1215352
https://review.openstack.org/43268 readiness:needs one more +2/approval
* Medium https://bugs.launchpad.net/bugs/1197041
https://review.openstack.org/43621 readiness:needs review
* Medium https://bugs.launchpad.net/bugs/1222948
https://review.openstack.org/46400 readiness:needs revision
* Medium https://bugs.launchpad.net/bugs/1226238
https://review.openstack.org/46824 readiness:needs review
* Medium https://bugs.launchpad.net/bugs/1224479
https://review.openstack.org/46277 readiness:ready for core
* Medium https://bugs.launchpad.net/bugs/1207064
https://review.openstack.org/42024 readiness:needs revision
* Medium https://bugs.launchpad.net/bugs/1180044
https://review.openstack.org/43270 readiness:needs review
* Medium https://bugs.launchpad.net/bugs/1226425
https://review.openstack.org/46895 readiness:needs revision
* Low https://bugs.launchpad.net/bugs/1215958
https://review.openstack.org/43665 readiness:needs review
* Low https://bugs.launchpad.net/bugs/1226450
https://review.openstack.org/46896 readiness:ready for core

--
--
Ordered by fitness for review:

== needs one more +2/approval ==
* Medium https://bugs.launchpad.net/bugs/1222349 review:
https://review.openstack.org/45570
   title: 'VMware: datastore_regex is not honoured'
   votes: +2:1, +1:5, -1:0, -2:0  age: 11 days revision: 4
* Medium https://bugs.launchpad.net/bugs/1216961 review:
https://review.openstack.org/43721
   title: 'VMware: exceptions for RetrievePropertiesEx incorrectly handled'
   votes: +2:1, +1:5, -1:0, -2:0  age: 1 days revision: 2
* Medium https://bugs.launchpad.net/bugs/1215352 review:

Re: [openstack-dev] Fwd: [Openstack-devel] PGP key signing party during the HK summit

2013-09-20 Thread Mike Spreitzer
What's the threat model here?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][vmware] VMwareAPI sub-team reviews update 2013-09-19

2013-09-20 Thread Shawn Hartsock
Thanks. I'll change the report script if these didn't show up properly.

# Shawn Hartsock


- Original Message -
 From: Gary Kotton gkot...@vmware.com
 To: Shawn Hartsock hartso...@vmware.com, openstack-dev@lists.openstack.org
 Sent: Friday, September 20, 2013 10:07:59 AM
 Subject: Re: [openstack-dev][Nova][vmware] VMwareAPI sub-team reviews update 
 2013-09-19
 
 Hi,
 The following two patches are really important (they are really simple and
 have been around since the beginning of August - they are rebased every couple
 of days):-
 - https://review.openstack.org/#/c/40298/ - Tempest snapshot fails
 - https://review.openstack.org/#/c/43994/ - Disk copy fails
 There are a number of patches that we need to base on top of these that
 are high/critical (depending on how one looks at it). They are namely:
 - https://review.openstack.org/#/c/46730/ - flavor root disk sizes are not
 honored
 - https://review.openstack.org/#/c/47503/ - disabling linked clone and
 cacheing of images
 - https://review.openstack.org/#/c/46231/ - VM resize
 Thanks
 Gary
 
 On 9/19/13 11:39 PM, Shawn Hartsock hartso...@vmware.com wrote:
 
 Greetings stackers!
 
 A quick mid-week update on the patches we're tracking for Havana-rc1.
 There was a bug in my vote counting code that I use to query votes. Some
 of the older patches were getting their votes counted wrong. Tracking the
 age of a submitted patchset (number of days since a patchset was
 posted) and the revision number helps spot these problems. I try to
 validate these reports by hand, but I do miss things on occasion. Let me
 know if I need to add or edit something.
 
 Ordered by priority:
 * High/Critical https://bugs.launchpad.net/bugs/1223709
 https://review.openstack.org/46027 readiness:ready for core
 * High/Critical https://bugs.launchpad.net/bugs/1216510
 https://review.openstack.org/43616 readiness:needs one more +2/approval
 * High/Critical https://bugs.launchpad.net/bugs/1226211
 https://review.openstack.org/46789 readiness:ready for core
 * High/Critical https://bugs.launchpad.net/bugs/1217541
 https://review.openstack.org/43621 readiness:needs review
 * High/High https://bugs.launchpad.net/bugs/1187853
 https://review.openstack.org/45349 readiness:ready for core
 * Medium/High https://bugs.launchpad.net/bugs/1190515
 https://review.openstack.org/33100 readiness:ready for core
 * High https://bugs.launchpad.net/bugs/1184807
 https://review.openstack.org/40298 readiness:ready for core
 * High https://bugs.launchpad.net/bugs/1214850
 https://review.openstack.org/43270 readiness:needs review
 * High https://bugs.launchpad.net/bugs/1226052
 https://review.openstack.org/46730 readiness:needs review
 * High https://bugs.launchpad.net/bugs/1226826
 https://review.openstack.org/47030 readiness:needs review
 * High https://bugs.launchpad.net/bugs/1225002
 https://review.openstack.org/41977 readiness:ready for core
 * High https://bugs.launchpad.net/bugs/1194018
 https://review.openstack.org/43641 readiness:ready for core
 * High https://bugs.launchpad.net/bugs/1171226
 https://review.openstack.org/43994 readiness:ready for core
 * Medium https://bugs.launchpad.net/bugs/1183654
 https://review.openstack.org/45203 readiness:needs revision
 * Medium https://bugs.launchpad.net/bugs/1223074
 https://review.openstack.org/45864 readiness:needs review
 * Medium https://bugs.launchpad.net/bugs/1199954
 https://review.openstack.org/46231 readiness:needs review
 * Medium https://bugs.launchpad.net/bugs/1222349
 https://review.openstack.org/45570 readiness:needs one more +2/approval
 * Medium https://bugs.launchpad.net/bugs/1216961
 https://review.openstack.org/43721 readiness:needs one more +2/approval
 * Medium https://bugs.launchpad.net/bugs/1215352
 https://review.openstack.org/43268 readiness:needs one more +2/approval
 * Medium https://bugs.launchpad.net/bugs/1197041
 https://review.openstack.org/43621 readiness:needs review
 * Medium https://bugs.launchpad.net/bugs/1222948
 https://review.openstack.org/46400 readiness:needs revision
 * Medium https://bugs.launchpad.net/bugs/1226238
 https://review.openstack.org/46824 readiness:needs review
 * Medium https://bugs.launchpad.net/bugs/1224479
 https://review.openstack.org/46277 readiness:ready for core
 * Medium https://bugs.launchpad.net/bugs/1207064
 https://review.openstack.org/42024 readiness:needs revision
 * Medium https://bugs.launchpad.net/bugs/1180044
 https://review.openstack.org/43270 readiness:needs review
 * Medium https://bugs.launchpad.net/bugs/1226425
 https://review.openstack.org/46895 readiness:needs revision
 * Low https://bugs.launchpad.net/bugs/1215958
 https://review.openstack.org/43665 readiness:needs review
 * Low https://bugs.launchpad.net/bugs/1226450
 https://review.openstack.org/46896 readiness:ready for core
 
 --
 --
 Ordered by fitness for review:
 
 == needs one more +2/approval ==
 * Medium 

Re: [openstack-dev] [Nova][vmware] VMwareAPI sub-team reviews update 2013-09-19

2013-09-20 Thread Shawn Hartsock
Hi Gary,

Only one of those is not already listed:  
https://review.openstack.org/#/c/47503/ - disabling linked clone and cacheing 
of images

I'll spend some cycles to see why it wasn't picked up by default. I've bumped 
up its vmwareapi-team priority as it seems pretty important.

# Shawn Hartsock


- Original Message -
 From: Gary Kotton gkot...@vmware.com
 To: Shawn Hartsock hartso...@vmware.com, openstack-dev@lists.openstack.org
 Sent: Friday, September 20, 2013 10:07:59 AM
 Subject: Re: [openstack-dev][Nova][vmware] VMwareAPI sub-team reviews update 
 2013-09-19
 
 Hi,
 The following two patches are really important (they are really simple and
 have been around since the beginning of August - they are rebased every couple
 of days):-
 - https://review.openstack.org/#/c/40298/ - Tempest snapshot fails
 - https://review.openstack.org/#/c/43994/ - Disk copy fails
 There are a number of patches that we need to base on top of these that
 are high/critical (depending on how one looks at it). They are namely:
 - https://review.openstack.org/#/c/46730/ - flavor root disk sizes are not
 honored
 - https://review.openstack.org/#/c/47503/ - disabling linked clone and
 cacheing of images
 - https://review.openstack.org/#/c/46231/ - VM resize
 Thanks
 Gary
 
 On 9/19/13 11:39 PM, Shawn Hartsock hartso...@vmware.com wrote:
 
 Greetings stackers!
 
 A quick mid-week update on the patches we're tracking for Havana-rc1.
 There was a bug in my vote counting code that I use to query votes. Some
 of the older patches were getting their votes counted wrong. Tracking the
 age of a submitted patchset (number of days since a patchset was
 posted) and the revision number helps spot these problems. I try to
 validate these reports by hand, but I do miss things on occasion. Let me
 know if I need to add or edit something.
 
 Ordered by priority:
 * High/Critical https://bugs.launchpad.net/bugs/1223709
 https://review.openstack.org/46027 readiness:ready for core
 * High/Critical https://bugs.launchpad.net/bugs/1216510
 https://review.openstack.org/43616 readiness:needs one more +2/approval
 * High/Critical https://bugs.launchpad.net/bugs/1226211
 https://review.openstack.org/46789 readiness:ready for core
 * High/Critical https://bugs.launchpad.net/bugs/1217541
 https://review.openstack.org/43621 readiness:needs review
 * High/High https://bugs.launchpad.net/bugs/1187853
 https://review.openstack.org/45349 readiness:ready for core
 * Medium/High https://bugs.launchpad.net/bugs/1190515
 https://review.openstack.org/33100 readiness:ready for core
 * High https://bugs.launchpad.net/bugs/1184807
 https://review.openstack.org/40298 readiness:ready for core
 * High https://bugs.launchpad.net/bugs/1214850
 https://review.openstack.org/43270 readiness:needs review
 * High https://bugs.launchpad.net/bugs/1226052
 https://review.openstack.org/46730 readiness:needs review
 * High https://bugs.launchpad.net/bugs/1226826
 https://review.openstack.org/47030 readiness:needs review
 * High https://bugs.launchpad.net/bugs/1225002
 https://review.openstack.org/41977 readiness:ready for core
 * High https://bugs.launchpad.net/bugs/1194018
 https://review.openstack.org/43641 readiness:ready for core
 * High https://bugs.launchpad.net/bugs/1171226
 https://review.openstack.org/43994 readiness:ready for core
 * Medium https://bugs.launchpad.net/bugs/1183654
 https://review.openstack.org/45203 readiness:needs revision
 * Medium https://bugs.launchpad.net/bugs/1223074
 https://review.openstack.org/45864 readiness:needs review
 * Medium https://bugs.launchpad.net/bugs/1199954
 https://review.openstack.org/46231 readiness:needs review
 * Medium https://bugs.launchpad.net/bugs/1222349
 https://review.openstack.org/45570 readiness:needs one more +2/approval
 * Medium https://bugs.launchpad.net/bugs/1216961
 https://review.openstack.org/43721 readiness:needs one more +2/approval
 * Medium https://bugs.launchpad.net/bugs/1215352
 https://review.openstack.org/43268 readiness:needs one more +2/approval
 * Medium https://bugs.launchpad.net/bugs/1197041
 https://review.openstack.org/43621 readiness:needs review
 * Medium https://bugs.launchpad.net/bugs/1222948
 https://review.openstack.org/46400 readiness:needs revision
 * Medium https://bugs.launchpad.net/bugs/1226238
 https://review.openstack.org/46824 readiness:needs review
 * Medium https://bugs.launchpad.net/bugs/1224479
 https://review.openstack.org/46277 readiness:ready for core
 * Medium https://bugs.launchpad.net/bugs/1207064
 https://review.openstack.org/42024 readiness:needs revision
 * Medium https://bugs.launchpad.net/bugs/1180044
 https://review.openstack.org/43270 readiness:needs review
 * Medium https://bugs.launchpad.net/bugs/1226425
 https://review.openstack.org/46895 readiness:needs revision
 * Low https://bugs.launchpad.net/bugs/1215958
 https://review.openstack.org/43665 readiness:needs review
 * Low https://bugs.launchpad.net/bugs/1226450
 https://review.openstack.org/46896 

[openstack-dev] Nominations for OpenStack PTLs (Project Technical Leads) are now open

2013-09-20 Thread Anita Kuno
Nominations for OpenStack PTLs (Project Technical Leads) are now open and 
will remain open until 23:59 UTC September 26, 2013


To announce your candidacy please start a new 
openstack-dev@lists.openstack.org mailing list thread for yourself with 
the project name as a tag, example [Glance] PTL Candidacy.


I'm sure the electorate would appreciate a bit of information about why 
you would make a great PTL and the direction you would like to take the 
project, though it is not required for eligibility.


In order to be an eligible candidate (and be allowed to vote) on a given 
PTL election, you need to have contributed an accepted patch to one of 
the corresponding program's projects during the Grizzly-Havana timeframe 
(from 2012-09-27 to 2013-09-26, 23:59 PST).


We need to elect PTLs for 19 projects this round:

 *   Compute (Nova) - one position

 *   Object Storage (Swift) - one position

 *   Image Service (Glance) - one position

 *   Identity (Keystone) - one position

 *   Dashboard (Horizon) - one position

 *   Networking (Neutron) - one position

 *   Block Storage (Cinder) - one position

 *   Metering/Monitoring (Ceilometer) - one position

 *   Orchestration (Heat) - one position

 *   Database Service (Trove) - one position

 *   Bare metal (Ironic) - one position

 *   Queue service (Marconi) - one position

 *   Common Libraries (Oslo) - one position

 *   Infrastructure - one position

 *   Documentation - one position

 *   Quality Assurance (QA) - one position

 *   Deployment (TripleO) - one position

 *   Devstack (DevStack) - one position

 *   Release cycle management  - one position


Additional information about the nomination process can be found here: 
https://wiki.openstack.org/wiki/PTL_Elections_Fall_2013


As I confirm candidates, I will add their name to the list of confirmed 
candidates on the above wiki page.


Elections will begin on September 27, 2013 (as soon as I get each 
election set up I will start it, it will probably be a staggered start) 
and run until at least 11:59 UTC October 3, 2013.


Happy running,
Anita Kuno (anteaya)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Federated Horizon design

2013-09-20 Thread Toshiyuki Hayashi
Hi,

Regarding the UX/UI question, you can also ask in the OpenStack UX G+ community:
https://plus.google.com/communities/100954512393463248122
Also, it might be better to provide context information such as:
- Purpose of the UI design
- User goal
- Related BP or something
That information would help us advise efficiently.

Regards,
Toshi

On Fri, Sep 20, 2013 at 8:04 AM, D.Selvaraj ds...@kent.ac.uk wrote:
 Hi Stackers,

 I kindly request you to look at my attached designs towards the
 goal of modifying Horizon for federated access. I would be so thankful for
 your advice and comments on them.

 Thank you

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Toshiyuki Hayashi
NTT Innovation Institute Inc.
Tel:650-579-0800 ex4292
mail:haya...@ntti3.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Is Project ID always a number?

2013-09-20 Thread Kurt Griffiths
Does anyone have a feel for what Project ID (AKA Tenant ID) looks like in
practice? Is it always a number, or do some deployments use UUIDs or
something else?
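
For context, a hedged note: Keystone's default backend generates project IDs
as uuid.uuid4().hex (32 hex characters), but the API does not guarantee a
format and other backends or older deployments may use different strings, so
it is safest to treat the ID as an opaque string. A small sketch of that
assumption (the helper name is hypothetical):

    # Treat project/tenant IDs as opaque strings; only *detect* the common
    # uuid4().hex form, never require it.
    import uuid

    def looks_like_default_keystone_id(project_id):
        # True for the common 32-hex-digit form; False can still be a valid ID.
        try:
            return uuid.UUID(hex=project_id).hex == project_id.lower()
        except (TypeError, ValueError, AttributeError):
            return False

    print(looks_like_default_keystone_id(uuid.uuid4().hex))  # True
    print(looks_like_default_keystone_id("demo"))            # False, yet possibly valid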

@kgriffs



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [pci passthrough] Is extra_info broken?

2013-09-20 Thread David Kang


 I tried to put some information in the extra_info field, but it causes 
nova-compute to crash.
I hacked the pci_whitelist.py file to let it pass by adding an extra_info field to 
_WHITELIST_SCHEMA.
After that nova-compute does not crash, but extra_info is not stored in the DB. 
I want to use it to store the path to the device file that 
corresponds to the PCI device.
 
 Is it a bug?

 Thanks,
 David

-- 
--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Candidacy for Compute (Nova) PTL

2013-09-20 Thread Ravi Chunduru
+1


On Fri, Sep 20, 2013 at 10:12 AM, Russell Bryant rbry...@redhat.com wrote:

 Greetings,

 I would like to run for the OpenStack Compute (Nova) PTL position.

 I am the current Nova PTL.  I have been working on OpenStack since late
2011 and have been primarily focused on Nova since then.  I would
 love to continue in this position to help drive the Nova project
 forward.

 Quite a bit of work goes into the PTL position beyond specific technical
 work:

 https://wiki.openstack.org/wiki/PTLguide

 Most of what I will focus on in this message are the things that I have
 done and would like to do that go beyond technical topics.


 * Havana

 The Havana release is the first release where I served as the Nova PTL.
 I feel that Havana has been a successful development cycle for us so
 far.  You can find record of our progress toward the Havana release on
 each of the milestone pages:

 https://launchpad.net/nova/+milestone/havana-1
 https://launchpad.net/nova/+milestone/havana-2
 https://launchpad.net/nova/+milestone/havana-3
 https://launchpad.net/nova/+milestone/havana-rc1

 As the PTL, I led the creation of the design summit schedule for the
 Nova track, as well as the majority of the blueprint handling for the
 release roadmap.

 For Icehouse, I expect this process to be largely the same, but I would
 like to involve more people in prioritizing design summit sessions, as
 well as reviewing blueprints.


 * Code Review Process

 The PTL of Nova is certainly not the only technical leader in
 the project.  There is a team of technical leaders, the nova-core team,
 responsible for processing the high volume of code review requests we
 receive.  A key responsibility of the Nova PTL is to ensure that the
 nova-core team has the right people on it at the right time.

 To that end, I have started doing some things in the last release cycle
 to help with managing the core team.  The first is starting to document
 core team expectations:

 https://wiki.openstack.org/wiki/Nova/CoreTeam

 The second is gathering metrics around the core activity of the team:
 code reviews:

 http://russellbryant.net/openstack-stats/nova-reviewers-30.txt
 http://russellbryant.net/openstack-stats/nova-reviewers-90.txt
 http://russellbryant.net/openstack-stats/nova-reviewers-180.txt

 The Nova project has seen an ongoing increase in contributions.  As a
 result, there have been some complaints about review times.  It has been
 a priority of mine to get a handle on this from a project management
 perspective.  The first step here was to start collecting metrics on
 review times, which you can find here:

 http://russellbryant.net/openstack-stats/nova-openreviews.html

 Using these metrics, I can also compare how the Nova project's review
 team is doing compared to other OpenStack projects.

 http://russellbryant.net/openstack-stats/all-openreviews.html

 Now that we have this information, we have been able to set goals and
 make changes based on real data.

 You can find the code for generating all of these stats here:

 http://git.openstack.org/cgit/openstack-infra/reviewstats

 As for the future, I think there are some obvious improvements that
 could be made.  The biggest is that I think there is room to add more
 people to the review team when the opportunity presents itself.  I would
 also like to have another discussion about the future of compute
 drivers, and whether maintainers of some drivers would rather have their
 own repository.  I expect to have a design summit session on this topic:

 http://summit.openstack.org/cfp/details/4


 * Sub-project Leadership

 One thing that is very apparent to me is that given the Nova project's
 size, I think there are too many things for one person to carry.  There
 are multiple great people in the Nova community that step up regularly
 to make things happen.  I think we should start looking at creating some
 official sub-project leadership roles.  Here are some ideas with some
 potential responsibilities:

  - python-novaclient lead
- have a vision for python-novaclient
- review all novaclient patches
- ensure that novaclient blueprints get reviewed and bugs are triaged
- build and lead a group of people interested in novaclient

  - nova bug triage lead
- ensure bugs are triaged
- ensure the highest priority bugs are discussed, either on the
  mailing list or in the weekly nova meeting
- generate metrics on nova bugs
- set goals for nova bug processing, and track our progress against
  the goals using the generated metrics
- build and lead a group of people interested in helping nova by
  doing bug triage

  - nova-drivers team
- (This actually already exists, but I think we could formalize
  responsibilities and make more use of it)
- responsible for reviewing nova blueprints
- ensure all blueprints have appropriate design documentation and fit
  within the 

Re: [openstack-dev] [Nova] Candidacy for Compute (Nova) PTL

2013-09-20 Thread Boris Pavlovic
+1


On Fri, Sep 20, 2013 at 9:21 PM, Shake Chen shake.c...@gmail.com wrote:

 +1


 On Sat, Sep 21, 2013 at 1:15 AM, Ravi Chunduru ravi...@gmail.com wrote:

 +1


 On Fri, Sep 20, 2013 at 10:12 AM, Russell Bryant rbry...@redhat.comwrote:

 Greetings,

 I would like to run for the OpenStack Compute (Nova) PTL position.

 I am the current Nova PTL.  I have been working on OpenStack since late
 2011 and have been primarily focused on Nova since then.  I would
 love to continue in this position to help drive the Nova project
 forward.

 Quite a bit of work goes into the PTL position beyond specific technical
 work:

 https://wiki.openstack.org/wiki/PTLguide

 Most of what I will focus on in this message are the things that I have
 done and would like to do that go beyond technical topics.


 * Havana

 The Havana release is the first release where I served as the Nova PTL.
 I feel that Havana has been a successful development cycle for us so
 far.  You can find record of our progress toward the Havana release on
 each of the milestone pages:

 https://launchpad.net/nova/+milestone/havana-1
 https://launchpad.net/nova/+milestone/havana-2
 https://launchpad.net/nova/+milestone/havana-3
 https://launchpad.net/nova/+milestone/havana-rc1

 As the PTL, I led the creation of the design summit schedule for the
 Nova track, as well as the majority of the blueprint handling for the
 release roadmap.

 For Icehouse, I expect this process to be largely the same, but I would
 like to involve more people in prioritizing design summit sessions, as
 well as reviewing blueprints.


 * Code Review Process

 The PTL of Nova is certainly not the only technical leader in
 the project.  There is a team of technical leaders, the nova-core team,
 responsible for processing the high volume of code review requests we
 receive.  A key responsibility of the Nova PTL is to ensure that the
 nova-core team has the right people on it at the right time.

 To that end, I have started doing some things in the last release cycle
 to help with managing the core team.  The first is starting to document
 core team expectations:

 https://wiki.openstack.org/wiki/Nova/CoreTeam

 The second is gathering metrics around the core activity of the team:
 code reviews:

 http://russellbryant.net/openstack-stats/nova-reviewers-30.txt
 http://russellbryant.net/openstack-stats/nova-reviewers-90.txt
 http://russellbryant.net/openstack-stats/nova-reviewers-180.txt

 The Nova project has seen an ongoing increase in contributions.  As a
 result, there have been some complaints about review times.  It has been
 a priority of mine to get a handle on this from a project management
 perspective.  The first step here was to start collecting metrics on
 review times, which you can find here:

 http://russellbryant.net/openstack-stats/nova-openreviews.html

 Using these metrics, I can also compare how the Nova project's review
 team is doing compared to other OpenStack projects.

 http://russellbryant.net/openstack-stats/all-openreviews.html

 Now that we have this information, we have been able to set goals and
 make changes based on real data.

 You can find the code for generating all of these stats here:

 http://git.openstack.org/cgit/openstack-infra/reviewstats

 As for the future, I think there are some obvious improvements that
 could be made.  The biggest is that I think there is room to add more
 people to the review team when the opportunity presents itself.  I would
 also like to have another discussion about the future of compute
 drivers, and whether maintainers of some drivers would rather have their
 own repository.  I expect to have a design summit session on this topic:

 http://summit.openstack.org/cfp/details/4


 * Sub-project Leadership

 One thing that is very apparent to me is that given the Nova project's
 size, I think there are too many things for one person to carry.  There
 are multiple great people in the Nova community that step up regularly
 to make things happen.  I think we should start looking at creating some
 official sub-project leadership roles.  Here are some ideas with some
 potential responsibilities:

  - python-novaclient lead
- have a vision for python-novaclient
- review all novaclient patches
- ensure that novaclient blueprints get reviewed and bugs are triaged
- build and lead a group of people interested in novaclient

  - nova bug triage lead
- ensure bugs are triaged
- ensure the highest priority bugs are discussed, either on the
  mailing list or in the weekly nova meeting
- generate metrics on nova bugs
- set goals for nova bug processing, and track our progress against
  the goals using the generated metrics
- build and lead a group of people interested in helping nova by
  doing bug triage

  - nova-drivers team
- (This actually already exists, but I think we could formalize
  responsibilities and make 

Re: [openstack-dev] Fwd: [Openstack-devel] PGP key signing party during the HK summit

2013-09-20 Thread Clint Byrum
Excerpts from Thomas Goirand's message of 2013-09-19 23:33:47 -0700:
 
 Hi,
 
 Has anyone thought about having a PGP key signing party during the
 summit? Guys from the Linux kernel thought it was useless, but after the
 hack of kernel.org, they started to understand it was useful, and now
 they do have a web of trust. As a package maintainer, I would very
 much like to have a signing event during the next HK summit, and collect
 signatures so that I can check the pgp signed tags, which to my very
 satisfaction, starts to appear for every package release (not sure if
 this comes from the fact I've been annoying everyone about it in this
 list, though that's a very good thing).

I have been to two such events and they are extremely beneficial for growing
the PGP web of trust.

http://www.cryptnet.net/fdp/crypto/keysigning_party/en/keysigning_party.html#overview

Given the size of the summit, I suggest the hash-based method.
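
Roughly, the organizer publishes one file with every participant's key
fingerprint in advance; at the party everyone just confirms that they hold an
identical copy of that file (compare a digest read aloud) and that their own
entry in it is correct, and the actual signing happens afterwards. A quick
Python sketch of the shared-list check, with a placeholder filename:

    # Minimal sketch of the shared-list step in a hash-based keysigning
    # party; the filename is a placeholder, not an agreed convention.
    import hashlib

    def list_digest(path="ksp-fingerprints.txt"):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Matching digests across participants mean everyone is checking the
    # same list of fingerprints.
    print(list_digest())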

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is Project ID always a number?

2013-09-20 Thread Steve Martinelli

Unless I'm mistaken, the project ID should be a UUID.


Thanks,

_
Steve Martinelli | A4-317 @ IBM Toronto Software Lab
Software Developer - OpenStack
Phone: (905) 413-2851
E-Mail: steve...@ca.ibm.com



From:   Kurt Griffiths kurt.griffi...@rackspace.com
To: OpenStack Dev openstack-dev@lists.openstack.org,
Date:   09/20/2013 01:18 PM
Subject:[openstack-dev] Is Project ID always a number?



Does anyone have a feel for what Project ID (AKA Tenant ID) looks like in
practice? Is it always a number, or do some deployments use UUIDs or
something else?

@kgriffs



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Candidacy for Compute (Nova) PTL

2013-09-20 Thread Russell Bryant
Greetings,

I would like to run for the OpenStack Compute (Nova) PTL position.

I am the current Nova PTL.  I have been working on OpenStack since late
2011 and have been primarily focused on Nova since then.  I would
love to continue in this position to help drive the Nova project
forward.

Quite a bit of work goes into the PTL position beyond specific technical
work:

https://wiki.openstack.org/wiki/PTLguide

Most of what I will focus on in this message are the things that I have
done and would like to do that go beyond technical topics.


* Havana

The Havana release is the first release where I served as the Nova PTL.
I feel that Havana has been a successful development cycle for us so
far.  You can find a record of our progress toward the Havana release on
each of the milestone pages:

https://launchpad.net/nova/+milestone/havana-1
https://launchpad.net/nova/+milestone/havana-2
https://launchpad.net/nova/+milestone/havana-3
https://launchpad.net/nova/+milestone/havana-rc1

As the PTL, I led the creation of the design summit schedule for the
Nova track, as well as the majority of the blueprint handling for the
release roadmap.

For Icehouse, I expect this process to be largely the same, but I would
like to involve more people in prioritizing design summit sessions, as
well as reviewing blueprints.


* Code Review Process

The PTL of Nova is certainly not the only technical leader in
the project.  There is a team of technical leaders, the nova-core team,
responsible for processing the high volume of code review requests we
receive.  A key responsibility of the Nova PTL is to ensure that the
nova-core team has the right people on it at the right time.

To that end, I have started doing some things in the last release cycle
to help with managing the core team.  The first is starting to document
core team expectations:

https://wiki.openstack.org/wiki/Nova/CoreTeam

The second is gathering metrics around the team's core activity, code reviews:

http://russellbryant.net/openstack-stats/nova-reviewers-30.txt
http://russellbryant.net/openstack-stats/nova-reviewers-90.txt
http://russellbryant.net/openstack-stats/nova-reviewers-180.txt

The Nova project has seen an ongoing increase in contributions.  As a
result, there have been some complaints about review times.  It has been
a priority of mine to get a handle on this from a project management
perspective.  The first step here was to start collecting metrics on
review times, which you can find here:

http://russellbryant.net/openstack-stats/nova-openreviews.html

Using these metrics, I can also compare how the Nova project's review
team is doing compared to other OpenStack projects.

http://russellbryant.net/openstack-stats/all-openreviews.html

Now that we have this information, we have been able to set goals and
make changes based on real data.

You can find the code for generating all of these stats here:

http://git.openstack.org/cgit/openstack-infra/reviewstats

As for the future, I think there are some obvious improvements that
could be made.  The biggest is that I think there is room to add more
people to the review team when the opportunity presents itself.  I would
also like to have another discussion about the future of compute
drivers, and whether maintainers of some drivers would rather have their
own repository.  I expect to have a design summit session on this topic:

http://summit.openstack.org/cfp/details/4


* Sub-project Leadership

One thing that is very apparent to me is that, given the Nova project's
size, there are too many things for one person to carry.  There
are multiple great people in the Nova community that step up regularly
to make things happen.  I think we should start looking at creating some
official sub-project leadership roles.  Here are some ideas with some
potential responsibilities:

 - python-novaclient lead
   - have a vision for python-novaclient
   - review all novaclient patches
   - ensure that novaclient blueprints get reviewed and bugs are triaged
   - build and lead a group of people interested in novaclient

 - nova bug triage lead
   - ensure bugs are triaged
   - ensure the highest priority bugs are discussed, either on the
 mailing list or in the weekly nova meeting
   - generate metrics on nova bugs
   - set goals for nova bug processing, and track our progress against
 the goals using the generated metrics
   - build and lead a group of people interested in helping nova by
 doing bug triage

 - nova-drivers team
   - (This actually already exists, but I think we could formalize
 responsibilities and make more use of it)
   - responsible for reviewing nova blueprints
   - ensure all blueprints have appropriate design documentation and fit
 within the overall project vision
   - regularly discuss blueprints with each other and the overall nova
 community via the mailing list and weekly meeting to ensure Nova
 has 

Re: [openstack-dev] [Nova] Candidacy for Compute (Nova) PTL

2013-09-20 Thread Anita Kuno
Please note that the +1's attached to the candidate announcement are not 
considered votes.


Voting begins starting September 27th.

I can appreciate that folks want to demonstrate their support for their 
candidate of choice.


Since we have 19 positions and I am expecting multiple candidate 
announcements for each, I will ask that subsequent emailers of the +1 
variety express their support in other ways, including the upcoming election.


I need to ensure I don't miss important email traffic.

My thanks in advance for your understanding,
Anita.

On 13-09-20 01:26 PM, Boris Pavlovic wrote:

+1


On Fri, Sep 20, 2013 at 9:21 PM, Shake Chen shake.c...@gmail.com wrote:


+1


On Sat, Sep 21, 2013 at 1:15 AM, Ravi Chunduru ravi...@gmail.com wrote:

+1


On Fri, Sep 20, 2013 at 10:12 AM, Russell Bryant rbry...@redhat.com wrote:

Greetings,

I would like to run for the OpenStack Compute (Nova) PTL
position.

I am the current Nova PTL.  I have been working on
OpenStack since late
2011 and have been primarily focused on Nova since
then.  I would
love to continue in this position to help drive the Nova
project
forward.

Quite a bit of work goes into the PTL position beyond
specific technical
work:

https://wiki.openstack.org/wiki/PTLguide

Most of what I will focus on in this message are the
things that I have
done and would like to do that go beyond technical topics.


* Havana

The Havana release is the first release where I served as
the Nova PTL.
I feel that Havana has been a successful development cycle
for us so
far.  You can find record of our progress toward the
Havana release on
each of the milestone pages:

https://launchpad.net/nova/+milestone/havana-1
https://launchpad.net/nova/+milestone/havana-2
https://launchpad.net/nova/+milestone/havana-3
https://launchpad.net/nova/+milestone/havana-rc1

As the PTL, I led the creation of the design summit
schedule for the
Nova track, as well as the majority of the blueprint
handling for the
release roadmap.

For Icehouse, I expect this process to be largely the
same, but I would
like to involve more people in prioritizing design summit
sessions, as
well as reviewing blueprints.


* Code Review Process

The PTL of Nova is certainly not the only technical leader in
the project.  There is a team of technical leaders, the
nova-core team,
responsible for processing the high volume of code review
requests we
receive.  A key responsibility of the Nova PTL is to
ensure that the
nova-core team has the right people on it at the right time.

To that end, I have started doing some things in the last
release cycle
to help with managing the core team.  The first is
starting to document
core team expectations:

https://wiki.openstack.org/wiki/Nova/CoreTeam

The second is gathering metrics around the core activity
of the team:
code reviews:

http://russellbryant.net/openstack-stats/nova-reviewers-30.txt
http://russellbryant.net/openstack-stats/nova-reviewers-90.txt
http://russellbryant.net/openstack-stats/nova-reviewers-180.txt

The Nova project has seen an ongoing increase in
contributions.  As a
result, there have been some complaints about review
times.  It has been
a priority of mine to get a handle on this from a project
management
perspective.  The first step here was to start collecting
metrics on
review times, which you can find here:

http://russellbryant.net/openstack-stats/nova-openreviews.html

Using these metrics, I can also compare how the Nova
project's review
team is doing compared to other OpenStack projects.

http://russellbryant.net/openstack-stats/all-openreviews.html

Now that we have this information, we have been able to
set goals and
make changes based on real data.

You can find the code for generating all of these stats here:

http://git.openstack.org/cgit/openstack-infra/reviewstats

As for the future, I think there are some obvious
improvements that
could be 

[openstack-dev] [marconi] Agenda for Monday's meeting @ 1900 UTC

2013-09-20 Thread Kurt Griffiths
The Marconi project team holds a weekly meeting in #openstack-meeting-alt
on Mondays, 1600 UTC.

The next meeting is this coming Monday, Sept. 23. Everyone is welcome, but
please take a minute to review the wiki before attending for the first time:

  http://wiki.openstack.org/marconi

Proposed Agenda:

  * Review actions from last time
  * Review bugs
  * Status update on marconi-proxy
  * Audit and freeze HTTP v1 API
  * Open discussion (time permitting)

If you have additions to the agenda, please add them to the wiki and note
your IRC name so we can call on you during the meeting:

  http://wiki.openstack.org/Meetings/Marconi

Cheers,

Kurt G. (kgriffs)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Dan Wendlandt
I think the real problem here is that in Nova there are bug fixes that are
tiny and very important to a particular subset of the user population and
yet have been around for well over a month without getting a single core
review.

Take for example https://review.openstack.org/#/c/40298/ , which fixes an
important snapshot bug for the vmwareapi driver.  This was posted well over
a month ago on August 5th.  It is a solid patch, is 54 new/changed lines
including unit test enhancements.  The commit message clearly shows which
tempest tests it fixes.  It has been reviewed by many vmware reviewers with
+1s for a long time, but the patch just keeps having to be rebased as it
sits waiting for core reviewer attention.

To me, the high-level take away is that it is hard to get new contributors
excited about working on Nova when their well-written and well-targeted bug
fixes just sit there, getting no feedback and not moving closer to merging.
 The bug above was the developer's first patch to OpenStack and while he
hasn't complained a bit, I think the experience is far from the community
behavior that we need to encourage new, high-quality contributors from
diverse sources.  For Nova to succeed in its goals of being a platform
agnostic cloud layer, I think this is something we need a community
strategy to address and I'd love to see it as part of the discussion put
forward by those people nominating themselves as PTL.

Dan



On Fri, Sep 20, 2013 at 7:07 AM, Gary Kotton gkot...@vmware.com wrote:

 Hi,
 The following two patches are really important (they are really simple and
 have been around since beginning of August - they are rebased every couple
 of days):-
 - https://review.openstack.org/#/c/40298/ - Tempest snapshot fails
 - https://review.openstack.org/#/c/43994/ - Disk copy fails
 There are a number of patches that we need to base on top of these that
 are high/critical (depending on how one looks at it). They are namely:
 - https://review.openstack.org/#/c/46730/ - flavor root disk sizes are not
 honored
 - https://review.openstack.org/#/c/47503/ - disabling linked clone and
 cacheing of images
 - https://review.openstack.org/#/c/46231/ - VM resize
 Thanks
 Gary

 On 9/19/13 11:39 PM, Shawn Hartsock hartso...@vmware.com wrote:

 Greetings stackers!
 
 A quick mid-week update on the patches we're tracking for Havana-rc1.
 There was a bug in my vote counting code that I use to query votes. Some
 of the older patches were getting their votes counted wrong. Tracking the
 age of a submitted patchset (number of days since a patchset was
 posted) and the revision number helps spot these problems. I try to
 validate these reports by hand, but I do miss things on occasion. Let me
 know if I need to add or edit something.
 
 Ordered by priority:
 * High/Critical https://bugs.launchpad.net/bugs/1223709
 https://review.openstack.org/46027 readiness:ready for core
 * High/Critical https://bugs.launchpad.net/bugs/1216510
 https://review.openstack.org/43616 readiness:needs one more +2/approval
 * High/Critical https://bugs.launchpad.net/bugs/1226211
 https://review.openstack.org/46789 readiness:ready for core
 * High/Critical https://bugs.launchpad.net/bugs/1217541
 https://review.openstack.org/43621 readiness:needs review
 * High/High https://bugs.launchpad.net/bugs/1187853
 https://review.openstack.org/45349 readiness:ready for core
 * Medium/High https://bugs.launchpad.net/bugs/1190515
 https://review.openstack.org/33100 readiness:ready for core
 * High https://bugs.launchpad.net/bugs/1184807
 https://review.openstack.org/40298 readiness:ready for core
 * High https://bugs.launchpad.net/bugs/1214850
 https://review.openstack.org/43270 readiness:needs review
 * High https://bugs.launchpad.net/bugs/1226052
 https://review.openstack.org/46730 readiness:needs review
 * High https://bugs.launchpad.net/bugs/1226826
 https://review.openstack.org/47030 readiness:needs review
 * High https://bugs.launchpad.net/bugs/1225002
 https://review.openstack.org/41977 readiness:ready for core
 * High https://bugs.launchpad.net/bugs/1194018
 https://review.openstack.org/43641 readiness:ready for core
 * High https://bugs.launchpad.net/bugs/1171226
 https://review.openstack.org/43994 readiness:ready for core
 * Medium https://bugs.launchpad.net/bugs/1183654
 https://review.openstack.org/45203 readiness:needs revision
 * Medium https://bugs.launchpad.net/bugs/1223074
 https://review.openstack.org/45864 readiness:needs review
 * Medium https://bugs.launchpad.net/bugs/1199954
 https://review.openstack.org/46231 readiness:needs review
 * Medium https://bugs.launchpad.net/bugs/1222349
 https://review.openstack.org/45570 readiness:needs one more +2/approval
 * Medium https://bugs.launchpad.net/bugs/1216961
 https://review.openstack.org/43721 readiness:needs one more +2/approval
 * Medium https://bugs.launchpad.net/bugs/1215352
 https://review.openstack.org/43268 readiness:needs one more +2/approval
 * Medium 

[openstack-dev] [Marconi] PTL Candidacy

2013-09-20 Thread Kurt Griffiths
I would like to announce my candidacy for the Marconi PTL. I organized the
Marconi project in an unconference session at the Grizzly summit, with some
gracious help from Mark Atwood, who then connected me with Monty Taylor, who
got us set up on Gerrit and Launchpad.
I was also fortunate to connect early on with Flavio Percoco, who has been
my partner in crime ever since. The project would not have turned out
nearly as well without his many contributions, in terms of both ideas and
code.

Over the past year, I have strived to design and develop Marconi in the
open, inviting and respecting community feedback every step of the way.
I've had the pleasure of working with a really awesome team who shares my
values of openness and pragmatism. I look forward to leading the team to a
successful graduation of the project from incubation, and will continue to
promote a pragmatic, collaborative mindset within the OpenStack community.

See also the Marconi incubation wiki to learn a little about my background
and that of Marconi's contributors:

http://goo.gl/8r16xK

Cheers,

Kurt G. (kgriffs)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is Project ID always a number?

2013-09-20 Thread Dolph Mathews
On Fri, Sep 20, 2013 at 12:43 PM, Steve Martinelli steve...@ca.ibm.comwrote:

 Unless I'm mistaken, the project ID should be a UUID.

In our current implementation, it happens to be a UUID4 expressed in hex:

  $ python -c "import uuid; print uuid.uuid4().hex"

I believe it was an auto-incrementing integer in diablo. There's nothing
blocking an alternative implementation from using something else. Generally
speaking, it should be URL-friendly as-is, be globally unique, and
somewhere less than 255 chars in length.
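
To make that concrete, here's a rough sketch (not keystone code, just an
illustration of the guidance above; the helper names and checks are my own)
of generating an ID the way the current default does and verifying the loose
properties a consumer can rely on:

    import re
    import uuid

    def new_project_id():
        # What the current default produces: 32 hex characters.
        return uuid.uuid4().hex

    def looks_reasonable(project_id):
        # Global uniqueness can't be checked locally; this only verifies
        # the ID is URL-friendly as-is and under 255 characters.
        return (len(project_id) < 255 and
                re.match(r'^[A-Za-z0-9._~-]+$', project_id) is not None)

    pid = new_project_id()
    print(pid, looks_reasonable(pid))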




 Thanks,

 _
 Steve Martinelli | A4-317 @ IBM Toronto Software Lab
 Software Developer - OpenStack
 Phone: (905) 413-2851
 E-Mail: steve...@ca.ibm.com


 From: Kurt Griffiths kurt.griffi...@rackspace.com
 To: OpenStack Dev openstack-dev@lists.openstack.org,
 Date: 09/20/2013 01:18 PM
 Subject: [openstack-dev] Is Project ID always a number?
 --



 Does anyone have a feel for what Project ID (AKA Tenant ID) looks like in
 practice? Is it always a number, or do some deployments use UUIDs or
 something else?

 @kgriffs



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] PTL Candidacy

2013-09-20 Thread Dolph Mathews
Hello! I'd like to nominate myself once more as PTL for Keystone.

Since becoming PTL for Keystone last release, I've gained a new perspective
on the community which has changed my understanding of the PTL's role
rather dramatically from what I thought I was getting myself into (by which
I mean, I had no clue what to expect!).

I'm now hoping to apply what I've learned towards making Icehouse a
success. Namely, my involvement has shifted towards supporting our
outstanding community of contributors in any way that I can so that they
can be as productive as possible. Never mind the PTL, Keystone's success
would not be possible without the community behind it.

As I'm sure many of you know, my primary interests in the project center
first on stability and user experience, for developers, deployers and end
users alike. That means I care a lot about solid documentation,
self-consistent APIs, helpful error messages, intuitive code and logical
tests.

New features usually take a reduced priority with me, but I'm super
enthused by the community's growing interest in federation, and I'm
particularly looking forward to help solve a few of those use cases during
Icehouse.

Thank you for making Havana a success! I'll see you in Icehouse,

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is Project ID always a number?

2013-09-20 Thread Kurt Griffiths
Thanks guys for the history on this. Very useful! I just wanted to make sure I 
didn't make any invalid assumptions when dealing with IDs in Marconi.

From: Dolph Mathews dolph.math...@gmail.com
Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
Date: Friday, September 20, 2013 1:05 PM
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Is Project ID always a number?


On Fri, Sep 20, 2013 at 12:43 PM, Steve Martinelli steve...@ca.ibm.com wrote:

Unless I'm mistaken, the project ID should be a UUID.

In our current implementation, it happens to be a UUID4 expressed in hex:

  $ python -c "import uuid; print uuid.uuid4().hex"

I believe it was an auto-incrementing integer in diablo. There's nothing 
blocking an alternative implementation from using something else. Generally 
speaking, it should be URL-friendly as-is, be globally unique, and somewhere 
less than 255 chars in length.



Thanks,

_
Steve Martinelli | A4-317 @ IBM Toronto Software Lab
Software Developer - OpenStack
Phone: (905) 413-2851
E-Mail: steve...@ca.ibm.com


From: Kurt Griffiths kurt.griffi...@rackspace.com
To: OpenStack Dev openstack-dev@lists.openstack.org,
Date: 09/20/2013 01:18 PM
Subject: [openstack-dev] Is Project ID always a number?





Does anyone have a feel for what Project ID (AKA Tenant ID) looks like in
practice? Is it always a number, or do some deployments use UUIDs or
something else?

@kgriffs



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Openstack-devel] PGP key signing party during the HK summit

2013-09-20 Thread Jeremy Stanley
On 2013-09-20 10:47:10 -0700 (-0700), Clint Byrum wrote:
[...]
 Also if we are auto-signing anything, the infra team can sign the
 key for the auto-signer, so we can also secure any mirrored copies of
 automatically built artifacts against server-side tampering.

Yes, and to that end I've done a little brainstorming in updates to
https://launchpad.net/bugs/1118469 for a phased approach to possibly
implementing some of these improvements on the infra/release
automation side of things.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] question about stack updates, instance groups and wait conditions

2013-09-20 Thread Christopher Armstrong
Hello Simon! I've put responses below.

On Tue, Sep 17, 2013 at 7:57 AM, Simon Pasquier simon.pasqu...@bull.net
wrote:
 Hello,

 I'm testing stack updates with instance group and wait conditions and I'd
 like to get feedback from the Heat community.

 My template declares an instance group resource with size = N and a wait
 condition resource with count = N (N being passed as a parameter of the
 template). Each group's instance is calling cfn-signal (with a different
 id!) at the end of the user data script and my stack creates with no
error.

 Now when I update my stack to run N+X instances, the instance group gets
 updated with size=N+X but since the wait condition is deleted and
recreated,
 the count value should either be updated to X or my existing instances
 should re-execute cfn-signal.

This is a pretty interesting scenario; I don't think we have a very good
solution for it yet.

 To cope with this situation, I've found 2 options:
 1/ declare 2 parameters in my template: nb of instances (N for creation,
N+X
 for update) and count of wait conditions (N for creation, X for update).
See
 [1] for the details.
 2/ declare only one parameter in my template (the size of the group) and
 leverage cfn-hup on the existing instances to re-execute cfn-signal. See
[2]
 for the details.

 Solution 1 is not really user-friendly, and I found solution 2 a bit
 complicated. Does anybody know a simpler way to achieve the same result?


I definitely think #1 is better than #2, but you're right, it's also not
very nice.

I'm kind of confused about your examples though, because you don't show
anything that depends on ComputeReady in your template. I guess I can
imagine some scenarios, but it's not very clear to me how this works. It'd
be nice to make sure the new autoscaling solution that we're working on
will support your case in a nice way, but I think we need some more
information about what you're doing. The only time this would have an
effect is if there's another resource depending on the ComputeReady *that's
also being updated at the same time*, because the only effect that a
dependency has is to wait until it is met before performing create, update,
or delete operations on other resources. So I think it would be nice to
understand your use case a little bit more before continuing discussion.

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-20 Thread Christopher Armstrong
Hi Mike,

I have a *slightly* better idea of the kind of stuff you're talking about,
but I think it would really help if you could include some concrete
real-world use cases and describe why a holistic scheduler inside of Heat
is necessary for solving them.


On Fri, Sep 20, 2013 at 2:13 AM, Mike Spreitzer mspre...@us.ibm.com wrote:

 I have written a new outline of my thoughts, you can find it at
 https://docs.google.com/document/d/1RV_kN2Io4dotxZREGEks9DM0Ih_trFZ-PipVDdzxq_E

 It is intended to stand up better to independent study.  However, it is
 still just an outline.  I am still learning about stuff going on in
 OpenStack, and am learning and thinking faster than I can write.  Trying to
 figure out how to cope.

 Regards,
 Mike
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Does Heat support checkpointing for guest application

2013-09-20 Thread Steven Dake

On 09/13/2013 02:18 PM, Qing He wrote:


All,

I'm wondering if Heat provides a service for checkpointing the guest 
application for HA/redundancy, similar to what 
corosync/pacemaker/openais provide for bare metal applications.


Thanks,


Qing



Qing,

Heat is an orchestration framework, whereas corosync is a distributed 
data transfer service.  I think Marconi would better solve the problem 
you have.


http://wiki.openstack.org/marconi

Regards
-steve



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] cross-stack references

2013-09-20 Thread Steven Dake

On 09/18/2013 12:53 PM, Mike Spreitzer wrote:
My question is about stacks that are not nested.  Suppose, for 
example, that I create a stack that implements a shared service. 
 Later I create a separate stack that uses that shared service.  When 
creating that client stack, I would like to have a way of talking 
about its relationships with the service stack.


Thanks,
Mike


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
You could use nova scheduling features to collocate the data in one 
area.  Try using NovaSchedulerHints as a property of the 
OS::Nova::Server resource.  I am not entirely sure how to set up the nova 
scheduler hints, but I expect google would find some hints :)
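
For what it's worth, you can also experiment with the same hints outside of a
template via python-novaclient; a rough, untested sketch (credentials, image
and flavor IDs and the existing server UUID are placeholders, and which hints
work depends on the scheduler filters your deployment enables, e.g.
SameHostFilter for 'same_host'):

    # Rough sketch only -- every name and ID below is a placeholder.
    from novaclient.v1_1 import client

    nova = client.Client("myuser", "mypassword", "myproject",
                         "http://keystone.example.com:5000/v2.0")

    # Ask the scheduler to land the new instance on the same host as a
    # server that already belongs to the shared-service stack.
    nova.servers.create(
        name="client-stack-server",
        image="IMAGE_UUID",
        flavor="FLAVOR_ID",
        scheduler_hints={"same_host": ["EXISTING_SERVER_UUID"]},
    )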


Regards
-steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Questions about plans for heat wadls moving forward

2013-09-20 Thread Steven Dake

On 09/13/2013 01:21 PM, Anne Gentle wrote:




On Fri, Sep 13, 2013 at 1:53 PM, Mike Asthalter 
mike.asthal...@rackspace.com wrote:


Hi Anne,

I want to make sure I've understood the ramifications of your
statement about content sharing.

So for now, until the infrastructure team provides us with a
method to share content between repos, the only way to share the
content from the orchestration wadl with the api-ref
doc 
(https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/docbkx/api-ref.xml)
is to manually copy the content from the orchestration wadl to the
original heat wadl and then use that for the shared content. So we
will not delete the original heat wadl until that new method of
content sharing is in place. Is this correct?


Hi Mike,
It sounds like the dev team is fine with deleting that original heat 
WADL and only maintaining one from here forward.


The way they will control Icehouse edits to the heat WADL that 
shouldn't yet be displayed to end users is to use the Work In 
Progress button on review.openstack.org 
http://review.openstack.org. When a patch is marked WIP, you can't 
merge it.


So, you can safely delete the original Heat WADL and then from your 
dev guides, if you want to include a WADL, you can point to the one in 
the api-site repository. We now have a mirror of the github.com 
http://github.com repository at git.openstack.org 
http://git.openstack.org that gives you access to the WADL in the 
api-site repository at all times. I can walk you through building the 
URL that points to the WADL file.




Anne,

Sorry for delay in response - I've been traveling.  I will submit a 
change to remove the wadl from the heat repo since  the api-site is 
finished.


Regards
-steve

What we also need to build is logic in the build jobs so that any time 
the api-site WADL is updated, your dev guide is also updated. This is 
done in the Jenkins job in 
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config/api-jobs.yaml. 
I can either submit this patch for you, or I'll ask Steve or Zane to 
do so.


Hope this helps -

Anne


Thanks!

Mike

From: Anne Gentle annegen...@justwriteclick.com
Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: Thursday, September 12, 2013 11:32 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] Questions about plans for heat
wadls moving forward




On Thu, Sep 12, 2013 at 10:41 PM, Monty Taylor
mord...@inaugust.com wrote:



On 09/12/2013 04:33 PM, Steve Baker wrote:
 On 09/13/2013 08:28 AM, Mike Asthalter wrote:
 Hello,

 Can someone please explain the plans for our 2 wadls moving
forward:

   * wadl in original heat
 repo:

https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/wadls/heat-api/src/heat-api-1.0.wadl


   * wadl in api-site
 
repo:https://github.com/openstack/api-site/blob/master/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl

 The original intention was to delete the heat wadl when the
api-site one
 became merged.


Sounds good.

 1. Is there a need to maintain 2 wadls moving forward, with the wadl
 in the original heat repo containing calls that may not be
 implemented, and the wadl in the api-site repo containing
implemented
 calls only?

 Anne Gentle advises as follows in regard to these 2 wadls:

 I'd like the WADL in api-site repo to be user-facing.
The other
 WADL can be truth if it needs to be a specification
that's not yet
 implemented. If the WADL in api-site repo is true and
implemented,
 please just maintain one going forward.


 2. If we maintain 2 wadls, what are the consequences
(gerrit reviews,
 docs out of sync, etc.)?

 3. If we maintain only the 1 orchestration wadl, how do we
want to
 pull in the wadl content to the api-ref doc


(https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/docbkx/api-ref.xml)
 from the orchestration wadl in the api-site repo: subtree merge, 
other?




Thanks Mike for asking these questions.

I've 

Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Russell Bryant
On 09/20/2013 04:11 PM, Dan Wendlandt wrote:
 Hi Russell, 
 
 Thanks for the detailed thoughts.  Comments below,
 
 Dan
 
 
 On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com wrote:
 
 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
  I think the real problem here is that in Nova there are bug fixes that
  are tiny and very important to a particular subset of the user
  population and yet have been around for well over a month without
  getting a single core review.
 
  Take for example https://review.openstack.org/#/c/40298/ , which fixes
  an important snapshot bug for the vmwareapi driver.  This was posted
  well over a month ago on August 5th.  It is a solid patch, is 54
  new/changed lines including unit test enhancements.  The commit
 message
  clearly shows which tempest tests it fixes.  It has been reviewed by
  many vmware reviewers with +1s for a long time, but the patch just
 keeps
  having to be rebased as it sits waiting for core reviewer attention.
 
  To me, the high-level take away is that it is hard to get new
  contributors excited about working on Nova when their well-written and
  well-targeted bug fixes just sit there, getting no feedback and not
  moving closer to merging.  The bug above was the developer's first
 patch
  to OpenStack and while he hasn't complained a bit, I think the
  experience is far from the community behavior that we need to
  encourage
  new, high-quality contributors from diverse sources.  For Nova to
  succeed in its goals of being a platform agnostic cloud layer, I think
  this is something we need a community strategy to address and I'd love
  to see it as part of the discussion put forward by those people
  nominating themselves as PTL.
 
 I've discussed this topic quite a bit in the past.  In short, my
 approach has been:
 
 1) develop metrics
 2) set goals
 3) track progress against those goals
 
 The numbers I've been using are here:
 
 http://russellbryant.net/openstack-stats/nova-openreviews.html
 
 
 It's great that you have dashboards like this, very cool.  The
 interesting thing here is that the patches I am talking about are not
 waiting on reviews in general, but rather core review.  They have plenty
 of reviews from non-core folks who provide feedback (and they keep
 getting +1'd again as they are rebased every few days).  Perhaps a good
 additional metric to track would be items that have spent a lot of
 time without a negative review, but have not gotten any core reviews.  I
 think that is the root of the issue in the case of the reviews I'm
 talking about.  

The numbers I track do not reset the timer on any +1 (or +2, actually).
 It only resets when it gets a -1 or -2.  At that point, the review is
waiting for an update from a submitter.  Point is, getting a bunch of
+1s does not make it show up lower on the list.  Also, the 3rd list
(time since the last -1) does not reset on a rebase, so that's covered
in this tracking, too.
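
If it helps to see the bookkeeping spelled out, the wait time is computed
essentially like this (a simplified sketch, not the actual reviewstats code):

    # Simplified sketch of the "waiting since the last negative review"
    # metric described above; not the real reviewstats implementation.
    from datetime import datetime

    def waiting_since(submitted_at, events):
        # events: (timestamp, score) pairs.  Only a -1 or -2 hands the ball
        # back to the submitter and resets the clock; +1s, +2s and rebases
        # do not.
        last_negative = submitted_at
        for timestamp, score in sorted(events):
            if score < 0:
                last_negative = timestamp
        return last_negative

    submitted = datetime(2013, 8, 5)
    events = [(datetime(2013, 8, 10), +1),   # no reset
              (datetime(2013, 8, 20), -1),   # reset
              (datetime(2013, 9, 1), +1)]    # still no reset
    print((datetime(2013, 9, 20) - waiting_since(submitted, events)).days)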

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Dan Wendlandt
Hi Russell,

Thanks for the detailed thoughts.  Comments below,

Dan


On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com wrote:

 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
  I think the real problem here is that in Nova there are bug fixes that
  are tiny and very important to a particular subset of the user
  population and yet have been around for well over a month without
  getting a single core review.
 
  Take for example https://review.openstack.org/#/c/40298/ , which fixes
  an important snapshot bug for the vmwareapi driver.  This was posted
  well over a month ago on August 5th.  It is a solid patch, is 54
  new/changed lines including unit test enhancements.  The commit message
  clearly shows which tempest tests it fixes.  It has been reviewed by
  many vmware reviewers with +1s for a long time, but the patch just keeps
  having to be rebased as it sits waiting for core reviewer attention.
 
  To me, the high-level take away is that it is hard to get new
  contributors excited about working on Nova when their well-written and
  well-targeted bug fixes just sit there, getting no feedback and not
  moving closer to merging.  The bug above was the developer's first patch
  to OpenStack and while he hasn't complained a bit, I think the
  experience is far from the community behavior that we need to encourage
  new, high-quality contributors from diverse sources.  For Nova to
  succeed in its goals of being a platform agnostic cloud layer, I think
  this is something we need a community strategy to address and I'd love
  to see it as part of the discussion put forward by those people
  nominating themselves as PTL.

 I've discussed this topic quite a bit in the past.  In short, my
 approach has been:

 1) develop metrics
 2) set goals
 3) track progress against those goals

 The numbers I've been using are here:

 http://russellbryant.net/openstack-stats/nova-openreviews.html


It's great that you have dashboards like this, very cool.  The interesting
thing here is that the patches I am talking about are not waiting on
reviews in general, but rather core review.  They have plenty of reviews
from non-core folks who provide feedback (and they keep getting +1'd again
as they are rebased every few days).  Perhaps a good additional metric to
track would be items that have spent a lot of time without a negative
review, but have not gotten any core reviews.  I think that is the root of
the issue in the case of the reviews I'm talking about.




 There is also an aspect of karma involved in all of this that I think
  plays a big part when it comes
 to the vmware driver patches.  To get review attention, someone has to
 *want* to review it.  To get people to want to review your stuff, it can
 either be technology they are interested in, or they just want to help
 you out personally.

 There aren't many Nova developers that use the vmware driver, so you've
 got to work on building karma with the team.  That means contributing to
 other areas, which honestly, I haven't seen much of from this group.  I
 think that would go a long way.


I agree with you on the dynamics of review karma here (having dealt with
this same issue as a PTL of Quantum/Neutron).  My sense of what is going on
here is a bootstrapping issue.  If you have a developer who is brand new to
OpenStack, it would make sense that your first patch or two would be in an
area where you feel most comfortable (e.g., because you understand the
VMware APIs + constructs).  For new developers, who aren't pushing a big
new feature but are instead just fixing an existing bug, I would not have
guessed that a huge amount of karma is needed for a review.  People like
garyk and arosen who have more Nova experience are already doing Nova work
outside of the VMware driver (instance groups, neutron / security groups
code, many reviews throughout Nova, not to mention their work in Neutron)
and I expect that to be the path others follow as well.  Nonetheless, I
like the suggestion of having these new developers also try to gain
experience outside of the vmware driver and provide value to the wider
community... is
https://bugs.launchpad.net/nova/+bugs?field.tag=low-hanging-fruit still the
best place for them to look?  Doesn't seem to be much there that isn't
already in progress.




 I think if you review the history of vmware patches, you'll see my name
 as a reviewer on many (or perhaps most) of them, so I hope nobody thinks
 that I personally am trying stall things here.  This is just based on my
 experience across all contributions.


Indeed, and in fact, when I first wrote the email, I had called out you,
Dan Smith, and Michael Still as having been very helpful with VMware API
review, but then I was worried that I had left people off the list that had
also been helpful but weren't immediately coming to mind.  I guess I was
damned if I did and damned if I didn't :)

My goal in sending the email was not to 

Re: [openstack-dev] [Heat] Does Heat support checkpointing for guest application

2013-09-20 Thread Qing He
Steven,
Thanks! Will look into it.

Qing

From: Steven Dake [mailto:sd...@redhat.com]
Sent: Friday, September 20, 2013 12:48 PM
To: OpenStack Development Mailing List
Cc: Qing He
Subject: Re: [openstack-dev] [Heat] Does Heat support checkpointing for guest 
application

On 09/13/2013 02:18 PM, Qing He wrote:
All,
I'm wondering if Heat provides a service for checkpointing the guest application 
for HA/redundancy, similar to what corosync/pacemaker/openais provide for bare 
metal applications.

Thanks,

Qing

Qing,

Heat is an orchestration framework, whereas corosync is a distributed data 
transfer service.  I think Marconi would better solve the problem you have.

http://wiki.openstack.org/marconi

Regards
-steve





___

OpenStack-dev mailing list

OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-20 Thread Steven Dake

On 09/19/2013 04:35 AM, Mike Spreitzer wrote:
I'd like to try to summarize this discussion, if nothing else than to 
see whether I have correctly understood it.  There is a lot of 
consensus, but I haven't heard from Adrian Otto since he wrote some 
objections.  I'll focus on trying to describe the consensus; Adrian's 
concerns are already collected in a single message.  Or maybe this is 
already written in some one place?


The consensus is that there should be an autoscaling (AS) service that 
is accessible via its own API.  This autoscaling service can scale 
anything describable by a snippet of Heat template (it's not clear to 
me exactly what sort of syntax this is; is it written up anywhere?). 
 The autoscaling service is stimulated into action by a webhook call. 
 The user has the freedom to arrange calls on that webhook in any way 
she wants.  It is anticipated that a common case will be alarms raised 
by Ceilometer.  For more specialized or complicated logic, the user is 
free to wire up anything she wants to call the webhook.


An instance of the autoscaling service maintains an integer variable, 
which is the current number of copies of the thing being autoscaled. 
 Does the webhook call provide a new number, or +1/-1 signal, or ...?
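
 To make the two possibilities concrete, here is a toy sketch of that integer
 state under each interpretation; nothing here is a proposed API, and the
 names are made up purely for illustration:

     # Toy illustration only: class and method names are made up.
     class ScalingGroupState(object):
         def __init__(self, initial=1, minimum=1, maximum=10):
             self.count = initial
             self.minimum = minimum
             self.maximum = maximum

         def _clamp(self, value):
             return max(self.minimum, min(self.maximum, value))

         def webhook_absolute(self, desired):
             # Interpretation 1: the webhook supplies the new copy count.
             self.count = self._clamp(desired)
             return self.count

         def webhook_delta(self, step):
             # Interpretation 2: the webhook supplies a +N/-N adjustment.
             self.count = self._clamp(self.count + step)
             return self.count

     state = ScalingGroupState(initial=2)
     print(state.webhook_delta(+1))    # 3
     print(state.webhook_absolute(8))  # 8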


There was some discussion of a way to indicate which individuals to 
remove, in the case of decreasing the multiplier.  I suppose that 
would be an option in the webhook, and one that will not be exercised 
by Ceilometer alarms.


(It seems to me that there is not much auto in this autoscaling 
service --- it is really a scaling service driven by an external 
controller.  This is not a criticism, I think this is a good factoring 
--- but maybe not the best naming.)


The autoscaling service does its job by multiplying the heat template 
snippet (the thing to be autoscaled) by the current number of copies 
and passing this derived template to Heat to make it so.  As the 
desired number of copies changes, the AS service changes the derived 
template that it hands to Heat.  Most commentators argue that the 
consistency and non-redundancy of making the AS service use Heat 
outweigh the extra path-length compared to a more direct solution.


Heat will have a resource type, analogous to 
AWS::AutoScaling::AutoScalingGroup, through which the template author 
can request usage of the AS service.


OpenStack in general, and Heat in particular, need to be much better 
at traceability and debuggability; the AS service should be good at 
these too.


Have I got this right?



Mike,

The key contention to a separate API is that Heat already provides all 
of this today.  It is unclear to me how separating a specially designed 
autoscaling service from Heat would be of big benefit because we still 
need the launch configuration and properties of the autoscaling group to 
be specified.  A separate service may specify this in REST API calls, 
whereas heat specifies it in a template, but really, this isn't much of 
a difference from a user's view.  The user still has to pass all of the 
same data set in some way.  Then there is the issue of duplicated code 
for at-least handling the creation and removal of the server instances 
themselves, and the bootstrapping that occurs in the process.


Your thread suggests we remove the auto from the scaling - these two 
concepts seem tightly integrated to me, and my personal opinion is doing 
so is just a way to work around the need to pass all of the necessary 
autoscaling parameters in API calls.  IMO there is no real benefit in a 
simple scaling service that is directed by a third party software 
component (in the proposed case Heat, activated on Ceilometer Alarms).   
It just feels like it doesn't do enough to warrant an entire OpenStack 
program.  There is significant overhead in each OS program added and I 
don't see the gain for the pain.


I think these are the main points of contention at this point, with no 
clear consensus.


An alternate point in favor of a separate autoscaling component not 
mentioned in your post is that an API produces a more composable[1] 
system which brings many advantages.


Regards
-steve

[1] http://en.wikipedia.org/wiki/Composability


Thanks,
Mike


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] question about stack updates, instance groups and wait conditions

2013-09-20 Thread Clint Byrum
Excerpts from Simon Pasquier's message of 2013-09-17 05:57:58 -0700:
 Hello,
 
 I'm testing stack updates with instance group and wait conditions and 
 I'd like to get feedback from the Heat community.
 
 My template declares an instance group resource with size = N and a wait 
 condition resource with count = N (N being passed as a parameter of the 
 template). Each group's instance is calling cfn-signal (with a different 
 id!) at the end of the user data script and my stack creates with no error.
 
 Now when I update my stack to run N+X instances, the instance group gets 
 updated with size=N+X but since the wait condition is deleted and 
 recreated, the count value should either be updated to X or my existing 
 instances should re-execute cfn-signal.

That is a bug, the count should be something that can be updated in-place.

https://bugs.launchpad.net/heat/+bug/1228362

Once that is fixed, there will be an odd interaction between the groups
though. Any new instances will add to the count, but removed instances
will not decrease it. I'm not sure how to deal with that particular quirk.

That said, rolling updates will likely produce some changes to the way
updates interact with wait conditions so that we can let instances and/or
monitoring systems feed back when an instance is ready. That will also
help deal with the problem you are seeing.

In the meantime, cfn-hup is exactly what you want, and I see no problem
with re-running cfn-signal after an update to signal that the update
has applied.
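
For example, a cfn-hup hook (or a one-off script pushed out with the update)
could simply re-send the success signal for the instance. A rough sketch,
where the wait handle URL is a placeholder and the exact cfn-signal options
should be double-checked against your heat-cfntools version:

    # Rough sketch only -- the handle URL is a placeholder.
    import socket
    import subprocess

    WAIT_HANDLE_URL = "https://heat.example.com/waitcondition/PRESIGNED"

    subprocess.check_call([
        "cfn-signal",
        "-e", "0",                   # exit code 0 == success
        "-i", socket.gethostname(),  # keep the per-instance id unique
        WAIT_HANDLE_URL,
    ])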

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Joe Gordon
On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com wrote:

 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
  I think the real problem here is that in Nova there are bug fixes that
  are tiny and very important to a particular subset of the user
  population and yet have been around for well over a month without
  getting a single core review.
 
  Take for example https://review.openstack.org/#/c/40298/ , which fixes
  an important snapshot bug for the vmwareapi driver.  This was posted
  well over a month ago on August 5th.  It is a solid patch, is 54
  new/changed lines including unit test enhancements.  The commit message
  clearly shows which tempest tests it fixes.  It has been reviewed by
  many vmware reviewers with +1s for a long time, but the patch just keeps
  having to be rebased as it sits waiting for core reviewer attention.


Personally I tend not to review many vmwareapi patches because without
seeing any public functional tests or being able to run the patch myself, I
am uncomfortable saying it 'looks good to me'. All I can do is make sure
the code looks pythonic and make no assessment of whether the patch works or
not. With no shortage of patches to review I tend to review other patches
instead.

A while back Russell announced we would like all virt drivers to have a public
functional testing system by the release of Icehouse (
http://lists.openstack.org/pipermail/openstack-dev/2013-July/011280.html).
Public functional testing would allow me to review vmwareapi patches with
almost the same level of confidence as with a driver that we gate on and
that I can trivially try out, such as libvirt.  Until then, if we put an
explicit comment in the release notes explicitly saying that vmwareapi is a
group C virt driver (https://wiki.openstack.org/wiki/HypervisorSupportMatrix --
These drivers have minimal testing and may or may not work at any given
time. Use them at your own risk) that would address my concerns and I
would be happy to +2 vmwareapi patches based just on if the code looks
correct and not on how well the patch works.



  To me, the high-level take away is that it is hard to get new
  contributors excited about working on Nova when their well-written and
  well-targeted bug fixes just sit there, getting no feedback and not
  moving closer to merging.  The bug above was the developer's first patch
  to OpenStack and while he hasn't complained a bit, I think the
  experience is far from the community behavior that we need to encourage
  new, high-quality contributors from diverse sources.  For Nova to
  succeed in its goals of being a platform agnostic cloud layer, I think
  this is something we need a community strategy to address and I'd love
  to see it as part of the discussion put forward by those people
  nominating themselves as PTL.

 I've discussed this topic quite a bit in the past.  In short, my
 approach has been:

 1) develop metrics
 2) set goals
 3) track progress against those goals

 The numbers I've been using are here:

 http://russellbryant.net/openstack-stats/nova-openreviews.html

 Right now we're running a bit behind the set goal of keeping the average
 under 5 days for the latest revision (1st set of numbers), and 7 days
 for the oldest revision since the last -1 (3rd set of numbers).
 However, Nova is now (and has been since I've been tracking this) below
 the average for all OpenStack projects:

 http://russellbryant.net/openstack-stats/all-openreviews.html

 Review prioritization is not something that I or anyone else can
 strictly control, but we can provide tools and guidelines to help.  You
 can find some notes on that here:

 https://wiki.openstack.org/wiki/Nova/CoreTeam#Review_Prioritization

 There is also an aspect of karma involved in all of this that I think
 plays a big part when it comes
 to the vmware driver patches.  To get review attention, someone has to
 *want* to review it.  To get people to want to review your stuff, it can
 either be technology they are interested in, or they just want to help
 you out personally.

 There aren't many Nova developers that use the vmware driver, so you've
 got to work on building karma with the team.  That means contributing to
 other areas, which honestly, I haven't seen much of from this group.  I
 think that would go a long way.

 I think if you review the history of vmware patches, you'll see my name
 as a reviewer on many (or perhaps most) of them, so I hope nobody thinks
 that I personally am trying to stall things here.  This is just based on my
 experience across all contributions.

 I already put a session on the design summit schedule to discuss the
 future of drivers.  I'm open to alternative approaches for driver
 maintenance, including moving some of them (such as the vmware driver)
 into another tree where the developers focused on it can merge their
 code without waiting for nova-core review.

 http://summit.openstack.org/cfp/details/4

 

Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Monty Taylor


On 09/20/2013 01:24 PM, Dan Wendlandt wrote:
 
 
 
 On Fri, Sep 20, 2013 at 1:09 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com wrote:
 
 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
  I think the real problem here is that in Nova there are bug
 fixes that
  are tiny and very important to a particular subset of the user
  population and yet have been around for well over a month without
  getting a single core review.
 
  Take for example https://review.openstack.org/#/c/40298/ ,
 which fixes
  an important snapshot bug for the vmwareapi driver.  This was
 posted
  well over a month ago on August 5th.  It is a solid patch, is 54
  new/changed lines including unit test enhancements.  The
 commit message
  clearly shows which tempest tests it fixes.  It has been
 reviewed by
  many vmware reviewers with +1s for a long time, but the patch
 just keeps
  having to be rebased as it sits waiting for core reviewer
 attention.
 
 
 Personally I tend not to review many vmwareapi patches because
 without seeing any public functional tests or being able to run the
 patch myself, I am uncomfortable saying it 'looks good to me'. All I
 can do is make sure the code looks pythonic and make no assessment
 on if the patch works or not. With no shortage of patches to review
 I tend to review other patches instead.
 
 I while back Russell announced we would like all virt drivers have a
 public functional testing system by the release of Icehouse
 
 (http://lists.openstack.org/pipermail/openstack-dev/2013-July/011280.html).
 Public functional testing would allow me to review vmwareapi patches
 with almost the same level of confidence as with a driver that we
 gate on and that I can trivially try out, such as libvirt.  Until
 then, if we put an explicit comment in the release notes explicitly
 saying that vmwareapi is a group C virt driver
 (https://wiki.openstack.org/wiki/HypervisorSupportMatrix -- These
 drivers have minimal testing and may or may not work at any given
 time. Use them at your own risk) that would address my concerns and
 I would be happy to +2 vmwareapi patches based just on if the code
 looks correct and not on how well the patch works.
 
 
 
 Hi Joe,
 
 I couldn't agree more.  In fact, the VMware team has been working hard
 to get a fully-automated CI infrastructure setup and integrated with
 upstream Gerrit.  We already run the tempest tests internally and have
 been manually posting tempest results for some patches.  I wouldn't want
 to speak for the dev owner, but I think within a very short time (before
 Havana) you will begin seeing automated reports for tempest tests on top
 of vSphere showing up on Gerrit.  I agree that this will really help
 core reviewers gain confidence that not only does the code look OK,
 but that it works well too.

WOOT!

  To me, the high-level take away is that it is hard to get new
  contributors excited about working on Nova when their
 well-written and
  well-targeted bug fixes just sit there, getting no feedback
 and not
  moving closer to merging.  The bug above was the developer's
 first patch
  to OpenStack and while he hasn't complained a bit, I think the
  experience is far from the community behavior that we need to
 encourages
  new, high-quality contributors from diverse sources.  For Nova to
  succeed in its goals of being a platform agnostic cloud layer,
 I think
  this is something we need a community strategy to address and
 I'd love
  to see it as part of the discussion put forward by those people
  nominating themselves as PTL.
 
 I've discussed this topic quite a bit in the past.  In short, my
 approach has been:
 
 1) develop metrics
 2) set goals
 3) track progress against those goals
 
 The numbers I've been using are here:
 
 http://russellbryant.net/openstack-stats/nova-openreviews.html
 
 Right now we're running a bit behind the set goal of keeping the
 average
 under 5 days for the latest revision (1st set of numbers), and 7
 days
 for the oldest revision since the last -1 (3rd set of numbers).
 However, Nova is now (and has been since I've been tracking
 this) below
 the average for all OpenStack projects:
 

 http://russellbryant.net/openstack-stats/all-openreviews.html
 
 Review prioritization is not something that I or anyone 

[openstack-dev] [Neutron] PTL Candidacy

2013-09-20 Thread Mark McClain

Hi-

I am writing to announce my candidacy for the OpenStack Networking (Neutron) PTL.

I am the current Neutron PTL.  Our team continued to grow during the Havana 
cycle and both existing and new contributors worked to deliver twice as many 
blueprints as in the previous release.  Our vibrant ecosystem makes me 
excited for the future of Neutron and I would love to continue as the PTL.

Qualifications
---

I am a Neutron core developer with 13 years of commercial Python development 
experience.  During my career, I have developed and deployed network 
applications based on the same underlying libraries used throughout Neutron.  I 
started contributing to the Neutron project during the Essex development cycle.  In 
Folsom, I was promoted to core and was the primary developer of the DHCP 
implementation and Neutron's network namespace library.  During Grizzly, I 
worked on the metadata service, database migrations, and LBaaS reference 
implementation.  

Havana Accomplishments


During the Havana cycle, I worked as a developer, core team member, and a 
technical lead.

- Planned and implemented the Quantum to Neutron name change.
- Most active reviewer on the Neutron team 
(http://russellbryant.net/openstack-stats/neutron-reviewers-180.txt)
- Organized the Networking track at the Havana Design Summit.
- Led bug triaging and sub-team assignment.
- Interfaced with vendors new to Neutron and helped in the integration of their 
plugins.
- Assisted members of the community to further their understanding of Neutron 
and improve Python development best practices.
- Promoted Neutron by delivering presentations at conferences and regional meet 
ups worldwide.


Icehouse
-

During the Icehouse development cycle, I'd like to see the team focus on:

- Continuing to grow the community of contributors and code reviewers.
- Improving documentation for both deployers and developers.
- Building on the services added in Havana to extend and improve load balancing, 
firewalling, and VPN.
- Integrating plugins from vendors new to the community including FWaaS, LBaaS, 
ML2, VPNaaS plugins/drivers.
- More efficient Neutron system testing and gating including full Tempest 
testing.
- Further work to ease deploying at scale.
- Refactoring the API layer to leverage the same WSGI framework as other 
OpenStack projects.
- Improving database resource modeling and extension management.
- Unified network service management framework. 
- Continued support of the Horizon team to assist with Neutron integration.
- Defining a migration path from nova-network to Neutron.

I'd love the opportunity to continue as the PTL and work with the Neutron team 
to fill in the gaps during the design summit in Hong Kong.

Thanks,
mark




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Dan Wendlandt
On Fri, Sep 20, 2013 at 1:09 PM, Joe Gordon joe.gord...@gmail.com wrote:

 On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com wrote:

 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
  I think the real problem here is that in Nova there are bug fixes that
  are tiny and very important to a particular subset of the user
  population and yet have been around for well over a month without
  getting a single core review.
 
  Take for example https://review.openstack.org/#/c/40298/ , which fixes
  an important snapshot bug for the vmwareapi driver.  This was posted
  well over a month ago on August 5th.  It is a solid patch, is 54
  new/changed lines including unit test enhancements.  The commit message
  clearly shows which tempest tests it fixes.  It has been reviewed by
  many vmware reviewers with +1s for a long time, but the patch just keeps
  having to be rebased as it sits waiting for core reviewer attention.


 Personally I tend not to review many vmwareapi patches because without
 seeing any public functional tests or being able to run the patch myself, I
 am uncomfortable saying it 'looks good to me'. All I can do is make sure
 the code looks pythonic and make no assessment on if the patch works or
 not. With no shortage of patches to review I tend to review other patches
 instead.

 I while back Russell announced we would like all virt drivers have a
 public functional testing system by the release of Icehouse (
 http://lists.openstack.org/pipermail/openstack-dev/2013-July/011280.html).
 Public functional testing would allow me to review vmwareapi patches with
 almost the same level of confidence as with a driver that we gate on and
 that I can trivially try out, such as libvirt.  Until then, if we put an
 explicit comment in the release notes explicitly saying that vmwareapi is a
 group C virt driver (
 https://wiki.openstack.org/wiki/HypervisorSupportMatrix -- These drivers
 have minimal testing and may or may not work at any given time. Use them at
 your own risk) that would address my concerns and I would be happy to +2
 vmwareapi patches based just on if the code looks correct and not on how
 well the patch works.



Hi Joe,

I couldn't agree more.  In fact, the VMware team has been working hard to
get a fully-automated CI infrastructure set up and integrated with upstream
Gerrit.  We already run the tempest tests internally and have been manually
posting tempest results for some patches.  I wouldn't want to speak for the
dev owner, but I think within a very short time (before Havana) you will
begin seeing automated reports for tempest tests on top of vSphere showing
up on Gerrit.  I agree that this will really help core reviewers gain
confidence that not only does the code look OK, but that it works well
too.

Dan









 
  To me, the high-level take away is that it is hard to get new
  contributors excited about working on Nova when their well-written and
  well-targeted bug fixes just sit there, getting no feedback and not
  moving closer to merging.  The bug above was the developer's first patch
  to OpenStack and while he hasn't complained a bit, I think the
  experience is far from the community behavior that we need to encourages
  new, high-quality contributors from diverse sources.  For Nova to
  succeed in its goals of being a platform agnostic cloud layer, I think
  this is something we need a community strategy to address and I'd love
  to see it as part of the discussion put forward by those people
  nominating themselves as PTL.

 I've discussed this topic quite a bit in the past.  In short, my
 approach has been:

 1) develop metrics
 2) set goals
 3) track progress against those goals

 The numbers I've been using are here:

 http://russellbryant.net/openstack-stats/nova-openreviews.html

 Right now we're running a bit behind the set goal of keeping the average
 under 5 days for the latest revision (1st set of numbers), and 7 days
 for the oldest revision since the last -1 (3rd set of numbers).
 However, Nova is now (and has been since I've been tracking this) below
 the average for all OpenStack projects:

 http://russellbryant.net/openstack-stats/all-openreviews.html

 Review prioritization is not something that I or anyone else can
 strictly control, but we can provide tools and guidelines to help.  You
 can find some notes on that here:

 https://wiki.openstack.org/wiki/Nova/CoreTeam#Review_Prioritization

 There is also an aspect of karma involved in all of this that I think
 plays a big part when it comes
 to the vmware driver patches.  To get review attention, someone has to
 *want* to review it.  To get people to want to review your stuff, it can
 either be technology they are interested in, or they just want to help
 you out personally.

 There aren't many Nova developers that use the vmware driver, so you've
 got to work on building karma with the team.  That means contributing to
 other areas, which 

[openstack-dev] [Glance][Swift] Glance using swiftclient without multithreading

2013-09-20 Thread Nikhil Komawar

Hi all,
 
It seems that glance uses swiftclient and misses out on a multithreading option.
https://github.com/openstack/glance/blob/master/glance/store/swift.py#L574
https://github.com/openstack/glance/blob/master/glance/store/swift.py#L651
 
The swiftclient command line tool, on the other hand, gives the option of downloading 
multiple segments concurrently using the same library.
https://github.com/openstack/python-swiftclient/blob/master/bin/swift#L202
 
I was wondering if we should have a bp/bug for this, and what implications we 
think it might have on the download/upload process?
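
To make it concrete, here is a rough sketch of the kind of concurrent segment
download I have in mind. This is not glance's code, just an illustration using
python-swiftclient plus a thread pool (concurrent.futures, i.e. the futures
backport on python 2); the connection parameters are placeholders.

    from concurrent.futures import ThreadPoolExecutor

    from swiftclient import client as swift_client

    # Placeholder credentials; in glance these would come from the store config.
    CONN_PARAMS = dict(authurl='http://keystone.example.com:5000/v2.0',
                       user='tenant:user', key='secret', auth_version='2.0')

    def fetch_segment(container, name):
        # One connection per worker, since a Connection isn't safely shared
        # across threads.
        conn = swift_client.Connection(**CONN_PARAMS)
        headers, body = conn.get_object(container, name)
        return name, body

    def fetch_segments(container, segment_names, workers=5):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(fetch_segment, container, name)
                       for name in segment_names]
            return dict(f.result() for f in futures)

Whether something like this is safe for the image download/upload paths is
exactly the kind of implication I'd like to discuss.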
 
Thanks,
-Nikhil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Dan Wendlandt
On Fri, Sep 20, 2013 at 1:37 PM, Russell Bryant rbry...@redhat.com wrote:

 On 09/20/2013 04:11 PM, Dan Wendlandt wrote:
  Hi Russell,
 
  Thanks for the detailed thoughts.  Comments below,
 
  Dan
 
 
  On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com wrote:
 
  On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
   I think the real problem here is that in Nova there are bug fixes
 that
   are tiny and very important to a particular subset of the user
   population and yet have been around for well over a month without
   getting a single core review.
  
   Take for example https://review.openstack.org/#/c/40298/ , which
 fixes
   an important snapshot bug for the vmwareapi driver.  This was
 posted
   well over a month ago on August 5th.  It is a solid patch, is 54
   new/changed lines including unit test enhancements.  The commit
  message
   clearly shows which tempest tests it fixes.  It has been reviewed
 by
   many vmware reviewers with +1s for a long time, but the patch just
  keeps
   having to be rebased as it sits waiting for core reviewer
 attention.
  
   To me, the high-level take away is that it is hard to get new
   contributors excited about working on Nova when their well-written
 and
   well-targeted bug fixes just sit there, getting no feedback and not
   moving closer to merging.  The bug above was the developer's first
  patch
   to OpenStack and while he hasn't complained a bit, I think the
   experience is far from the community behavior that we need to
  encourages
   new, high-quality contributors from diverse sources.  For Nova to
   succeed in its goals of being a platform agnostic cloud layer, I
 think
   this is something we need a community strategy to address and I'd
 love
   to see it as part of the discussion put forward by those people
   nominating themselves as PTL.
 
  I've discussed this topic quite a bit in the past.  In short, my
  approach has been:
 
  1) develop metrics
  2) set goals
  3) track progress against those goals
 
  The numbers I've been using are here:
 
  http://russellbryant.net/openstack-stats/nova-openreviews.html
 
 
  It's great that you have dashboards like this, very cool.  The
  interesting thing here is that the patches I am talking about are not
  waiting on reviews in general, but rather core review.  They have plenty
  of reviews from non-core folks who provide feedback (and they keep
  getting +1'd again as they are rebased every few days).  Perhaps a good
  additional metric to track would be items that have spent a lot of
  time without a negative review, but have not gotten any core reviews.  I
  think that is the root of the issue in the case of the reviews I'm
  talking about.

 The numbers I track do not reset the timer on any +1 (or +2, actually).
  It only resets when it gets a -1 or -2.  At that point, the review is
 waiting for an update from a submitter.  Point is, getting a bunch of
 +1s does not make it show up lower on the list.  Also, the 3rd list
 (time since the last -1) does not reset on a rebase, so that's covered
 in this tracking, too.


I see, I misunderstood the labels.  One thing to consider adding would be
something measuring the patches that have gone the longest without any core
review, which (by my current understanding) isn't currently measured.

Again, I think it's great that you have these charts, and I'm quite sure
that your use of charts like this is what helped you spot reviews that are
stalled in the vmware driver and elsewhere.  Thanks again for your help on
that front.

Dan






 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Dan Smith
 What criteria would be used to determine which drivers stay in-tree
 vs. maintained as forks? E.g. libvirt driver in, everyone else out?
 Open-platform drivers (libvirt, xen) in, closed-platform drivers
 (vmware, hyperv) out? Drivers for platforms with large (possibly
 non-OpenStack) production deployments (libvirt, xen, vmware, hyperv)
 in, drivers without (e.g. docker), out?

I think this is in response to demand, not necessarily desire by the
nova-core folks. IMHO, maintaining any sort of stable virt driver API
for out-of-tree drivers is something we should try to avoid if at all
possible. I think the potential option for having a driver moved out of
tree would be because the maintainers of that driver would prefer the
freedom of merging anything they want without waiting for reviews.

As was mentioned earlier in the thread, however, there is a goal to get
every driver that is in-tree to have functional testing by Icehouse.
This is unrelated to a move for maintenance reasons, and is something I
fully support. If we don't have functional testing on a driver, we
should consider it broken (and not supported) IMHO.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][libvirt] Should file injection work for boot from volume images?

2013-09-20 Thread Michael Still
Before https://review.openstack.org/#/c/46867/ if file injection of a
mandatory file fails, nova just silently ignores the failure, which is
clearly wrong. However, that review now can't land because it has
revealed another failure in the file injection code via tempest, which
is...

Should file injection work for instances which are boot from volume?
Now that we actually notice injection failures we're now failing to
boot such instances as file injection for them doesn't work.

I'm undecided though -- should file injection work for boot from
volume at all? Or should we just skip file injection for instances
like this? I'd prefer to see us just support config drive and metadata
server for these instances, but perhaps I am missing something really
important.

Thoughts welcome.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Michael Still
On Sat, Sep 21, 2013 at 6:24 AM, Dan Wendlandt d...@nicira.com wrote:
 On Fri, Sep 20, 2013 at 1:09 PM, Joe Gordon joe.gord...@gmail.com wrote:
 On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com
 wrote:
 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:

 I couldn't agree more.  In fact, the VMware team has been working hard to
 get a fully-automated CI infrastructure setup and integrated with upstream
 Gerrit.  We already run the tempest tests internally and have been manually
 posting tempest results for some patches.  I wouldn't want to speak for the
 dev owner, but I think within a very short time (before Havana) you will
 begin seeing automated reports for tempest tests on top of vSphere showing
 up on Gerrit.  I agree that this will really help core reviewers gain
 confidence that not only does the code look OK, but that it works well
 too.

How are you doing this? Joshua Hesketh has been working on integrating
our internal DB CI tests into upstream zuul, so I wonder if there are
synergies that can be harnessed here.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Joe Gordon
On Sep 20, 2013 1:27 PM, Dan Wendlandt d...@nicira.com wrote:




 On Fri, Sep 20, 2013 at 1:09 PM, Joe Gordon joe.gord...@gmail.com wrote:

 On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com
wrote:

 On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
  I think the real problem here is that in Nova there are bug fixes that
  are tiny and very important to a particular subset of the user
  population and yet have been around for well over a month without
  getting a single core review.
 
  Take for example https://review.openstack.org/#/c/40298/ , which fixes
  an important snapshot bug for the vmwareapi driver.  This was posted
  well over a month ago on August 5th.  It is a solid patch, is 54
  new/changed lines including unit test enhancements.  The commit
message
  clearly shows which tempest tests it fixes.  It has been reviewed by
  many vmware reviewers with +1s for a long time, but the patch just
keeps
  having to be rebased as it sits waiting for core reviewer attention.


 Personally I tend not to review many vmwareapi patches because without
seeing any public functional tests or being able to run the patch myself, I
am uncomfortable saying it 'looks good to me'. All I can do is make sure
the code looks pythonic and make no assessment on if the patch works or
not. With no shortage of patches to review I tend to review other patches
instead.

 I while back Russell announced we would like all virt drivers have a
public functional testing system by the release of Icehouse (
http://lists.openstack.org/pipermail/openstack-dev/2013-July/011280.html).
Public functional testing would allow me to review vmwareapi patches with
almost the same level of confidence as with a driver that we gate on and
that I can trivially try out, such as libvirt.  Until then, if we put an
explicit comment in the release notes explicitly saying that vmwareapi is a
group C virt driver (https://wiki.openstack.org/wiki/HypervisorSupportMatrix --
These drivers have minimal testing and may or may not work at any given
time. Use them at your own risk) that would address my concerns and I
would be happy to +2 vmwareapi patches based just on if the code looks
correct and not on how well the patch works.



 Hi Joe,

 I couldn't agree more.  In fact, the VMware team has been working hard to
get a fully-automated CI infrastructure setup and integrated with upstream
Gerrit.  We already run the tempest tests internally and have been manually
posting tempest results for some patches.  I wouldn't want to speak for the
dev owner, but I think within a very short time (before Havana) you will
begin seeing automated reports for tempest tests on top of vSphere showing
up on Gerrit.  I agree that this will really help core reviewers gain
confidence that not only does the code look OK, but that it works well
too.

 Dan

Awesome, when that happens I hope to review more vmwareapi patches.  Part
of the trick will be in how I can see that after a nova patch is merged the
vmwareapi system will cover that case going forward.









 
  To me, the high-level take away is that it is hard to get new
  contributors excited about working on Nova when their well-written and
  well-targeted bug fixes just sit there, getting no feedback and not
  moving closer to merging.  The bug above was the developer's first
patch
  to OpenStack and while he hasn't complained a bit, I think the
  experience is far from the community behavior that we need to
encourages
  new, high-quality contributors from diverse sources.  For Nova to
  succeed in its goals of being a platform agnostic cloud layer, I think
  this is something we need a community strategy to address and I'd love
  to see it as part of the discussion put forward by those people
  nominating themselves as PTL.

 I've discussed this topic quite a bit in the past.  In short, my
 approach has been:

 1) develop metrics
 2) set goals
 3) track progress against those goals

 The numbers I've been using are here:

 http://russellbryant.net/openstack-stats/nova-openreviews.html

 Right now we're running a bit behind the set goal of keeping the average
 under 5 days for the latest revision (1st set of numbers), and 7 days
 for the oldest revision since the last -1 (3rd set of numbers).
 However, Nova is now (and has been since I've been tracking this) below
 the average for all OpenStack projects:

 http://russellbryant.net/openstack-stats/all-openreviews.html

 Review prioritization is not something that I or anyone else can
 strictly control, but we can provide tools and guidelines to help.  You
 can find some notes on that here:

 https://wiki.openstack.org/wiki/Nova/CoreTeam#Review_Prioritization

 There is also an aspect of karma involved in all of this that I think
 plays a big part when it comes
 to the vmware driver patches.  To get review attention, someone has to
 *want* to review it.  To get people to want to review your stuff, it can
 

Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread David Ripton

On 09/20/2013 04:11 PM, Dan Wendlandt wrote:


It's great that you have dashboards like this, very cool.  The
interesting thing here is that the patches I am talking about are not
waiting on reviews in general, but rather core review.  They have plenty
of reviews from non-core folks who provide feedback (and they keep
getting +1'd again as they are rebased every few days).  Perhaps a good
additional metric to track would be items that have spent a lot of
time without a negative review, but have not gotten any core reviews.  I
think that is the root of the issue in the case of the reviews I'm
talking about.


I feel the pain.  Especially when you have a +2 but need to rebase 
before the second +2 comes in, and lose it.


I just sent a pull request to add --onlyplusone and --onlyplustwo to 
next-review, to give core reviewers an easy way to focus on 
already-somewhat-vetted reviews and leave the new reviews to 
non-core reviewers.


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Dan Wendlandt
On Fri, Sep 20, 2013 at 1:58 PM, Joe Gordon joe.gord...@gmail.com wrote:


 On Sep 20, 2013 1:27 PM, Dan Wendlandt d...@nicira.com wrote:
 
 
 
 
  On Fri, Sep 20, 2013 at 1:09 PM, Joe Gordon joe.gord...@gmail.com
 wrote:
 
  On Fri, Sep 20, 2013 at 11:52 AM, Russell Bryant rbry...@redhat.com
 wrote:
 
  On 09/20/2013 02:02 PM, Dan Wendlandt wrote:
   I think the real problem here is that in Nova there are bug fixes
 that
   are tiny and very important to a particular subset of the user
   population and yet have been around for well over a month without
   getting a single core review.
  
   Take for example https://review.openstack.org/#/c/40298/ , which
 fixes
   an important snapshot bug for the vmwareapi driver.  This was posted
   well over a month ago on August 5th.  It is a solid patch, is 54
   new/changed lines including unit test enhancements.  The commit
 message
   clearly shows which tempest tests it fixes.  It has been reviewed by
   many vmware reviewers with +1s for a long time, but the patch just
 keeps
   having to be rebased as it sits waiting for core reviewer attention.
 
 
  Personally I tend not to review many vmwareapi patches because without
 seeing any public functional tests or being able to run the patch myself, I
 am uncomfortable saying it 'looks good to me'. All I can do is make sure
 the code looks pythonic and make no assessment on if the patch works or
 not. With no shortage of patches to review I tend to review other patches
 instead.
 
  I while back Russell announced we would like all virt drivers have a
 public functional testing system by the release of Icehouse (
 http://lists.openstack.org/pipermail/openstack-dev/2013-July/011280.html).
 Public functional testing would allow me to review vmwareapi patches with
 almost the same level of confidence as with a driver that we gate on and
 that I can trivially try out, such as libvirt.  Until then, if we put an
 explicit comment in the release notes explicitly saying that vmwareapi is a
 group C virt driver (
 https://wiki.openstack.org/wiki/HypervisorSupportMatrix -- These drivers
 have minimal testing and may or may not work at any given time. Use them at
 your own risk) that would address my concerns and I would be happy to +2
 vmwareapi patches based just on if the code looks correct and not on how
 well the patch works.
 
 
 
  Hi Joe,
 
  I couldn't agree more.  In fact, the VMware team has been working hard
 to get a fully-automated CI infrastructure setup and integrated with
 upstream Gerrit.  We already run the tempest tests internally and have been
 manually posting tempest results for some patches.  I wouldn't want to
 speak for the dev owner, but I think within a very short time (before
 Havana) you will begin seeing automated reports for tempest tests on top of
 vSphere showing up on Gerrit.  I agree that this will really help core
 reviewers gain confidence that not only does the code look OK, but that
 it works well too.
 
  Dan

 Awesome, when that happens I hope to review more vmwareapi patches.  Part
 of the trick will be in how I can see that after a nova patch is merged the
 vmwareapi system will cover that case going forward.


Great.  By the way, all of this automated tempest testing is running nested
on top of a physical OpenStack on vSphere cloud we run internally.  That
cloud has the ability to host labs that are externally accessible.  So if
you're a core Nova reviewer (or other person who does a lot of Nova
reviews), I could definitely look into how we can get you access to a
devstack + vSphere environment for your personal use of reviewing + testing
 patches.  Anything I can do to make a core reviewer's life easier here, I'm
all for.  Feel free to reach out to me off-list.

Dan




 
 
 
 
 
 
 
 
 
  
   To me, the high-level take away is that it is hard to get new
   contributors excited about working on Nova when their well-written
 and
   well-targeted bug fixes just sit there, getting no feedback and not
   moving closer to merging.  The bug above was the developer's first
 patch
   to OpenStack and while he hasn't complained a bit, I think the
   experience is far from the community behavior that we need to
 encourages
   new, high-quality contributors from diverse sources.  For Nova to
   succeed in its goals of being a platform agnostic cloud layer, I
 think
   this is something we need a community strategy to address and I'd
 love
   to see it as part of the discussion put forward by those people
   nominating themselves as PTL.
 
  I've discussed this topic quite a bit in the past.  In short, my
  approach has been:
 
  1) develop metrics
  2) set goals
  3) track progress against those goals
 
  The numbers I've been using are here:
 
  http://russellbryant.net/openstack-stats/nova-openreviews.html
 
  Right now we're running a bit behind the set goal of keeping the
 average
  under 5 days for the latest revision (1st set of numbers), and 7 days
  for the oldest revision since 

Re: [openstack-dev] Client and Policy

2013-09-20 Thread Ben Nemec

On 2013-09-20 03:16, Flavio Percoco wrote:

On 19/09/13 17:10 -0400, Adam Young wrote:

On 09/19/2013 04:30 PM, Mark McLoughlin wrote:

To take the specific example of the policy API, if someone actively
wanted to help the process of moving it into a standalone library should
volunteer to help Flavio out as a maintainer:

https://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS


  == policy ==

  M: Flavio Percoco fla...@redhat.com
  S: Maintained
  F: policy.py


Would it make sense to explicitly add Keystone developers, or can we 
include the launchpad keystone-core group to this module?
If we want to keep it per user,  I'm willing to do so, and I think we 
have a couple of other likely candidates from Keystone:  I'll let then 
speak up for themselves.



I don't think it is possible to have per-file core reviewers.


Not from a Gerrit perspective, but the Oslo policy is that a maintainer 
+1 on the code they maintain is the equivalent of a +2, so only one core 
is needed to approve.


See 
https://github.com/openstack/oslo-incubator/blob/master/MAINTAINERS#L28


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Client and Policy

2013-09-20 Thread Monty Taylor


On 09/19/2013 01:30 PM, Mark McLoughlin wrote:
 On Thu, 2013-09-19 at 15:22 -0500, Dolph Mathews wrote:

 On Thu, Sep 19, 2013 at 2:59 PM, Adam Young ayo...@redhat.com wrote:
 I can submit a summit proposal.  I was thinking of making it
 more general than just the Policy piece.  Here is my proposed
 session.  Let me know if it rings true:
 
 
 Title: Extracting Shared Libraries from incubator
 
 Some of the security-sensitive code in OpenStack is copied into
 various projects from Oslo-Incubator.  If there is a CVE
 identified in one of these pieces, there is no rapid way to
 update them short of syncing code to all projects.  This
 meeting is to identify the pieces of Oslo-incubator that
 should be extracted into standalone libraries.
 


 I believe the goal of oslo-incubator IS to spin out common code into
 standalone libraries in the long run, as appropriate.
 
 Indeed.
 
 https://wiki.openstack.org/wiki/Oslo
 
   Mission Statement:
 
 To produce a set of python libraries containing code shared by 
 OpenStack projects
 
 https://wiki.openstack.org/wiki/Oslo#Incubation
 
   Incubation shouldn't be seen as a long term option for any API - it 
   is merely a stepping stone to inclusion into a published Oslo
   library. 
 
 Some of the code would be best reviewed by members of other
 projects:  Network specific code by Neutron, Policy by
 Keystone, and so forth.  As part of the discussion, we will
 identify a code review process that gets the right reviewers
 for those subprojects.


 It sounds like the real goal is how do we get relevant/interested
 reviewers in front of oslo reviews without overloading them with
 noise? I'm sure that's a topic that Mark already has an opinion on,
  so I've opened this thread to openstack-dev.
 
 To take the specific example of the policy API, if someone actively
 wanted to help the process of moving it into a standalone library should
 volunteer to help Flavio out as a maintainer:
 
   https://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS
 
   == policy ==
 
   M: Flavio Percoco fla...@redhat.com
   S: Maintained
   F: policy.py
 
 
 Another aspect is how someone would go about helping do reviews on a
 specific API in oslo-incubator. That's a common need - e.g. for
 maintainers of virt drivers in Nova - and AIUI, these folks just
 subscribe to all gerrit notifications for the module and then use mail
 filters to make sure they see changes to the files they're interested
 in.

It is possible to subscribe to changes in a project in gerrit limited to
a subpath. In oslo-incubator, that makes a large amount of sense.

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Client and Policy

2013-09-20 Thread Monty Taylor


On 09/20/2013 02:55 PM, Ben Nemec wrote:
 On 2013-09-20 03:16, Flavio Percoco wrote:
 On 19/09/13 17:10 -0400, Adam Young wrote:
 On 09/19/2013 04:30 PM, Mark McLoughlin wrote:
 To take the specific example of the policy API, if someone actively
 wanted to help the process of moving it into a standalone library should
 volunteer to help Flavio out as a maintainer:

   https://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS

   == policy ==

   M: Flavio Percoco fla...@redhat.com
   S: Maintained
   F: policy.py

 Would it make sense to explicitly add Keystone developers, or can we
 include the launchpad keystone-core group to this module?
 If we want to keep it per user,  I'm willing to do so, and I think we
 have a couple of other likely candidates from Keystone:  I'll let
 then speak up for themselves.

 I don't think it is possible to have per-file core reviewers.
 
 Not from a Gerrit perspective, but the Oslo policy is that a maintainer
 +1 on the code they maintain is the equivalent of a +2, so only one core
 is needed to approve.
 
 See https://github.com/openstack/oslo-incubator/blob/master/MAINTAINERS#L28

What if we rethought the organization just a little bit. Instead of
having oslo-incubator from which we copy code, and then oslo.* that we
consume as libraries, what if:

- we split all oslo modules into their own repos from the start
- we make update.py a utility that groks copying from a directory that
contains a bunch of repos - so that a person wanting to use it might have:
  ~/src
  ~/src/oslo
  ~/src/oslo/oslo.db
  ~/src/oslo/oslo.policy
  and then when they run update.py ~/src/oslo ~/src/nova they get the
same results (the copying and name changing and whatnot)

That way, we can add per-module additional core easily like we can for
released oslo modules (like hacking and pbr have now)

Also, that would mean that moving from copying to releasing is more a
matter of just making a release than it is of doing the git magic to
split the repo out into a separate one and then adding the new repo to
gerrit.

Thoughts?
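
To make the idea a little more concrete, the copy step could look roughly like
this. This is purely illustrative, not the existing update.py: it assumes a
repo layout of ~/src/oslo/oslo.policy containing an oslo/policy package, and
it hand-waves over openstack-common.conf and setup.cfg handling.

    import os
    import re

    def copy_module(oslo_root, module, dest_root, dest_project):
        # e.g. copy_module('src/oslo', 'oslo.policy', 'src/nova', 'nova')
        # copies the module's files into <dest>/openstack/common and rewrites
        # 'oslo.policy' style references to the copied-in namespace.
        short_name = module.split('.', 1)[1]
        src_pkg = os.path.join(oslo_root, module, *module.split('.'))
        dest_pkg = os.path.join(dest_root, dest_project, 'openstack', 'common')
        for fname in os.listdir(src_pkg):
            if not fname.endswith('.py'):
                continue
            with open(os.path.join(src_pkg, fname)) as f:
                text = f.read()
            text = re.sub(r'\b%s\b' % re.escape(module),
                          '%s.openstack.common.%s' % (dest_project, short_name),
                          text)
            with open(os.path.join(dest_pkg, fname), 'w') as f:
                f.write(text)

The point being that the copy-plus-rename semantics stay the same; the only
thing that changes is where the source of truth for each module lives.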

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Client and Policy

2013-09-20 Thread Morgan Fainberg
On Fri, Sep 20, 2013 at 3:20 PM, Monty Taylor mord...@inaugust.com wrote:


 What if we rethought the organization just a little bit. Instead of
 having oslo-incubator from which we copy code, and then oslo.* that we
 consume as libraries, what if:

 - we split all oslo modules into their own repos from the start
 - we make update.py a utility that groks copying from a directory that
 contains a bunch of repos - so that a person wanting to use is might have:
   ~/src
   ~/src/oslo
   ~/src/oslo/oslo.db
   ~/src/oslo/oslo.policy
   and then when they run update.py ~/src/oslo ~/src/nova and get the
 same results (the copying and name changing and whatnot)


I like this structure a little more than the current structure.  It feels
more like python modules.

If the bonus is to also allow more granularity on reviewing (e.g.
per-module cores), I think that there is another win to be had there.


 That way, we can add per-module additional core easily like we can for
 released oslo modules (like hacking and pbr have now)

 Also, that would mean that moving from copying to releasing is more a
 matter of just making a release than it is of doing the git magic to
 split the repo out into a separate one and then adding the new repo to
 gerrit.


I like this approach.  It does make the barrier to go from copying to releasing a
bit lower.  A lower barrier is better (not that everything will go that route
immediately).

 Thoughts?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Should file injection work for boot from volume images?

2013-09-20 Thread Monty Taylor


On 09/20/2013 02:47 PM, Michael Still wrote:
 Before https://review.openstack.org/#/c/46867/ if file injection of a
 mandatory file fails, nova just silently ignores the failure, which is
 clearly wrong. However, that review now can't land because its
 revealed another failure in the file injection code via tempest, which
 is...
 
 Should file injection work for instances which are boot from volume?
 Now that we actually notice injection failures we're now failing to
 boot such instances as file injection for them doesn't work.
 
 I'm undecided though -- should file injection work for boot from
 volume at all? Or should we just skip file injection for instances
 like this? I'd prefer to see us just support config drive and metadata
 server for these instances, but perhaps I am missing something really
 important.

Well, first of all, I think file injection should DIAF everywhere.

That said, it may be no surprise that I think boot-from-volume should
just do config drive and metadata.
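
For what it's worth, everything people typically lean on injection for is easy
to get at from inside the guest. A minimal sketch, assuming the usual OpenStack
metadata endpoint and a config drive mounted at /mnt/config (the paths and the
fallback behaviour here are illustrative, not prescriptive):

    import json
    import urllib2

    METADATA_URL = 'http://169.254.169.254/openstack/latest/meta_data.json'
    CONFIG_DRIVE_MD = '/mnt/config/openstack/latest/meta_data.json'

    def load_metadata():
        # Prefer the metadata service; fall back to the config drive.
        try:
            return json.load(urllib2.urlopen(METADATA_URL, timeout=5))
        except Exception:
            with open(CONFIG_DRIVE_MD) as f:
                return json.load(f)

    if __name__ == '__main__':
        print(load_metadata().get('uuid'))

Tools like cloud-init already do this far more thoroughly, which is another
reason I'd rather not keep propping up injection.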

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Dan Wendlandt
btw, thanks to the core devs who went and took a look at several of the
vmware reviews today.  It was like christmas for the team today :)


On Fri, Sep 20, 2013 at 2:25 PM, Michael Still mi...@stillhq.com wrote:

 On Sat, Sep 21, 2013 at 7:12 AM, Dan Wendlandt d...@nicira.com wrote:
  On Fri, Sep 20, 2013 at 2:05 PM, Michael Still mi...@stillhq.com
 wrote:

  How are you doing this? Joshua Hesketh has been working on integrating
  our internal DB CI tests into upstream zuul, so I wonder there are
  synergies that can be harnessed here.
 
  We're just using the standard stuff built by the OpenStack CI team for
  third-party testing: http://ci.openstack.org/third_party.html
 
  Is that what you were asking, or am I misunderstanding?

 Ahhh, so that's how our initial prototype was built as well, but the
 new zuul way is much nicer (he says in a handwavey way). I didn't do
 the work though, so I can't be too specific apart from saying you
 don't need to do any of the talking-to-gerrit bits any more -- its
 possible to just run up a zuul instance which hooks into the upstream
 one and that runs your tests. zuul handles detecting new reviews and
 writing results to gerrit for you.

 Michael

 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stalled bug fixes for vmware driver

2013-09-20 Thread Russell Bryant
On 09/20/2013 07:00 PM, Dan Wendlandt wrote:
 btw, thanks to the core devs who went and took a look at several of the
 vmware reviews today.  It was like christmas for the team today :) 

And for the record, I did all of my reviews before this thread even
started, just as a part of my normal workflow.  I was working from the
havana-rc1 bug list.  :-)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] PTL Candidacy

2013-09-20 Thread Dan Wendlandt
+1

Mark has done a phenomenal job as Neutron PTL


On Fri, Sep 20, 2013 at 1:44 PM, Mark McClain mark.mccl...@dreamhost.com wrote:


 Hi-

 I writing to announce my candidacy for the OpenStack Networking (Neutron)
 PTL.

 I am the current Neutron PTL.  Our team continued to grow during the
 Havana cycle and both existing and new contributors worked to deliver
 double the number of blueprints than the previous release.  Our vibrant
 ecosystem makes me excited for the future of Neutron and I would love to
 continue as the PTL.

 Qualifications
 ---

 I am a Neutron core developer with 13 years of commercial Python
 development experience.  During my career, I have developed and deployed
 network applications based on the same underlying libraries used throughout
 Neutron.  I started contributing to Neutron project during the Essex
 development cycle.  In Folsom, I was promoted to core and was the primary
 developer of the DHCP implementation and Neutron's network namespace
 library.  During Grizzly, I worked on the metadata service, database
 migrations, and LBaaS reference implementation.

 Havana Accomplishments
 

 During the Havana cycle, I worked as a developer, core team member, and a
 technical lead.

 - Planned and implemented the Quantum to Neutron name change.
 - Most active reviewer on the Neutron team (
 http://russellbryant.net/openstack-stats/neutron-reviewers-180.txt)
 - Organized the Networking track at the Havana Design Summit.
 - Led bug triaging and sub-team assignment.
 - Interfaced with vendors new to Neutron and helped in the integration of
 their plugins.
 - Assisted members of the community to further their understanding of
 Neutron and improve Python development best practices.
 - Promoted Neutron by delivering presentations at conferences and regional
 meet ups worldwide.


 Icehouse
 -

 During the Icehouse development cycle, I'd like to see the team focus on:

 - Continuing to grow the community of contributors and code reviewers.
 - Improving documentation for both deployers and developers.
 - Build upon the services added in Havana to extend and improve load
 balancing, firewalling, and VPN.
 - Integrating plugins from vendors new to the community including FWaaS,
 LBaaS, ML2, VPNaaS plugins/drivers.
 - More efficient Neutron system testing and gating including full Tempest
 testing.
 - Further work to ease deploying at scale.
 - Refactoring the API layer to leverage a common WSGI framework as other
 OpenStack projects.
 - Improving database resource modeling and extension management.
 - Unified network service management framework.
 - Continued support of the Horizon team to assist with Neutron integration.
 - Defined migration path from nova-network to Quantum.

 I'd love the opportunity to continue as the PTL and work with the Neutron
 team to fill in the gaps during the design summit in Hong Kong.

 Thanks,
 mark




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Launch an instance with IDE disk type instead of virtio disk type

2013-09-20 Thread Pattabi Ayyasami
Hi,

We have a KVM qcow2 image to be launched on a KVM host. From Dashboard, I don't 
find a way to specify the disk type for a new instance as IDE. The instance was 
launched with a virtio disk type.

virsh dumpxml kvm_vm_name
shows the following.


<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/opt/stack/data/nova/instances/2f317b2e-f3b8-40cd-ba79-402231ccee51/disk'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x' bus='0x00' slot='0x08' function='0x0'/>
</disk>

I want it to be
  <target dev='vda' bus='ide'/>

  <address type='drive' controller='0' bus='0' unit='0'/>


The libvirt.xml under data/nova/instances/image_id

<disk type="file" device="disk">
  <driver name="qemu" type="qcow2" cache="none"/>
  <source file="/opt/stack/data/nova/instances/2f317b2e-f3b8-40cd-ba79-402231ccee51/disk"/>
  <target bus="virtio" dev="vda"/>
</disk>


I want it to be
  <target bus="ide" dev="vda"/>

I could manually change libvirt.xml and run virsh edit kvm_vm_name as mentioned 
above. But I want to be able to do it either from the Dashboard GUI or with commands 
such as glance or nova.

Does anyone have any pointers on workarounds / solutions for this?
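
For reference, what I was hoping for is something along these lines, i.e.
tagging the image so that the libvirt driver picks the bus itself. This is only
a guess on my part -- it assumes the recent libvirt custom hardware image
properties (hw_disk_bus and friends) are available in my deployment, and the
endpoint/token below are placeholders:

    from glanceclient import Client

    glance = Client('1', endpoint='http://glance.example.com:9292',
                    token='ADMIN_TOKEN')

    IMAGE_ID = 'MY-IMAGE-UUID'
    # Ask nova's libvirt driver to attach the root disk on the IDE bus for
    # instances booted from this image.
    glance.images.update(IMAGE_ID, properties={'hw_disk_bus': 'ide'})

If that property is not honoured in my version, then editing libvirt.xml by
hand seems to be the only option, hence the question.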

Thanks
Pattabi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Launch an instance with e1000 as the first network interface and virtio net as the second network interface

2013-09-20 Thread Pattabi Ayyasami
HI,

I have Nova with version 2.14.1.17 and glance with version 0.10.0.10.

I found that the default network interface type is virtio after I launch a new 
instance from Horizon. Is there any way to create two network interface ports, 
the first one as e1000 and the second one as virtio? I want to be able to mix 
more than one type of network port.

I do not know where to specify different types for network ports under the 
Networks tab of Horizon. If Horizon is not the right place to create them, 
which tool should I use and how can I create them?

Does anyone know how to do the above? Any hint or pointer would help.

Thanks

Pattabi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] PTL Candidacy

2013-09-20 Thread Justin Hammond
I agree. +1. Mark has been nothing but helpful to me and I have enjoyed all the 
chances I have had to work with him. Not sure if I can vote though.

From: Dan Wendlandt d...@nicira.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date: Fri, 20 Sep 2013 18:03:44 -0700
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] PTL Candidacy

+1

Mark has done a phenomenal job as Neutron PTL


On Fri, Sep 20, 2013 at 1:44 PM, Mark McClain 
mark.mccl...@dreamhost.com wrote:

Hi-

I writing to announce my candidacy for the OpenStack Networking (Neutron) PTL.

I am the current Neutron PTL.  Our team continued to grow during the Havana 
cycle and both existing and new contributors worked to deliver double the 
number of blueprints than the previous release.  Our vibrant ecosystem makes me 
excited for the future of Neutron and I would love to continue as the PTL.

Qualifications
---

I am a Neutron core developer with 13 years of commercial Python development 
experience.  During my career, I have developed and deployed network 
applications based on the same underlying libraries used throughout Neutron.  I 
started contributing to Neutron project during the Essex development cycle.  In 
Folsom, I was promoted to core and was the primary developer of the DHCP 
implementation and Neutron's network namespace library.  During Grizzly, I 
worked on the metadata service, database migrations, and LBaaS reference 
implementation.

Havana Accomplishments


During the Havana cycle, I worked as a developer, core team member, and a 
technical lead.

- Planned and implemented the Quantum to Neutron name change.
- Most active reviewer on the Neutron team 
(http://russellbryant.net/openstack-stats/neutron-reviewers-180.txt)
- Organized the Networking track at the Havana Design Summit.
- Led bug triaging and sub-team assignment.
- Interfaced with vendors new to Neutron and helped in the integration of their 
plugins.
- Assisted members of the community to further their understanding of Neutron 
and improve Python development best practices.
- Promoted Neutron by delivering presentations at conferences and regional meet 
ups worldwide.


Icehouse
-

During the Icehouse development cycle, I'd like to see the team focus on:

- Continuing to grow the community of contributors and code reviewers.
- Improving documentation for both deployers and developers.
- Build upon the services added in Havana to extend and improve load balancing, 
firewalling, and VPN.
- Integrating plugins from vendors new to the community including FWaaS, LBaaS, 
ML2, VPNaaS plugins/drivers.
- More efficient Neutron system testing and gating including full Tempest 
testing.
- Further work to ease deploying at scale.
- Refactoring the API layer to leverage a common WSGI framework as other 
OpenStack projects.
- Improving database resource modeling and extension management.
- Unified network service management framework.
- Continued support of the Horizon team to assist with Neutron integration.
- Defined migration path from nova-network to Quantum.

I'd love the opportunity to continue as the PTL and work with the Neutron team 
to fill in the gaps during the design summit in Hong Kong.

Thanks,
mark




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___ OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Should file injection work for boot from volume images?

2013-09-20 Thread Pádraig Brady
On 09/20/2013 10:47 PM, Michael Still wrote:
 Before https://review.openstack.org/#/c/46867/ if file injection of a
 mandatory file fails, nova just silently ignores the failure, which is
 clearly wrong. 

For reference, the original code you're adjusting is
https://review.openstack.org/#/c/18900
BTW, I'm not sure of your adjustments but that's beside the point
and best left for discussion at the above review.

 However, that review now can't land because its
 revealed another failure in the file injection code via tempest, which
 is...
 
 Should file injection work for instances which are boot from volume?

For consistency probably yes.

 Now that we actually notice injection failures we're now failing to
 boot such instances as file injection for them doesn't work.
 
 I'm undecided though -- should file injection work for boot from
 volume at all? Or should we just skip file injection for instances
 like this? I'd prefer to see us just support config drive and metadata
 server for these instances, but perhaps I am missing something really
 important.

Now I wouldn't put too much effort into new file injection mechanisms,
but in this case it might be easy enough to support injection to volumes.
In fact there was already an attempt made at:
https://review.openstack.org/#/c/33221/

thanks,
Pádraig.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev