Re: [openstack-dev] Code review study

2013-08-16 Thread Maru Newby

On Aug 15, 2013, at 12:50 PM, Joe Gordon joe.gord...@gmail.com wrote:

 On Thu, Aug 15, 2013 at 12:22 PM, Sam Harwell sam.harw...@rackspace.com 
 wrote:
 I like to take a different approach. If my commit message is going to take 
 more than a couple lines for people to understand the decisions I made, I go 
 and make an issue in the issue tracker before committing locally and then 
 reference that issue in the commit message. This helps in a few ways:
 
  
 
 1.   If I find a technical or grammatical error in the commit message, it 
 can be corrected.
 
 2.   Developers can provide feedback on the subject matter independently 
 of the implementation, as well as feedback on the implementation itself.
 
 3.   I like the ability to include formatting and hyperlinks in my 
 documentation of the commit.
 
  
 
 
 This pattern has one slight issue, which is:
  
   • Do not assume the reviewer has access to external web services/site.
 In 6 months time when someone is on a train/plane/coach/beach/pub 
 troubleshooting a problem & browsing GIT history, there is no guarantee they 
 will have access to the online bug tracker, or online blueprint documents. 
 The great step forward with distributed SCM is that you no longer need to be 
 online to have access to all information about the code repository. The 
 commit message should be totally self-contained, to maintain that benefit.

I'm not sure I agree with this.  It can't be true in all cases, so it can 
hardly be considered a rule.  A guideline, maybe - something to strive for.  
But not all artifacts of the development process are amenable to being stuffed 
into code or the commits associated with them.  A dvcs is great and all, but 
unless one is working in a silo, online resources are all but mandatory.


m.

 
 
 https://wiki.openstack.org/wiki/GitCommitMessages#Information_in_commit_messages
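
The wiki page above spells out the convention; for reference, a hedged example
of a self-contained commit message in that spirit (the scenario and the bug
number are placeholders, shown only to illustrate summary-plus-reasoning):

    Require product_id when creating an image member

    Without a product_id the membership row cannot be billed correctly, and
    the failure only shows up much later in the billing pipeline. Validate
    the field at the API layer and return a 400 immediately, so the reason
    for the rejection is recorded here rather than only in the bug tracker.

    Fixes: bug #NNNNNNN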
 
 
 
  
 
 Sam
 
  
 
 From: Christopher Yeoh [mailto:cbky...@gmail.com] 
 Sent: Thursday, August 15, 2013 7:12 AM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] Code review study
 
  
 
  
 
 On Thu, Aug 15, 2013 at 11:42 AM, Robert Collins robe...@robertcollins.net 
 wrote:
 
 This may interest data-driven types here.
 
 https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
 
 Note specifically the citation of 200-400 lines as the knee of the review 
 effectiveness curve: that's lower than I thought - I thought 200 was clearly 
 fine - but no.
 
  
 
 Very interesting article. One other point which I think is pretty relevant is 
 point 4 about getting authors to annotate the code better (and for those who 
 haven't read it, they don't mean comments in the code but separately) because 
 it results in the authors picking up more bugs before they even submit the 
 code.
 
 So I wonder if it's worth asking people to write more detailed commit logs 
 which include some reasoning about why some of the more complex changes were 
 done in a certain way and not just what is implemented or fixed. As it is 
 many of the commit messages are often very succinct so I think it would help 
 on the review efficiency side too.
 
  
 
 Chris
 
 


Re: [openstack-dev] [nova] v3 api remove security_groups extension (was Re: security_groups extension in nova api v3)

2013-08-16 Thread Alex Xu

On 2013-08-16 14:34, Christopher Yeoh wrote:


On Fri, Aug 16, 2013 at 10:28 AM, Melanie Witt melw...@yahoo-inc.com wrote:


On Aug 15, 2013, at 1:13 PM, Joe Gordon wrote:

 +1 from me as long as this wouldn't change anything for the EC2
API's security groups support, which I assume it won't.

Correct, it's unrelated to the ec2 api.

We discussed briefly in the nova meeting today and there was
consensus that removing the standalone associate/disassociate
actions should happen.

Now the question is whether to keep the server create piece and
not remove the extension entirely. The concern is about a delay in
the newly provisioned instance being associated with the desired
security groups. With the extension, the instance gets the desired
security groups before the instance is active (I think). Without
the extension, the client would receive the active instance and
then call neutron to associate it with the desired security groups.

Would such a delay in associating with security groups be a problem?


I think we should keep the capability to set the security group on 
instance creation, so those who care about this sort of race condition 
can avoid it if they want to.




I am working on the v3 network API. I plan to only support creating a new
instance with a port id, and to stop supporting network id and fixed ip.
That means the user needs to create the port in Neutron first, then pass
the port id into the request that creates the instance. If we think this
is ok, the user can associate the desired security groups when creating
the port, and we can remove the security_groups extension entirely.
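
For illustration, a rough sketch of that port-first workflow using the
python-neutronclient and python-novaclient calls of the time (credentials,
endpoints and IDs are placeholders, and the exact client signatures should be
treated as assumptions):

    # Sketch only: create the port (with its security groups) in Neutron
    # first, then boot the instance against that pre-created port.
    from neutronclient.v2_0 import client as neutron_client
    from novaclient.v1_1 import client as nova_client

    NETWORK_ID = 'NET-UUID'              # placeholders for illustration
    SECURITY_GROUP_ID = 'SECGROUP-UUID'
    IMAGE_ID = 'IMAGE-UUID'
    FLAVOR_ID = '1'

    neutron = neutron_client.Client(username='demo', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://keystone:5000/v2.0')
    nova = nova_client.Client('demo', 'secret', 'demo',
                              'http://keystone:5000/v2.0')

    port = neutron.create_port({'port': {
        'network_id': NETWORK_ID,
        'security_groups': [SECURITY_GROUP_ID],   # associated before boot
    }})['port']

    # Under the proposal, the server create request only ever sees a port id.
    server = nova.servers.create(name='vm1', image=IMAGE_ID, flavor=FLAVOR_ID,
                                 nics=[{'port-id': port['id']}])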



+1 to removing the associate/disassociate actions though

Chris





Re: [openstack-dev] Code review study

2013-08-16 Thread Robert Collins
On 16 August 2013 20:15, Maru Newby ma...@redhat.com wrote:

 This pattern has one slight issue, which is:

   • Do not assume the reviewer has access to external web services/site.
 In 6 months time when someone is on a train/plane/coach/beach/pub 
 troubleshooting a problem & browsing GIT history, there is no guarantee they 
 will have access to the online bug tracker, or online blueprint documents. 
 The great step forward with distributed SCM is that you no longer need to be 
 online to have access to all information about the code repository. The 
 commit message should be totally self-contained, to maintain that benefit.

 I'm not sure I agree with this.  It can't be true in all cases, so it can 
 hardly be considered a rule.  A guideline, maybe - something to strive for.  
 But not all artifacts of the development process are amenable to being 
 stuffed into code or the commits associated with them.  A dvcs is great and 
 all, but unless one is working in a silo, online resources are all but 
 mandatory.

In a very strict sense you're right, but consider that for anyone
doing fast iterative development the need to go hit a website is a
huge slowdown : at least in most of the world :).

So - while I agree that it's something to strive for, I think we
should invert it and say 'not having everything in the repo is
something we should permit occasional exceptions to'.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [nova] live-snapshot/cloning of virtual machines

2013-08-16 Thread Daniel P. Berrange
On Wed, Aug 14, 2013 at 04:53:01PM -0700, Vishvananda Ishaya wrote:
 Hi Everyone,
 
 I have been trying for some time to get the code for the live-snapshot 
 blueprint[1]
 in. Going through the review process for the rpc and interface code[2] was 
 easy. I
 suspect the api-extension code[3] will also be relatively trivial to get in. 
 The
 main concern is with the libvirt driver implementation[4]. I'd like to 
 discuss the
 concerns and see if we can make some progress.
 
 Short Summary (tl;dr)
 =
 
 I propose we merge live-cloning as an experimental feature for Havana and 
 have the
 api extension disabled by default.
 
 Overview
 
 
 First of all, let me express the value of live snapshotting. The slowest part
 of the vm provisioning process is generally booting of the OS. The advantage
 of live-snapshotting is that it allows the possibility of bringing up
 application servers while skipping the overhead of vm (and application)
 startup.

For Linux at least I think bootup time is a problem that is being solved by the
distros. It is possible to boot up many modern Linux distros in a couple of 
seconds
even in physical hardware - VMs can be even faster since they don't have such 
stupid
BIOS to worry about & have a restricted set of possible hardware. This is on a 
par
with, or better than, the overheads imposed by Nova itself in the boot up 
process.

Windows may be a different story, but I've not used it in years so don't know 
what
its boot performance is like.

 I recognize that this capability comes with some security concerns, so I 
 don't expect
 this feature to go in and be ready to for use in production right away. 
 Similarly,
 containers have a lot of the same benefit, but have had their own security 
 issues
 which are gradually being resolved. My hope is that getting this feature in 
 would
 allow people to start experimenting with live-booting so that we could 
 uncover some
 of these security issues.
 
 There are two specific concerns that have been raised regarding my patch. The 
 first
 concern is related to my use of libvirt. The second concern is related to the 
 security
 issues above. Let me address them separately.
 
 1. Libvirt Issues
 =
 
 The only feature I require from the hypervisor is to load memory/processor 
 state for
 a vm from a file. Qemu supports this directly. The only way that libvirt 
 exposes this
 functionality is via its restore command which is specifically for restoring 
 the
 previous state of an existing vm. Cloning, or restoring the memory state of 
 a
 cloned vm is considered unsafe (which I will address in the second point, 
 below).
 
 The result of the limited api is that I must include some hacks to make the 
 restore
 command actually allow me to restore the state of the new vm. I recognize 
 that this
 is using an undocumented libvirt api and isn't the ideal solution, but it 
 seemed
 better than avoiding libvirt and talking directly to qemu.
 
 This is obviously not ideal. It is my hope that this 0.1 version of the 
 feature will
 allow us to iteratively improve the live-snapshot/clone process and get the 
 security
 to a point where the libvirt maintainers would be willing to accept a patch 
 to directly
 expose an api to load memory from a file.

To characterize this as a libvirt issue is somewhat misleading. The reason why
libvirt does not explicitly allow this is that, from discussions with the
upstream QEMU/KVM developers, the recommendation/advice is that this is not a
safe operation and should not be exposed to application developers.

The expectation is that the functionality in QEMU is only targeted at taking
point-in-time snapshots & allowing rollback of a VM to those snapshots, not
creating clones of active VMs.
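
For reference, the libvirt calls in question look roughly like this in
libvirt-python (domain name and state-file path are placeholders); note that
this save/restore pair is the supported same-domain operation, not a
sanctioned way to clone a running VM:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')    # placeholder domain name

    # Dump the guest's memory/CPU state to a file (the domain stops running).
    dom.save('/var/lib/nova/instances/demo.state')

    # Restore that state. Libvirt only exposes this for the original domain,
    # which is why the patch under discussion has to work around it to feed
    # the saved state into a cloned domain instead.
    conn.restore('/var/lib/nova/instances/demo.state')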

 2. Security Concerns
 
 
 There are a number of security issues with loading state from another vm. 
 Here is a
 short list of things that need to be done just to make a cloned vm usable:
 
 a) mac address needs to be recreated
 b) entropy pool needs to be reset
 c) host name must be reset
 d) host keys must be regenerated
 
 There are others, and trying to clone a running application as well may 
 expose other
 sensitive data, especially if users are snapshotting vms and making them 
 public.
 
 The only issue that I address on the driver side is the mac addresses. This 
 is the
 minimum that needs to be done just to be able to access the vm over the 
 network. This
 is implemented by unplugging all network devices before the snapshot and 
 plugging new
 network devices in on clone. This isn't the most friendly thing to guest 
 applications,
 but it seems like the safest option for the first version of this feature.

This is not really as safe as you portray. When restoring from the snapshot
the VM will initially be running with a virtual NIC whose MAC address differs
from the one associated with the in-memory OS kernel state. Even if you
hotunplug the device 

Re: [openstack-dev] [libvirt] [nova] live-snapshot/cloning of virtual machines

2013-08-16 Thread Richard W.M. Jones
On Fri, Aug 16, 2013 at 11:05:19AM +0100, Daniel P. Berrange wrote:
 On Wed, Aug 14, 2013 at 04:53:01PM -0700, Vishvananda Ishaya wrote:
  Hi Everyone,
  
  I have been trying for some time to get the code for the live-snapshot 
  blueprint[1]
  in. Going through the review process for the rpc and interface code[2] was 
  easy. I
  suspect the api-extension code[3] will also be relatively trivial to get 
  in. The
  main concern is with the libvirt driver implementation[4]. I'd like to 
  discuss the
  concerns and see if we can make some progress.
  
  Short Summary (tl;dr)
  =
  
  I propose we merge live-cloning as an experimental feature for Havana and 
  have the
  api extension disabled by default.
  
  Overview
  
 
  First of all, let me express the value of live snapshoting. The
  slowest part of the vm provisioning process is generally booting
  of the OS.

Like Dan I'm dubious about this whole plan.  But this ^^ statement in
particular.  I would like to see hard data to back this up.

You should be able to boot an OS pretty quickly, and furthermore it's
(a) much safer for all the reasons Dan outlines, and (b) improvements
that you make to boot times help everyone.

[...]
  2. Security Concerns
  
  
  There are a number of security issues with loading state from another vm. 
  Here is a
  short list of things that need to be done just to make a cloned vm usable:
  
  a) mac address needs to be recreated
  b) entropy pool needs to be reset
  c) host name must be reset
  d) host keys must be regenerated
  
  There are others, and trying to clone a running application as well may 
  expose other
  sensitive data, especially if users are snapshotting vms and making them 
  public.

Are we talking about cloning VMs that you already trust, or cloning
random VMs and allowing random other users to use them?  These would
lead to very different solutions.  In the first case, you only care
about correctness, not security.  In the second case, you care about
security as well as correctness.

I highly doubt the second case is possible because scrubbing the disk
is going to take far too long for any supposed time-saving to matter.

As Dan says, even the first case is dubious because it won't be correct.

 The libguestfs project provides tools to perform offline cloning of
 VM disk images.  Its virt-sysprep knows how to delete a lot (but by
 no means all possible) sensitive file data for common Linux &
 Windows OSes. It still has to be combined with use of the
 virt-sparsify tool though, to ensure the deleted data is actually
 purged from the VM disk image as well as the filesystem, by
 releasing all unused VM disk sectors back to the host storage (and
 not all storage supports that).

Links to the tools that Dan mentions:

http://libguestfs.org/virt-sysprep.1.html
http://libguestfs.org/virt-sparsify.1.html

Note these tools can only be used on offline machines.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top



Re: [openstack-dev] [oslo] ack(), reject() and requeue() support in rpc ...

2013-08-16 Thread Flavio Percoco

On 14/08/13 17:08 -0300, Sandy Walsh wrote:

At Eric's request in https://review.openstack.org/#/c/41979/ I'm
bringing this to the ML for feedback.

Currently, oslo-common rpc behaviour is to always ack() a message no
matter what.


Hey,

I don't think we should keep adding new features to Oslo's rpc, I'd
rather think about how this fits into oslo.messaging.


For billing purposes we can't afford to drop important notifications
(like *.exists). We only want to ack() if no errors are raised by the
consumer, otherwise we want to requeue the message.

Now, once we introduce this functionality, we will also need to support
.reject() semantics.

The use-case we've seen for this is:
1. grab notification
2. write to disk
3. do some processing on that notification, which raises an exception.
4. the event is requeued and steps 2-3 repeat very quickly. Lots of
duplicate records. In our case we've blown out our database.


Although I see some benefits from abstracting this, I'm not sure
whether we *really* need this in Oslo messaging. My main concern is
that acknowledgement is not supported by all back-ends, and this can
turn out to be a design flaw for apps depending on methods like ack()
/ reject().

Have you guys thought about re-sending the failed message on a
different topic / queue?

This is what Celery does to retry tasks on failures, for example.
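
To make the semantics concrete, here is a minimal consumer sketch written
directly against kombu rather than the oslo rpc / oslo.messaging API (the
queue name, handler and error class are placeholders):

    from kombu import Connection, Exchange, Queue

    notifications = Queue('notifications.info',
                          Exchange('nova', type='topic', durable=False),
                          routing_key='notifications.info')


    class TransientError(Exception):
        """Placeholder for failures that are safe to retry."""


    def process(body):
        """Placeholder for the real handler (write to disk, bill *.exists, ...)."""
        print(body)


    def on_message(body, message):
        try:
            process(body)
        except TransientError:
            message.requeue()   # broker redelivers; beware the tight retry loop
        except Exception:
            message.reject()    # unrecoverable: drop (or dead-letter) the message
        else:
            message.ack()       # only ack once processing fully succeeded


    with Connection('amqp://guest:guest@localhost//') as conn:
        with conn.Consumer(notifications, callbacks=[on_message]):
            while True:
                conn.drain_events()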


FF

--
@flaper87
Flavio Percoco



[openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Victor Sergeyev
Hello All.

Glance cores (Mark Washenberger, Flavio Percoco, Iccha Sethi) have some
questions about the Oslo DB code: why it is so important to use it instead
of a custom implementation, and so on. As there were a lot of questions it
was really hard to answer all of them in IRC, so we decided that the mailing
list is a better place for this.

List of main questions:

1. What does the Oslo DB code include?
2. Why is it safe to replace the custom implementation with the Oslo DB code?
3. Why is the Oslo DB code better than a custom implementation?
4. Why won't the Oslo DB code slow down project development?
5. What are we actually going to do in Glance?
6. What is the current status?

Answers:

1. What does the Oslo DB code include?

Currently the Oslo code improves different aspects of working with the DB:
-- Working with SQLAlchemy models, engine and session
-- Lots of tools for working with SQLAlchemy
-- Handling of unique keys
-- A base test case for working with the database
-- Testing migrations against different backends
-- Syncing DB models with the actual schemas in the DB (a test that they are
equivalent)


2. Why is it safe to replace the custom implementation with the Oslo DB code?

Oslo, as a base OpenStack module, takes care of code quality. Common code is
usually more readable (most flake8 checks are enabled in Oslo) and has better
test coverage. It has also been exercised in different use cases (including
production) in other projects, so many bugs in the Oslo code have already
been fixed. So we can be confident that we are using high-quality code.


3. Why is the Oslo DB code better than a custom implementation?

There are several arguments for the Oslo database code:

-- Common code collects useful features from different projects
Database utilities, a common test class, a database migration module and
other features are already in the Oslo DB code. A patch to automatically
retry a db.api query if the DB connection is lost is on review at the
moment. If we use the Oslo DB code we don't need to care about how to port
these (and future) features to Glance - they will come to all projects
automatically once they land in Oslo.

-- Unified work with the database across projects
As already said, it helps developers work with the database in the same way
in different projects. That is useful for a developer working with the DB in
a few projects - they use the same base pieces and get no surprises from
them.

-- It will reduce the time for running tests
Maybe a minor point, but it can also be important. We can remove some tests
for the base DB classes (such as sessions, engines, etc.) and replace work
with the DB with mock calls.


4. Why won't the Oslo DB code slow down project development?

The Oslo database code is already used in projects such as Nova, Neutron,
Ceilometer and Ironic. AFAIK, those projects' development speed has not
slowed down (please correct me if I'm wrong). The database layer has already
been improved and tested in Oslo, so we can concentrate on project features.
All features that have already landed in the Oslo code will be available in
Glance, but if you want to add some project-specific feature *right now*,
you can still do it in the project code.


5. What are we actually going to do in Glance?

-- Improve test coverage of the DB API layer
We are going to increase test coverage of the glance/db/sqlalchemy/api
module and fix bugs, if any are found.

-- Run the DB API tests on all backends
-- Use the Oslo migrations base test case to test migrations against
different backends
SQL backends differ in many ways, for example in how they handle casting.
SQLite lets us store anything in a column (whatever its declared type),
MySQL will try to convert the value to the required type, and PostgreSQL
will raise an IntegrityError; the small example below illustrates the
SQLite side. With this covered, we can be sure that all Glance DB
migrations will run correctly on all backends.
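
A small illustration of that casting difference, using the SQLAlchemy 0.x/1.x
style engine.execute() against in-memory SQLite (MySQL would try to coerce the
value; PostgreSQL would reject the insert):

    import sqlalchemy as sa

    engine = sa.create_engine('sqlite://')
    meta = sa.MetaData()
    images = sa.Table('images', meta, sa.Column('size', sa.Integer))
    meta.create_all(engine)

    # SQLite stores the string untouched in an Integer column; the stricter
    # backends would coerce it or refuse the insert outright.
    engine.execute(images.insert().values(size='not-a-number'))
    print(engine.execute(sa.select([images.c.size])).scalar())  # 'not-a-number'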

-- Use the Oslo code for SQLAlchemy models, engine and session
-- Use the Oslo SQLAlchemy utils
Using common code for working with the database has already been discussed
and approved for all projects, so we are going to use the common code
instead of the Glance implementation.

-- Fix how sessions and transactions are used
Our work items in Glance:
- don't pass session instances to public DB methods
- use explicit transactions only when necessary
- fix incorrect usage of sessions throughout the DB-related code

-- Optimize methods
Once we have tests for all functions in the glance/db/sqlalchemy/api module
it will be safe to refactor the API methods. That will make these functions
cleaner, more readable and faster.

The main ideas are:
- identify and remove unused methods
- consolidate duplicate methods when possible
- ensure SQLAlchemy objects are not leaking out of the API
- ensure related methods are grouped together and named consistently

-- Add missing unique constraints
We should add the missing unique constraints, based on the database queries
in the glance.db.sqlalchemy.api module. This will reduce data duplication
and is one more step towards normalizing the Glance database.

-- Sync models definitions with DB 

Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Davanum Srinivas
Boris,

+1 to getting started on oslo.db

-- dims


On Fri, Aug 16, 2013 at 9:52 AM, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi all,

 We (OpenStack contributors) have done a huge amount of great work around DB
 code in Grizzly and Havana to unify it, put all common parts into
 oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
 unique keys, and to use this code in different projects instead of custom
 implementations. (well done!)

 oslo-incubator db code is already used by: Nova, Neutron, Cinder, Ironic,
 Ceilometer.

 In this moment we finished work around Glance:
 https://review.openstack.org/#/c/36207/

 And working around Heat and Keystone.

 So almost all projects use this code (or planing to use it)

 Probably it is the right time to start work around moving oslo.db code to
 separated lib.

 We (Roman, Viktor and me) will be glad to help to make oslo.db lib:

 E.g. Here are two drafts:
 1) oslo.db lib code: https://github.com/malor/oslo.db
 2) And here is this lib in action: https://review.openstack.org/#/c/42159/


 Thoughts?


 Best regards,
 Boris Pavlovic
 --
 Mirantis Inc.





-- 
Davanum Srinivas :: http://davanum.wordpress.com


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread David Ripton

On 08/16/2013 09:52 AM, Boris Pavlovic wrote:


We (OpenStack contributors) done a really huge and great work around DB
code in Grizzly and Havana to unify it, put all common parts into
oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
unique keys, and to use  this code in different projects instead of
custom implementations. (well done!)

oslo-incubator db code is already used by: Nova, Neutron, Cinder,
Ironic, Ceilometer.

In this moment we finished work around Glance:
https://review.openstack.org/#/c/36207/

And working around Heat and Keystone.

So almost all projects use this code (or planing to use it)

Probably it is the right time to start work around moving oslo.db code
to separated lib.

We (Roman, Viktor and me) will be glad to help to make oslo.db lib:

E.g. Here are two drafts:
1) oslo.db lib code: https://github.com/malor/oslo.db
2) And here is this lib in action: https://review.openstack.org/#/c/42159/


Thoughts?


+1.  Having to manually paste code from oslo-incubator into other 
projects is error-prone.  Of course it's important to get the library 
versioning right and do releases, but that's a small cost imposed on 
just the oslo-db folks to make using this code easier for everyone else.


--
David Ripton   Red Hat   drip...@redhat.com



Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Michael Basnight
On Aug 16, 2013, at 6:52 AM, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi all, 
 
 We (OpenStack contributors) done a really huge and great work around DB code 
 in Grizzly and Havana to unify it, put all common parts into oslo-incubator, 
 fix bugs, improve handling of sqla exceptions, provide unique keys, and to 
 use  this code in different projects instead of custom implementations. (well 
 done!)
 
 oslo-incubator db code is already used by: Nova, Neutron, Cinder, Ironic, 
 Ceilometer. 
 
 In this moment we finished work around Glance: 
 https://review.openstack.org/#/c/36207/
 
 And working around Heat and Keystone.
 
 So almost all projects use this code (or planing to use it)
 
 Probably it is the right time to start work around moving oslo.db code to 
 separated lib.
 
 We (Roman, Viktor and me) will be glad to help to make oslo.db lib:
 
 E.g. Here are two drafts:
 1) oslo.db lib code: https://github.com/malor/oslo.db
 2) And here is this lib in action: https://review.openstack.org/#/c/42159/
 
 
 Thoughts? 
 

Excellent. I'll file a blueprint for Trove today! We need to upgrade to this.


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Shake Chen
+1

What about the keystone status in oslo?


On Fri, Aug 16, 2013 at 10:40 PM, David Ripton drip...@redhat.com wrote:

 On 08/16/2013 09:52 AM, Boris Pavlovic wrote:

  We (OpenStack contributors) done a really huge and great work around DB
 code in Grizzly and Havana to unify it, put all common parts into
 oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
 unique keys, and to use  this code in different projects instead of
 custom implementations. (well done!)

 oslo-incubator db code is already used by: Nova, Neutron, Cinder,
 Ironic, Ceilometer.

 In this moment we finished work around Glance:
 https://review.openstack.org/#/c/36207/

 And working around Heat and Keystone.

 So almost all projects use this code (or planing to use it)

 Probably it is the right time to start work around moving oslo.db code
 to separated lib.

 We (Roman, Viktor and me) will be glad to help to make oslo.db lib:

 E.g. Here are two drafts:
 1) oslo.db lib code: https://github.com/malor/oslo.db
 2) And here is this lib in action: https://review.openstack.org/#/c/42159/


 Thoughts?


 +1.  Having to manually paste code from oslo-incubator into other projects
 is error-prone.  Of course it's important to get the library versioning
 right and do releases, but that's a small cost imposed on just the oslo-db
 folks to make using this code easier for everyone else.

 --
 David Ripton   Red Hat   drip...@redhat.com






-- 
Shake Chen


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Lance D Bragstad

I believe there are reviews in Keystone for bringing this in:

https://review.openstack.org/#/c/38029/
https://review.openstack.org/#/c/38030/
https://blueprints.launchpad.net/keystone/+spec/use-common-oslo-db-code


Best Regards,

Lance Bragstad
Software Engineer - OpenStack
Cloud Solutions and OpenStack Development
T/L 553-5409, External 507-253-5409
ldbra...@us.ibm.com, Bld 015-2/C118



From:   Shake Chen shake.c...@gmail.com
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
Date:   08/16/2013 09:54 AM
Subject:Re: [openstack-dev] Proposal oslo.db lib



+1

What about the keystone status in oslo?


On Fri, Aug 16, 2013 at 10:40 PM, David Ripton drip...@redhat.com wrote:
  On 08/16/2013 09:52 AM, Boris Pavlovic wrote:

   We (OpenStack contributors) done a really huge and great work around DB
   code in Grizzly and Havana to unify it, put all common parts into
   oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
   unique keys, and to use  this code in different projects instead of
   custom implementations. (well done!)

   oslo-incubator db code is already used by: Nova, Neutron, Cinder,
   Ironic, Ceilometer.

   In this moment we finished work around Glance:
   https://review.openstack.org/#/c/36207/

   And working around Heat and Keystone.

   So almost all projects use this code (or planing to use it)

   Probably it is the right time to start work around moving oslo.db code
   to separated lib.

   We (Roman, Viktor and me) will be glad to help to make oslo.db lib:

   E.g. Here are two drafts:
   1) oslo.db lib code: https://github.com/malor/oslo.db
   2) And here is this lib in action:
   https://review.openstack.org/#/c/42159/


   Thoughts?

  +1.  Having to manually paste code from oslo-incubator into other
  projects is error-prone.  Of course it's important to get the library
  versioning right and do releases, but that's a small cost imposed on just
  the oslo-db folks to make using this code easier for everyone else.

  --
  David Ripton   Red Hat   drip...@redhat.com





--
Shake Chen


[openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-16 Thread Doug Hellmann
I'd like to propose Alex Gaynor for core status on the requirements project.

Alex is a core Python and PyPy developer, has strong ties throughout the
wider Python community, and has been watching and reviewing requirements
changes for a little while now. I think it would be extremely helpful to
have him on the team.

Doug


Re: [openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-16 Thread Mark McClain
+1

mark

On Aug 16, 2013, at 11:04 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote:

 I'd like to propose Alex Gaynor for core status on the requirements project.
 
 Alex is a core Python and PyPy developer, has strong ties throughout the 
 wider Python community, and has been watching and reviewing requirements 
 changes for a little while now. I think it would be extremely helpful to have 
 him on the team.
 
 Doug
 


[openstack-dev] [savanna] Savanna incubation intention

2013-08-16 Thread Sergey Lukjanov
Hi folks,

I'm glad to announce Savanna's intention to apply for incubation during the
Icehouse release. In this email I would like to provide an update on our
current status and near-term plans, as well as start the conversation to
solicit feedback on Savanna from the community.

Let's start with the current state of the Savanna project. All our code and
bugs/specs are hosted at OpenStack Gerrit and Launchpad respectively. Unit
tests and all pep8/hacking checks are run on OpenStack Jenkins, and we have
integration tests running on our own Jenkins server for each patch set. We
have great Sphinx-based docs published at readthedocs - http://savanna.rtfd.org
- consisting of dev, admin and user guides and descriptions of the REST API,
the plugin SPI, etc. Savanna is integrated with Nova, Keystone, Glance, Cinder
and Swift, and we are already using diskimage-builder to create prebuilt
images for Hadoop clusters.

We have an amazing team working on Savanna - about twenty engineers from
Mirantis, Red Hat and Hortonworks (according to git author stats). We have
been holding weekly IRC meetings for the last 6 months, discussing
architectural questions there and on the openstack mailing lists as well. As
for code reviews, we've established the same approach as other OpenStack
projects: change requests cannot be merged without review from the main
contributors for the corresponding component, and this ensures a high
standard for all code that lands in master.

Currently we are actively working in two main directions - Elastic Data
Processing (https://wiki.openstack.org/wiki/Savanna/EDP) and a scalable
architecture. Our next major 0.3 release is planned for the October timeframe
and will be based on the OpenStack Havana codebase. It will contain basic EDP
functionality, the Savanna distributed design, Neutron support and, of course,
an updated OpenStack Dashboard plugin with all the new features.

Let's take a look at our future plans. We would like to integrate with other
OpenStack components, such as Heat and Ceilometer, and to adjust our release
cycle in Icehouse. Code hardening, a useful CLI implementation and enhanced
EDP functionality are also things to be done and to pay attention to.

So you are welcome to comment and leave your feedback on how to make Savanna
better and help it become an integrated project.

Thank you!

P.S. Some links:
http://wiki.openstack.org/wiki/Savanna
http://wiki.openstack.org/wiki/Savanna/Roadmap
https://launchpad.net/savanna
https://savanna.readthedocs.org
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda
review stats: 
http://jenkins.savanna.mirantis.com/view/Infra/job/savanna-reviewstats/Savanna_Review_Stats/index.html

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.




Re: [openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-16 Thread Russell Bryant
On 08/16/2013 11:04 AM, Doug Hellmann wrote:
 I'd like to propose Alex Gaynor for core status on the requirements project.
 
 Alex is a core Python and PyPy developer, has strong ties throughout the
 wider Python community, and has been watching and reviewing requirements
 changes for a little while now. I think it would be extremely helpful to
 have him on the team.

Sounds like a great addition to me.  +1 from me, fwiw

-- 
Russell Bryant



Re: [openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-16 Thread Monty Taylor
+1

On 08/16/2013 11:04 AM, Doug Hellmann wrote:
 I'd like to propose Alex Gaynor for core status on the requirements project.
 
 Alex is a core Python and PyPy developer, has strong ties throughout the
 wider Python community, and has been watching and reviewing requirements
 changes for a little while now. I think it would be extremely helpful to
 have him on the team.
 
 Doug
 
 
 


[openstack-dev] [keystone] Help consuming trusts

2013-08-16 Thread Steven Hardy
Hi,

I'm looking for help, ideally some code or curl examples, figuring out why
I can't consume trusts in the manner specified in the documentation:

https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md

I've been working on getting Heat integrated with the trusts functionality,
and the first step was to add keystoneclient support:

https://review.openstack.org/#/c/39899/

All works fine in terms of the actual operations on the OS-TRUST path, I
can create, list, get, delete trusts with no issues.

However I'm struggling to actually *use* the trust, i.e. obtain a
trust-scoped token using the trust ID. I always seem to get the opaque
"Authorization failed. The request you have made requires authentication."
message, despite the authentication requests looking as per the API docs.

Are there any curl examples or test code I can refer to?

Thanks,

Steve
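
For what it's worth, a sketch of the request shape the v3 OS-TRUST extension
documents for obtaining a trust-scoped token, written with python-requests
(endpoint, credentials and trust ID are placeholders; this only illustrates
the documented shape and is not a verified fix for the failure above):

    import json

    import requests

    KEYSTONE = 'http://localhost:5000/v3'   # placeholder endpoint
    TRUST_ID = 'TRUST-UUID'                 # placeholder trust id

    body = {
        'auth': {
            'identity': {
                'methods': ['password'],
                'password': {
                    'user': {
                        'name': 'trustee-user',      # the trustee authenticates
                        'domain': {'id': 'default'},
                        'password': 'secret',
                    },
                },
            },
            # Scoping to the trust is what yields a trust-scoped token.
            'scope': {'OS-TRUST:trust': {'id': TRUST_ID}},
        },
    }

    resp = requests.post(KEYSTONE + '/auth/tokens', data=json.dumps(body),
                         headers={'Content-Type': 'application/json'})
    print(resp.status_code, resp.headers.get('X-Subject-Token'))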



Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-08-16 Thread Christopher Armstrong
On Thu, Aug 15, 2013 at 6:39 PM, Randall Burt randall.b...@rackspace.com wrote:


 On Aug 15, 2013, at 6:20 PM, Angus Salkeld asalk...@redhat.com wrote:

  On 15/08/13 17:50 -0500, Christopher Armstrong wrote:

  2. There should be a new custom-built API for doing exactly what the
  autoscaling service needs on an InstanceGroup, named something
 unashamedly
  specific -- like instance-group-adjust.
 
  Pros: It'll do exactly what it needs to do for this use case; very
 little
  state management in autoscale API; it lets Heat do all the orchestration
  and only give very specific delegation to the external autoscale API.
 
  Cons: The API grows an additional method for a specific use case.
 
  I like this one above:
  adjust(new_size, victim_list=['i1','i7'])
 
  So if you are reducing the new_size we look in the victim_list to
  choose those first. This should cover Clint's use case as well.
 
  -Angus

 We could just support victim_list=[1, 7], since these groups are
 collections of identical
 resources. Simple indexing should be sufficient, I would think.

 Perhaps separating the stimulus from the actions to take would let us
 design/build toward different policy implementations. Initially, we could
 have a HeatScalingPolicy that works with the signals that a scaling group
 can handle. When/if AS becomes an API outside of Heat, we can implement a
 fairly simple NovaScalingPolicy that includes the args to pass to nova boot.



I don't agree with using indices. I'd rather use the actual resource IDs.
For one, indices can change out from under you. Also, figuring out the
index of the instance you want to kill is probably an additional step most
of the time you actually care about destroying specific instances.
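
To make the shape of that call concrete, a purely hypothetical sketch of
adjust(new_size, victims=[...]) keyed on resource IDs (none of these class or
method names are an existing Heat or autoscaling API):

    import uuid


    class FakeInstance(object):
        """Stand-in for a real server resource, just for the sketch."""
        def __init__(self):
            self.id = str(uuid.uuid4())

        def delete(self):
            print('deleting %s' % self.id)


    class ScalingGroup(object):
        def __init__(self, members=()):
            self.members = {m.id: m for m in members}

        def adjust(self, new_size, victims=None):
            # Shrink: remove caller-preferred resource IDs first, then others.
            victims = [v for v in (victims or []) if v in self.members]
            while len(self.members) > new_size:
                victim_id = victims.pop(0) if victims else next(iter(self.members))
                self.members.pop(victim_id).delete()
            # Grow: boot new members until the requested size is reached.
            while len(self.members) < new_size:
                instance = FakeInstance()
                self.members[instance.id] = instance


    group = ScalingGroup([FakeInstance() for _ in range(3)])
    ids = list(group.members)
    group.adjust(1, victims=[ids[0], ids[2]])   # these resource IDs go first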



  3. the autoscaling API should update the Size Property of the
  InstanceGroup resource in the stack that it is placed in. This would
  require the ability to PATCH a specific piece of a template (an
 operation
  isomorphic to update-stack).

 I think a PATCH semantic for updates would be generally useful in terms of
 quality of life for API users. Not having to pass the complete state and
 param values for trivial updates would be quite nice regardless of its
 implications to AS.


Agreed.



-- 
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Monty Taylor


On 08/16/2013 09:52 AM, Boris Pavlovic wrote:
 Hi all, 
 
 We (OpenStack contributors) done a really huge and great work around DB
 code in Grizzly and Havana to unify it, put all common parts into
 oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
 unique keys, and to use  this code in different projects instead of
 custom implementations. (well done!)
 
 oslo-incubator db code is already used by: Nova, Neutron, Cinder,
 Ironic, Ceilometer. 
 
 In this moment we finished work around Glance: 
 https://review.openstack.org/#/c/36207/
 
 And working around Heat and Keystone.
 
 So almost all projects use this code (or planing to use it)
 
 Probably it is the right time to start work around moving oslo.db code
 to separated lib.
 
 We (Roman, Viktor and me) will be glad to help to make oslo.db lib:
 
 E.g. Here are two drafts:
 1) oslo.db lib code: https://github.com/malor/oslo.db
 2) And here is this lib in action:
 https://review.openstack.org/#/c/42159/

+1

Great job Boris!



Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Monty Taylor


On 08/16/2013 09:31 AM, Victor Sergeyev wrote:
 Hello All.
 
 Glance cores (Mark Washenberger, Flavio Percoco, Iccha Sethi) have some
 questions about Oslo DB code, and why is it so important to use it
 instead of custom implementation and so on. As there were a lot of
 questions it was really hard to answer on all this questions in IRC. So
 we decided that mailing list is better place for such things.

There is another main point - which is at the last summit, we talked
about various legit database things that need to be done to support CD
and rolling deploys. The list is not small, and it's a task that's
important. Needing to implement it in all of the projects separately is
kind of an issue, whereas if the projects are all using the database the
same way, then the database team can engineer the same mechanisms for
doing rolling schema changes, and then operators can have a consistent
expectation when they're running a cloud.

 List of main questions:
 
 1. What includes oslo DB code?  
 2. Why is it safe to replace custom implementation by Oslo DB code? 
 3. Why oslo DB code is better than custom implementation?
 4. Why oslo DB code won’t slow up project development progress?
 5. What we are going actually to do in Glance?
 6. What is the current status?
 
 Answers:
 
 1. What includes oslo DB code?
 
 Currently Oslo code improves different aspects around DB:
 -- Work with SQLAlchemy models, engine and session
 -- Lot of tools for work with SQLAlchemy 
 -- Work with unique keys
 -- Base test case for work with database
 -- Test migrations against different backends
 -- Sync DB Models with actual schemas in DB (add test that they are
 equivalent)
 
 
 2. Why is it safe to replace custom implementation by Oslo DB code? 
 
 Oslo module, as base openstack module, takes care about code quality.
 Usually, common code more readable (most of flake8 checks enabled in
 Oslo) and have better test coverage.  Also it was tested in different
 use-cases (in production also) in an other projects so bugs in Oslo code
 were already fixed. So we can be sure, that we use high-quality code.
 
 
 3. Why oslo DB code is better than custom implementation?
 
 There are some arguments pro Oslo database code 
 
 -- common code collects useful features from different projects
 Different utils, for work with database, common test class, module for
 database migration, and  other features are already in Oslo db code.
 Patch on automatic retry db.api query if db connection lost on review at
 the moment. If we use Oslo db code we should not care, how to port these
 (and others - in the future) features to Glance - it will came to all
 projects automaticly when it will came to Oslo. 
 
 -- unified project work with database
 As it was already said,  It can help developers work with database in a
 same way in different projects. It’s useful if developer work with db in
 a few projects - he use same base things and got no surprises from them. 
 
 -- it’s will reduce time for running tests.
 Maybe it’s minor feature, but it’s also can be important. We can removed
 some tests for base `DB` classes (such as session, engines, etc)  and
 replaced for work with DB to mock calls.
 
 
 4. Why oslo DB code won’t slow up project development progress?
 
 Oslo code for work with database already in such projects as Nova,
 Neutron, Celiometer and Ironic. AFAIK, these projects development speed
 doesn’t decelerated (please fix me, If I’m wrong). Work with database
 level already improved and tested in Oslo project, so we can concentrate
 on work with project features. All features, that already came to oslo
 code will be available in Glance, but if you want to add some specific
 feature to project *just now* you will be able to do it in project code.
 
 
 5. What we are going actually to do in Glance?
 
 -- Improve test coverage of DB API layer
 We are going to increase test coverage of glance/db/sqlalchemy/api
 module and fix bugs, if found. 
 
 -- Run DB API tests on all backends
 -- Use Oslo migrations base test case for test migrations against
 different backends
 There are lot of different things in SQl backends. For example work with
 casting.
 In current SQLite we are able to store everything in column (with any
 type). Mysql will try to convert value to required type, and postgresql
 will raise IntegrityError. 
 If we will improve this feature, we will be sure, that all Glance DB
 migrations will run correctly on all backends.
 
 -- Use Oslo code for SA models, engine and session
 -- Use Oslo SA utils
 Using common code for work with database was already discussed and
 approved for all projects. So we are going to implement common code for
 work with database instead of Glance implementation.
 
 -- Fix work with session and transactions
 Our work items in Glance:
 - don't pass session instances to public DB methods
 - use explicit transactions only when necessary
 - fix incorrect usage of sessions throughout the DB-related code
 
 -- 

Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Julien Danjou
On Fri, Aug 16 2013, Boris Pavlovic wrote:

 Thoughts?

Way to go.

-- 
Julien Danjou
/* Free Software hacker * freelance consultant
   http://julien.danjou.info */




Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Eric Windisch
On Fri, Aug 16, 2013 at 9:31 AM, Victor Sergeyev vserge...@mirantis.com wrote:
 Hello All.

 Glance cores (Mark Washenberger, Flavio Percoco, Iccha Sethi) have some
 questions about Oslo DB code, and why is it so important to use it instead
 of custom implementation and so on. As there were a lot of questions it was
 really hard to answer on all this questions in IRC. So we decided that
 mailing list is better place for such things.

 List of main questions:

 1. What includes oslo DB code?
 2. Why is it safe to replace custom implementation by Oslo DB code?

Just to head off these two really quickly: the database code in Oslo as
initially submitted was actually based largely on that in Glance,
merging in some of the improvements made in Nova. There might have
been some divergence since then, but migrating over shouldn't be
terribly difficult. While it isn't necessary for Glance to switch
over, it would be somewhat ironic if it didn't.

The database code in Oslo primarily keeps the base models and various
things we can easily share, reuse, and improve across projects. I
suppose a big part of this is the session management, which has been
moved out of api.py and into its own session.py module. This session
management code is probably the part where you'll most have to decide
whether it is worthwhile bringing in, and whether Glance really has
such unique requirements that it needs to keep maintaining this code
on its own.
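
As a point of reference, a rough sketch of the api/session split being
described, assuming the oslo-incubator helpers of the time synced into the
project tree (the import paths, model and column names are placeholders and
may not match current code):

    from glance.db.sqlalchemy import models                 # placeholder model module
    from glance.openstack.common.db.sqlalchemy import session as db_session


    def image_update(image_id, values):
        """Public DB API call: the session object never leaks to the caller."""
        session = db_session.get_session()
        with session.begin():                   # explicit transaction, only here
            image = (session.query(models.Image)
                            .filter_by(id=image_id)
                            .one())
            image.update(values)
        # Hand back plain data, not a live SQLAlchemy object tied to the session.
        return {'id': image.id, 'updated_at': image.updated_at}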

-- 
Regards,
Eric Windisch



Re: [openstack-dev] Reminder: Oslo project meeting

2013-08-16 Thread Mark McLoughlin
On Tue, 2013-08-13 at 22:09 +0100, Mark McLoughlin wrote:
 Hi
 
 We're having an IRC meeting on Friday to sync up again on the messaging
 work going on:
 
   https://wiki.openstack.org/wiki/Meetings/Oslo
   https://etherpad.openstack.org/HavanaOsloMessaging
 
 Feel free to add other topics to the wiki
 
 See you on #openstack-meeting at 1400 UTC

Logs here:

http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-08-16-14.00.html

Cheers,
Mark.




Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Flavio Percoco

On 16/08/13 11:42 -0400, Monty Taylor wrote:



On 08/16/2013 09:31 AM, Victor Sergeyev wrote:

Hello All.

Glance cores (Mark Washenberger, Flavio Percoco, Iccha Sethi) have some
questions about Oslo DB code, and why is it so important to use it
instead of custom implementation and so on. As there were a lot of
questions it was really hard to answer on all this questions in IRC. So
we decided that mailing list is better place for such things.


There is another main point - which is at the last summit, we talked
about various legit database things that need to be done to support CD
and rolling deploys. The list is not small, and it's a task that's
important. Needing to implement it in all of the projects separately is
kind of an issue, whereas if the projects are all using the database the
same way, then the database team can engineer the same mechanisms for
doing rolling schema changes, and then operators can have a consistent
expectation when they're running a cloud.




Just to be clear, AFAIK, the concerns were around how / when to migrate
Glance and not about why we should share database code.



List of main questions:

1. What includes oslo DB code?
2. Why is it safe to replace custom implementation by Oslo DB code?
3. Why oslo DB code is better than custom implementation?
4. Why oslo DB code won’t slow up project development progress?
5. What we are going actually to do in Glance?
6. What is the current status?

Answers:

1. What includes oslo DB code?

Currently Oslo code improves different aspects around DB:
-- Work with SQLAlchemy models, engine and session
-- Lot of tools for work with SQLAlchemy
-- Work with unique keys
-- Base test case for work with database
-- Test migrations against different backends
-- Sync DB Models with actual schemas in DB (add test that they are
equivalent)


2. Why is it safe to replace custom implementation by Oslo DB code?

Oslo module, as base openstack module, takes care about code quality.
Usually, common code more readable (most of flake8 checks enabled in
Oslo) and have better test coverage.  Also it was tested in different
use-cases (in production also) in an other projects so bugs in Oslo code
were already fixed. So we can be sure, that we use high-quality code.




This is the point I was most worried about - and I still am. The
migration to Oslo's db code started a bit late in Glance and no code
has been merged yet. As for Glance, there still seems to be a lot of
work ahead on this matter.


That being said, thanks a lot for the email and for explaining all
those details.
FF

--
@flaper87
Flavio Percoco



Re: [openstack-dev] Code review study

2013-08-16 Thread Maru Newby

On Aug 16, 2013, at 2:12 AM, Robert Collins robe...@robertcollins.net wrote:

 On 16 August 2013 20:15, Maru Newby ma...@redhat.com wrote:
 
 This pattern has one slight issue, which is:
 
  • Do not assume the reviewer has access to external web services/site.
 In 6 months time when someone is on a train/plane/coach/beach/pub 
 troubleshooting a problem & browsing GIT history, there is no guarantee 
 they will have access to the online bug tracker, or online blueprint 
 documents. The great step forward with distributed SCM is that you no 
 longer need to be online to have access to all information about the code 
 repository. The commit message should be totally self-contained, to 
 maintain that benefit.
 
 I'm not sure I agree with this.  It can't be true in all cases, so it can 
 hardly be considered a rule.  A guideline, maybe - something to strive for.  
 But not all artifacts of the development process are amenable to being 
 stuffed into code or the commits associated with them.  A dvcs is great and 
 all, but unless one is working in a silo, online resources are all but 
 mandatory.
 
 In a very strict sense you're right, but consider that for anyone
 doing fast iterative development the need to go hit a website is a
 huge slowdown : at least in most of the world :).

You're suggesting that it's possible to do _fast_ iterative development on a 
distributed system of immense and largely undocumented complexity (like 
openstack)?  I'd like to be working on the code you're working on!  ;) 


m.

 
 So - while I agree that it's something to strive for, I think we
 should invert it and say 'not having everything in the repo is
 something we should permit occasional exceptions to'.
 
 -Rob
 
 -- 
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Jay Pipes

On 08/16/2013 09:52 AM, Boris Pavlovic wrote:

Hi all,

We (OpenStack contributors) done a really huge and great work around DB
code in Grizzly and Havana to unify it, put all common parts into
oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
unique keys, and to use  this code in different projects instead of
custom implementations. (well done!)

oslo-incubator db code is already used by: Nova, Neutron, Cinder,
Ironic, Ceilometer.

In this moment we finished work around Glance:
https://review.openstack.org/#/c/36207/

And working around Heat and Keystone.

So almost all projects use this code (or planing to use it)

Probably it is the right time to start work around moving oslo.db code
to separated lib.

We (Roman, Viktor and me) will be glad to help to make oslo.db lib:

E.g. Here are two drafts:
1) oslo.db lib code: https://github.com/malor/oslo.db
2) And here is this lib in action: https://review.openstack.org/#/c/42159/


Thoughts?


++

Are you going to create a separate Launchpad project for the library and 
track bugs against it separately? Or are you going to use the oslo 
project in Launchpad for that?


Best,
-jay




Re: [openstack-dev] [nova] live-snapshot/cloning of virtual machines

2013-08-16 Thread Vishvananda Ishaya
On Fri, Aug 16, 2013 at 3:05 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Wed, Aug 14, 2013 at 04:53:01PM -0700, Vishvananda Ishaya wrote:
  Hi Everyone,
 
  I have been trying for some time to get the code for the live-snapshot
 blueprint[1]
  in. Going through the review process for the rpc and interface code[2]
 was easy. I
  suspect the api-extension code[3] will also be relatively trivial to get
 in. The
  main concern is with the libvirt driver implementation[4]. I'd like to
 discuss the
  concerns and see if we can make some progress.
 
  Short Summary (tl;dr)
  =
 
  I propose we merge live-cloning as an experimental feature for Havana
 and have the
  api extension disabled by default.
 
  Overview
  
 
  First of all, let me express the value of live snapshotting. The slowest
 part of the
  vm provisioning process is generally booting of the OS. The advantage of
 live-
  snapshotting is that it allows the possibility of bringing up
 application servers
  while skipping the overhead of vm (and application) startup.

 For Linux at least I think bootup time is a problem that is being solved
 by the
 distros. It is possible to boot up many modern Linux distros in a couple
 of seconds
 even in physical hardware - VMs can be even faster since they don't have
 such stupid
 BIOS to worry about  have a restricted set of possible hardware. This is
 on a par
 with, or better than, the overheads imposed by Nova itself in the boot up
 process.

 Windows may be a different story, but I've not used it in years so don't
 know what
 its boot performance is like.

  I recognize that this capability comes with some security concerns, so I
 don't expect
  this feature to go in and be ready to for use in production right away.
 Similarly,
  containers have a lot of the same benefit, but have had their own
 security issues
  which are gradually being resolved. My hope is that getting this feature
 in would
  allow people to start experimenting with live-booting so that we could
 uncover some
  of these security issues.
 
  There are two specific concerns that have been raised regarding my
 patch. The first
  concern is related to my use of libvirt. The second concern is related
 to the security
  issues above. Let me address them separately.
 
  1. Libvirt Issues
  =
 
  The only feature I require from the hypervisor is to load
 memory/processor state for
  a vm from a file. Qemu supports this directly. The only way that libvirt
 exposes this
  functionality is via its restore command which is specifically for
 restoring the
  previous state of an existing vm. Cloning, or restoring the memory
 state of a
  cloned vm is considered unsafe (which I will address in the second
 point, below).
 
  The result of the limited api is that I must include some hacks to make
 the restore
  command actually allow me to restore the state of the new vm. I
 recognize that this
  is using an undocumented libvirt api and isn't the ideal solution, but
 it seemed
  better than avoiding libvirt and talking directly to qemu.
 
  This is obviously not ideal. It is my hope that this 0.1 version of the
 feature will
  allow us to iteratively improve the live-snapshot/clone process and get
 the security
  to a point where the libvirt maintainers would be willing to accept a
 patch to directly
  expose an api to load memory from a file.

 To characterize this as a libvirt issue is somewhat misleading. The reason
 why libvirt
 does not explicitly allow this, is that from discussions with the upstream
 QEMU/KVM
 developers, the recommendation/advice is that this is not a safe operation
 and should not
 be exposed to application developers.

 The expectation is that the functionality in QEMU is only targeted for
 taking point in
 time snapshots  allowing rollback of a VM to those snapshots, not
 creating clones of
 active VMs.


Thanks for the clarification here. I wasn't aware that this requirement
came from qemu
upstream.
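
For anyone following along, the low-level capability being argued about is
roughly "restore saved memory state, optionally with replacement domain XML".
A purely illustrative libvirt-python sketch (not the patch under review, and
ignoring all of the guest-side safety issues discussed below):

    import libvirt

    # Illustrative only. The memory/device state file is assumed to have been
    # produced earlier with virDomainSave()/managedSave() on the source guest,
    # and the XML is assumed to have been edited to use a new name, UUID and
    # MAC. Whether driving restore this way is *supported* is exactly the
    # point of contention in this thread.
    conn = libvirt.open('qemu:///system')

    with open('/tmp/clone-template.xml') as f:
        new_xml = f.read()

    conn.restoreFlags('/tmp/clone-memory.save', new_xml, 0)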



  2. Security Concerns
  
 
  There are a number of security issues with loading state from another
 vm. Here is a
  short list of things that need to be done just to make a cloned vm
 usable:
 
  a) mac address needs to be recreated
  b) entropy pool needs to be reset
  c) host name must be reset
  d) host keys must be regenerated
 
  There are others, and trying to clone a running application as well may
 expose other
  sensitive data, especially if users are snapshotting vms and making them
 public.
 
  The only issue that I address on the driver side is the mac addresses.
 This is the
  minimum that needs to be done just to be able to access the vm over the
 network. This
  is implemented by unplugging all network devices before the snapshot and
 plugging new
  network devices in on clone. This isn't the most friendly thing to guest
 applications,
  but it seems like the safest option for the first version of this
 feature.

 This is not really as safe 

Re: [openstack-dev] Migrating to testr parallel in tempest

2013-08-16 Thread Ben Nemec

On 2013-08-14 16:10, Matthew Treinish wrote:

On Wed, Aug 14, 2013 at 11:05:35AM -0500, Ben Nemec wrote:

On 2013-08-13 16:39, Clark Boylan wrote:
On Tue, Aug 13, 2013 at 1:25 PM, Matthew Treinish
mtrein...@kortar.org wrote:

Hi everyone,

So for the past month or so I've been working on getting tempest
to work stably
with testr in parallel. As part of this you may have noticed the
testr-full
jobs that get run on the zuul check queue. I was using that job
to debug some
of the more obvious race conditions and stability issues with
running tempest
in parallel. After a bunch of fixes to tempest and finding some
real bugs in
some of the projects things seem to have smoothed out.

So I pushed the testr-full run to the gate queue earlier today.
I'll be keeping
track of the success rate of this job vs the serial job and use
this as the
determining factor before we push this live to be the default
for all tempest
runs. So assuming that the success rate matches up well enough
with serial job
on the gate queue then I will push out the change that will
migrate all the
voting jobs to run in parallel hopefully either Friday afternoon
or early next
week. Also, if anyone has any input on what threshold they feel
is good enough
for this I'd welcome any input on that. For example, do we want
to ensure
a = 1:1 match for job success? Or would something like 90% as
stable as the
serial job be good enough considering the speed advantage. (The
parallel runs
take about half as much time as a full serial run, the parallel
job normally
finishes in ~25-30min) Since this affects almost every project I
don't want to
define this threshold without input from everyone.

After there is some more data for the gate queue's parallel job
I'll have some
pretty graphite graphs that I can share comparing the success
trends between
the parallel and serial jobs.

So at this point we're in the home stretch and I'm asking for
everyone's help
in getting this merged. So, if everyone who is reviewing and
pushing commits
could watch the results from these non-voting jobs and if things
fail on the
parallel job but not the serial job please investigate the
failure and open a
bug if necessary. If it turns out to be a bug in tempest please
link it against
this blueprint:

https://blueprints.launchpad.net/tempest/+spec/speed-up-tempest

so that I'll give it the attention it deserves. I'd hate to get
this close to
getting this merged and have a bit of racy code get merged at
the last second
and block us for another week or two.

I feel that we need to get this in before the H3 rush starts up
as it will help
everyone get through the extra review load faster.

Getting this in before the H3 rush would be very helpful. When we made
the switch with Nova's unittests we fixed as many of the test bugs
that we could find, merged the change to switch the test runner, then
treated all failures as very high priority bugs that received
immediate attention. Getting this in before H3 will give everyone a
little more time to debug any potential new issues exposed by Jenkins
or people running the tests locally.

I think we should be bold here and merge this as soon as we have good
numbers that indicate the trend is for these tests to pass. Graphite
can give us the pass to fail ratios over time, as long as these trends
are similar for both the old nosetest jobs and the new testr job I say
we go for it. (Disclaimer: most of the projects I work on are not
affected by the tempest jobs; however, I am often called upon to help
sort out issues in the gate).

I'm inclined to agree.  It's not as if we don't have transient
failures now, and if we're looking at a 50% speedup in
recheck/verify times then as long as the new version isn't
significantly less stable it should be a net improvement.

Of course, without hard numbers we're kind of discussing in a vacuum
here.



I also would like to get this in sooner rather than later and fix the 
bugs as
they come in. But, I'm wary of doing this because there isn't a proven 
success
history yet. No one likes gate resets, and I've only been running it on 
the

gate queue for a day now.

So here is the graphite graph that I'm using to watch parallel vs 
serial in the

gate queue:
https://tinyurl.com/pdfz93l


Okay, so what are the y-axis units on this?  Because just guessing I 
would say that it's percentage of failing runs, in which case it looks 
like we're already within the 95% as accurate range (it never dips below 
-.05).  Am I reading it right?




On that graph the blue and yellow shows the number of jobs that 
succeeded
grouped together in per hour buckets. (yellow being parallel and blue 
serial)


Then the red line is showing failures, a horizontal bar means that 
there is no
difference in the number of failures between serial and parallel. When 
it dips
negative it is showing a failure in parallel that wasn't on a
serial run
at the same time. When it goes positive it is showing a failure on serial 
that
doesn't occur on 

Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Ben Nemec

On 2013-08-16 11:58, Jay Pipes wrote:

On 08/16/2013 09:52 AM, Boris Pavlovic wrote:

Hi all,

We (OpenStack contributors) done a really huge and great work around 
DB

code in Grizzly and Havana to unify it, put all common parts into
oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
unique keys, and to use  this code in different projects instead of
custom implementations. (well done!)

oslo-incubator db code is already used by: Nova, Neutron, Cinder,
Ironic, Ceilometer.

In this moment we finished work around Glance:
https://review.openstack.org/#/c/36207/

And working around Heat and Keystone.

So almost all projects use this code (or planing to use it)

Probably it is the right time to start work around moving oslo.db code
to separated lib.

We (Roman, Viktor and me) will be glad to help to make oslo.db lib:

E.g. Here are two drafts:
1) oslo.db lib code: https://github.com/malor/oslo.db
2) And here is this lib in action: 
https://review.openstack.org/#/c/42159/



Thoughts?


++

Are you going to create a separate Launchpad project for the library
and track bugs against it separately? Or are you going to use the oslo
project in Launchpad for that?


At the moment all of the oslo.* projects are just grouped under the 
overall Oslo project in LP.  Unless there's a reason to do otherwise I 
would expect that to be true of oslo.db too.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] as-update-policy implementation details

2013-08-16 Thread Zane Bitter

On 15/08/13 19:14, Chan, Winson C wrote:

I updated the implementation section of 
https://wiki.openstack.org/wiki/Heat/Blueprints/as-update-policy on instance 
naming to support UpdatePolicy. In the case of a LaunchConfiguration change, 
all the instances need to be replaced; and to support MinInstancesInService, 
handle_update should create new instances first, before deleting old ones, in 
batches of MaxBatchSize (i.e., group capacity of 2 with MaxBatchSize=2 and 
MinInstancesInService=2).  Please review, as I may not understand the original 
motivation for the existing instance naming scheme.  Thanks.


Yeah, I don't think the naming is that important any more. Note that 
physical_resource_name() (i.e. the name used in Nova) now includes a 
randomised component on the end (stackname-resourcename-uniqueid).


So they'll probably look a bit like:

MyStack-MyASGroup--MyASGroup-1-

because the instances are now resources inside a nested stack (whose 
name is of the same form).


If we were still subclassing Instance in the autoscaling code to 
override other stuff, I'd suggest overriding physical_resource_name to 
return something like:


MyStack-MyASGroup-

(i.e. forget about numbering instances at all), but we're not 
subclassing any more, so I'm not sure if it's worth it.
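
For concreteness, a sketch of what that override might have looked like if we
were still subclassing (import path and unique-id generation are
approximations, not the actual autoscaling code):

    import uuid

    from heat.engine.resources import instance


    class GroupedInstance(instance.Instance):
        """Hypothetical subclass - the autoscaling code no longer does this."""

        def physical_resource_name(self):
            # Forget about numbering instances: just
            # <stack name>-<group name>-<short unique id>.
            # (Heat actually uses its own short-id helper, not uuid.)
            group_name = self.name.rsplit('-', 1)[0]  # 'MyASGroup-1' -> 'MyASGroup'
            return '%s-%s-%s' % (self.stack.name, group_name,
                                 uuid.uuid4().hex[:12])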


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gate breakage process - Let's fix! (related but not specific to neutron)

2013-08-16 Thread Maru Newby
Neutron has been in and out of the gate for the better part of the past month, 
and it didn't slow the pace of development one bit.  Most Neutron developers 
kept on working as if nothing was wrong, blithely merging changes with no 
guarantees that they weren't introducing new breakage.  New bugs were indeed 
merged, greatly increasing the time and effort required to get Neutron back in 
the gate.  I don't think this is sustainable, and I'd like to make a suggestion 
for how to minimize the impact of gate breakage.

For the record, I don't think consistent gate breakage in one project should be 
allowed to hold up the development of other projects.  The current approach of 
skipping tests or otherwise making a given job non-voting for innocent projects 
should continue.  It is arguably worth taking the risk of relaxing gating for 
those innocent projects rather than halting development unnecessarily.

However, I don't think it is a good idea to relax a broken gate for the 
offending project.  So if a broken job/test is clearly Neutron related, it 
should continue to gate Neutron, effectively preventing merges until the 
problem is fixed.  This would both raise the visibility of breakage beyond the 
person responsible for fixing it, and prevent additional breakage from slipping 
past were the gating to be relaxed.

Thoughts?


m.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate breakage process - Let's fix! (related but not specific to neutron)

2013-08-16 Thread Alex Gaynor
I'd strongly agree with that: a project must always be gated by any tests
for it, even if they don't gate for other projects. I'd also argue that any
time there's a non-gating test (for any project) it needs a formal
explanation of why it's not gating yet, what the plan to get it to gating
is, and on what timeframe that's expected to happen.

Alex


On Fri, Aug 16, 2013 at 11:25 AM, Maru Newby ma...@redhat.com wrote:

 Neutron has been in and out of the gate for the better part of the past
 month, and it didn't slow the pace of development one bit.  Most Neutron
 developers kept on working as if nothing was wrong, blithely merging
 changes with no guarantees that they weren't introducing new breakage.  New
 bugs were indeed merged, greatly increasing the time and effort required to
 get Neutron back in the gate.  I don't think this is sustainable, and I'd
 like to make a suggestion for how to minimize the impact of gate breakage.

 For the record, I don't think consistent gate breakage in one project
 should be allowed to hold up the development of other projects.  The
 current approach of skipping tests or otherwise making a given job
 non-voting for innocent projects should continue.  It is arguably worth
 taking the risk of relaxing gating for those innocent projects rather than
 halting development unnecessarily.

 However, I don't think it is a good idea to relax a broken gate for the
 offending project.  So if a broken job/test is clearly Neutron related, it
 should continue to gate Neutron, effectively preventing merges until the
 problem is fixed.  This would both raise the visibility of breakage beyond
 the person responsible for fixing it, and prevent additional breakage from
 slipping past were the gating to be relaxed.

 Thoughts?


 m.





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
I disapprove of what you say, but I will defend to the death your right to
say it. -- Evelyn Beatrice Hall (summarizing Voltaire)
The people's good is the highest law. -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Scheduler sub-group meeting on 8/20

2013-08-16 Thread Dugger, Donald D
Turns out I'll be traveling that day, so I won't be able to run the meeting.  If 
anyone wants to volunteer to lead the meeting, speak now; otherwise 
we can just cancel next week.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Mark Washenberger
I would prefer to pick and choose which parts of oslo common db code to
reuse in glance. Most parts there look great and very useful. However, some
parts seem like they would conflict with several goals we have.

1) To improve code sanity, we need to break away from the idea of having
one giant db api interface
2) We need to improve our position with respect to new, non SQL drivers
- mostly, we need to focus first on removing business logic (especially
authz) from database driver code
- we also need to break away from the strict functional interface,
because it limits our ability to express query filters and tends to lump
all filter handling for a given function into a single code block (which
ends up being defect-rich and confusing as hell to reimplement)
3) It is unfortunate, but I must admit that Glance's code in general is
pretty heavily coupled to the database code and in particular the schema.
Basically the only tool we have to manage that problem until we can fix it
is to try to be as careful as possible about how we change the db code and
schema. By importing another project, we lose some of that control. Also,
even with the copy-paste model for oslo incubator, code in oslo does have
some of its own reasons to change, so we could potentially end up in a
conflict where glance db migrations (which are operationally costly) have
to happen for reasons that don't really matter to glance.

So rather than framing this as glance needs to use oslo common db code, I
would appreciate framing it as glance database code should have features
X, Y, and Z, some of which it can get by using oslo code. Indeed, I
believe in IRC we discussed the idea of writing up a wiki listing these
feature improvements, which would allow a finer granularity for evaluation.
I really prefer that format because it feels more like planning and less
like debate :-)

 I have a few responses inline below.

On Fri, Aug 16, 2013 at 6:31 AM, Victor Sergeyev vserge...@mirantis.com wrote:

 Hello All.

 Glance cores (Mark Washenberger, Flavio Percoco, Iccha Sethi) have some
 questions about the Oslo DB code and why it is so important to use it instead
 of a custom implementation. As there were a lot of questions, it was
 really hard to answer all of them in IRC, so we decided that the
 mailing list is a better place for this discussion.

 List of main questions:

 1. What does the oslo DB code include?
 2. Why is it safe to replace a custom implementation with the Oslo DB code?
 3. Why is the oslo DB code better than a custom implementation?
 4. Why won't the oslo DB code slow down project development?
 5. What are we actually going to do in Glance?
 6. What is the current status?

 Answers:

 1. What does the oslo DB code include?

 Currently the Oslo code improves different aspects of working with the DB:
 -- Working with SQLAlchemy models, engine and session
 -- Lots of tools for working with SQLAlchemy
 -- Handling of unique keys
 -- A base test case for working with the database
 -- Testing migrations against different backends
 -- Syncing DB models with the actual schemas in the DB (a test that they are
 equivalent)


 2. Why is it safe to replace a custom implementation with the Oslo DB code?

 Oslo, as a base OpenStack module, takes care of code quality.
 Usually common code is more readable (most flake8 checks are enabled in Oslo)
 and has better test coverage.  It has also been exercised in different use
 cases (including in production) in other projects, so bugs in the Oslo code
 have already been fixed. So we can be sure that we are using high-quality code.


Alas, while testing and static style analysis are important, they are not
the only relevant aspects of code quality. Architectural choices are also
relevant. The best reusable code places few requirements on the code that
reuses it architecturally--in some cases it may make sense to refactor oslo
db code so that glance can reuse the correct parts.




 3. Why is the oslo DB code better than a custom implementation?

 There are some arguments pro Oslo database code

 -- common code collects useful features from different projects
 Various utils for working with the database, a common test class, a module for
 database migration, and other features are already in the Oslo db code. A patch
 to automatically retry db.api queries if the db connection is lost is on review
 at the moment. If we use the Oslo db code we do not need to care about how to
 port these (and future) features to Glance - they will come to all projects
 automatically when they land in Oslo.

 -- a unified way of working with the database across projects
 As already said, it can help developers work with the database in the
 same way in different projects. It's useful if a developer works with the db in
 a few projects - they use the same base pieces and get no surprises from them.


I'm not very motivated by this argument. I rarely find novelty that
challenging to understand when working with a project, personally. Usually
I'm much more stumped when code is heavily coupled to other modules or too
many responsibilities are lumped together in one module. In general, 

Re: [openstack-dev] Gate breakage process - Let's fix! (related but not specific to neutron)

2013-08-16 Thread Monty Taylor


On 08/16/2013 02:25 PM, Maru Newby wrote:
 Neutron has been in and out of the gate for the better part of the
 past month, and it didn't slow the pace of development one bit.  Most
 Neutron developers kept on working as if nothing was wrong, blithely
 merging changes with no guarantees that they weren't introducing new
 breakage.  New bugs were indeed merged, greatly increasing the time
 and effort required to get Neutron back in the gate.  I don't think
 this is sustainable, and I'd like to make a suggestion for how to
 minimize the impact of gate breakage.
 
 For the record, I don't think consistent gate breakage in one project
 should be allowed to hold up the development of other projects.  The
 current approach of skipping tests or otherwise making a given job
 non-voting for innocent projects should continue.  It is arguably
 worth taking the risk of relaxing gating for those innocent projects
 rather than halting development unnecessarily.
 
 However, I don't think it is a good idea to relax a broken gate for
 the offending project.  So if a broken job/test is clearly Neutron
 related, it should continue to gate Neutron, effectively preventing
 merges until the problem is fixed.  This would both raise the
 visibility of breakage beyond the person responsible for fixing it,
 and prevent additional breakage from slipping past were the gating to
 be relaxed.

I do not know the exact implementation that would work here, but I do
think it's worth discussing further. Essentially, a neutron bug killing
the gate for a nova dev isn't necessarily going to help - because the
nova dev doesn't necessarily have the background to fix it.

I want to be very careful that we don't wind up with an asymmetrical
gate though...

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate breakage process - Let's fix! (related but not specific to neutron)

2013-08-16 Thread Clint Byrum
Excerpts from Maru Newby's message of 2013-08-16 11:25:07 -0700:
 Neutron has been in and out of the gate for the better part of the past 
 month, and it didn't slow the pace of development one bit.  Most Neutron 
 developers kept on working as if nothing was wrong, blithely merging changes 
 with no guarantees that they weren't introducing new breakage.  New bugs were 
 indeed merged, greatly increasing the time and effort required to get Neutron 
 back in the gate.  I don't think this is sustainable, and I'd like to make a 
 suggestion for how to minimize the impact of gate breakage.
 
 For the record, I don't think consistent gate breakage in one project should 
 be allowed to hold up the development of other projects.  The current 
 approach of skipping tests or otherwise making a given job non-voting for 
 innocent projects should continue.  It is arguably worth taking the risk of 
 relaxing gating for those innocent projects rather than halting development 
 unnecessarily.
 
 However, I don't think it is a good idea to relax a broken gate for the 
 offending project.  So if a broken job/test is clearly Neutron related, it 
 should continue to gate Neutron, effectively preventing merges until the 
 problem is fixed.  This would both raise the visibility of breakage beyond 
 the person responsible for fixing it, and prevent additional breakage from 
 slipping past were the gating to be relaxed.
 
 Thoughts?
 

I think this is a cultural problem related to the code review discussion
from earlier in the week.

We are not looking at finding a defect and reverting as a good thing where
high fives should be shared all around. Instead, "you broke the gate"
seems to mean you are a bad developer. I have been a bad actor here too,
getting frustrated with the gate-breaker and saying the wrong thing.

The problem really is "you _broke_ the gate". It should be "the gate has
found a defect, hooray!". It doesn't matter what causes the gate to stop,
it is _always_ a defect. Now, it is possible the defect is in tempest,
or jenkins, or HP/Rackspace's clouds where the tests run. But it is
always a defect that what worked before, does not work now.

Defects are to be expected. None of us can write perfect code. We should
be happy to revert commits and go forward with an enabled gate while
the team responsible for the commit gathers information and works to
correct the issue.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live-snapshot/cloning of virtual machines

2013-08-16 Thread Russell Bryant
On 08/16/2013 01:17 PM, Vishvananda Ishaya wrote:
 
 
 
 On Fri, Aug 16, 2013 at 3:05 AM, Daniel P. Berrange berra...@redhat.com
 mailto:berra...@redhat.com wrote:

 I don't think it is a good idea to add a feature which is considered to
 be unsupportable by the developers of the virt platform.
 
 
 You make excellent points. I'm not totally convinced that this feature
 is the right
 long-term direction, but I still think it is interesting. To be fair,
 I'm not convinced that
 virtual machines as a whole are the right long-term direction. I'm still
 looking for a way
 for people experiment with this and see what use-cases that come out of it.
 
 Over the past three years OpenStack has been a place where we can
 iterate quickly and
 try new things. Multihost nova-network was an experiment of mine that
 turned into the
 most common deployment strategy for a long time.
 
 Maybe we've grown up to the point where we have to be more careful and
 not introduce
 these kind of features and the maintenance cost of introducing
 experimental features is
 too great. If that is the community consensus, then I'm happy keep the
 live snapshot stuff
 in a branch on github for people to experiment with.

My feeling after following this discussion is that it's probably best to
keep this baking in another branch (github or whatever).  The biggest reason
is the last comment quoted from Daniel Berrange above.  I
feel like that is a pretty big deal.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-16 Thread Jay Pipes

On 08/16/2013 02:41 PM, Mark Washenberger wrote:

I think the issue here for glance is whether or not oslo common code
makes it easier or harder to make other planned improvements. In
particular, using openstack.common.db.api will make it harder to
refactor away from a giant procedural interface for the database driver.


And towards what? A giant object-oriented interface for the database driver?

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Migrating to testr parallel in tempest

2013-08-16 Thread Matthew Treinish
On Fri, Aug 16, 2013 at 01:03:57PM -0500, Ben Nemec wrote:
 Getting this in before the H3 rush would be very helpful. When we made
 the switch with Nova's unittests we fixed as many of the test bugs
 that we could find, merged the change to switch the test runner, then
 treated all failures as very high priority bugs that received
 immediate attention. Getting this in before H3 will give everyone a
 little more time to debug any potential new issues exposed by Jenkins
 or people running the tests locally.
 
 I think we should be bold here and merge this as soon as we have good
 numbers that indicate the trend is for these tests to pass. Graphite
 can give us the pass to fail ratios over time, as long as these trends
 are similar for both the old nosetest jobs and the new testr job I say
 we go for it. (Disclaimer: most of the projects I work on are not
 affected by the tempest jobs; however, I am often called upon to help
 sort out issues in the gate).
 
 I'm inclined to agree.  It's not as if we don't have transient
 failures now, and if we're looking at a 50% speedup in
 recheck/verify times then as long as the new version isn't
 significantly less stable it should be a net improvement.
 
 Of course, without hard numbers we're kind of discussing in a vacuum
 here.
 
 
 I also would like to get this in sooner rather than later and fix
 the bugs as
 they come in. But, I'm wary of doing this because there isn't a
 proven success
 history yet. No one likes gate resets, and I've only been running
 it on the
 gate queue for a day now.
 
 So here is the graphite graph that I'm using to watch parallel vs
 serial in the
 gate queue:
 https://tinyurl.com/pdfz93l
 
 Okay, so what are the y-axis units on this?  Because just guessing I
 would say that it's percentage of failing runs, in which case it
 looks like we're already within the 95% as accurate range (it never
 dips below -.05).  Am I reading it right?

Yeah, I'm not sure what scale it is using either. I'm not sure it's percent,
or if it is, then it's not grouping things over a long period of time to
calculate the percentage. I just know, from manually correlating with what
I saw by watching zuul, that -0.02 was one failure and -0.03 should be 2
failures.

This graph might be easier to read:

http://tinyurl.com/n27lytl 

For this one I told graphite to do a total of events grouped at 1 hour
intervals. This time the y-axis is the number of runs. This plots the
differences between serial and parallel results. So as before, above 0 on the
y-axis means that many more jobs passed in that hour. I split out a line for 
success, failure, and aborted.

The aborted number is actually pretty important. I noticed that if there is a
gate reset (or a bunch of them) when the queue is pretty deep the testr runs are
often finished before the job at the head of the queue fails. So they get marked
as failures but the full jobs never finish and get marked as aborted. The good 
example of this is between late Aug 14 and early Aug 15 on the plot. That is 
when
when there was an intermittent test failure with horizon. Which was fixed by a
revert the next morning.

What this exercise has really shown me, though, is that graphing the results isn't
exactly straightforward or helpful unless everything we're measuring is gating.

So as things sit now we've found about ~5 more races and/or flaky tests while
running tempest in parallel. 2 have fixes in progress:
https://review.openstack.org/#/c/42169/
https://review.openstack.org/#/c/42351/

Then I have open bugs for the remaining 3 here:
https://bugs.launchpad.net/tempest/+bug/1213212
https://bugs.launchpad.net/tempest/+bug/1213209
https://bugs.launchpad.net/tempest/+bug/1213215

I haven't seen any other repeating failures besides these 3, and no one else has
opened a bug regarding a parallel failure. (Although I doubt anyone is paying
attention to the fails, I know I wouldn't. :) ) So there may be more, happening
less frequently, that are being hidden by these 3.

At this point I'm not sure it is ready yet, given the frequency with which I've
seen the testr run fail. But at the same time, the longer we wait the more bugs
can be introduced. Maybe there is some middle ground, like marking the parallel
job as voting on the check queue.

-Matt Treinish



 
 
 On that graph the blue and yellow shows the number of jobs that
 succeeded
 grouped together in per hour buckets. (yellow being parallel and
 blue serial)
 
 Then the red line is showing failures, a horizontal bar means that
 there is no
 difference in the number of failures between serial and parallel.
 When it dips
 negative it is showing a failure in parallel that wasn't on
 a serial run
 at the same time. When it goes positive it is showing a failure on
 serial that
 doesn't occur on parallel at the same time. But, because the
 serial runs take
 longer the failures happen at an offset. So if the plot shows
 parallel fails
 followed closely by a serial failure than that is probably on 

Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-16 Thread Doug Hellmann
On Tue, Aug 13, 2013 at 2:36 PM, Thomas Maddox
thomas.mad...@rackspace.com wrote:

  Hello!

  I was having some chats yesterday with both Julien and Doug regarding
 some thoughts that occurred to me while digging through CM and Doug
 suggested that I bring them up on the dev list for everyone's benefit and
 discussion.

  My bringing this up is intended to help myself and others get a better
 understanding of why it's this way, whether we're on the correct course,
 and, if not, how we get to it. I'm not expecting anything to change quickly
 or necessarily at all from this. Ultimately the question I'm asking is: are
 we addressing the correct use cases with the correct API calls; being able
 to expect certain behavior without having to know the internals? For
 context, this is mostly using the SQLAlchemy implementation for these
 questions, but the API questions apply overall.

  My concerns:

- Driving get_resources() with the Meter table instead of the Resource
table. This is mainly because of the additional filtering available in the
Meter table, which allows us to satisfy a use case like *getting a
list of resources a user had during a period of time to get meters to
compute billing with*. The semantics are tripping me up a bit; the
question this boiled down to for me was: *why use a resource query to
get meters to show usage by a tenant*? I was curious about why we
needed the timestamp filtering when looking at Resources, and why we would
use Resource as a way to get at metering data, rather than a Meter request
itself? This was answered by resources being the current vector to get at
metering data for a tenant in terms of resources, if I understood 
 correctly.



- With this implementation, we have to do aggregation to get at the
discrete Resources (via the Meter table) rather than just filtering the
already distinct ones in the Resource table.

 Querying first for resources and then getting the statistics is an
artifact of the design of the V1 API, where both the resource id and meter
name were part of the statistics API URL. After the groupby feature lands
in the V2 statistics API, we won't have to make the separate query any more
to satisfy the billing requirement.

However, that's just one example use case. Sometimes people do want to know
something about the resources that have existed besides the aggregated
samples for billing. The challenge with querying for resources is that the
metadata for a given resource has the potential to change over time. The
resource table holds the most current metadata, but the meter table has all
of the samples and all of the versions of the metadata, so we have to look
there to filter on metadata that might change (especially if we're trying
to answer questions about what resources had specific characteristics
during a time range).


- This brought up some confusion with the API for me with the major
use cases I can think of:
   - As a new consumer of this API, I would think that *
   /resource/resource_id* would get me details for a resource, e.g.
   current state, when it was created, last updated/used timestamp, who 
 owns
   it; not the attributes from the first sample to come through about it

 It should be returning the attributes for the *last* sample to be seen, so
that the metadata and other settings are the most recent values.


   - I would think that *
   /meter/?q.field=resource_id&q.value=resource_id* ought to get me
   a list of meter(s) details for a specific resource, e.g. name, unit, and
   origin; but not a huge mixture of samples.

 The meters associated with a resource are provided as part of the response
to the resources query, so no separate call is needed.


  - Additionally */meter/?q.field=user_id&q.value=user_id* would
  get me a list of all meters that are currently related to the user

 Yes, we're in the process of replacing the term "meter" with "sample". Bad
choice of name that will require a deprecation period.


   - The ultimate use case, for billing queries, I would think that 
 */meter/meter_id/statistics?time
   filtersuser(resource_id)* would get me the measurements for
   that meter to bill for.



 If I understand correctly, one main intent driving this is wanting to
 avoid end users having to write a bunch of API requests themselves from the
 billing side and instead just drill down from payloads for each resource to
 get the billing information for their customers. It also looks like there's
 a BP to add grouping functionality to statistics queries to allow us this
 functionality easily (this one, I think:
 https://blueprints.launchpad.net/ceilometer/+spec/api-group-by).

  I'm new to this project, so I'm trying to get a handle on how we got
 here and maybe offer some outside perspective, if it's needed or wanted. =]

  Thank you all in advance for your time with 

Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Clint Byrum
Excerpts from Ben Nemec's message of 2013-08-16 11:10:09 -0700:
 On 2013-08-16 11:58, Jay Pipes wrote:
  On 08/16/2013 09:52 AM, Boris Pavlovic wrote:
  Hi all,
  
  We (OpenStack contributors) done a really huge and great work around 
  DB
  code in Grizzly and Havana to unify it, put all common parts into
  oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
  unique keys, and to use  this code in different projects instead of
  custom implementations. (well done!)
  
  oslo-incubator db code is already used by: Nova, Neutron, Cinder,
  Ironic, Ceilometer.
  
  In this moment we finished work around Glance:
  https://review.openstack.org/#/c/36207/
  
  And working around Heat and Keystone.
  
  So almost all projects use this code (or planing to use it)
  
  Probably it is the right time to start work around moving oslo.db code
  to separated lib.
  
  We (Roman, Viktor and me) will be glad to help to make oslo.db lib:
  
  E.g. Here are two drafts:
  1) oslo.db lib code: https://github.com/malor/oslo.db
  2) And here is this lib in action: 
  https://review.openstack.org/#/c/42159/
  
  
  Thoughts?
  
  ++
  
  Are you going to create a separate Launchpad project for the library
  and track bugs against it separately? Or are you going to use the oslo
  project in Launchpad for that?
 
 At the moment all of the oslo.* projects are just grouped under the 
 overall Oslo project in LP.  Unless there's a reason to do otherwise I 
 would expect that to be true of oslo.db too.

Has that decision been re-evaluated recently?

I feel like bug trackers are more useful when they are more focused. But
perhaps there are other reasons behind using a shared bug tracker.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-16 Thread Jay Pipes

On 08/16/2013 04:00 PM, Clint Byrum wrote:

Excerpts from Ben Nemec's message of 2013-08-16 11:10:09 -0700:

On 2013-08-16 11:58, Jay Pipes wrote:

On 08/16/2013 09:52 AM, Boris Pavlovic wrote:

Hi all,

We (OpenStack contributors) done a really huge and great work around
DB
code in Grizzly and Havana to unify it, put all common parts into
oslo-incubator, fix bugs, improve handling of sqla exceptions, provide
unique keys, and to use  this code in different projects instead of
custom implementations. (well done!)

oslo-incubator db code is already used by: Nova, Neutron, Cinder,
Ironic, Ceilometer.

In this moment we finished work around Glance:
https://review.openstack.org/#/c/36207/

And working around Heat and Keystone.

So almost all projects use this code (or planing to use it)

Probably it is the right time to start work around moving oslo.db code
to separated lib.

We (Roman, Viktor and me) will be glad to help to make oslo.db lib:

E.g. Here are two drafts:
1) oslo.db lib code: https://github.com/malor/oslo.db
2) And here is this lib in action:
https://review.openstack.org/#/c/42159/


Thoughts?


++

Are you going to create a separate Launchpad project for the library
and track bugs against it separately? Or are you going to use the oslo
project in Launchpad for that?


At the moment all of the oslo.* projects are just grouped under the
overall Oslo project in LP.  Unless there's a reason to do otherwise I
would expect that to be true of oslo.db too.


Has that decision been re-evaluated recently?

I feel like bug trackers are more useful when they are more focused. But
perhaps there are other reasons behind using a shared bug tracker.


+1

The alternative (relying on users to tag bugs consistently) is error-prone.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Pipeline Retry Semantics ...

2013-08-16 Thread Doug Hellmann
I added a couple of comments in the wiki page. We should have at least one
summit session about this, I think, unless we work it out before then.


On Thu, Aug 15, 2013 at 12:20 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:

 Recently I've been focused on ensuring we don't drop notifications in
 CM. But problems still exist downstream, after we've captured the raw
 event.

 From the efforts going on with the Ceilometer sample pipeline, the new
 dispatcher model and the upcoming trigger pipeline, the discussion
 around retry semantics has been coming up a lot.

 In other words What happens when step 4 of a 10 step pipeline fails?

 As we get more into processing billing events, we really need to have a
 solid understanding of how we prevent double-counting or dropping events.

 I've started writing down some thoughts here:
 https://wiki.openstack.org/wiki/DuplicateWorkCeilometer

 It's a little scattered and I'd like some help tuning it.

 Hopefully it'll help grease the skids for the Icehouse Summit talks.

 Thanks!
 -S

 cc/ Josh, I think the State Management team can really help out here.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] endpoint registration

2013-08-16 Thread Doug Hellmann
If you're saying that you want to register URLs without version info
embedded in them, and let the client work that part out by talking to the
service in question (or getting a version number from the caller), then
yes, please.
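
To make it concrete, something like the following is all a deployment script
would have to register - no version or tenant_id template baked into the
catalog (URLs, ports and the admin token here are illustrative, and whether the
suffix then comes from version discovery or from the client library is the open
question):

    from keystoneclient.v2_0 import client

    keystone = client.Client(token='ADMIN_TOKEN',
                             endpoint='http://keystone.example.com:35357/v2.0')

    service = keystone.services.create(name='heat',
                                       service_type='orchestration',
                                       description='Heat API')

    # Register only the bare service root; no '/v1/%(tenant_id)s' suffix.
    keystone.endpoints.create(region='RegionOne',
                              service_id=service.id,
                              publicurl='http://heat.example.com:8004',
                              adminurl='http://heat.example.com:8004',
                              internalurl='http://heat.example.com:8004')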


On Fri, Aug 16, 2013 at 1:47 AM, Robert Collins
robe...@robertcollins.net wrote:

 We're just reworking our endpoint registration on cloud bring up to be
 driven by APIs, per the principled separation of concerns I outlined
 previously.

 One thing I note is that the keystone initialisation is basically full
 of magic constants like
 http://$CONTROLLER_PUBLIC_ADDRESS:8004/v1/%(tenant_id)s

 Now, I realise that when you have a frontend haproxy etc, the endpoint
 changes - but the suffix - v1/%(tenant_id)s in this case - is, AFAICT,
 internal neutron/cinder/ etc knowledge, as is the service type
 ('network' etc).

 Rather than copying those into everyones deploy scripts, I'm wondering
 if we could put that into neutronclient etc - either as a query
 function (neutron --endpoint-suffix - 'v1/%(tenant_id)s') or perhaps
 something that will register with keystone when told to?

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] endpoint registration

2013-08-16 Thread Robert Collins
On 17 August 2013 08:27, Doug Hellmann doug.hellm...@dreamhost.com wrote:
 If you're saying that you want to register URLs without version info
 embedded in them, and let the client work that part out by talking to the
 service in question (or getting a version number from the caller), then
 yes, please.


That too. But primarily I don't want to be chasing devstack updates
forever because of copied code around this.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-16 Thread Jay Pipes

On 08/16/2013 04:37 PM, Doug Hellmann wrote:

On Fri, Aug 16, 2013 at 4:15 PM, Jay Pipes jaypi...@gmail.com wrote:

On 08/16/2013 03:52 PM, Doug Hellmann wrote:

However, that's just one example use case. Sometimes people do
want to
know something about the resources that have existed besides the
aggregated samples for billing. The challenge with querying for
resources is that the metadata for a given resource has the
potential to
change over time. The resource table holds the most current
metadata,
but the meter table has all of the samples and all of the
versions of
the metadata, so we have to look there to filter on metadata
that might
change (especially if we're trying to answer questions about what
resources had specific characteristics during a time range).


This is wasteful, IMO. We could change the strategy to say that a
resource is immutable once it is received by Ceilometer. And if the
metadata about that resource changes somehow (an example of this
would be useful) in the future, then a new resource record with a
unique ID would be generated and its ID shoved into the meter table
instead of storing a redundant denormalized data in the
meter.resource_metadata field, which AFAICT, is a VARCHAR(1000) field.


To be clear, when I said resource I meant something like an instance,
not owned by ceilometer (rather than a row in the resource table).

As Julien pointed out, the existing SQL driver is based on the schema of
the Mongo driver where rather than doing a mapreduce operation every
time we want to find the most current resource data, it is stored
separately. It's quite likely that someone could improve the SQL driver
to not require the resource table at all, as you suggest.


Actually, that's the opposite of what I'm suggesting :) I'm suggesting 
getting rid of the resource_metadata column in the meter table and using 
the resource table in joins...
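
A minimal SQLAlchemy sketch of that direction, just to illustrate the shape of
it - the table and column names here are made up, not Ceilometer's actual
schema:

    import sqlalchemy as sa
    from sqlalchemy import orm
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class Resource(Base):
        # One row per distinct resource/metadata version; immutable once written.
        __tablename__ = 'resource'
        id = sa.Column(sa.Integer, primary_key=True)
        resource_id = sa.Column(sa.String(255), index=True)  # e.g. instance UUID
        resource_metadata = sa.Column(sa.Text)


    class Meter(Base):
        # The fact table carries only a small foreign key, not the metadata blob.
        __tablename__ = 'meter'
        id = sa.Column(sa.Integer, primary_key=True)
        counter_name = sa.Column(sa.String(255))
        counter_volume = sa.Column(sa.Float)
        timestamp = sa.Column(sa.DateTime)
        resource_ref = sa.Column(sa.Integer, sa.ForeignKey('resource.id'))
        resource = orm.relationship(Resource)


    def samples_for_resource(session, resource_id, start, end):
        # Metadata is reached through the join instead of being duplicated
        # on every sample row.
        return (session.query(Meter)
                .join(Resource, Meter.resource_ref == Resource.id)
                .filter(Resource.resource_id == resource_id,
                        Meter.timestamp >= start,
                        Meter.timestamp < end)
                .all())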


-jay


Anything that can reduce storage space in the base fact table
(meter) per row will lead to increased performance...

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-16 Thread Doug Hellmann
On Fri, Aug 16, 2013 at 4:43 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/16/2013 04:37 PM, Doug Hellmann wrote:

 On Fri, Aug 16, 2013 at 4:15 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/16/2013 03:52 PM, Doug Hellmann wrote:

 However, that's just one example use case. Sometimes people do
 want to
 know something about the resources that have existed besides the
 aggregated samples for billing. The challenge with querying for
 resources is that the metadata for a given resource has the
 potential to
 change over time. The resource table holds the most current
 metadata,
 but the meter table has all of the samples and all of the
 versions of
 the metadata, so we have to look there to filter on metadata
 that might
 change (especially if we're trying to answer questions about what
 resources had specific characteristics during a time range).


 This is wasteful, IMO. We could change the strategy to say that a
 resource is immutable once it is received by Ceilometer. And if the
 metadata about that resource changes somehow (an example of this
 would be useful) in the future, then a new resource record with a
 unique ID would be generated and its ID shoved into the meter table
 instead of storing a redundant denormalized data in the
 meter.resource_metadata field, which AFAICT, is a VARCHAR(1000) field.


 To be clear, when I said resource I meant something like an instance,
 not owned by ceilometer (rather than a row in the resource table).

 As Julien pointed out, the existing SQL driver is based on the schema of
 the Mongo driver where rather than doing a mapreduce operation every
 time we want to find the most current resource data, it is stored
 separately. It's quite likely that someone could improve the SQL driver
 to not require the resource table at all, as you suggest.


 Actually, that's the opposite of what I'm suggesting :) I'm suggesting
 getting rid of the resource_metadata column in the meter table and using
 the resource table in joins...


Ah, I see. That would be another good approach.

Doug



 -jay

  Anything that can reduce storage space in the base fact table
 (meter) per row will lead to increased performance...

 Best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] General Question about CentOS

2013-08-16 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Is OpenStack supported on CentOS running Python 2.6?

Thanks,

Mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] General Question about CentOS

2013-08-16 Thread Clark Boylan
On Fri, Aug 16, 2013 at 2:51 PM, Miller, Mark M (EB SW Cloud - RD -
Corvallis) mark.m.mil...@hp.com wrote:
 Is OpenStack supported on CentOS running Python 2.6?

I can't speak to what features are supported and whether or not it is
practical for real deployments, but we do all upstream Python 2.6 unit
testing on CentOS6.4 slaves. At the very least I would expect
unittests to work properly on CentOS.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] General Question about CentOS

2013-08-16 Thread Bob Ball
I'm running the unit tests and can confirm they do work.

I'm currently developing support for xenserver-core on CentOS 6.4 and many of 
the tempest tests pass, and I'm working through the failures that exist.

I haven't encountered anything yet which is caused by CentOS so I imagine it 
will all work.

Bob

From: Clark Boylan [clark.boy...@gmail.com]
Sent: 16 August 2013 23:08
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] General Question about CentOS

On Fri, Aug 16, 2013 at 2:51 PM, Miller, Mark M (EB SW Cloud - RD -
Corvallis) mark.m.mil...@hp.com wrote:
 Is OpenStack supported on CentOS running Python 2.6?

I can't speak to what features are supported and whether or not it is
practical for real deployments, but we do all upstream Python 2.6 unit
testing on CentOS6.4 slaves. At the very least I would expect
unittests to work properly on CentOS.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-16 Thread Clark Boylan
On Fri, Aug 16, 2013 at 8:04 AM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:
 I'd like to propose Alex Gaynor for core status on the requirements project.

 Alex is a core Python and PyPy developer, has strong ties throughout the
 wider Python community, and has been watching and reviewing requirements
 changes for a little while now. I think it would be extremely helpful to
 have him on the team.

 Doug

+1 from me.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] - python-neutronclient build failing for latest code reviews

2013-08-16 Thread Ronak Shah
Hi,
I can see from the following link that many of the latest code reviews are
reporting a build failure at the same point:

https://review.openstack.org/#/q/status:open+project:openstack/python-neutronclient,n,z

The backtrace looks like:


ft46.1: tests.unit.test_shell.ShellTest.test_auth_StringException:
Traceback (most recent call last):
  File 
/home/jenkins/workspace/gate-python-neutronclient-python26/tests/unit/test_shell.py,
line 71, in setUp
_shell = openstack_shell.NeutronShell('2.0')
  File 
/home/jenkins/workspace/gate-python-neutronclient-python26/neutronclient/shell.py,
line 244, in __init__
command_manager=commandmanager.CommandManager('neutron.cli'), )
  File 
/home/jenkins/workspace/gate-python-neutronclient-python26/.tox/py26/lib/python2.6/site-packages/cliff/app.py,
line 72, in __init__
self._set_streams(stdin, stdout, stderr)
  File 
/home/jenkins/workspace/gate-python-neutronclient-python26/.tox/py26/lib/python2.6/site-packages/cliff/app.py,
line 89, in _set_streams
self.stdin = stdin or codecs.getreader(encoding)(sys.stdin)
  File 
/home/jenkins/workspace/gate-python-neutronclient-python26/.tox/py26/lib64/python2.6/codecs.py,
line 984, in getreader
return lookup(encoding).streamreader
TypeError: lookup() argument 1 must be string, not None
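
For context, the failure appears to come from building a codecs stream reader
out of a stream's declared encoding, which is None when stdin is not a
terminal (as on the gate slaves). A minimal sketch of that failure mode,
assuming only the standard library (the wrap_stdin helpers below are made up
for illustration, not taken from cliff or python-neutronclient):

# Minimal sketch of the failure mode above, assuming only the standard library.
import codecs
import io

def wrap_stdin(stream):
    # Mirrors the pattern in the traceback: build a reader from the stream's
    # declared encoding without guarding against it being None.
    encoding = getattr(stream, 'encoding', None)
    return codecs.getreader(encoding)(stream)

def wrap_stdin_safely(stream, default='utf-8'):
    # Defensive variant: fall back to a sane default when no encoding is set.
    encoding = getattr(stream, 'encoding', None) or default
    return codecs.getreader(encoding)(stream)

if __name__ == '__main__':
    piped = io.BytesIO(b'fake piped stdin')  # like a pipe: no .encoding attribute
    try:
        wrap_stdin(piped)
    except TypeError as exc:
        # Same failure as the gate: lookup() argument 1 must be string, not None
        print('unguarded wrapper failed: %s' % exc)
    print('guarded wrapper produced: %r' % wrap_stdin_safely(piped))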


Is anyone already looking into this?

Thanks,
Ronak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] General Question about CentOS

2013-08-16 Thread Yufang Zhang
My team has deployed hundreds of compute nodes on CentOS 5.4 (with python26
installed and Xen as the hypervisor), based on Folsom. It does work on our
production system :)


2013/8/17 Miller, Mark M (EB SW Cloud - RD - Corvallis) 
mark.m.mil...@hp.com

   Is OpenStack supported on CentOS running Python 2.6?


 Thanks,


 Mark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] General Question about CentOS

2013-08-16 Thread Shake Chen
CentOS 6.x ships Python 2.6.6, and OpenStack can run on it. You can
check RDO:

http://openstack.redhat.com/Quickstart


On Sat, Aug 17, 2013 at 8:05 AM, Yufang Zhang yufang521...@gmail.comwrote:

 My team has deployed hundreds of compute nodes on CentOS 5.4 (with python26
 installed and Xen as the hypervisor), based on Folsom. It does work on our
 production system :)


 2013/8/17 Miller, Mark M (EB SW Cloud - RD - Corvallis) 
 mark.m.mil...@hp.com

   Is OpenStack supported on CentOS running Python 2.6?


 Thanks,


 Mark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Shake Chen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - python-neutronclient build failing for latest code reviews

2013-08-16 Thread Henry Gessau
I asked on #openstack-infra, and clarkb immediately identified it as a
problem with cliff. The cliff folks have apparently already fixed it in
cliff 1.4.3, which is now on the openstack.org PyPI mirror, so new gate
jobs should start passing now.
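
For anyone checking a local tox environment, one way to confirm the fixed
release has been picked up is to compare the installed cliff version against
1.4.3; a small sketch, assuming setuptools' pkg_resources is available (the
has_fixed_cliff helper is made up for illustration):

# Small check, assuming pkg_resources (ships with setuptools) is available:
# confirm the local environment has a cliff release that includes the fix.
import pkg_resources

def has_fixed_cliff(minimum='1.4.3'):
    installed = pkg_resources.get_distribution('cliff').version
    return pkg_resources.parse_version(installed) >= pkg_resources.parse_version(minimum)

if __name__ == '__main__':
    print('cliff %s installed; new enough: %s'
          % (pkg_resources.get_distribution('cliff').version, has_fixed_cliff()))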

On Fri, Aug 16, at 7:34 pm, Ronak Shah ro...@nuagenetworks.net wrote:

 Hi,
 I can see at the following link that many of the latest code reviews are
 reporting build failures at the same point:
 
 https://review.openstack.org/#/q/status:open+project:openstack/python-neutronclient,n,z
 
 The backtrace looks like:
 
 
 ft46.1: tests.unit.test_shell.ShellTest.test_auth_StringException: Traceback 
 (most recent call last):
   File 
 /home/jenkins/workspace/gate-python-neutronclient-python26/tests/unit/test_shell.py,
  line 71, in setUp
 _shell = openstack_shell.NeutronShell('2.0')
   File 
 /home/jenkins/workspace/gate-python-neutronclient-python26/neutronclient/shell.py,
  line 244, in __init__
 command_manager=commandmanager.CommandManager('neutron.cli'), )
   File 
 /home/jenkins/workspace/gate-python-neutronclient-python26/.tox/py26/lib/python2.6/site-packages/cliff/app.py,
  line 72, in __init__
 self._set_streams(stdin, stdout, stderr)
   File 
 /home/jenkins/workspace/gate-python-neutronclient-python26/.tox/py26/lib/python2.6/site-packages/cliff/app.py,
  line 89, in _set_streams
 self.stdin = stdin or codecs.getreader(encoding)(sys.stdin)
   File 
 /home/jenkins/workspace/gate-python-neutronclient-python26/.tox/py26/lib64/python2.6/codecs.py,
  line 984, in getreader
 return lookup(encoding).streamreader
 TypeError: lookup() argument 1 must be string, not None
 
 
 Is anyone already looking into this?
 
 Thanks,
 Ronak
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] devstack exercise test failed at euca-register

2013-08-16 Thread XINYU ZHAO
Without a proxy, the test case passes. With a proxy set in localrc,
euca-register fails with a 400 code. It is odd because 127.0.0.1 is
already included in no_proxy, and it turned out that the API call never
went through the proxy anyway.
Here is a capture of both the with-proxy and without-proxy scenarios.
Comparing them shows they are basically the same, except that the former
received a 400 Bad Request code:


POST /services/Cloud/ HTTP/1.1

Host: 127.0.0.1:8773

Accept-Encoding: identity

Content-Length: 296

Content-Type: application/x-www-form-urlencoded; charset=UTF-8

User-Agent: Boto/2.10.0 (linux2)



AWSAccessKeyId=3cfbdaae44a94dc59959d0d88bfc4f9c&Action=RegisterImage&Architecture=i386&ImageLocation=testbucket%2Fbundle.img.manifest.xml&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2013-08-17T01%3A24%3A51Z&Version=2009-11-30&Signature=jk8G7EpYn2mcjxQFT%2B53Lgg4usdxviKwpvXfLnxYrHI%3D

HTTP/1.1 400 Bad Request

Content-Type: text/xml

Content-Length: 207

Date: Sat, 17 Aug 2013 01:24:51 GMT



<?xml version="1.0"?>
<Response>
  <Errors><Error><Code>S3ResponseError</Code><Message>Unknown error occured.</Message></Error></Errors>
  <RequestID>req-d2138d8f-6363-4b65-b793-a2bb2d12baee</RequestID>
</Response>





Without proxy:

POST /services/Cloud/ HTTP/1.1

Host: 127.0.0.1:8773

Accept-Encoding: identity

Content-Length: 296

Content-Type: application/x-www-form-urlencoded; charset=UTF-8

User-Agent: Boto/2.10.0 (linux2)



AWSAccessKeyId=b8a07080b7394dfea0954dcd13a95aca&Action=RegisterImage&Architecture=i386&ImageLocation=testbucket%2Fbundle.img.manifest.xml&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2013-08-17T01%3A47%3A42Z&Version=2009-11-30&Signature=IV4heXI0GGp2a7gg90ZratX%2F2RxPbmqK6al26g72azM%3D

HTTP/1.1 200 OK

Content-Type: text/xml

Content-Length: 198

Date: Sat, 17 Aug 2013 01:47:43 GMT



<RegisterImageResponse xmlns="http://ec2.amazonaws.com/doc/2009-11-30/">
  <requestId>req-6ea23353-5902-4ac3-b298-13bd841d9409</requestId>
  <imageId>ami-0001</imageId>
</RegisterImageResponse>
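
One quick way to see whether a request to 127.0.0.1:8773 should be bypassing
the proxy at all is to ask the standard library directly; a minimal sketch,
assuming only the stdlib and the same shell environment the exercise runs in:

# Minimal sketch: report what the proxy/no_proxy settings resolve to for the
# EC2 endpoint used above, using only the standard library.
import os
try:
    from urllib import getproxies, proxy_bypass        # Python 2
except ImportError:
    from urllib.request import getproxies, proxy_bypass  # Python 3

host = '127.0.0.1:8773'
print('proxies in environment: %r' % getproxies())
print('no_proxy setting: %r' % os.environ.get('no_proxy'))
print('would bypass proxy for %s: %s' % (host, proxy_bypass(host)))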




On Fri, Aug 16, 2013 at 9:38 AM, XINYU ZHAO xyzje...@gmail.com wrote:

 bump.
 any input is appreciated.


 On Thu, Aug 15, 2013 at 5:04 PM, XINYU ZHAO xyzje...@gmail.com wrote:

  Updated every project to the latest, but each time I ran devstack, the
  exercise test failed at the same place, bundle.sh.
 Any hints?

 In console.log

 Uploaded image as testbucket/bundle.img.manifest.xml
 ++ euca-register testbucket/bundle.img.manifest.xml
 ++ cut -f2
 + AMI='S3ResponseError: Unknown error occured.'
 + die_if_not_set 57 AMI 'Failure registering testbucket/bundle.img'
 + local exitcode=0
 ++ set +o
 ++ grep xtrace
 + FXTRACE='set -o xtrace'
 + set +o xtrace
 + timeout 15 sh -c 'while euca-describe-images | grep S3ResponseError: 
 Unknown error occured. | grep -q available; do sleep 1; done'
 grep: Unknown: No such file or directory
 grep: error: No such file or directory
 grep: occured.: No such file or directory
 close failed in file object destructor:
 sys.excepthook is missing
 lost sys.stderr
 + euca-deregister S3ResponseError: Unknown error occured.
 Only 1 argument (image_id) permitted
 + die 65 'Failure deregistering S3ResponseError: Unknown error occured.'
 + local exitcode=1
 + set +o xtrace
 [Call Trace]
 /opt/stack/new/devstack/exercises/bundle.sh:65:die
 [ERROR] /opt/stack/new/devstack/exercises/bundle.sh:65 Failure deregistering 
 S3ResponseError: Unknown error occured.



 Here is what recorded in n-api log.

 2013-08-15 15:44:20.331 27003 DEBUG nova.utils [-] Reloading cached file 
 /etc/nova/policy.json read_cached_file /opt/stack/new/nova/nova/utils.py:814
 2013-08-15 15:44:20.363 DEBUG nova.api.ec2 
 [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] action: RegisterImage 
 __call__ /opt/stack/new/nova/nova/api/ec2/__init__.py:325
 2013-08-15 15:44:20.364 DEBUG nova.api.ec2 
 [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] arg: Architecture   
  val: i386 __call__ /opt/stack/new/nova/nova/api/ec2/__init__.py:328
 2013-08-15 15:44:20.364 DEBUG nova.api.ec2 
 [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] arg: ImageLocation  
  val: testbucket/bundle.img.manifest.xml __call__ 
 /opt/stack/new/nova/nova/api/ec2/__init__.py:328
 2013-08-15 15:44:20.370 CRITICAL nova.api.ec2 
 [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] Unexpected 
 S3ResponseError raised
 2013-08-15 15:44:20.370 CRITICAL nova.api.ec2 
 [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] Environment: 
 {CONTENT_TYPE: application/x-www-form-urlencoded; charset=UTF-8, 
 SCRIPT_NAME: /services/Cloud, REQUEST_METHOD: POST, HTTP_HOST: 
 127.0.0.1:8773, PATH_INFO: /, SERVER_PROTOCOL: HTTP/1.0, 
 HTTP_USER_AGENT: Boto/2.10.0 (linux2), RAW_PATH_INFO: 
 /services/Cloud/, REMOTE_ADDR: 127.0.0.1, REMOTE_PORT: 44294, 
 wsgi.url_scheme: http, SERVER_NAME: 127.0.0.1, SERVER_PORT: 
 8773, GATEWAY_INTERFACE: CGI/1.1, HTTP_ACCEPT_ENCODING: identity}
 2013-08-15 15:44:20.371 DEBUG nova.api.ec2.faults 
 [req-5599cc0f-35b5-4451-9c96-88b48cc4600e demo demo] EC2 error 

Re: [openstack-dev] Launchpad bug tracker defects (was: Proposal oslo.db lib)

2013-08-16 Thread Clint Byrum
Excerpts from Thierry Carrez's message of 2013-08-16 13:55:46 -0700:
 Jay Pipes wrote:
  Are you going to create a separate Launchpad project for the library
  and track bugs against it separately? Or are you going to use the oslo
  project in Launchpad for that?
 
  At the moment all of the oslo.* projects are just grouped under the
  overall Oslo project in LP.  Unless there's a reason to do otherwise I
  would expect that to be true of oslo.db too.
 
  Has that decision been re-evaluated recently?
 
  I feel like bug trackers are more useful when they are more focused. But
  perhaps there are other reasons behind using a shared bug tracker.
  
  +1
  
  The alternative (relying on users to tag bugs consistently) is error-prone.
 
 The reason is that it's actually difficult to get a view of all oslo
 bugs due to Launchpad shortcomings (a project can only be in one project
 group). So keeping them in a single project simplifies the work of
 people that look after all of Oslo.
 
 This should be fixed in the future with a task tracker that handles
 project groups sanely, and then there is no reason at all to use the
 same project for different repositories.
 

I know this sounds like a crazy idea, but have we looked at investing any
time in adding this feature to Launchpad?

TripleO has the same problem. We look at bugs for:

tripleo
diskimage-builder
os-apply-config
os-collect-config
os-refresh-config

Now, having all of those in one project is simply not an option, as they
are emphatically different things. Part of TripleO is allowing users to
swap pieces out for others, so having clear lines between components is
critical.

I remember similar problems working on juju, juju-jitsu, charm-tools,
and juju-core.

Seems like it would be worth a small investment in Launchpad vs. having
to switch to another tracker.
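
In the meantime, a combined view can be stitched together on the client side;
a rough sketch, assuming launchpadlib is installed and anonymous read-only
access is enough (the consumer name and helper are made up for illustration):

# Rough sketch: aggregate open bug tasks across several related Launchpad
# projects on the client side, assuming launchpadlib is installed.
from launchpadlib.launchpad import Launchpad

PROJECTS = [
    'tripleo',
    'diskimage-builder',
    'os-apply-config',
    'os-collect-config',
    'os-refresh-config',
]

def open_bug_tasks(project_names):
    # Anonymous, read-only login against production Launchpad.
    lp = Launchpad.login_anonymously('bug-overview', 'production')
    for name in project_names:
        project = lp.projects[name]
        # searchTasks() with no arguments returns the open bug tasks.
        for task in project.searchTasks():
            yield name, task

if __name__ == '__main__':
    for project_name, task in open_bug_tasks(PROJECTS):
        print('%-20s %s' % (project_name, task.title))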

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev