[openstack-dev] Large Ephemeral Disk Restrictions

2013-08-20 Thread Aditi Raveesh
Hi,

Currently in openstack, the ephemeral disk is not migrated on resize. This
has the potential for data loss.
We can change the migrations to include the ephemeral disks by default.
We can have a restriction on migrating large instances by the total disk
size (root+ephemeral).
We can also provide an option to exclude ephemeral disks during migrations
if the user only cares about the root disk.
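As a rough illustration of the last two points (the option name and the
instance attributes are hypothetical, not from the blueprint), the guard
could look something like:

    # Hypothetical check: refuse to include an instance in a resize/migration
    # when the combined root + ephemeral disk exceeds an operator-set limit.
    def migration_allowed(instance, max_migrate_disk_gb, include_ephemeral=True):
        total_gb = instance.root_gb
        if include_ephemeral:
            total_gb += instance.ephemeral_gb
        return total_gb <= max_migrate_disk_gb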
Blueprint:
https://blueprints.launchpad.net/nova/+spec/ephemeral-disk-restrictions

Any thoughts/suggestions?

Thanks,
Aditi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Community Training project needs you

2013-08-20 Thread Sean Roberts
We need your help! This is a community driven project to provide the user group 
community access to OpenStack training materials. We cannot make this work 
without your help. We are looking for individuals willing to commit to 
contributing a page at a time. All the details are here 
http://docs.openstack.org/trunk/openstack-training/
Contact me at @sarob or email for any additional information.

~sean
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oauth and keystone

2013-08-20 Thread Yongsheng Gong
Hi,

I have seen the code about oauth in the keystone but I cannot find the
document about how to setup keystone and other openstack services to enable
oauth.

Can anyone tell me how to setup an env like this?

Thanks
Yong Sheng Gong
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-20 Thread Gareth
I like this guy; his review is really helpful for my swift patch set.

Thanks again, Alex.


On Wed, Aug 21, 2013 at 1:45 AM, Alex Gaynor  wrote:

> Thanks everyone, I look forward to continuing to help out!
>
> Alex
>
>
> On Tue, Aug 20, 2013 at 9:49 AM, Doug Hellmann <
> doug.hellm...@dreamhost.com> wrote:
>
>> Without any objections, I've added Alex Gaynor to the requirements-core
>> team.
>>
>> Welcome, Alex!
>>
>>
>> On Sat, Aug 17, 2013 at 2:46 PM, Jeremy Stanley wrote:
>>
>>> On 2013-08-16 11:04:14 -0400 (-0400), Doug Hellmann wrote:
>>> > I'd like to propose Alex Gaynor for core status on the
>>> > requirements project.
>>> [...]
>>>
>>> Agreed, I for one welcome his continued assistance.
>>> --
>>> Jeremy Stanley
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> "I disapprove of what you say, but I will defend to the death your right
> to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
> "The people's good is the highest law." -- Cicero
> GPG Key fingerprint: 125F 5C67 DFE9 4084
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Gareth

*Cloud Computing, OpenStack, Fitness, Basketball*
*OpenStack contributor*
*Company: UnitedStack *
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] PEP8 public and internal interfaces for keystoneclient

2013-08-20 Thread Monty Taylor


On 08/20/2013 09:26 PM, Brant Knudson wrote:
> 
> We've been too lax about delineating public and internal interfaces in
> python-keystoneclient. This makes changes and reviews difficult because
> we don't know what we can change without breaking applications. For
> example, we thought we could rename a part, but then it broke somebody
> (Horizon?)[0].
> 
> We've got a lot of changes we'd like to make to the python-keystoneclient
> to share authentication, so this problem is only going to get worse if
> we don't get it under control.
> 
> We don't have to come up with a convention for public/internal because
> PEP8 defines it for us[1], and we're supposedly enforcing PEP8.
> 
> If we were to strictly interpret python-keystoneclient against PEP8 then
> everything would be internal because the keystoneclient package and
> everything in it is internal since it has no docs. If we consider the
> usage doc in doc/source[2] as public documentation, then there's a few
> classes and packages that could be considered public, too.
> 
> Since we're in undefined territory here, I think we need to be more
> conservative. A very conservative approach would be to just consider
> everything in keystoneclient to be public unless it's explicitly
> internal (prefixed with _ or its documentation says it's internal).
> This essentially makes everything but a few internal methods public.
> We don't want everything to be public because changes will be more
> difficult to make than they should be.
> 
> I propose that we bring keystoneclient into compliance with PEP8.
> Let's start assuming everything is public unless it's explicitly
> internal. Then make the following changes:
> 0) Rename all those things that should be internal to prefix
> with _, while preserving backwards compatibility by leaving the old
> symbol there marked as deprecated.
> 1) Document public APIs with docstrings.
> 2) Change modules to explicitly declare public APIs in __all__.

I think, honestly, that we need to start using __all__ in all of the
client libs. I agree with everything above.

> This will be in a point release. Then in the next major release we
> yank out the deprecated function.
> 
> [0] https://bugs.launchpad.net/python-keystoneclient/+bug/1211998
> [1] http://www.python.org/dev/peps/pep-0008/#public-and-internal-interfaces
> [2]
> https://github.com/openstack/python-keystoneclient/blob/master/doc/source/using-api.rst
> 
>  - Brant
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] PEP8 public and internal interfaces for keystoneclient

2013-08-20 Thread Brant Knudson
We've been too lax about delineating public and internal interfaces in
python-keystoneclient. This makes changes and reviews difficult because
we don't know what we can change without breaking applications. For
example, we thought we could rename a part, but then it broke somebody
(Horizon?)[0].

We've got a lot of changes we'd like to make to the python-keystoneclient
to share authentication, so this problem is only going to get worse if
we don't get it under control.

We don't have to come up with a convention for public/internal because
PEP8 defines it for us[1], and we're supposedly enforcing PEP8.

If we were to strictly interpret python-keystoneclient against PEP8 then
everything would be internal because the keystoneclient package and
everything in it is internal since it has no docs. If we consider the
usage doc in doc/source[2] as public documentation, then there's a few
classes and packages that could be considered public, too.

Since we're in undefined territory here, I think we need to be more
conservative. A very conservative approach would be to just consider
everything in keystoneclient to be public unless it's explicitly
internal (prefixed with _ or its documentation says it's internal).
This essentially makes everything but a few internal methods public.
We don't want everything to be public because changes will be more
difficult to make than they should be.

I propose that we bring keystoneclient into compliance with PEP8.
Let's start assuming everything is public unless it's explicitly
internal. Then make the following changes:
0) Rename all those things that should be internal to prefix
with _, while preserving backwards compatibility by leaving the old
symbol there marked as deprecated.
1) Document public APIs with docstrings.
2) Change modules to explicitly declare public APIs in __all__.

This will be in a point release. Then in the next major release we
yank out the deprecated function.
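To make 0)-2) concrete, a module might end up looking roughly like the sketch
below (the names are illustrative only, not the actual keystoneclient code):

    import warnings

    __all__ = ['Client']          # 2) explicit public API

    class Client(object):         # 1) documented, public
        """Public client class."""

    def _build_headers(token):    # 0) renamed to internal
        return {'X-Auth-Token': token}

    def build_headers(token):     # old name kept for backwards compatibility
        warnings.warn('build_headers is deprecated; it is now internal',
                      DeprecationWarning)
        return _build_headers(token)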

[0] https://bugs.launchpad.net/python-keystoneclient/+bug/1211998
[1] http://www.python.org/dev/peps/pep-0008/#public-and-internal-interfaces
[2]
https://github.com/openstack/python-keystoneclient/blob/master/doc/source/using-api.rst

 - Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] import task update

2013-08-20 Thread Joshua Harlow
Thx Brian,

Sounds good. I will do my best to show-up.

From looking at the commit @
https://github.com/flwang/glance/commit/f1c0a94fb7b1a829bfe1828c61abb8b2bd2fb917#L4R265
there does seem to be quite a bit of similarity (which isn't a bad thing,
means we are just thinking on the same lines).
https://github.com/stackforge/taskflow/blob/master/taskflow/persistence/backends/sqlalchemy/models.py#L147
looks a lot like the storage model in that commit (even some of the same
attributes, haha) ;)

Also if you are thinking that the distributed stuff might be overkill (idk
if it is for your use-case) taskflow can also be set up to just run the
tasks in your own backend without pulling in the celery components.

Jump on #openstack-state-management if u want to also :-)
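For reference, running tasks locally (no celery) boils down to something like
the sketch below -- the module paths follow the current taskflow layout and the
task/flow names are made up, so treat it as illustrative only:

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    class ImportImage(task.Task):
        default_provides = 'image_id'

        def execute(self, image_uri):
            # fetch/verify the image here; just echo something back for now
            return 'image-for-%s' % image_uri

    flow = linear_flow.Flow('import').add(ImportImage())
    result = engines.run(flow, store={'image_uri': 'http://example.com/img'})
    print(result['image_id'])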

On 8/20/13 6:10 PM, "Brian Rosmaita"  wrote:

>Joshua, thanks for reaching out.  The informal consensus was that
>taskflow was overkill for what we're doing in some ways, and maybe not
>mature enough in others.  But maybe we should reconsider (especially if
>you're willing to help out!).  If any Glance people aren't familiar with
>the current state of taskflow, Jessica Lucci has posted a quick 27 min
>video walkthrough:
>  http://www.youtube.com/watch?v=SJLc3U-KYxQ
>
>if you watch, here are the docs she uses:
>  https://wiki.openstack.org/wiki/TaskFlow
>  https://github.com/stackforge/taskflow
>  https://wiki.openstack.org/wiki/Celery
>  https://wiki.openstack.org/wiki/DistributedTaskManagement
>  https://code.launchpad.net/taskflow
>
>Maybe we can have a quick discussion at the weekly Glance meeting on
>Thursday (20:00 UTC this week in #openstack-meeting-alt on freenode).
>
>From: Joshua Harlow [harlo...@yahoo-inc.com]
>Sent: Tuesday, August 20, 2013 6:18 PM
>To: OpenStack Development Mailing List; Brian Rosmaita
>Subject: Re: [openstack-dev] [Glance] import task update
>
>Very cool, would u consider trying to use taskflow for some of this. It
>seems to fit (or could fit) part of the bill nicely.
>
>I'd be up to working with you guys to make this happen, if you guys want
>to discuss more u know where to find me (on IRC, ha).
>
>On 8/20/13 2:44 PM, "Brian Rosmaita"  wrote:
>
>>In light of today's IRC meeting in #openstack-glance (notes in etherpad
>>https://etherpad.openstack.org/LG39UnQA7z), I've updated the tasks api
>>document and the import document:
>>
>>   https://wiki.openstack.org/wiki/Glance-tasks-api
>>   https://wiki.openstack.org/wiki/Glance-tasks-import
>>
>>cheers,
>>brian
>>
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] import task update

2013-08-20 Thread Brian Rosmaita
Joshua, thanks for reaching out.  The informal consensus was that taskflow was 
overkill for what we're doing in some ways, and maybe not mature enough in 
others.  But maybe we should reconsider (especially if you're willing to help 
out!).  If any Glance people aren't familiar with the current state of 
taskflow, Jessica Lucci has posted a quick 27 min video walkthrough:
  http://www.youtube.com/watch?v=SJLc3U-KYxQ

if you watch, here are the docs she uses:
  https://wiki.openstack.org/wiki/TaskFlow
  https://github.com/stackforge/taskflow
  https://wiki.openstack.org/wiki/Celery
  https://wiki.openstack.org/wiki/DistributedTaskManagement
  https://code.launchpad.net/taskflow

Maybe we can have a quick discussion at the weekly Glance meeting on Thursday 
(20:00 UTC this week in #openstack-meeting-alt on freenode).

From: Joshua Harlow [harlo...@yahoo-inc.com]
Sent: Tuesday, August 20, 2013 6:18 PM
To: OpenStack Development Mailing List; Brian Rosmaita
Subject: Re: [openstack-dev] [Glance] import task update

Very cool, would u consider trying to use taskflow for some of this. It
seems to fit (or could fit) part of the bill nicely.

I'd be up to working with you guys to make this happen, if you guys want
to discuss more u know where to find me (on IRC, ha).

On 8/20/13 2:44 PM, "Brian Rosmaita"  wrote:

>In light of today's IRC meeting in #openstack-glance (notes in etherpad
>https://etherpad.openstack.org/LG39UnQA7z), I've updated the tasks api
>document and the import document:
>
>   https://wiki.openstack.org/wiki/Glance-tasks-api
>   https://wiki.openstack.org/wiki/Glance-tasks-import
>
>cheers,
>brian
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Vishvananda Ishaya

On Aug 20, 2013, at 4:15 PM, Mike Perez  wrote:

> On Tue, Aug 20, 2013 at 2:52 PM, Vishvananda Ishaya  
> wrote:
> 
> On Aug 20, 2013, at 2:44 PM, Mike Perez  wrote:
>> For #1 and #2, really this sounds like another thing doing this along with 
>> Ceilometer. I would really like to leave this in Ceilometer and not have 
>> each project get more complex in having to keep track of this on their own. 
>> I start having fears of discrepancy bugs of what projects' audit say and 
>> what Ceilometer audit says.
>> 
>> Have Ceilometer do audits, keep temporary logs for specified time, and leave 
>> it up to the ops user to collect and archive the information that's 
>> important to them.
>>  
>> To answer your original question, I say just get rid of the column and do a 
>> hard delete. We didn't have Ceilometer then, so we no longer need to keep 
>> track in each project.
>> 
>> Migration path of course should be thought of for the users that need this 
>> information archived if we decide to get rid of the columns.
> 
> This was actually discussed during the summit session. The plan at that time 
> was:
> 
> a) bring back unique constraints by improving soft delete
> b) switch to archiving via shadow tables
> c) remove archiving and use ceilometer for all of the necessary parts.
> 
> c) is going to take a while. There are still quite a few places in nova, for 
> example, that depend on accessing deleted records.
> 
> We realized that c) was not achievable in a single release so decided to do 
> a) so we could have unique constraints until the other issues were solved.
> 
> So ultimately I think we are debating things which we already have a plan for.
> 
> Vish
> 
> I guess I'm still failing to see why a, b and then c as the proposed path. 
> I'm mainly curious because the change is being proposed in Cinder and I still 
> can't make sense of why. [1] I'm not saying this idea is wrong, I just don't 
> understand the use case yet.
> 
> For existing environments, why can't we just stop doing soft deletes and have 
> audits happen through Ceilometer as the agreed end goal. We can keep around 
> the delete column for deprecation reasons and allow time for ops to take that 
> information and store it how they need it.

For projects that don't have a bunch of legacy code depending on soft deletes, 
I don't see any reason why you couldn't go straight to c)

Vish

> 
> [1] - https://review.openstack.org/#/c/40660/
> 
> -Mike Perez
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Chris Behrens

On Aug 20, 2013, at 3:29 PM, Vishvananda Ishaya  wrote:

>>> c) is going to take a while. There are still quite a few places in nova,
>>> for example, that depend on accessing deleted records.
>> 
>> Do you have a list of these places?
> 
> No. I believe Joe Gordon did an initial look long ago. Off the top of my head 
> I remember flavors and the simple-usage extension use them.


Yeah, flavors is a problem still, I think.  Although we've moved towards fixing 
most of it.

Unfortunately the API supports showing some amount of deleted instances if you 
specify 'changes-since'.  Although since I don't think 'some amount' is really 
quantified, we may be able to ignore that.  We should make that go away in v3… 
as long as there is some way for someone to see instances that can be reclaimed 
(soft delete state which is different than DB soft-delete)

There are some periodic tasks that look at deleted records in order to sync 
things.  The one that stands out to me is '_cleanup_running_deleted_instances'.

- Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-20 Thread Jay Pipes

On 08/19/2013 08:27 AM, Sandy Walsh wrote:



On 08/19/2013 05:08 AM, Julien Danjou wrote:

On Sun, Aug 18 2013, Jay Pipes wrote:


I'm proposing that in these cases, a *new* resource would be added to the
resource table (and its ID inserted in the meter table) with the new
flavor/instance's metadata.


Ah I see. Considering we're storing metadata as a serialized string
(whereas it's a dict), isn't there a chance we fail?
I'm not sure about the idempotence of the JSON serialization on dicts.


Yeah, using a json blob should only be for immutable data.


Well, to be perfectly frank, fields that store JSON blobs in a RDBMS 
should be reserved for:


a) Data that never needs to be used in a search filter
b) Data that never needs to aggregated in a group by

If any part of a JSON blob doesn't meet the above, it should be removed 
from the JSON blob and put into its own fields in a table (or 
alternately, put into something like the Trait model)
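A minimal sketch of that split (purely illustrative -- this is not the actual
Ceilometer schema, and the Trait-style table here is invented for the example):

    from sqlalchemy import Column, ForeignKey, Integer, String, Text
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class Resource(Base):
        __tablename__ = 'resource'
        id = Column(String(255), primary_key=True)
        # opaque extras only: never used in WHERE filters or GROUP BY
        metadata_blob = Column(Text)

    class ResourceTrait(Base):
        __tablename__ = 'resource_trait'
        id = Column(Integer, primary_key=True)
        resource_id = Column(String(255), ForeignKey('resource.id'))
        key = Column(String(64), index=True)      # e.g. 'instance_type'
        value = Column(String(255), index=True)   # filterable/groupable
        resource = relationship(Resource, backref='traits')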


> I'm assuming metadata can change so we'd need idempotence. I could easily
> see two pipelines altering metadata fields. Last write wins. :(


I actually don't think metadata about resources does change, or at 
least, if it does change, then it describes a new resource.


As an example, if an instance resource is resized from an m1.tiny to an 
m2.xlarge, is it still really the same resource? I would say "no", it 
isn't...at least as far as CM should be concerned, since it consumes an 
entirely different pattern of metered usages now.


Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Mike Perez
On Tue, Aug 20, 2013 at 2:52 PM, Vishvananda Ishaya
wrote:

>
> On Aug 20, 2013, at 2:44 PM, Mike Perez  wrote:
>
> For #1 and #2, really this sounds like another thing doing this along with
> Ceilometer. I would really like to leave this in Ceilometer and not have
> each project get more complex in having to keep track of this on their own.
> I start having fears of discrepancy bugs of what projects' audit say and
> what Ceilometer audit says.
>
> Have Ceilometer do audits, keep temporary logs for specified time, and
> leave it up to the ops user to collect and archive the information that's
> important to them.
>
> To answer your original question, I say just get rid of the column and do
> a hard delete. We didn't have Ceilometer then, so we no longer need to keep
> track in each project.
>
> Migration path of course should be thought of for the users that need this
> information archived if we decide to get rid of the columns.
>
>
> This was actually discussed during the summit session. The plan at that
> time was:
>
> a) bring back unique constraints by improving soft delete
> b) switch to archiving via shadow tables
> c) remove archiving and use ceilometer for all of the necessary parts.
>
> c) is going to take a while. There are still quite a few places in nova,
> for example, that depend on accessing deleted records.
>
> We realized that c) was not achievable in a single release so decided to
> do a) so we could have unique constraints until the other issues were
> solved.
>
> So ultimately I think we are debating things which we already have a plan
> for.
>
> Vish
>

I guess I'm still failing to see why a, b and then c as the proposed path.
I'm mainly curious because the change is being proposed in Cinder and I
still can't make sense of why. [1] I'm not saying this idea is wrong, I
just don't understand the use case yet.

For existing environments, why can't we just stop doing soft deletes and
have audits happen through Ceilometer as the agreed end goal. We can keep
around the delete column for deprecation reasons and allow time for ops to
take that information and store it how they need it.

[1] - https://review.openstack.org/#/c/40660/

-Mike Perez
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] now with pbr

2013-08-20 Thread John Dickinson
A quick note to let everyone know that https://review.openstack.org/#/c/28892/ 
has merged and now means that Swift is using pbr for version numbers.

This has a couple of consequences that need to be made public:

1) In order to migrate to using pbr, we consumed the 1.9.2 version. I have 
therefore bumped the current milestone target to 1.9.3.

2) Anyone building packages for Swift will need to update their build scripts. 
You should read http://docs.openstack.org/developer/pbr/packagers.html. If you 
have already built packages for other projects that are using pbr, these are 
familiar to you.
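For anyone who hasn't touched a pbr-based project yet, the build boilerplate is
tiny -- roughly the following (see the packagers doc above for the details that
matter to distros, such as pre-versioning and overriding the version):

    # setup.py -- everything else (name, version derived from git tags,
    # requirements) is driven by setup.cfg and requirements.txt
    import setuptools

    setuptools.setup(
        setup_requires=['pbr'],
        pbr=True)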

Unless something comes up (eg like a security issue), the next release of Swift 
will be the one included in the OpenStack Havana release and will most likely 
be version 1.10.0.

If you do manage Swift packages, please go ahead and start looking at whatever 
changes you need to make so that you won't have issues when the Havana release 
happens.

--John





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Vishvananda Ishaya

On Aug 20, 2013, at 3:27 PM, Jay Pipes  wrote:

> On 08/20/2013 05:52 PM, Vishvananda Ishaya wrote:
>> 
>> On Aug 20, 2013, at 2:44 PM, Mike Perez > > wrote:
>> 
>>> On Tue, Aug 20, 2013 at 1:59 PM, Jay Pipes >> > wrote:
>>> 
>>> 
>>>We should take a look at the various entities in the
>>>various database schemata and ask the following questions:
>>> 
>>>1) Do we care about archival of the entity?
>>> 
>>>2) Do we care about audit history of changes to the entity?
>>> 
>>> 
>>> For #1 and #2, really this sounds like another thing doing this along
>>> with Ceilometer. I would really like to leave this in Ceilometer and
>>> not have each project get more complex in having to keep track of this
>>> on their own. I start having fears of discrepancy bugs of what
>>> projects' audit say and what Ceilometer audit says.
>>> 
>>> Have Ceilometer do audits, keep temporary logs for specified time, and
>>> leave it up to the ops user to collect and archive the information
>>> that's important to them.
>>> To answer your original question, I say just get rid of the column and
>>> do a hard delete. We didn't have Ceilometer then, so we no longer need
>>> to keep track in each project.
>>> 
>>> Migration path of course should be thought of for the users that need
>>> this information archived if we decide to get rid of the columns.
>> 
>> This was actually discussed during the summit session. The plan at that
>> time was:
>> 
>> a) bring back unique constraints by improving soft delete
>> b) switch to archiving via shadow tables
>> c) remove archiving and use ceilometer for all of the necessary parts.
>> 
>> c) is going to take a while. There are still quite a few places in nova,
>> for example, that depend on accessing deleted records.
>> 
>> We realized that c) was not achievable in a single release so decided to
>> do a) so we could have unique constraints until the other issues were
>> solved.
>> 
>> So ultimately I think we are debating things which we already have a
>> plan for.
> 
> Also, another follow-up question: was the decision to go with the above steps 
> applied for all entities in the database(s) or was there to be a decision for 
> each entity instead of a one-size-fits-all plan?

I think the decision was to use the same method for everything, although I 
think I remember a recent email from Boris (whose team has took on a lot of 
this work), saying that they are not going to continue with the shadow table 
approach. So if we are finished with a) in nova during Havana, it might be good 
to put effort into c) for Icehouse.

Vish

> 
> Thanks,
> -jay
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Vishvananda Ishaya

On Aug 20, 2013, at 3:16 PM, Jay Pipes  wrote:

> On 08/20/2013 05:52 PM, Vishvananda Ishaya wrote:
>> 
>> On Aug 20, 2013, at 2:44 PM, Mike Perez > > wrote:
>> 
>>> On Tue, Aug 20, 2013 at 1:59 PM, Jay Pipes >> > wrote:
>>> 
>>> 
>>>We should take a look at the various entities in the
>>>various database schemata and ask the following questions:
>>> 
>>>1) Do we care about archival of the entity?
>>> 
>>>2) Do we care about audit history of changes to the entity?
>>> 
>>> 
>>> For #1 and #2, really this sounds like another thing doing this along
>>> with Ceilometer. I would really like to leave this in Ceilometer and
>>> not have each project get more complex in having to keep track of this
>>> on their own. I start having fears of discrepancy bugs of what
>>> projects' audit say and what Ceilometer audit says.
>>> 
>>> Have Ceilometer do audits, keep temporary logs for specified time, and
>>> leave it up to the ops user to collect and archive the information
>>> that's important to them.
>>> To answer your original question, I say just get rid of the column and
>>> do a hard delete. We didn't have Ceilometer then, so we no longer need
>>> to keep track in each project.
>>> 
>>> Migration path of course should be thought of for the users that need
>>> this information archived if we decide to get rid of the columns.
>> 
>> This was actually discussed during the summit session. The plan at that
>> time was:
>> 
>> a) bring back unique constraints by improving soft delete
>> b) switch to archiving via shadow tables
>> c) remove archiving and use ceilometer for all of the necessary parts.
>> 
>> c) is going to take a while. There are still quite a few places in nova,
>> for example, that depend on accessing deleted records.
> 
> Do you have a list of these places?

No. I believe Joe Gordon did an initial look long ago. Off the top of my head I 
remember flavors and the simple-usage extension use them.

> 
>> We realized that c) was not achievable in a single release so decided to
>> do a) so we could have unique constraints until the other issues were
>> solved.
>> 
>> So ultimately I think we are debating things which we already have a
>> plan for.
> 
> Well, not everyone was in the summit session, for various reasons...and some 
> of us are still catching up. :)

Understood.

Vish

> 
> Best,
> -jay
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Jay Pipes

On 08/20/2013 05:52 PM, Vishvananda Ishaya wrote:


On Aug 20, 2013, at 2:44 PM, Mike Perez mailto:thin...@gmail.com>> wrote:


On Tue, Aug 20, 2013 at 1:59 PM, Jay Pipes mailto:jaypi...@gmail.com>> wrote:


We should take a look at the various entities in the
various database schemata and ask the following questions:

1) Do we care about archival of the entity?

2) Do we care about audit history of changes to the entity?


For #1 and #2, really this sounds like another thing doing this along
with Ceilometer. I would really like to leave this in Ceilometer and
not have each project get more complex in having to keep track of this
on their own. I start having fears of discrepancy bugs of what
projects' audit say and what Ceilometer audit says.

Have Ceilometer do audits, keep temporary logs for specified time, and
leave it up to the ops user to collect and archive the information
that's important to them.
To answer your original question, I say just get rid of the column and
do a hard delete. We didn't have Ceilometer then, so we no longer need
to keep track in each project.

Migration path of course should be thought of for the users that need
this information archived if we decide to get rid of the columns.


This was actually discussed during the summit session. The plan at that
time was:

a) bring back unique constraints by improving soft delete
b) switch to archiving via shadow tables
c) remove archiving and use ceilometer for all of the necessary parts.

c) is going to take a while. There are still quite a few places in nova,
for example, that depend on accessing deleted records.

We realized that c) was not achievable in a single release so decided to
do a) so we could have unique constraints until the other issues were
solved.

So ultimately I think we are debating things which we already have a
plan for.


Also, another follow-up question: was the decision to go with the above 
steps applied for all entities in the database(s) or was there to be a 
decision for each entity instead of a one-size-fits-all plan?


Thanks,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] import task update

2013-08-20 Thread Joshua Harlow
Very cool, would u consider trying to use taskflow for some of this. It
seems to fit (or could fit) part of the bill nicely.

I'd be up to working with you guys to make this happen, if you guys want
to discuss more u know where to find me (on IRC, ha).

On 8/20/13 2:44 PM, "Brian Rosmaita"  wrote:

>In light of today's IRC meeting in #openstack-glance (notes in etherpad
>https://etherpad.openstack.org/LG39UnQA7z), I've updated the tasks api
>document and the import document:
>
>   https://wiki.openstack.org/wiki/Glance-tasks-api
>   https://wiki.openstack.org/wiki/Glance-tasks-import
>
>cheers,
>brian
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Jay Pipes

On 08/20/2013 05:52 PM, Vishvananda Ishaya wrote:


On Aug 20, 2013, at 2:44 PM, Mike Perez mailto:thin...@gmail.com>> wrote:


On Tue, Aug 20, 2013 at 1:59 PM, Jay Pipes mailto:jaypi...@gmail.com>> wrote:


We should take a look at the various entities in the
various database schemata and ask the following questions:

1) Do we care about archival of the entity?

2) Do we care about audit history of changes to the entity?


For #1 and #2, really this sounds like another thing doing this along
with Ceilometer. I would really like to leave this in Ceilometer and
not have each project get more complex in having to keep track of this
on their own. I start having fears of discrepancy bugs of what
projects' audit say and what Ceilometer audit says.

Have Ceilometer do audits, keep temporary logs for specified time, and
leave it up to the ops user to collect and archive the information
that's important to them.
To answer your original question, I say just get rid of the column and
do a hard delete. We didn't have Ceilometer then, so we no longer need
to keep track in each project.

Migration path of course should be thought of for the users that need
this information archived if we decide to get rid of the columns.


This was actually discussed during the summit session. The plan at that
time was:

a) bring back unique constraints by improving soft delete
b) switch to archiving via shadow tables
c) remove archiving and use ceilometer for all of the necessary parts.

c) is going to take a while. There are still quite a few places in nova,
for example, that depend on accessing deleted records.


Do you have a list of these places?


We realized that c) was not achievable in a single release so decided to
do a) so we could have unique constraints until the other issues were
solved.

So ultimately I think we are debating things which we already have a
plan for.


Well, not everyone was in the summit session, for various reasons...and 
some of us are still catching up. :)


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Vishvananda Ishaya

On Aug 20, 2013, at 2:44 PM, Mike Perez  wrote:

> On Tue, Aug 20, 2013 at 1:59 PM, Jay Pipes  wrote:
> 
> We should take a look at the various entities in the various database 
> schemata and ask the following questions:
> 
> 1) Do we care about archival of the entity?
> 
> 2) Do we care about audit history of changes to the entity?
> 
> For #1 and #2, really this sounds like another thing doing this along with 
> Ceilometer. I would really like to leave this in Ceilometer and not have each 
> project get more complex in having to keep track of this on their own. I 
> start having fears of discrepancy bugs of what projects' audit say and what 
> Ceilometer audit says.
> 
> Have Ceilometer do audits, keep temporary logs for specified time, and leave 
> it up to the ops user to collect and archive the information that's important 
> to them.
>  
> To answer your original question, I say just get rid of the column and do a 
> hard delete. We didn't have Ceilometer then, so we no longer need to keep 
> track in each project.
> 
> Migration path of course should be thought of for the users that need this 
> information archived if we decide to get rid of the columns.

This was actually discussed during the summit session. The plan at that time 
was:

a) bring back unique constraints by improving soft delete
b) switch to archiving via shadow tables
c) remove archiving and use ceilometer for all of the necessary parts.

c) is going to take a while. There are still quite a few places in nova, for 
example, that depend on accessing deleted records.

We realized that c) was not achievable in a single release so decided to do a) 
so we could have unique constraints until the other issues were solved.

So ultimately I think we are debating things which we already have a plan for.

Vish

> 
> -Mike Perez
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] import task update

2013-08-20 Thread Brian Rosmaita
In light of today's IRC meeting in #openstack-glance (notes in etherpad 
https://etherpad.openstack.org/LG39UnQA7z), I've updated the tasks api document 
and the import document:

   https://wiki.openstack.org/wiki/Glance-tasks-api
   https://wiki.openstack.org/wiki/Glance-tasks-import

cheers,
brian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Mike Perez
On Tue, Aug 20, 2013 at 1:59 PM, Jay Pipes  wrote:

>
> We should take a look at the various entities in the various
> database schemata and ask the following questions:
>
> 1) Do we care about archival of the entity?
>
> 2) Do we care about audit history of changes to the entity?
>

For #1 and #2, really this sounds like another thing doing this along with
Ceilometer. I would really like to leave this in Ceilometer and not have
each project get more complex in having to keep track of this on their own.
I start having fears of discrepancy bugs of what projects' audit say and
what Ceilometer audit says.

Have Ceilometer do audits, keep temporary logs for specified time, and
leave it up to the ops user to collect and archive the information that's
important to them.

To answer your original question, I say just get rid of the column and do a
hard delete. We didn't have Ceilometer then, so we no longer need to keep
track in each project.

Migration path of course should be thought of for the users that need this
information archived if we decide to get rid of the columns.

-Mike Perez
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Jay Pipes

On 08/20/2013 04:16 PM, Chris Behrens wrote:


On Aug 20, 2013, at 1:05 PM, Jay Pipes  wrote:


I see the following use case:

1) Create something with a unique name within your tenant
2) Delete that
3) Create something with the same unique name immediately after

As a pointless and silly use case that we should not cater to.

It's made the database schema needlessly complex IMO and added columns to a 
unique constraint that make a DBA's job more complex in order to fulfill a use 
case that really isn't particularly compelling.

I was having a convo on IRC with Boris and stated the use case in different 
terms:

If you delete your Gmail email address, do you expect to immediately be able to 
create a new Gmail email with the previous address?

If you answer yes, then this unique constraint on the deleted column makes 
sense to you. If you answer no, then the whole thing seems like we've spent a 
lot of effort on something that isn't particularly useful except in random test 
cases that try to create and delete the same thing in rapid succession. And 
IMO, those kinds of test cases should be deleted -- hard-deleted.



I would answer 'no' to the gmail question.  I would answer 'yes' depending on 
what other things we may talk about.  If we put (or maybe we have this -- I 
didn't check) unique constraints on the metadata table for metadata key… It 
would be rather silly to not allow someone to reset some metadata with the same 
key immediately.  One could argue that we just un-delete the former row and 
update it, however… but I think that breaks archiving (something *I'm* not a fan 
of ;)


Yeah, totally agreed. It's wrong to lump all entities into the same boat.

Some back-of-napkin thoughts...

We should take a look at the various 
database schemata and ask the following questions:


1) Do we care about archival of the entity?

2) Do we care about audit history of changes to the entity?

3) Do we care about whether an entity with a unique name/key needs to be 
able to be deleted and immediately re-created with the same name/key?


For entities with "No" answers to #1 -- and I would suggest instance 
metadata might be one of these things... -- we could do hard-delete and 
hard-updates.


For entities with "Yes" answers to #1 and "No" answers to #2 and #3, we 
could use a simple periodic purge/archive process and leave deleted 
columns out of unique constraints on the tables


For entities with "Yes" answers to #1 and #2 and "No" answers to #3, we 
could use audit tables (note: not shadow tables, but audit tables, which 
record the changes made to entities over time) along with a periodic 
archival/purge process and leave the deleted columns out of unique 
constraints on the tables.


For entities with "Yes" answers to all three, we could use hard-delete 
in the main table along with audit tables and remove the 
deleted_at/deleted/created_at/updated_at from tables (since the entire 
history of the entity is contained in the audit table).
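For that last bucket, a bare-bones version of the audit-table idea might look
like this (a sketch only -- table and column names are invented, and a real
implementation would hook into the session/event machinery instead):

    from datetime import datetime
    from sqlalchemy import Column, DateTime, Integer, String, Text
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Flavor(Base):                      # hard-deleted, no soft-delete columns
        __tablename__ = 'flavors'
        id = Column(Integer, primary_key=True)
        name = Column(String(255), nullable=False, unique=True)

    class FlavorAudit(Base):                 # full change history lives here
        __tablename__ = 'flavors_audit'
        id = Column(Integer, primary_key=True)
        flavor_id = Column(Integer, nullable=False)
        action = Column(String(16))          # 'create' / 'update' / 'delete'
        snapshot = Column(Text)              # serialized row at time of change
        recorded_at = Column(DateTime, default=datetime.utcnow)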


Thoughts?
-jay




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Chris Behrens

On Aug 20, 2013, at 1:05 PM, Jay Pipes  wrote:

> I see the following use case:
> 
> 1) Create something with a unique name within your tenant
> 2) Delete that
> 3) Create something with the same unique name immediately after
> 
> As a pointless and silly use case that we should not cater to.
> 
> It's made the database schema needlessly complex IMO and added columns to a 
> unique constraint that make a DBA's job more complex in order to fulfill a 
> use case that really isn't particularly compelling.
> 
> I was having a convo on IRC with Boris and stated the use case in different 
> terms:
> 
> If you delete your Gmail email address, do you expect to immediately be able 
> to create a new Gmail email with the previous address?
> 
> If you answer yes, then this unique constraint on the deleted column makes 
> sense to you. If you answer no, then the whole thing seems like we've spent a 
> lot of effort on something that isn't particularly useful except in random 
> test cases that try to create and delete the same thing in rapid succession. 
> And IMO, those kinds of test cases should be deleted -- hard-deleted.
> 

I would answer 'no' to the gmail question.  I would answer 'yes' depending on 
what other things we may talk about.  If we put (or maybe we have this -- I 
didn't check) unique constraints on the metadata table for metadata key… It 
would be rather silly to not allow someone to reset some metadata with the same 
key immediately.  One could argue that we just un-delete the former row and 
update it, however… but I think that breaks archiving (something *I'm* not a fan 
of ;)

- Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Chris Behrens

On Aug 20, 2013, at 12:51 PM, Ed Leafe  wrote:

> On Aug 20, 2013, at 2:33 PM, Chris Behrens 
> wrote:
> 
>> For instances table, we want to make sure 'uuid' is unique.  But we can't 
>> put a unique constraint on that alone.  If that instance gets deleted.. we 
>> should be able to create another entry with the same uuid without a problem. 
>>  So we need a unique constraint on uuid+deleted.  But if 'deleted' is only 0 
>> or 1… we can only have 1 entry deleted and 1 entry not deleted.  Using 
>> deleted=`id` to mark deletion solves that problem.  You could use 
>> deleted_at… but 2 creates and deletes within the same second would not work. 
>> :)
> 
> This creates another problem if you ever need to delete this second instance, 
> because now you have two with the same uuid and the same deleted status.

Not with the setting of 'deleted' to the row's `id` on delete… since `id` is 
unique.

- Chris




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Jay Pipes

I see the following use case:

1) Create something with a unique name within your tenant
2) Delete that
3) Create something with the same unique name immediately after

As a pointless and silly use case that we should not cater to.

It's made the database schema needlessly complex IMO and added columns 
to a unique constraint that make a DBA's job more complex in order to 
fulfill a use case that really isn't particularly compelling.


I was having a convo on IRC with Boris and stated the use case in 
different terms:


If you delete your Gmail email address, do you expect to immediately be 
able to create a new Gmail email with the previous address?


If you answer yes, then this unique constraint on the deleted column 
makes sense to you. If you answer no, then the whole thing seems like 
we've spent a lot of effort on something that isn't particularly useful 
except in random test cases that try to create and delete the same thing 
in rapid succession. And IMO, those kinds of test cases should be 
deleted -- hard-deleted.


Best,
-jay

On 08/20/2013 03:33 PM, Chris Behrens wrote:


This is kind of a stupid example, but it makes the point:

For instances table, we want to make sure 'uuid' is unique.  But we can't put a 
unique constraint on that alone.  If that instance gets deleted.. we should be 
able to create another entry with the same uuid without a problem.  So we need 
a unique constraint on uuid+deleted.  But if 'deleted' is only 0 or 1… we can 
only have 1 entry deleted and 1 entry not deleted.  Using deleted=`id` to mark 
deletion solves that problem.  You could use deleted_at… but 2 creates and 
deletes within the same second would not work. :)

- Chris


On Aug 20, 2013, at 7:33 AM, Jay Pipes  wrote:


*sigh* I wish I'd been aware of these conversations and been in the Grizzly 
summit session on soft delete...

What specific unique constraint was needed that changing the deleted column to 
use the id value solved?

-jay

On 08/19/2013 03:56 AM, Chris Behrens wrote:

'deleted' is used so that we can have proper unique constraints by setting it 
to `id` on deletion.  This was not the case until Grizzly, and before Grizzly I 
would have agreed completely.

- Chris

On Aug 19, 2013, at 12:39 AM, Jay Pipes  wrote:


I'm throwing this up here to get some feedback on something that's always 
bugged me about the model base used in many of the projects.

There's a mixin class that looks like so:

class SoftDeleteMixin(object):
    deleted_at = Column(DateTime)
    deleted = Column(Integer, default=0)

    def soft_delete(self, session=None):
        """Mark this object as deleted."""
        self.deleted = self.id
        self.deleted_at = timeutils.utcnow()
        self.save(session=session)

Once mixed in to a concrete model class, the primary join is typically modified 
to include the deleted column, like so:

class ComputeNode(BASE, NovaBase):
    ...
    service = relationship(Service,
                           backref=backref('compute_node'),
                           foreign_keys=service_id,
                           primaryjoin='and_('
                                       'ComputeNode.service_id == Service.id,'
                                       'ComputeNode.deleted == 0)')

My proposal is to get rid of the deleted column in the SoftDeleteMixin class 
entirely, as it is redundant with the deleted_at column. Instead of doing a 
join condition on deleted == 0, one would instead just do the join condition on 
deleted_at is None, which translates to the SQL: AND deleted_at IS NULL.
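Under that proposal the relationship above would presumably end up looking
something like the following (a sketch only):

    service = relationship(Service,
                           backref=backref('compute_node'),
                           foreign_keys=service_id,
                           primaryjoin='and_('
                                       'ComputeNode.service_id == Service.id,'
                                       'ComputeNode.deleted_at == None)')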

There isn't much of a performance benefit -- you're only reducing the row size 
by 4 bytes. But, you'd remove the redundant data from all the tables, which 
would make the normal form freaks like myself happy ;)

Thoughts?

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Meeting Tuesday August 20th at 19:00 UTC

2013-08-20 Thread Elizabeth Krumbach Joseph
On Mon, Aug 19, 2013 at 2:10 PM, Elizabeth Krumbach Joseph
 wrote:
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting tomorrow, Tuesday August 20th, at 19:00 UTC in
> #openstack-meeting

We ended up with a somewhat off-agenda meeting heavily focused on just
the components that will help get us through this very busy FF week,
logs and minutes:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-08-20-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-08-20-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-08-20-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Ed Leafe
On Aug 20, 2013, at 2:33 PM, Chris Behrens 
wrote:

> For instances table, we want to make sure 'uuid' is unique.  But we can't put 
> a unique constraint on that alone.  If that instance gets deleted.. we should 
> be able to create another entry with the same uuid without a problem.  So we 
> need a unique constraint on uuid+deleted.  But if 'deleted' is only 0 or 1… 
> we can only have 1 entry deleted and 1 entry not deleted.  Using deleted=`id` 
> to mark deletion solves that problem.  You could use deleted_at… but 2 
> creates and deletes within the same second would not work. :)

This creates another problem if you ever need to delete this second instance, 
because now you have two with the same uuid and the same deleted status. 
'deleted_at' is slightly better, as the only place it would fail is if all of 
this happened within the same unit of precision for that time value.


-- Ed Leafe

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] postpone key distribution bp until icehouse?

2013-08-20 Thread Mark McLoughlin
On Wed, 2013-08-14 at 18:02 -0300, Thierry Carrez wrote:
> Simo Sorce wrote:
> > On Wed, 2013-08-14 at 14:06 -0300, Thierry Carrez wrote:
> >> I explained why I prefer it to land in a few weeks rather than now...
> >> Can someone explain why they prefer the reverse ? Why does it have to be
> >> in havana ?
> > 
> > Because it was painful to rebase due to the migrations code, however
> > since Adam landed the code that splits migrations so that extensions can
> > have their own separate code for that I think the burden will be
> > substantially lower.
> > 
> > If this is your final word on the matter I'll take notice that the work
> > will be deferred till Icehouse and I will slightly demote its priority
> > in my work queue.
> 
> Like I said in the other email, I don't really have "the final word" on
> this, Dolph has. I can only influence his decision. Personally, with the
> information I have so far, I think the benefits of waiting outweigh the
> drawbacks, but that may be just me and my fear of repeating past mistakes.

During the Oslo meeting last Friday, we decided it would be best to
defer the secure messaging feature until Icehouse. While the code seems
pretty complete, it'll naturally need some settling in time, time to
make all the projects use it and then stuff like docs, devstack support,
etc.

I'm looking forward to see this land early in Icehouse.

Part of the implications of this is that we'll focus efforts on adding
this feature to oslo.messaging in Icehouse rather than the rpc library
in oslo-incubator.

Cheers,
Mark.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Chris Behrens

This is kind of a stupid example, but it makes the point:

For instances table, we want to make sure 'uuid' is unique.  But we can't put a 
unique constraint on that alone.  If that instance gets deleted.. we should be 
able to create another entry with the same uuid without a problem.  So we need 
a unique constraint on uuid+deleted.  But if 'deleted' is only 0 or 1… we can 
only have 1 entry deleted and 1 entry not deleted.  Using deleted=`id` to mark 
deletion solves that problem.  You could use deleted_at… but 2 creates and 
deletes within the same second would not work. :)

- Chris


On Aug 20, 2013, at 7:33 AM, Jay Pipes  wrote:

> *sigh* I wish I'd been aware of these conversations and been in the Grizzly 
> summit session on soft delete...
> 
> What specific unique constraint was needed that changing the deleted column 
> to use the id value solved?
> 
> -jay
> 
> On 08/19/2013 03:56 AM, Chris Behrens wrote:
>> 'deleted' is used so that we can have proper unique constraints by setting 
>> it to `id` on deletion.  This was not the case until Grizzly, and before 
>> Grizzly I would have agreed completely.
>> 
>> - Chris
>> 
>> On Aug 19, 2013, at 12:39 AM, Jay Pipes  wrote:
>> 
>>> I'm throwing this up here to get some feedback on something that's always 
>>> bugged me about the model base used in many of the projects.
>>> 
>>> There's a mixin class that looks like so:
>>> 
>>> class SoftDeleteMixin(object):
>>>     deleted_at = Column(DateTime)
>>>     deleted = Column(Integer, default=0)
>>> 
>>>     def soft_delete(self, session=None):
>>>         """Mark this object as deleted."""
>>>         self.deleted = self.id
>>>         self.deleted_at = timeutils.utcnow()
>>>         self.save(session=session)
>>> 
>>> Once mixed in to a concrete model class, the primary join is typically 
>>> modified to include the deleted column, like so:
>>> 
>>> class ComputeNode(BASE, NovaBase):
>>>     ...
>>>     service = relationship(Service,
>>>                            backref=backref('compute_node'),
>>>                            foreign_keys=service_id,
>>>                            primaryjoin='and_('
>>>                                        'ComputeNode.service_id == Service.id,'
>>>                                        'ComputeNode.deleted == 0)')
>>> 
>>> My proposal is to get rid of the deleted column in the SoftDeleteMixin 
>>> class entirely, as it is redundant with the deleted_at column. Instead of 
>>> doing a join condition on deleted == 0, one would instead just do the join 
>>> condition on deleted_at is None, which translates to the SQL: AND 
>>> deleted_at IS NULL.
>>> 
>>> There isn't much of a performance benefit -- you're only reducing the row 
>>> size by 4 bytes. But, you'd remove the redundant data from all the tables, 
>>> which would make the normal form freaks like myself happy ;)
>>> 
>>> Thoughts?
>>> 
>>> -jay
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Boris Pavlovic
Jay,

Don't worry I investigate this question very well.
There are actually two approaches:

1) Use deleted_at to create Unique Constraints.

But then we are not able to store a NULL value in deleted_at, because it
won't work.
E.g. we have a Users table (user_name, deleted_at, deleted), and we want
to make user_name unique. If we just add a UC on (user_name, deleted_at)
in MySQL we get the following behavior:
(user1, NULL) and (user1, NULL) are treated as different and can both be stored in the DB,
because NULL != NULL in MySQL.

So to solve this we would have to use some base VALUE instead of NULL (e.g.
1.1.1970), but that is a really dirty thing and produces a lot of hacks.
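
You can see the behavior with a tiny script like the following (just a sketch;
the connection URL and table are placeholders):

    from sqlalchemy import create_engine

    # Placeholder URL; point it at any throw-away MySQL schema.
    engine = create_engine('mysql://user:password@localhost/test')

    engine.execute("CREATE TABLE users ("
                   "  user_name VARCHAR(64),"
                   "  deleted_at DATETIME,"
                   "  UNIQUE KEY uniq_user_name_deleted_at (user_name, deleted_at))")
    engine.execute("INSERT INTO users VALUES ('user1', NULL)")
    # This second insert also succeeds: NULL is never equal to NULL in MySQL,
    # so the unique constraint never fires for not-yet-deleted rows.
    engine.execute("INSERT INTO users VALUES ('user1', NULL)")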


2) Use deleted column

So change the type of the deleted column to the same type as the ID.
Use 0 or "" as the base value, and store the value of the ID in the deleted column on
deletion (which really is unique),

and use a UC of (column1, column2, deleted).


So I think the second variant is much cleaner than the first.


Best regards,
Boris Pavlovic
---
Mirantis Inc.



On Tue, Aug 20, 2013 at 6:33 PM, Jay Pipes  wrote:

> *sigh* I wish I'd been aware of these conversations and been in the
> Grizzly summit session on soft delete...
>
> What specific unique constraint was needed that changing the deleted
> column to use the id value solved?
>
> -jay
>
>
> On 08/19/2013 03:56 AM, Chris Behrens wrote:
>
>> 'deleted' is used so that we can have proper unique constraints by
>> setting it to `id` on deletion.  This was not the case until Grizzly, and
>> before Grizzly I would have agreed completely.
>>
>> - Chris
>>
>> On Aug 19, 2013, at 12:39 AM, Jay Pipes  wrote:
>>
>>  I'm throwing this up here to get some feedback on something that's
>>> always bugged me about the model base used in many of the projects.
>>>
>>> There's a mixin class that looks like so:
>>>
>>> class SoftDeleteMixin(object):
>>> deleted_at = Column(DateTime)
>>> deleted = Column(Integer, default=0)
>>>
>>> def soft_delete(self, session=None):
>>> """Mark this object as deleted."""
>>> self.deleted = self.id
>>> self.deleted_at = timeutils.utcnow()
>>> self.save(session=session)
>>>
>>> Once mixed in to a concrete model class, the primary join is typically
>>> modified to include the deleted column, like so:
>>>
>>> class ComputeNode(BASE, NovaBase):
>>> ...
>>> service = relationship(Service,
>>>backref=backref('compute_node'),
>>>foreign_keys=service_id,
>>>primaryjoin='and_('
>>> 'ComputeNode.service_id == Service.id,'
>>> 'ComputeNode.deleted == 0)')
>>>
>>> My proposal is to get rid of the deleted column in the SoftDeleteMixin
>>> class entirely, as it is redundant with the deleted_at column. Instead of
>>> doing a join condition on deleted == 0, one would instead just do the join
>>> condition on deleted_at is None, which translates to the SQL: AND
>>> deleted_at IS NULL.
>>>
>>> There isn't much of a performance benefit -- you're only reducing the
>>> row size by 4 bytes. But, you'd remove the redundant data from all the
>>> tables, which would make the normal form freaks like myself happy ;)
>>>
>>> Thoughts?
>>>
>>> -jay
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-20 Thread Sandy Walsh


On 08/20/2013 10:42 AM, Thomas Maddox wrote:
> On 8/19/13 8:21 AM, "Sandy Walsh"  wrote:
> 
>>
>>
>> On 08/18/2013 04:04 PM, Jay Pipes wrote:
>>> On 08/17/2013 03:10 AM, Julien Danjou wrote:
 On Fri, Aug 16 2013, Jay Pipes wrote:

> Actually, that's the opposite of what I'm suggesting :) I'm suggesting
> getting rid of the resource_metadata column in the meter table and
> using the
> resource table in joins...

 I think there's a lot of scenario where this would fail, like for
 example instances being resized; the flavor is a metadata.
>>>
>>> I'm proposing that in these cases, a *new* resource would be added to
>>> the resource table (and its ID inserted in meter) table with the new
>>> flavor/instance's metadata.
>>>
 Though, changing the schema to improve performance is a good one, this
 needs to be thought from the sample sending to the storage, through the
 whole chain. This is something that will break a lot of current
 assumption; that doesn't mean it's bad or we can't do it, just that we
 need to think it through. :)
>>>
>>> Yup, understood completely. The change I am proposing would not affect
>>> any assumptions made from the point of view of a sample sent to storage.
>>> The current assumption is that a sample's *exact* state at time of
>>> sampling would be stored so that the exact sample state could be
>>> reflected even if the underlying resource that triggered the sample
>>> changed over time.
>>>
>>> All I am proposing is a change to the existing implementation of that
>>> assumption: instead of storing the original resource metadata in the
>>> meter table, we instead ensure that we store the resource in the
>>> resource table, and upon new sample records being inserted into the
>>> meter table, we check to see if the resource for the sample is the same
>>> as it was last time. If it is, we simply insert the resource ID from
>>> last time. If it isn't, we add a new record to the resource table that
>>> describes the new resource attributes, and we insert that new resource
>>> ID into the meter table for that sample...
>>
>> I'm assuming we wouldn't need a backlink to the older resource?
>>
>> I'm thinking about how this would work work Events and Request ID's. The
>> two most common reports we run from StackTach are based on Request ID
>> and some resource ID.
>>
>> "Show me all the events related to this Request UUID"
>> "Show me all the events related to this  UUID"
>>
>> A new Resource entry would be fine so long as it was still associated
>> with the underlying Resource UUID (instance, image, etc). We could get
>> back a list of all the Resources with the same UUID and, if needed,
>> lookup the metadata for it. This would allow us to see how to the
>> resource changed over time.
>>
>> I think that's what you're suggesting ... if so, yep.
>>
>> As for the first query "... for this Request ID", we'd have to map an Event to
>> many related Resources since one event could have a related
>> instance/image/network/volume/host/scheduler, etc.
>>
>> These relationships would have to get mapped when the Event is turned
>> into Meters. Changing the Resource ID might not be a problem if we keep
>> a common Resource UUID. I have to think about that some more.
>>
>> Would we use timestamps to determine which Resource is the most recent?
>>
>>
>> -S
>>
> 
> Are we going to be incurring significant performance cost from this?

There's certainly a storage cost and potentially a race condition (read
the current metadata, change something and, at the same time, someone
else makes another metadata change). But the performance overhead
should be slight.
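
Roughly, the write path Jay is describing would be something like this (the
method names here are made up for illustration, not the actual storage driver
API):

    def record_sample(conn, sample):
        # Latest resource row we have for this resource UUID, if any.
        resource = conn.get_latest_resource(sample.resource_id)
        if resource is None or resource.metadata != sample.resource_metadata:
            # Metadata changed (resize, rename, ...): add a new resource row
            # instead of mutating the old one, so earlier samples keep pointing
            # at the state they were taken against.  This read-then-write is
            # where the race I mentioned lives.
            resource = conn.create_resource(sample.resource_id,
                                            sample.resource_metadata,
                                            sample.timestamp)
        # The meter row then only needs the surrogate key of that resource row.
        conn.record_meter(sample, resource_row_id=resource.row_id)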

> Let me see if I understand how a query will work for this based on the
> current way CM gets billing:
> 
> Scenario: Monthly billing for Winston who built 12 machines this month; we
> don't want to bill for failed/stalled builds that weren't cleaned up yet
> either.
> 
> 1. Filter Meter table for the time range in the samples to get the
> Resources that were updated

I would do like we do in StackTach, query for the unique set of Request
ID's over that time range by Tenant. From there determine which of those
operations were billable actions (CUD).

I think Dragon's Trigger work will make this far less expensive than it
is in StackTach currently since we'll be able to create a "Request"
Resource (each Request will have a related Resource) with metadata
saying this was a billable event and the tenant ID. The corresponding
events to the request ID can link to this Request resource. The query
should be pretty tight.


> 2. Because the metadata changes a few times throughout the build process,
> we have samples referencing several different metadata states over time
> for each instance
> 3. Because of the metadata over time, we filter the Resource table to
> provide distinct resources

Yeah, knowing which is the "latest" Resource is my concern as well.
Getting the metadata from that resource should be the

Re: [openstack-dev] [Nova] Proposal to revert per-user-quotas

2013-08-20 Thread Yingjun Li
Hi Andrew,

I have addressed a bug here https://bugs.launchpad.net/nova/+bug/1214523 for
the sync issue.
Codes to try to resolve the problem https://review.openstack.org/#/c/42966/1

Please have a look at the codes, any suggestions would be appreciated !

Thanks

Yingjun



2013/8/21 Andrew Laski 

> On 08/21/13 at 12:02am, Yingjun Li wrote:
>
>> Thanks for addressing the issues. About the bad state for fixed_ips and
>> floating_ips, I think we could make the user_id column NULL when creating
>> the quota usage and reservation, so the usages for fixed_ips and
>> floating_ips will be synced within the project.
>> Does this make sense?
>>
>
> On the database side that should address the issue I'm seeing, and will
> fix the issue with the sync methods for those resources.
>
> I would be interested to see how the distinction between user level
> resources and project level resources is handled in the code so that these
> types of accidental bugs are avoided.
>
>
>>
>> 2013/8/20 Andrew Laski 
>>
>>  The patch in question 
>> (https://review.openstack.org/#/c/28232/24
>>> 
>>> >)
>>>
>>> adds the ability to track quota usage on a per user basis within a
>>> project.
>>>  I have run into two issues with it so far: the db migration is
>>> incomplete
>>> and leaves the data in a bad state, and the sync methods used during
>>> quota
>>> reservations no longer work for fixed_ips, floating_ips, and networks
>>> since
>>> they are not tied to a user.
>>>
>>> The db migration issue is documented at https://bugs.launchpad.net/**
>>> nova/+bug/1212798 
>>> >
>>> but the
>>>
>>> tl;dr is that the quota usages that were in place before the migration is
>>> run can not be decremented and aren't fixed by the healing sync that
>>> occurs.  I sought to address this by introducing a new migration which
>>> performs a full sync of quota usages and removes the bad rows but that
>>> led
>>> me to the next issue.
>>>
>>> Some resources can't be synced properly because they're tracked per user
>>> in the quota table but they're not tied to a user so it's not feasible to
>>> grab a count of how many are being used by any particular user.  So right
>>> now the quota_usages table can get into a bad state with no good way to
>>> address it.
>>>
>>> Right now I think it will be better to revert this change and
>>> re-introduce
>>> it once these issues are worked out. Thoughts?
>>>
>>> As an addendum, the patch merged about a month ago on Jul 25th and looks
>>> to have some minor conflicts for a revert but should be minimally
>>> disruptive.
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org >> openstack.org >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/
>>> openstack-dev
>>> 
>>> >
>>>
>>>
>  ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Proposal to revert per-user-quotas

2013-08-20 Thread Vishvananda Ishaya

On Aug 20, 2013, at 9:02 AM, Yingjun Li  wrote:

> Thanks for addressing the issues. About the bad state for fixed_ips and
> floating_ips, I think we could make the user_id column NULL when creating the
> quota usage and reservation, so the usages for fixed_ips and floating_ips
> will be synced within the project.
> Does this make sense?

If this is a viable approach, I prefer that we attempt to fix the code in tree. 
We attempted to get this code in grizzly and had to revert. I'd hate to go 
through the cycle again in I if we can fix it now.

Vish

> 
> 
> 2013/8/20 Andrew Laski 
> The patch in question (https://review.openstack.org/#/c/28232/24) adds the 
> ability to track quota usage on a per user basis within a project.  I have 
> run into two issues with it so far: the db migration is incomplete and leaves 
> the data in a bad state, and the sync methods used during quota reservations 
> no longer work for fixed_ips, floating_ips, and networks since they are not 
> tied to a user.
> 
> The db migration issue is documented at 
> https://bugs.launchpad.net/nova/+bug/1212798 but the tl;dr is that the quota 
> usages that were in place before the migration is run can not be decremented 
> and aren't fixed by the healing sync that occurs.  I sought to address this 
> by introducing a new migration which performs a full sync of quota usages and 
> removes the bad rows but that led me to the next issue.
> 
> Some resources can't be synced properly because they're tracked per user in 
> the quota table but they're not tied to a user so it's not feasible to grab a 
> count of how many are being used by any particular user.  So right now the 
> quota_usages table can get into a bad state with no good way to address it.
> 
> Right now I think it will be better to revert this change and re-introduce it 
> once these issues are worked out. Thoughts?
> 
> As an addendum, the patch merged about a month ago on Jul 25th and looks to 
> have some minor conflicts for a revert but should be minimally disruptive.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-20 Thread Clint Byrum
Excerpts from Mark McLoughlin's message of 2013-08-20 03:26:01 -0700:
> On Thu, 2013-08-15 at 14:12 +1200, Robert Collins wrote:
> > This may interest data-driven types here.
> > 
> > https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
> > 
> > Note specifically the citation of 200-400 lines as the knee of the review
> > effectiveness curve: that's lower than I thought - I thought 200 was
> > clearly fine - but no.
> 
> The full study is here:
> 
> http://support.smartbear.com/resources/cc/book/code-review-cisco-case-study.pdf
> 
> This is an important subject and I'm glad folks are studying it, but I'm
> sceptical about whether the "Defect density vs LOC" is going to help us
> come up with better guidelines than we have already.
> 
> Obviously, a metric like LOC hides some serious subtleties. Not all
> changes are of equal complexity. We see massive refactoring patches
> (like s/assertEquals/assertEqual/) that are actually safer than gnarly,
> single-line, head-scratcher bug-fixes. The only way the report addresses
> that issue with the underlying data is by eliding >10k LOC patches.
> 

I'm not so sure that it is obvious what these subtleties are, or they
would not be subtleties, they would be glaring issues.

I agree that LOC changed is an imperfect measure. However, so are the
hacking rules. They, however, have allowed us to not spend time on these
things. We whole-heartedly embrace an occasional imperfection by deferring
to something that can be measured by automation and thus free up valuable
time for other activities more suited to limited reviewer/developer time.

I'd like to see automation enforce change size. And just like with
hacking, it would not be possible without a switch like "#noqa" that
one can put in the commit message that says "hey automation, this is
a trivial change".  That is something a reviewer can also see as a cue
that this change, while big, should be trivial.
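
For example, the check itself could be as dumb as the following (hypothetical
tooling; the "Trivial-Change" commit message footer is invented for the
example, by analogy with #noqa):

    import subprocess
    import sys

    MAX_LINES = 400  # roughly the knee of the review-effectiveness curve

    def changed_lines():
        # Output looks like: " 3 files changed, 120 insertions(+), 8 deletions(-)"
        out = subprocess.check_output(
            ['git', 'diff', '--shortstat', 'HEAD~1', 'HEAD']).decode('utf-8')
        counts = [int(tok) for tok in out.split() if tok.isdigit()]
        return sum(counts[1:])  # insertions + deletions, ignore the file count

    def commit_message():
        return subprocess.check_output(
            ['git', 'log', '-1', '--format=%B', 'HEAD']).decode('utf-8')

    if __name__ == '__main__':
        if 'Trivial-Change' in commit_message():
            sys.exit(0)  # author says it's mechanical; reviewers can judge that
        if changed_lines() > MAX_LINES:
            print('Change touches more than %d lines; consider splitting it.'
                  % MAX_LINES)
            sys.exit(1)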

> The "one logical change per commit" is a more effective guideline than
> any LOC based guideline:
> 
> https://wiki.openstack.org/wiki/GitCommitMessages#Structural_split_of_changes
> 
> IMHO, the number of distinct logical changes in a patch has a more
> predictable impact on review effectiveness than the LOC metric.

Indeed, however, automating a check for that may be very difficult. I
have seen tools like PMD[1] that try very hard to measure the complexity
of code and the risk of change, and those might be interesting to see,
but I'm not sure they are reliable enough to put in the OpenStack gate.

[1] http://pmd.sourceforge.net/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo.db] Eventlet in db (was, Replacing Glance DB code to Oslo DB code.)

2013-08-20 Thread Mark Washenberger
We do something similar that works in python-glanceclient.

https://github.com/openstack/python-glanceclient/blob/master/glanceclient/common/http.py#L43
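
Paraphrased (this is a sketch of the pattern, not the exact glanceclient code),
it boils down to:

    import time

    try:
        from eventlet import greenthread
        from eventlet import patcher
        HAS_EVENTLET = True
    except ImportError:
        HAS_EVENTLET = False

    def sleep(seconds):
        # Yield cooperatively only when we are actually running inside a
        # monkey-patched (green) service; otherwise fall back to the stdlib.
        if HAS_EVENTLET and patcher.is_monkey_patched('thread'):
            greenthread.sleep(seconds)
        else:
            time.sleep(seconds)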


On Tue, Aug 20, 2013 at 10:09 AM, Joshua Harlow wrote:

>  Ok, that’s good.
>
>  I don't think the following though would work. Maybe something else is
> needed?
>
>  try:
> import eventlet
> eventlet_on = True
> except ImportError:
> eventlet_on = False
>
>  Due to how oslo.db could be used the environment may actually have
> eventlet installed (say a server running keystone and nova-api at the same
> time). The first project (keystone) might not want to use eventlet (but it
> might be in the python module path, while nova-api on the same box would
> want to use it). So we might need a more advanced configuration setting to
> make this tunable (and not depend on the python import statement to be that
> tunable setting).
>
>   From: Ben Nemec 
> Reply-To: "openst...@nemebean.com" , OpenStack
> Development Mailing List 
> Date: Monday, August 19, 2013 9:41 PM
> To: "openstack-dev@lists.openstack.org"  >
> Subject: Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB
> code.
>
>   On 08/19/13 20:34, Joshua Harlow wrote:
>
> Just a related question,
>
> Oslo 'incubator' db code I think depends on eventlet. This means any code
> that uses the oslo.db code could/would(?) be dependent on eventlet.
>
> Will there be some refactoring there to not require it (useful for
> projects that are trying to move away from eventlet).
>
>
> https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/session.py#L248
>
>
> Glancing through that file, it looks like the greenthread import is only
> used for playing nice with other greenthreads.  It should be pretty easy to
> make it conditional so we don't require it, but will use it if it's
> available.
>
> -Ben
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] rpcapi versions control with can_send_version()

2013-08-20 Thread Russell Bryant
On 08/20/2013 01:37 PM, Day, Phil wrote:
> If I want to add a parameter and bump this to version 2.36, do I just
> change the version checked in can_send_version – or should there now be
> specific handling for each new version:

Specific handling for each version would be ideal.  We want to send the
newest message format we can.

What you had in the example code is what I would do.  There are other
examples of this in the conductor rpcapi.

(snipped the code ... the formatting was destroyed since it wasn't a
plain text email)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stats on blueprint design info / creation times

2013-08-20 Thread Thierry Carrez
Daniel P. Berrange wrote:
> On Tue, Aug 20, 2013 at 05:18:21PM +0100, Daniel P. Berrange wrote:
>> On Tue, Aug 20, 2013 at 12:53:25PM -0300, Thierry Carrez wrote:
>>> It would be more interesting to check how many blueprints are created
>>> more than two weeks after the design summit. Those would be the late
>>> blueprints (or the ones created as a tickbox), which escape the release
>>> planning process.
>>
>> I'll look up the historic dates for each summit, and try to generate
>> some stats based on blueprint creation date vs  summit date + 2 weeks.
> 
> Re-running using the summit date + 2 weeks shifts things a little bit.
> Here is the summary for the 3 most recent series:
> 
> Series: folsom all
>   Specs (Before Mon, 30 Apr 2012 00:00:00 +): 62
>   Specs (After Mon, 30 Apr 2012 00:00:00 +): 115
> 
> Series: grizzly all
>   Specs (Before Sun, 28 Oct 2012 23:00:00 +): 81
>   Specs (After Sun, 28 Oct 2012 23:00:00 +): 174
> 
> Series: havana all
>   Specs (Before Mon, 29 Apr 2013 00:00:00 +): 197
>   Specs (After Mon, 29 Apr 2013 00:00:00 +): 273

Interesting, looks like we actually did a better job with havana,
jumping from 31% to 42% of "planned blueprints". That may be Nova and
Neutron PTLs enforcing more rules to retain their sanity.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-20 Thread Alex Gaynor
Thanks everyone, I look forward to continuing to help out!

Alex


On Tue, Aug 20, 2013 at 9:49 AM, Doug Hellmann
wrote:

> Without any objections, I've added Alex Gaynor to the requirements-core
> team.
>
> Welcome, Alex!
>
>
> On Sat, Aug 17, 2013 at 2:46 PM, Jeremy Stanley  wrote:
>
>> On 2013-08-16 11:04:14 -0400 (-0400), Doug Hellmann wrote:
>> > I'd like to propose Alex Gaynor for core status on the
>> > requirements project.
>> [...]
>>
>> Agreed, I for one welcome his continued assistance.
>> --
>> Jeremy Stanley
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
"I disapprove of what you say, but I will defend to the death your right to
say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-20 Thread Joe Gordon
On Aug 19, 2013 8:35 AM, "Flavio Percoco"  wrote:
>
> On 19/08/13 04:33 -0700, Gary Kotton wrote:
>>
>> So do you agree with the following points?
>> 1) In Havana focus on migrating in all projects to oslo.db code
>>
>>
>> [Gary Kotton] It is worth going for.
>
>
> +1
>
>
>>
>> 2) in IceHouse create and move to oslo.db lib
>>
>> [Gary Kotton] I am in favor of this pending the stability of the oslo db
code
>> (which is on the right track)
>>
>>
>
> I agree with Gary.

How do we define API stability for oslo.db? It seems like most Oslo libs
were used in most projects for a full cycle with minimal to no api changes
before being pulled out.

>
>
>> And do you agree that we should start working on the oslo.db lib now?
>>
>> [Gary Kotton] I am not sure what the effort for this is, but if this is just a
>> matter of preparing it all for the start of Icehouse then cool, go for it. I
>> nonetheless suggest speaking with Mark McLoughlin to try and learn lessons from
>> the process with the common config module :)
>>
>
> +1 here as well
>
>
> FF
>
> --
> @flaper87
> Flavio Percoco
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] rpcapi versions control with can_send_version()

2013-08-20 Thread Day, Phil
Hi Folks,

Now we have explicit control to limit the version of rpc methods that can be
sent, but I'm wondering what I need to do to create the next version of a
call that adds an additional parameter.

It looks like the current code is really focused on the data types being
passed, rather than the signature of the call (i.e. do we pass an old or new
style instance object).   So taking terminate_instance() as an example:


    def terminate_instance(self, ctxt, instance, bdms, reservations=None):
        if self.can_send_version('2.35'):
            version = '2.35'
        else:
            version = '2.27'
            instance = jsonutils.to_primitive(instance)
        bdms_p = jsonutils.to_primitive(bdms)
        self.cast(ctxt, self.make_msg('terminate_instance',
                  instance=instance, bdms=bdms_p,
                  reservations=reservations),
                  topic=_compute_topic(self.topic, ctxt, None, instance),
                  version=version)

If I want to add a parameter and bump this to version 2.36, do I just change 
the version checked in can_send_version - or should there now be specific 
handling for each new version:

    def terminate_instance(self, ctxt, instance, bdms, reservations=None,
                           clean_shutdown=False):

        bdms_p = jsonutils.to_primitive(bdms)

        if self.can_send_version('2.36'):
            version = '2.36'
            msg = self.make_msg('terminate_instance',
                                instance=instance, bdms=bdms_p,
                                reservations=reservations,
                                clean_shutdown=clean_shutdown)
        elif self.can_send_version('2.35'):
            version = '2.35'
            msg = self.make_msg('terminate_instance',
                                instance=instance, bdms=bdms_p,
                                reservations=reservations)
        else:
            version = '2.27'
            instance = jsonutils.to_primitive(instance)
            msg = self.make_msg('terminate_instance',
                                instance=instance, bdms=bdms_p,
                                reservations=reservations)

        self.cast(ctxt, msg,
                  topic=_compute_topic(self.topic, ctxt, None, instance),
                  version=version)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][VPNaaS] Supporting OpenSwan or StrongSwan or Both?

2013-08-20 Thread Kyle Mestery (kmestery)
On Aug 20, 2013, at 12:02 PM, Nachi Ueno  wrote:
> Hi Paul
> 
> 2013/8/20 Paul Michali :
>> Was the original reasoning to use StrongSwan over OpenSwan, only because of
>> community support? I vaguely recall something mentioned about StrongSwan
>> having additional capabilities or something. Can anyone confirm?
>> 
>> As far as which option, it sounds like B or C-2 are the better choices, just
>> because of the RHEL support. The two are very similar (from an end-user
>> standpoint), so the added doc/help shouldn't be too bad. From a  code
>> perspective, much of the code can be shared, so the added testing
>> requirements should also be minimal.
> 
> OK so C-2 has +3. (Salvatore, Paul and me)
> 
+1 for C-2 as well from me.

>> The only point to make about C-2 is it requires us to either take the extra
>> time now to support multiple drivers (we will have to eventually - I'll be
>> working on one), or do a refactoring later to support a hierarchy of
>> drivers. I brought that point up in the review of the reference driver, and
>> Nachi and I talked about this a bit yesterday. We both agreed that we could
>> do the refactoring later, to support drivers that are different than the
>> Swan family.
>> 
>> Related to that, I did have some question about multiple drivers...
>> 
>> How do we handle the case where the drivers support different capabilities?
>> For example, say one driver supports an encryption mode that the other does
>> not.
>> 
>> Can we reject unsupported capabilities at configuration time? That seems
>> cleaner, but I'm wondering how that would be done (I know we'll specify the
>> provider, but wondering how we'll invoke driver specific verification
>> routines - do we have that mechanism defined?).
> 
> There are two kinds of drivers.
> - service_drivers (server side)
>   Handle the API request and dispatch the request to the agent side.
>   (This is called plugin_driver in LBaaS, but I prefer service_driver
>   because it is a more specific name.)
> 
> - device_drivers (agent side)
>   Get the request from the API, then apply the config on the agent.
> 
> So regarding the capabilities, we should write new service_drivers.
> 
> Best
> Nachi
> 
>> Regards,
>> 
>> PCM (Paul Michali)
>> 
>> MAIL p...@cisco.com
>> IRC   pcm_  (irc.freenode.net)
>> TW   @pmichali
>> 
>> On Aug 19, 2013, at 6:15 PM, Nachi Ueno  wrote:
>> 
>> Hi folks
>> 
>> I would like to discuss whether supporting OpenSwan or StrongSwan or Both
>> for
>> ipsec driver?
>> 
>> We chose StrongSwan because the community is active and there are plenty of docs.
>> However, it looks like RHEL only supports OpenSwan.
>> 
>> so we should choose
>> 
>> (A) Support StrongSwan
>> (B) Support OpenSwan
>> (C) Support both
>>  (C-1) Make StrongSwan default
>>  (C-2) Make OpenSwan default
>> 
>> Actually, I'm working on C-2.
>> The patch is still WIP https://review.openstack.org/#/c/42264/
>> 
>> Although the patch is small, supporting two drivers may be a burden
>> in H3, including docs and additional help.
>> IMO, this is also a valid comment.
>> 
>> Best
>> Nachi
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][keystone][LDAP] Error about "enable" and "desc" attribute type undefine.

2013-08-20 Thread XINYU ZHAO
Which release are you using?
In my experience from last year, when the LDAP backend was much less mature,
I had to add those attributes to my LDAP server's schema manually,
because no such attributes existed in its schema.
I don't know the status now.


On Mon, Aug 19, 2013 at 11:42 PM, Qinglong.Meng wrote:

> Hi all,
>   I configured keystone with the LDAP backend, following the LDAP section of
> http://docs.openstack.org/developer/keystone/configuration.html,
>  and when I create a tenant in LDAP, I get an error about the "enable" and
> "desc" attribute types being undefined in keystone.log.
>  Here is keystone.conf:
>  http://paste.openstack.org/show/44574/
>
>  keystone.log
>  http://paste.openstack.org/show/44575/
>
>  the ldif of ldap server
>  http://paste.openstack.org/show/44578/
>
>  sample slapd.conf
>  http://paste.openstack.org/show/44579/
>
> --
>
> Lawrency Meng
> mail: mengql112...@gmail.com
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openst...@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-20 Thread John Bresnahan
Mark, good thoughts (as usual)

On 08/19/2013 09:15 PM, Mark Washenberger wrote:
> The goal isn't really to replace sqlalchemy completely.

Perhaps my problem is that I am not exactly sure what the goals are.
Cleanup (BL mixed in with the DB code seems wrong)?  HA or performance (are people
hitting limits that are traced to SQL)?  Flexibility/research
(pluggable modules for experimentation)?  I think it would help scope
the effort (and temper my concern about the work/reward ratio) if the goals
were enumerated in a clear place.

> I'm hoping I can
> create a space where multiple drivers can operate efficiently without
> introducing bugs (i.e. pull all that business logic out of the driver!)
> I'll be very interested to see if people can, after such a refactoring,
> try out some more storage approaches, such as dropping the sqlalchemy
> orm in favor of its generic engine support or direct sql execution, as
> well as NoSQL what-have-you. We don't have to make all of the drivers
> live in the project, so it really can be a good place for interested
> parties to experiment.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo.db] Eventlet in db (was, Replacing Glance DB code to Oslo DB code.)

2013-08-20 Thread Joshua Harlow
Ok, that’s good.

I don't think the following though would work. Maybe something else is needed?

try:
import eventlet
eventlet_on = True
except ImportError:
eventlet_on = False

Due to how oslo.db could be used the environment may actually have eventlet 
installed (say a server running keystone and nova-api at the same time). The 
first project (keystone) might not want to use eventlet (but it might be in the 
python module path, while nova-api on the same box would want to use it). So we 
might need a more advanced configuration setting to make this tunable (and not 
depend on the python import statement to be that tunable setting).
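
e.g. something config-driven along these lines (the option name and default
here are invented for the example):

    import time

    from oslo.config import cfg

    database_opts = [
        cfg.BoolOpt('use_eventlet_yield', default=False,
                    help='Cooperatively yield via eventlet in the DB retry '
                         'loop, even if eventlet merely happens to be '
                         'importable on this host.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(database_opts, group='database')

    def _retry_sleep(seconds):
        # keystone (no eventlet) and nova-api (eventlet) can share the same box
        # and the same python path; only the service that asked for it in its
        # config ever touches eventlet.
        if CONF.database.use_eventlet_yield:
            from eventlet import greenthread
            greenthread.sleep(seconds)
        else:
            time.sleep(seconds)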

From: Ben Nemec 
Reply-To: openst...@nemebean.com, OpenStack Development Mailing List 
Date: Monday, August 19, 2013 9:41 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.


On 08/19/13 20:34, Joshua Harlow wrote:

Just a related question,

Oslo 'incubator' db code I think depends on eventlet. This means any code that 
uses the oslo.db code could/would(?) be dependent on eventlet.

Will there be some refactoring there to not require it (useful for projects 
that are trying to move away from eventlet).

https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/session.py#L248


Glancing through that file, it looks like the greenthread import is only used 
for playing nice with other greenthreads.  It should be pretty easy to make it 
conditional so we don't require it, but will use it if it's available.

-Ben
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][VPNaaS] Supporting OpenSwan or StrongSwan or Both?

2013-08-20 Thread Nachi Ueno
Hi Paul

2013/8/20 Paul Michali :
> Was the original reasoning to use StrongSwan over OpenSwan, only because of
> community support? I vaguely recall something mentioned about StrongSwan
> having additional capabilities or something. Can anyone confirm?
>
> As far as which option, it sounds like B or C-2 are the better choices, just
> because of the RHEL support. The two are very similar (from an end-user
> standpoint), so the added doc/help shouldn't be too bad. From a  code
> perspective, much of the code can be shared, so the added testing
> requirements should also be minimal.

OK so C-2 has +3. (Salvatore, Paul and me)

> The only point to make about C-2 is it requires us to either take the extra
> time now to support multiple drivers (we will have to eventually - I'll be
> working on one), or do a refactoring later to support a hierarchy of
> drivers. I brought that point up in the review of the reference driver, and
> Nachi and I talked about this a bit yesterday. We both agreed that we could
> do the refactoring later, to support drivers that are different than the
> Swan family.
>
> Related to that, I did have some question about multiple drivers...
>
> How do we handle the case where the drivers support different capabilities?
> For example, say one driver supports an encryption mode that the other does
> not.
>
> Can we reject unsupported capabilities at configuration time? That seems
> cleaner, but I'm wondering how that would be done (I know we'll specify the
> provider, but wondering how we'll invoke driver specific verification
> routines - do we have that mechanism defined?).

There are two kinds of drivers.
- service_drivers (server side)
  Handle the API request and dispatch the request to the agent side.
  (This is called plugin_driver in LBaaS, but I prefer service_driver
  because it is a more specific name.)

- device_drivers (agent side)
  Get the request from the API, then apply the config on the agent.

So regarding the capabilities, we should write new service_drivers.
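
For Paul's config-time question, the capability check would live in the
service_driver, e.g. (a pure sketch, not the actual VPNaaS driver interface;
the class, capability set and exception are all illustrative):

    class UnsupportedAlgorithm(Exception):
        pass

    class OpenSwanServiceDriver(object):
        """Server side: validate the API request, then dispatch to the agent."""

        # Illustrative capability set only.
        SUPPORTED_ENCRYPTION = frozenset(['3des', 'aes-128', 'aes-256'])

        def create_ipsec_site_connection(self, context, conn):
            policy = conn['ipsec_policy']
            if policy['encryption_algorithm'] not in self.SUPPORTED_ENCRYPTION:
                # Reject at configuration time instead of letting the device
                # driver fail later on the agent.
                raise UnsupportedAlgorithm(policy['encryption_algorithm'])
            self._dispatch_to_agent(context, 'ipsec_site_connection_create', conn)

        def _dispatch_to_agent(self, context, method, conn):
            # Placeholder for the RPC cast to the device driver on the agent.
            pass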

Best
Nachi

> Regards,
>
> PCM (Paul Michali)
>
> MAIL p...@cisco.com
> IRC   pcm_  (irc.freenode.net)
> TW   @pmichali
>
> On Aug 19, 2013, at 6:15 PM, Nachi Ueno  wrote:
>
> Hi folks
>
> I would like to discuss whether supporting OpenSwan or StrongSwan or Both
> for
> ipsec driver?
>
> We chose StrongSwan because the community is active and there are plenty of docs.
> However, it looks like RHEL only supports OpenSwan.
>
> so we should choose
>
> (A) Support StrongSwan
> (B) Support OpenSwan
> (C) Support both
>   (C-1) Make StrongSwan default
>   (C-2) Make OpenSwan default
>
> Actually, I'm working on C-2.
> The patch is still WIP https://review.openstack.org/#/c/42264/
>
> Although the patch is small, supporting two drivers may be a burden
> in H3, including docs and additional help.
> IMO, this is also a valid comment.
>
> Best
> Nachi
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] proposing Alex Gaynor for core on openstack/requirements

2013-08-20 Thread Doug Hellmann
Without any objections, I've added Alex Gaynor to the requirements-core
team.

Welcome, Alex!


On Sat, Aug 17, 2013 at 2:46 PM, Jeremy Stanley  wrote:

> On 2013-08-16 11:04:14 -0400 (-0400), Doug Hellmann wrote:
> > I'd like to propose Alex Gaynor for core status on the
> > requirements project.
> [...]
>
> Agreed, I for one welcome his continued assistance.
> --
> Jeremy Stanley
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stats on blueprint design info / creation times

2013-08-20 Thread Daniel P. Berrange
On Tue, Aug 20, 2013 at 05:18:21PM +0100, Daniel P. Berrange wrote:
> On Tue, Aug 20, 2013 at 12:53:25PM -0300, Thierry Carrez wrote:
> > Anne Gentle wrote:
> > >   - Less than 1 in 4 blueprints is created before the devel
> > > period starts for a release.
> > > 
> > > I find this date mismatch especially intriguing, because the Foundation
> > > and member company sponsors spend millions on Design Summits annually
> > > and caters so much to getting people together in person. Yet the
> > > blueprints aren't created in enough detail for discussion before the
> > > Summit dates? Is that really what the data says? Is any one project
> > > skewing this (as in, they haven't been at a Summit or they don't follow
> > > integrated release dates?)
> > 
> > That does not surprise me. A lot of people do not link a blueprint to
> > their session proposal on the design summit session suggestion system --
> > sometimes it's the discussion itself which allows to formulate the right
> > blueprints, and those are filed in the weeks just after the summit. And
> > I think that's fine.
> > 
> > It would be more interesting to check how many blueprints are created
> > more than two weeks after the design summit. Those would be the late
> > blueprints (or the ones created as a tickbox), which escape the release
> > planning process.
> 
> I'll look up the historic dates for each summit, and try to generate
> some stats based on blueprint creation date vs  summit date + 2 weeks.

Re-running using the summit date + 2 weeks shifts things a little bit.
Here is the summary for the 3 most recent series:


Series: folsom all
  Specs: 177
  Specs (no URL): 145
  Specs (w/ URL): 32
  Specs (Before Mon, 30 Apr 2012 00:00:00 +): 62
  Specs (After Mon, 30 Apr 2012 00:00:00 +): 115
  Average lines: 5
  Average words: 54


Series: grizzly all
  Specs: 255
  Specs (no URL): 187
  Specs (w/ URL): 68
  Specs (Before Sun, 28 Oct 2012 23:00:00 +): 81
  Specs (After Sun, 28 Oct 2012 23:00:00 +): 174
  Average lines: 6
  Average words: 61


Series: havana all
  Specs: 470
  Specs (no URL): 378
  Specs (w/ URL): 92
  Specs (Before Mon, 29 Apr 2013 00:00:00 +): 197
  Specs (After Mon, 29 Apr 2013 00:00:00 +): 273
  Average lines: 6
  Average words: 69


Full data set for version 3 of the stats is now here

  http://berrange.fedorapeople.org/openstack-blueprints/v3/

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Proposal to revert per-user-quotas

2013-08-20 Thread Andrew Laski

On 08/21/13 at 12:02am, Yingjun Li wrote:

Thanks for addressing the issues. About the bad state for fixed_ips and
floating_ips, I think we could make the user_id column NULL when creating
the quota usage and reservation, so the usages for fixed_ips and
floating_ips will be synced within the project.
Does this make sense?


On the database side that should address the issue I'm seeing, and will 
fix the issue with the sync methods for those resources.


I would be interested to see how the distinction between user level 
resources and project level resources is handled in the code so that 
these types of accidental bugs are avoided.
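
e.g. something along these lines in the sync/reservation path (sketch only;
the helper names are placeholders and don't match the actual nova/quota.py
code):

    # Resources that only exist at the project level; their usage rows keep
    # user_id NULL and their counts are taken across the whole project.
    PER_PROJECT_RESOURCES = frozenset(['fixed_ips', 'floating_ips', 'networks'])

    def _update_usage_row(context, resource, project_id, user_id, in_use):
        # Placeholder for the quota_usages UPDATE/INSERT.
        pass

    def sync_usage(context, resource, project_id, user_id, count_fn):
        if resource in PER_PROJECT_RESOURCES:
            in_use = count_fn(context, resource, project_id, user_id=None)
            _update_usage_row(context, resource, project_id,
                              user_id=None, in_use=in_use)
        else:
            in_use = count_fn(context, resource, project_id, user_id=user_id)
            _update_usage_row(context, resource, project_id,
                              user_id=user_id, in_use=in_use)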





2013/8/20 Andrew Laski 


The patch in question (https://review.openstack.org/#/c/28232/24)
adds the ability to track quota usage on a per user basis within a project.
 I have run into two issues with it so far: the db migration is incomplete
and leaves the data in a bad state, and the sync methods used during quota
reservations no longer work for fixed_ips, floating_ips, and networks since
they are not tied to a user.

The db migration issue is documented at https://bugs.launchpad.net/nova/+bug/1212798 but the
tl;dr is that the quota usages that were in place before the migration is
run can not be decremented and aren't fixed by the healing sync that
occurs.  I sought to address this by introducing a new migration which
performs a full sync of quota usages and removes the bad rows but that led
me to the next issue.

Some resources can't be synced properly because they're tracked per user
in the quota table but they're not tied to a user so it's not feasible to
grab a count of how many are being used by any particular user.  So right
now the quota_usages table can get into a bad state with no good way to
address it.

Right now I think it will be better to revert this change and re-introduce
it once these issues are worked out. Thoughts?

As an addendum, the patch merged about a month ago on Jul 25th and looks
to have some minor conflicts for a revert but should be minimally
disruptive.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-20 Thread Mark Washenberger
On Tue, Aug 20, 2013 at 3:20 AM, Flavio Percoco  wrote:

> On 20/08/13 00:15 -0700, Mark Washenberger wrote:
>
>>
>>2) I highly caution folks who think a No-SQL store is a good
>> storage
>>solution for any of the data currently used by Nova, Glance
>> (registry),
>>Cinder (registry), Ceilometer, and Quantum. All of the data stored
>> and
>>manipulated in those projects is HIGHLY relational data, and not
>>objects/documents. Switching to use a KVS for highly relational
>> data is
>>a terrible decision. You will just end up implementing joins in
>> your
>>code...
>>
>>
>>
>>+1
>>
>>FWIW, I'm a huge fan of NoSQL technologies but I couldn't agree more
>>here.
>>
>>
>>
>> I have to say I'm kind of baffled by this sentiment (expressed here and
>> elsewhere in the thread.) I'm not a NoSQL expert, but I hang out with a
>> few and
>> I'm pretty confident Glance at least is not that relational. We do two
>> types of
>> joins in glance. The first, like image properties, is basically just an
>> implementation detail of the sql driver. Its not core to the application.
>> Any
>> NoSQL implementation will simply completely denormalize those properties
>> into
>> the image record. (And honestly, so might an optimized SQL
>> implementation. . .)
>>
>> The second type of join, image_members, is basically just a hack to solve
>> the
>> problem created because the glance api offers several simultaneous
>> implicit
>> "views" of images. Specifically, when you list images in glance, you are
>> seeing
>> a union of three views: public images, images you own, and images shared
>> with
>> you. IMO its actually a more scalable and sensible solution to make these
>> views
>> more explicit and independent in the API and code, taking a lesson from
>> filesystems which have to scale to a lot of metadata (notice how
>> visibility is
>> generally an attribute of a directory, not of regular files in your
>> typical
>> Unix FS?). And to solve this problem in SQL now we still have to do a
>> server-side union, which is a bit sad. But even before we can refactor
>> the API
>> (v3 anyone?) I don't see it as unworkably slow for a NoSQL driver to track
>> these kinds of views.
>>
>
> You make really good points here but I don't fully agree.
>

Thanks for your measured response. I wrote my previous response a bit late
at night for me and I hope I wasn't rude :-/

>
> I don't think the issue is actually translating Glance's models to
> NoSQL or NoSQL db's performance, I'm pretty sure we could benefit in some
> areas but not all of them. To me, and that's what my comment was referring
> to, this is more related to  what kind of data we're actually
> treating, the guarantees we should provide and how they are
> implemented now.
>
> There are a couple of things that would worry me about an hypothetic
> support for NoSQL but I guess one that I'd consider very critical is
> migrations. Some could argue asking whether we'd really need them or
> not  - when talking about NoSQL databases - but we do. Using a
> schemaless database wouldn't mean we don't have a schema. Migrations
> are not trivial for some NoSQL databases, plus, this would mean
> drivers, most probably, would have to have their own implementation.


I definitely think different drivers will need their own migrations. When
I've been playing around with this refactoring, I created a "Migrator"
interface and made it part of the driver interface to instantiate an
appropriate migrator object. But I was definitely concerned about a number
of things here. First off, is it just too confusing to have multiple
migrations? The migration sequences will definitely need to be different
per driver. How do we support cross-driver migrations?
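
For reference, the rough shape I've been playing with is something like this
(an exploratory sketch, not merged code):

    import abc

    class Migrator(object):
        """Owns one driver's migration sequence (SQL, NoSQL, whatever)."""

        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def current_version(self):
            """Return the version the backing store is currently at."""

        @abc.abstractmethod
        def upgrade(self, target_version=None):
            """Walk this driver's own migration sequence up to target_version."""

    class Driver(object):
        """Minimal storage driver interface; business logic stays above it."""

        def get_migrator(self):
            raise NotImplementedError

        def image_get(self, context, image_id):
            raise NotImplementedError

        def image_create(self, context, values):
            raise NotImplementedError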


>
>
>  The bigger concern to me is that Glance seems a bit trigger-happy with
>> indexes.
>> But I'm confident we're in a similar boat there: performance in NoSQL
>> won't be
>> that terrible for the most important use cases, and a later refactoring
>> can put
>> us on a more sustainable track in the long run.
>>
>
> I'm not worried about this, though.
>

Okay, that is reassuring.


>
>  All I'm saying is that we should be careful not to swap one set of
 problems for another.

>>>
>>  My 2 cents: I am in agreement with Jay.  I am leery of NoSQL being a
>>> direct sub in and I fear that this effort can be adding a large workload
>>> for little benefit.
>>>
>>
>> The goal isn't really to replace sqlalchemy completely. I'm hoping I can
>> create
>> a space where multiple drivers can operate efficiently without
>> introducing bugs
>> (i.e. pull all that business logic out of the driver!) I'll be very
>> interested
>> to see if people can, after such a refactoring, try out some more storage
>> approaches, such as dropping the sqlalchemy orm in favor of its generic
>> engine
>> support or direct sql execution, as well as NoSQL what-have-you. We don't
>> have
>> to make all of the drivers live

[openstack-dev] Hyper-V Meeting minutes

2013-08-20 Thread Alessandro Pilotti
Today's Hyper-V meeting minutes:

Minutes:
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-08-20-16.06.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-08-20-16.06.txt
Log:
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-08-20-16.06.log.html


Thanks,

Alessandro
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stats on blueprint design info / creation times

2013-08-20 Thread Daniel P. Berrange
On Tue, Aug 20, 2013 at 12:53:25PM -0300, Thierry Carrez wrote:
> Anne Gentle wrote:
> >   - Less than 1 in 4 blueprints is created before the devel
> > period starts for a release.
> > 
> > I find this date mismatch especially intriguing, because the Foundation
> > and member company sponsors spend millions on Design Summits annually
> > and caters so much to getting people together in person. Yet the
> > blueprints aren't created in enough detail for discussion before the
> > Summit dates? Is that really what the data says? Is any one project
> > skewing this (as in, they haven't been at a Summit or they don't follow
> > integrated release dates?)
> 
> That does not surprise me. A lot of people do not link a blueprint to
> their session proposal on the design summit session suggestion system --
> sometimes it's the discussion itself which allows to formulate the right
> blueprints, and those are filed in the weeks just after the summit. And
> I think that's fine.
> 
> It would be more interesting to check how many blueprints are created
> more than two weeks after the design summit. Those would be the late
> blueprints (or the ones created as a tickbox), which escape the release
> planning process.

I'll look up the historic dates for each summit, and try to generate
some stats based on blueprint creation date vs  summit date + 2 weeks.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stats on blueprint design info / creation times

2013-08-20 Thread Daniel P. Berrange
On Tue, Aug 20, 2013 at 10:36:39AM -0500, Anne Gentle wrote:
> On Mon, Aug 19, 2013 at 10:47 AM, Daniel P. Berrange 
> wrote:
> > The data for the last 3 releases is:
> >
> >   Series: folsom
> > Specs: 178
> > Specs (no URL): 144
> > Specs (w/ URL): 34
> > Specs (Early): 38
> > Specs (Late): 140
> > Average lines: 5
> > Average words: 55
> >
> >
> >   Series: grizzly
> > Specs: 227
> > Specs (no URL): 175
> > Specs (w/ URL): 52
> > Specs (Early): 42
> > Specs (Late): 185
> > Average lines: 5
> > Average words: 56
> >
> >
> >   Series: havana
> > Specs: 415
> > Specs (no URL): 336
> > Specs (w/ URL): 79
> > Specs (Early): 117
> > Specs (Late): 298
> > Average lines: 6
> > Average words: 68
> >
> >
> > Looking at this data there are 4 key take away points
> >
> >   - We're creating more blueprints in every release.
> >
> >   - Less than 1 in 4 blueprints has a link to a design document.
> >
> >   - The description text for blueprints is consistently short
> > (6 lines) across releases.
> >
> >
> Thanks for running the numbers. My instinct told me this was the case, but
> the data is especially helpful. Sometimes six lines is enough, but mostly I
> rely on the linked spec for writing docs. If those links are at 25% that's
> a bad trend.
> 
> 
> >   - Less than 1 in 4 blueprints is created before the devel
> > period starts for a release.
> >
> >
> I find this date mismatch especially intriguing, because the Foundation and
> member company sponsors spend millions on Design Summits annually and
> caters so much to getting people together in person. Yet the blueprints
> aren't created in enough detail for discussion before the Summit dates? Is
> that really what the data says? Is any one project skewing this (as in,
> they haven't been at a Summit or they don't follow integrated release
> dates?)
> 
> I'll dig in further to the data set below.
> 
> 
> >
> > You can view the full data set + the script to generate the
> > data which you can look at to see if I made any logic mistakes:
> >
> >   http://berrange.fedorapeople.org/openstack-blueprints/
> >
> >
> >
> I wouldn't think to include marconi in the dataset as they've just asked
> about incubation in June 2013. I think you excluded keystone. You also want
> ceilometer and oslo if you already included heat. Is it fairly easy to
> re-run? I'd like to see it re-run with the correct program listings.
> 
> Also please rerun with Swift excluded as their release dates are not on the
> mark with the other projects. I'd like more info around the actual timing.

Ok, I've changed the projects it analyses per your recommendation.
Also I've made it print the cut off date I used to assign blueprints
to the "early" or "late" creation date buckets

The overall results are approximately the same though

Series: folsom all
  Specs: 177
  Specs (no URL): 145
  Specs (w/ URL): 32
  Specs (Before 2012-04-05 14:43:29.870782+00:00): 39
  Specs (After 2012-04-05 14:43:29.870782+00:00): 138
  Average lines: 5
  Average words: 54


Series: grizzly all
  Specs: 255
  Specs (no URL): 187
  Specs (w/ URL): 68
  Specs (Before 2012-09-27 00:00:00+00:00): 47
  Specs (After 2012-09-27 00:00:00+00:00): 208
  Average lines: 6
  Average words: 61


Series: havana all
  Specs: 470
  Specs (no URL): 379
  Specs (w/ URL): 91
  Specs (Before 2013-04-04 12:59:00+00:00): 137
  Specs (After 2013-04-04 12:59:00+00:00): 333
  Average lines: 6
  Average words: 69


I also produced summary stats per project. Showing Nova 


Series: folsom nova
  Specs: 54
  Specs (no URL): 37
  Specs (w/ URL): 17
  Specs (Before 2012-04-05 14:43:29.870782+00:00): 20
  Specs (After 2012-04-05 14:43:29.870782+00:00): 34
  Average lines: 6
  Average words: 61


Series: grizzly nova
  Specs: 68
  Specs (no URL): 51
  Specs (w/ URL): 17
  Specs (Before 2012-09-27 00:00:00+00:00): 17
  Specs (After 2012-09-27 00:00:00+00:00): 51
  Average lines: 6
  Average words: 65


Series: havana nova
  Specs: 131
  Specs (no URL): 107
  Specs (w/ URL): 24
  Specs (Before 2013-04-04 12:59:00+00:00): 31
  Specs (After 2013-04-04 12:59:00+00:00): 100
  Average lines: 7
  Average words: 72


And keystone



Series: folsom keystone
  Specs: 9
  Specs (no URL): 8
  Specs (w/ URL): 1
  Specs (Before 2012-04-05 14:43:29.870782+00:00): 3
  Specs (After 2012-04-05 14:43:29.870782+00:00): 6
  Average lines: 4
  Average words: 37


Series: grizzly keystone
  Specs: 16
  Specs (no URL): 9
  Specs (w/ URL): 7
  Specs (Before 2012-09-27 00:00:00+00:00): 7
  Specs (After 2012-09-27 00:00:00+00:00): 9
  Average lines: 11
  Average words: 117


Series: havana keystone
  Specs: 25
  Specs (no URL): 13
  Specs (w/ URL): 12
  Specs (Before 2013-04-04 12:59:00+00:00): 5
  Specs (After 2013-04-04 12:59:00+00:00): 20
  Average lines: 9
  Average words: 95


I won't include the other projects in this mail, but you can see them in
the blueprint-summary-XXX.txt files here:


Re: [openstack-dev] [Nova] Proposal to revert per-user-quotas

2013-08-20 Thread Yingjun Li
Thanks for addressing the issues. About the bad state for fixed_ips and
floating_ips, I think we could make the user_id column NULL when creating
the quota usage and reservation, so the usages for fixed_ips and
floating_ips will be synced within the project.
Does this make sense?
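
Roughly, I am picturing something like the following (the helper names are
hypothetical, just to illustrate the idea - not the actual nova sync methods):

PER_PROJECT_ONLY = ('fixed_ips', 'floating_ips', 'networks')

def sync_usage(context, resource, project_id, user_id, session):
    # Sketch: resources with no user association get a usage row with
    # user_id=NULL and are counted across the whole project; everything
    # else keeps the per-user accounting.
    # _count_by_project/_count_by_user are hypothetical helpers.
    if resource in PER_PROJECT_ONLY:
        in_use = _count_by_project(context, resource, project_id, session)
        return {'project_id': project_id, 'user_id': None,
                'resource': resource, 'in_use': in_use}
    in_use = _count_by_user(context, resource, project_id, user_id, session)
    return {'project_id': project_id, 'user_id': user_id,
            'resource': resource, 'in_use': in_use}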


2013/8/20 Andrew Laski 

> The patch in question 
> (https://review.openstack.org/#/c/28232/24)
> adds the ability to track quota usage on a per user basis within a project.
>  I have run into two issues with it so far: the db migration is incomplete
> and leaves the data in a bad state, and the sync methods used during quota
> reservations no longer work for fixed_ips, floating_ips, and networks since
> they are not tied to a user.
>
> The db migration issue is documented at
> https://bugs.launchpad.net/nova/+bug/1212798 but the
> tl;dr is that the quota usages that were in place before the migration is
> run can not be decremented and aren't fixed by the healing sync that
> occurs.  I sought to address this by introducing a new migration which
> performs a full sync of quota usages and removes the bad rows but that led
> me to the next issue.
>
> Some resources can't be synced properly because they're tracked per user
> in the quota table but they're not tied to a user so it's not feasible to
> grab a count of how many are being used by any particular user.  So right
> now the quota_usages table can get into a bad state with no good way to
> address it.
>
> Right now I think it will be better to revert this change and re-introduce
> it once these issues are worked out. Thoughts?
>
> As an addendum, the patch merged about a month ago on Jul 25th and looks
> to have some minor conflicts for a revert but should be minimally
> disruptive.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stats on blueprint design info / creation times

2013-08-20 Thread Mark McLoughlin
On Mon, 2013-08-19 at 14:38 -0300, Thierry Carrez wrote:
> Note that in some cases, some "improvements" that do not clearly fall
> into the "bug" category are landed without a blueprint link (or a bug
> link). So a first step could be to require that a review always
> references a bug or a blueprint before it's landed. Then, improve the
> quality of the information present in said bug/blueprint.

I think that a "every review must reference a bug or blueprint" rule
would encourage more of this process checkbox behaviour.

Blueprints are useful for some things:

  - where a longer design rationale than is appropriate for a commit 
message is required
  - where it's a feature we want to raise awareness around
  - where it's something that's going to take a while to bake and we 
need to track its progress
  - etc.

(We've already seen how DocImpact can be used to cover the "docs folks
should look at this" use case without a bug or blueprint. We could do
similarly for QA.)

And for bugs:

  - where the person with info about a problem isn't the person fixing 
it
  - where there's important background information (like detailed logs) 
which can't be summarized appropriately in a commit message
  - where we want it tracked as a must-have for a given release
  - etc.

So, I'm totally fine with someone showing up with a standalone commit
(i.e. no bug or blueprint) and a nice explanatory commit message, if the
bug or blueprint would not provide any of the value listed above.

Where a bug or blueprint doesn't provide such value, you often see
people with terse commit messages referencing a process checkbox bug or
blueprint ... and that isn't helping anything.

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stats on blueprint design info / creation times

2013-08-20 Thread Thierry Carrez
Anne Gentle wrote:
>   - Less than 1 in 4 blueprints is created before the devel
> period starts for a release.
> 
> I find this date mismatch especially intriguing, because the Foundation
> and member company sponsors spend millions on Design Summits annually
> and cater so much to getting people together in person. Yet the
> blueprints aren't created in enough detail for discussion before the
> Summit dates? Is that really what the data says? Is any one project
> skewing this (as in, they haven't been at a Summit or they don't follow
> integrated release dates?)

That does not surprise me. A lot of people do not link a blueprint to
their session proposal on the design summit session suggestion system --
sometimes it's the discussion itself that allows people to formulate the right
blueprints, and those are filed in the weeks just after the summit. And
I think that's fine.

It would be more interesting to check how many blueprints are created
more than two weeks after the design summit. Those would be the late
blueprints (or the ones created as a tickbox), which escape the release
planning process.
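
Concretely, assuming the creation dates have already been pulled out of
Launchpad the way Daniel's script does it, the check could be a few lines of
Python (a rough sketch):

from datetime import timedelta

def count_late_blueprints(creation_dates, summit_end_date, grace_days=14):
    # creation_dates: iterable of datetime objects, one per blueprint,
    # already extracted from Launchpad.
    # summit_end_date: datetime of the last day of the design summit.
    cutoff = summit_end_date + timedelta(days=grace_days)
    return sum(1 for created in creation_dates if created > cutoff)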

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-20 Thread Jay Buffington
On Tue, Aug 20, 2013 at 8:02 AM, Mark McLoughlin  wrote:

> On Tue, 2013-08-20 at 11:26 +0100, Mark McLoughlin wrote:
>
> The full study is here:
> >
> >
> http://support.smartbear.com/resources/cc/book/code-review-cisco-case-study.pdf
>

I can't find the data they based their numbers on, nor their definition for
a line of code, so I feel like I have to take that study with a grain of
salt.


> We should not enforce rules like this blindly.


I agree with the sentiment: less complex patches are easier to review.
Reviewers should use their judgement and push back on complex patches,
asking for them to be split up.

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stats on blueprint design info / creation times

2013-08-20 Thread Anne Gentle
On Mon, Aug 19, 2013 at 10:47 AM, Daniel P. Berrange wrote:

> In this thread about code review:
>
>
> http://lists.openstack.org/pipermail/openstack-dev/2013-August/013701.html
>
> I mentioned that I thought there were too many blueprints created without
> sufficient supporting design information and were being used for "tickbox"
> process compliance only. I based this assertion on a gut feeling I have
> from experience in reviewing.
>
> To try and get a handle on whether there is truly a problem, I used the
> launchpadlib API to extract some data on blueprints [1].
>
> In particular I was interested in seeing:
>
>   - What portion of blueprints have an URL containing an associated
> design doc,
>
>   - How long the descriptive text was in typical blueprints
>
>   - Whether a blueprint was created before or after the dev period
> started for that major release.
>
>
> The first two items are easy to get data on. On the second point, I redid
> line wrapping on description text to normalize the line count across all
> blueprints. This is because many blueprints had all their text on one
> giant long line, which would skew results. I thus wrapped all blueprints
> at 70 characters.
>
> The blueprint creation date vs release cycle dev start date is a little
> harder. I inferred the start date of each release, by using the end date
> of the previous release. This is probably a little out but hopefully not
> by enough to totally invalidate the usefulness of the stats below. Below,
> "Early" means created before start of devel, "Late" means created after
> the start of devel period.
>
> The data for the last 3 releases is:
>
>   Series: folsom
> Specs: 178
> Specs (no URL): 144
> Specs (w/ URL): 34
> Specs (Early): 38
> Specs (Late): 140
> Average lines: 5
> Average words: 55
>
>
>   Series: grizzly
> Specs: 227
> Specs (no URL): 175
> Specs (w/ URL): 52
> Specs (Early): 42
> Specs (Late): 185
> Average lines: 5
> Average words: 56
>
>
>   Series: havana
> Specs: 415
> Specs (no URL): 336
> Specs (w/ URL): 79
> Specs (Early): 117
> Specs (Late): 298
> Average lines: 6
> Average words: 68
>
>
> Looking at this data there are 4 key take away points
>
>   - We're creating more blueprints in every release.
>
>   - Less than 1 in 4 blueprints has a link to a design document.
>
>   - The description text for blueprints is consistently short
> (6 lines) across releases.
>
>
Thanks for running the numbers. My instinct told me this was the case, but
the data is especially helpful. Sometimes six lines is enough, but mostly I
rely on the linked spec for writing docs. If those links are at 25% that's
a bad trend.


>   - Less than 1 in 4 blueprints is created before the devel
> period starts for a release.
>
>
I find this date mismatch especially intriguing, because the Foundation and
member company sponsors spend millions on Design Summits annually and
cater so much to getting people together in person. Yet the blueprints
aren't created in enough detail for discussion before the Summit dates? Is
that really what the data says? Is any one project skewing this (as in,
they haven't been at a Summit or they don't follow integrated release
dates?)

I'll dig in further to the data set below.


>
> You can view the full data set + the script to generate the
> data which you can look at to see if I made any logic mistakes:
>
>   http://berrange.fedorapeople.org/openstack-blueprints/
>
>
>
I wouldn't think to include marconi in the dataset as they've just asked
about incubation in June 2013. I think you excluded keystone. You also want
ceilometer and oslo if you already included heat. Is it fairly easy to
re-run? I'd like to see it re-run with the correct program listings.

Also please rerun with Swift excluded as their release dates are not on the
mark with the other projects. I'd like more info around the actual timing.



> There's only so much you can infer from stats like this, but IMHO think the
> stats show that we ought to think about how well we are using blueprints as
> design / feature approval / planning tools.
>
>
> That 3 in 4 blueprint lack any link to a design doc and have only 6 lines
> of
> text description, is a cause for concern IMHO. The blueprints should be
> giving
> code reviewers useful background on the motivation of the dev work & any
> design planning that took place. While there are no doubt some simple
> features
> where 6 lines of text is sufficient info in the blueprint, I don't think
> that
> holds true for the majority.
>
> In addition to helping code reviewers, the blueprints are also arguably a
> source of info for QA people testing OpenStack and for the docs teams
> documenting new features in each release. I'm not convinced that there is
> enough info in many of the blueprints to be of use to QA / docs people.
>
>
> The creation dates of the blueprints are also an interesting data point.
> If the

Re: [openstack-dev] Code review study

2013-08-20 Thread Russell Bryant
On 08/20/2013 11:08 AM, Daniel P. Berrange wrote:
> On Tue, Aug 20, 2013 at 04:02:12PM +0100, Mark McLoughlin wrote:
>> On Tue, 2013-08-20 at 11:26 +0100, Mark McLoughlin wrote:
>>> On Thu, 2013-08-15 at 14:12 +1200, Robert Collins wrote:
 This may interest data-driven types here.

 https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/

 Note specifically the citation of 200-400 lines as the knee of the review
 effectiveness curve: that's lower than I thought - I thought 200 was
 clearly fine - but no.
>>>
>>> The full study is here:
>>>
>>> http://support.smartbear.com/resources/cc/book/code-review-cisco-case-study.pdf
>>>
>>> This is an important subject and I'm glad folks are studying it, but I'm
>>> sceptical about whether the "Defect density vs LOC" is going to help us
>>> come up with better guidelines than we have already.
>>>
>>> Obviously, a metric like LOC hides some serious subtleties. Not all
>>> changes are of equal complexity. We see massive refactoring patches
>>> (like s/assertEquals/assertEqual/) that are actually safer than gnarly,
>>> single-line, head-scratcher bug-fixes. The only way the report addresses
>>> that issue with the underlying data is by eliding >10k LOC patches.
>>>
>>> The "one logical change per commit" is a more effective guideline than
>>> any LOC based guideline:
>>>
>>> https://wiki.openstack.org/wiki/GitCommitMessages#Structural_split_of_changes
>>>
>>> IMHO, the number of distinct logical changes in a patch has a more
>>> predictable impact on review effectiveness than the LOC metric.
>>
>> Wow, I didn't notice Joe had started to enforce that here:
>>
>>   https://review.openstack.org/41695
>>
>> and the exact example I mentioned above :)
>>
>> We should not enforce rules like this blindly.
> 
> Agreed, lines of code is a particularly poor metric for evaluating
> commits on. 

Agreed, I would _strongly_ prefer no enforcement around LOC.  It's just
not the right metric to be looking at for a hard rule.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-20 Thread Johannes Erdfelt
On Tue, Aug 20, 2013, Flavio Percoco  wrote:
> There are a couple of things that would worry me about hypothetical
> support for NoSQL, but one that I'd consider very critical is
> migrations. Some could ask whether we'd really need them at all -
> when talking about NoSQL databases - but we do. Using a schemaless
> database wouldn't mean we don't have a schema. Migrations are not
> trivial for some NoSQL databases; plus, drivers would most probably
> have to have their own implementation.

Migrations aren't always about the schema. Take migrations 015 and 017
in glance for instance. They migrate data by fixing the URI and making
sure it's quoted correctly. The schema doesn't change, but the data
does.
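
Schematically, a data-only migration of that kind just walks the rows and
rewrites values in place - something like the sketch below, where fix_uri()
is a made-up helper and this is not the actual glance code:

from sqlalchemy import MetaData, Table

def upgrade(migrate_engine):
    # Schematic data-only migration: the schema is untouched, only the
    # stored values change. fix_uri() is a hypothetical helper that
    # re-quotes a location URI.
    meta = MetaData(bind=migrate_engine)
    images = Table('images', meta, autoload=True)
    rows = list(migrate_engine.execute(images.select()))
    for row in rows:
        fixed = fix_uri(row.location)
        if fixed != row.location:
            migrate_engine.execute(
                images.update()
                      .where(images.c.id == row.id)
                      .values(location=fixed))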

This shares many of the same practical problems that schema migrations
have and would apply to NoSQL databases.

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-20 Thread Daniel P. Berrange
On Tue, Aug 20, 2013 at 04:02:12PM +0100, Mark McLoughlin wrote:
> On Tue, 2013-08-20 at 11:26 +0100, Mark McLoughlin wrote:
> > On Thu, 2013-08-15 at 14:12 +1200, Robert Collins wrote:
> > > This may interest data-driven types here.
> > > 
> > > https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
> > > 
> > > Note specifically the citation of 200-400 lines as the knee of the review
> > > effectiveness curve: that's lower than I thought - I thought 200 was
> > > clearly fine - but no.
> > 
> > The full study is here:
> > 
> > http://support.smartbear.com/resources/cc/book/code-review-cisco-case-study.pdf
> > 
> > This is an important subject and I'm glad folks are studying it, but I'm
> > sceptical about whether the "Defect density vs LOC" is going to help us
> > come up with better guidelines than we have already.
> > 
> > Obviously, a metric like LOC hides some serious subtleties. Not all
> > changes are of equal complexity. We see massive refactoring patches
> > (like s/assertEquals/assertEqual/) that are actually safer than gnarly,
> > single-line, head-scratcher bug-fixes. The only way the report addresses
> > that issue with the underlying data is by eliding >10k LOC patches.
> > 
> > The "one logical change per commit" is a more effective guideline than
> > any LOC based guideline:
> > 
> > https://wiki.openstack.org/wiki/GitCommitMessages#Structural_split_of_changes
> > 
> > IMHO, the number of distinct logical changes in a patch has a more
> > predictable impact on review effectiveness than the LOC metric.
> 
> Wow, I didn't notice Joe had started to enforce that here:
> 
>   https://review.openstack.org/41695
> 
> and the exact example I mentioned above :)
> 
> We should not enforce rules like this blindly.

Agreed, lines of code is a particularly poor metric for evaluating
commits on. 


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-20 Thread Mark McLoughlin
On Tue, 2013-08-20 at 11:26 +0100, Mark McLoughlin wrote:
> On Thu, 2013-08-15 at 14:12 +1200, Robert Collins wrote:
> > This may interest data-driven types here.
> > 
> > https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
> > 
> > Note specifically the citation of 200-400 lines as the knee of the review
> > effectiveness curve: that's lower than I thought - I thought 200 was
> > clearly fine - but no.
> 
> The full study is here:
> 
> http://support.smartbear.com/resources/cc/book/code-review-cisco-case-study.pdf
> 
> This is an important subject and I'm glad folks are studying it, but I'm
> sceptical about whether the "Defect density vs LOC" is going to help us
> come up with better guidelines than we have already.
> 
> Obviously, a metric like LOC hides some serious subtleties. Not all
> changes are of equal complexity. We see massive refactoring patches
> (like s/assertEquals/assertEqual/) that are actually safer than gnarly,
> single-line, head-scratcher bug-fixes. The only way the report addresses
> that issue with the underlying data is by eliding >10k LOC patches.
> 
> The "one logical change per commit" is a more effective guideline than
> any LOC based guideline:
> 
> https://wiki.openstack.org/wiki/GitCommitMessages#Structural_split_of_changes
> 
> IMHO, the number of distinct logical changes in a patch has a more
> predictable impact on review effectiveness than the LOC metric.

Wow, I didn't notice Joe had started to enforce that here:

  https://review.openstack.org/41695

and the exact example I mentioned above :)

We should not enforce rules like this blindly.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Proposal to revert per-user-quotas

2013-08-20 Thread Andrew Laski
The patch in question (https://review.openstack.org/#/c/28232/24) adds 
the ability to track quota usage on a per user basis within a project.  
I have run into two issues with it so far: the db migration is 
incomplete and leaves the data in a bad state, and the sync methods used 
during quota reservations no longer work for fixed_ips, floating_ips, 
and networks since they are not tied to a user.


The db migration issue is documented at 
https://bugs.launchpad.net/nova/+bug/1212798 but the tl;dr is that the 
quota usages that were in place before the migration is run can not be 
decremented and aren't fixed by the healing sync that occurs.  I sought 
to address this by introducing a new migration which performs a full 
sync of quota usages and removes the bad rows but that led me to the 
next issue.


Some resources can't be synced properly because they're tracked per user 
in the quota table but they're not tied to a user so it's not feasible 
to grab a count of how many are being used by any particular user.  So 
right now the quota_usages table can get into a bad state with no good 
way to address it.


Right now I think it will be better to revert this change and 
re-introduce it once these issues are worked out. Thoughts?


As an addendum, the patch merged about a month ago on Jul 25th and looks 
to have some minor conflicts for a revert but should be minimally 
disruptive.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMwareAPI sub-team status update 2013-08-19

2013-08-20 Thread Shawn Hartsock
The script is stupid. If my name isn't on the list of reviewers, sometimes it 
fails to catch the patch. I'll add myself to these.

# Shawn Hartsock

- Original Message -
> From: "Gary Kotton" 
> To: "Shawn Hartsock" , openstack-dev@lists.openstack.org
> Sent: Tuesday, August 20, 2013 10:15:11 AM
> Subject: RE: VMwareAPI sub-team status update 2013-08-19
> 
> Hi,
> You have omitted the following which are very critical for our tempest runs:
> 1. https://review.openstack.org/#/c/39046/ (VMware: fix rescue/unrescue
> instance)
> 2. https://review.openstack.org/#/c/41058/ (VMware: Add driver support for
> hypervisor uptime)
> 3. [Included below] https://review.openstack.org/#/c/40298/ (Fix snapshot in
> VMWwareVCDriver)
> 
> In addition to this we have a critical issue with Neutron and Nova which has been
> in review for a very long time:
> 1. https://review.openstack.org/#/c/37389/ (VMware: Ensure Neutron networking
> works with VMware drivers)
> 
> If it would help the core reviewers we can provide tempest runs for each of
> the reviews below. I understand that one may find it difficult to review or
> approve code if she/he cannot run it. So if the tempest results will help
> then we can provide them.
> 
> Thanks
> Gary
> 
> > -Original Message-
> > From: Shawn Hartsock [mailto:hartso...@vmware.com]
> > Sent: Tuesday, August 20, 2013 6:44 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: [openstack-dev][nova][vmware] VMwareAPI sub-team status
> > update 2013-08-19
> > 
> > 
> > Greetings stackers!
> > 
> > August 22nd is fast approaching. Here's the reviews in flight. We have 5
> > ready for a core reviewer to take a look. One needing some attention from
> > someone who knows VMware's APIs and 8 that are in need of
> > work/discussion. I noticed that there was some issues with Jenkins earlier,
> > some of you may have been caught in that. Don't use 'recheck no bug' until
> > you've read the failures and made sure it has nothing to do with your
> > patch.
> > 
> > Ready for core reviewer:
> > * NEW, https://review.openstack.org/#/c/37819/ ,'VMware image clone
> > strategy settings and overrides'
> > https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-
> > strategy
> > core votes,0, non-core votes,4, down votes, 0
> > * NEW, https://review.openstack.org/#/c/33100/ ,'Fixes host stats for
> > VMWareVCDriver'
> > https://bugs.launchpad.net/nova/+bug/1190515
> > core votes,0, non-core votes,5, down votes, 0
> > * NEW, https://review.openstack.org/#/c/30628/ ,'Fix VCDriver to pick the
> > datastore that has capacity'
> > https://bugs.launchpad.net/nova/+bug/1171930
> > core votes,0, non-core votes,7, down votes, 0
> > * NEW, https://review.openstack.org/#/c/33504/ ,'VMware: nova-compute
> > crashes if VC not available'
> > https://bugs.launchpad.net/nova/+bug/1192016
> > core votes,0, non-core votes,5, down votes, 0
> > * NEW, https://review.openstack.org/#/c/40298/ ,'Fix snapshot in
> > VMWwareVCDriver'
> > https://bugs.launchpad.net/nova/+bug/1184807
> > core votes,0, non-core votes,6, down votes, 0
> > 
> > Needs VMware API expert review:
> > * NEW, https://review.openstack.org/#/c/42024/ ,'VMWare: Disabling
> > linked clone doesn't cache images'
> > https://bugs.launchpad.net/nova/+bug/1207064
> > core votes,0, non-core votes,0, down votes, 0
> > 
> > Needs discussion/work (has -1):
> > * NEW, https://review.openstack.org/#/c/37659/ ,'Enhance VMware Hyper
> > instance disk usage'
> > https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-
> > usage
> > core votes,0, non-core votes,2, down votes, -1
> > * NEW, https://review.openstack.org/#/c/39720/ ,'VMware: Added check for
> > datastore state before selection'
> > https://bugs.launchpad.net/nova/+bug/1194078
> > core votes,0, non-core votes,4, down votes, -1
> > * NEW, https://review.openstack.org/#/c/40105/ ,'VMware: use VM uuid for
> > volume attach and detach'
> > https://bugs.launchpad.net/nova/+bug/1208173
> > core votes,1, non-core votes,7, down votes, -1
> > * NEW, https://review.openstack.org/#/c/41387/ ,'VMware: Nova boot from
> > cinder volume'
> > https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-
> > support
> > core votes,0, non-core votes,2, down votes, -1
> > * NEW, https://review.openstack.org/#/c/40245/ ,'Nova support for vmware
> > cinder driver'
> > https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-
> > support
> > core votes,0, non-core votes,2, down votes, -1
> > * NEW, https://review.openstack.org/#/c/41657/ ,'Fix VMwareVCDriver to
> > support multi-datastore'
> > https://bugs.launchpad.net/nova/+bug/1104994
> > core votes,0, non-core votes,0, down votes, -1
> > * NEW, https://review.openstack.org/#/c/30282/ ,'Multiple Clusters using
> > single compute service'
> > https://blueprints.launchpad.net/nova/+spec/multiple-clusters-
> > managed-by-one-service
> > core votes,0, 

Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-20 Thread Jay Pipes
*sigh* I wish I'd been aware of these conversations and been in the 
Grizzly summit session on soft delete...


What specific unique constraint was needed that changing the deleted 
column to use the id value solved?
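
(To make the question concrete: the pattern I understand is at stake looks
roughly like this - table and column names are hypothetical, not an actual
nova model.)

from sqlalchemy import Column, Integer, String, UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Service(Base):
    __tablename__ = 'services'
    __table_args__ = (
        # With deleted defaulting to 0 and set to the row id on soft
        # delete, any number of deleted rows can share the same
        # (host, binary) pair, while at most one live row (deleted == 0)
        # can exist for it.
        UniqueConstraint('host', 'binary', 'deleted'),
    )
    id = Column(Integer, primary_key=True)
    host = Column(String(255))
    binary = Column(String(255))
    deleted = Column(Integer, default=0)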


-jay

On 08/19/2013 03:56 AM, Chris Behrens wrote:

'deleted' is used so that we can have proper unique constraints by setting it 
to `id` on deletion.  This was not the case until Grizzly, and before Grizzly I 
would have agreed completely.

- Chris

On Aug 19, 2013, at 12:39 AM, Jay Pipes  wrote:


I'm throwing this up here to get some feedback on something that's always 
bugged me about the model base used in many of the projects.

There's a mixin class that looks like so:

class SoftDeleteMixin(object):
    deleted_at = Column(DateTime)
    deleted = Column(Integer, default=0)

    def soft_delete(self, session=None):
        """Mark this object as deleted."""
        self.deleted = self.id
        self.deleted_at = timeutils.utcnow()
        self.save(session=session)

Once mixed in to a concrete model class, the primary join is typically modified 
to include the deleted column, like so:

class ComputeNode(BASE, NovaBase):
    ...
    service = relationship(Service,
                           backref=backref('compute_node'),
                           foreign_keys=service_id,
                           primaryjoin='and_('
                               'ComputeNode.service_id == Service.id,'
                               'ComputeNode.deleted == 0)')

My proposal is to get rid of the deleted column in the SoftDeleteMixin class 
entirely, as it is redundant with the deleted_at column. Instead of doing a 
join condition on deleted == 0, one would instead just do the join condition on 
deleted_at is None, which translates to the SQL: AND deleted_at IS NULL.

There isn't much of a performance benefit -- you're only reducing the row size 
by 4 bytes. But, you'd remove the redundant data from all the tables, which 
would make the normal form freaks like myself happy ;)

Thoughts?

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMwareAPI sub-team status update 2013-08-19

2013-08-20 Thread Gary Kotton
Hi,
You have omitted the following which are very critical for our tempest runs:
1. https://review.openstack.org/#/c/39046/ (VMware: fix rescue/unrescue 
instance)
2. https://review.openstack.org/#/c/41058/ (VMware: Add driver support for 
hypervisor uptime)
3. [Included below] https://review.openstack.org/#/c/40298/ (Fix snapshot in 
VMWwareVCDriver)

In addition to this we have a critical issue with Neutron and Nova which has been in 
review for a very long time:
1. https://review.openstack.org/#/c/37389/ (VMware: Ensure Neutron networking 
works with VMware drivers)

If it would help the core reviewers we can provide tempest runs for each of the 
reviews below. I understand that one may find it difficult to review or approve 
code if she/he cannot run it. So if the tempest results will help then we can 
provide them.

Thanks
Gary

> -Original Message-
> From: Shawn Hartsock [mailto:hartso...@vmware.com]
> Sent: Tuesday, August 20, 2013 6:44 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev][nova][vmware] VMwareAPI sub-team status
> update 2013-08-19
> 
> 
> Greetings stackers!
> 
> August 22nd is fast approaching. Here's the reviews in flight. We have 5
> ready for a core reviewer to take a look. One needing some attention from
> someone who knows VMware's APIs and 8 that are in need of
> work/discussion. I noticed that there was some issues with Jenkins earlier,
> some of you may have been caught in that. Don't use 'recheck no bug' until
> you've read the failures and made sure it has nothing to do with your patch.
> 
> Ready for core reviewer:
> * NEW, https://review.openstack.org/#/c/37819/ ,'VMware image clone
> strategy settings and overrides'
> https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-
> strategy
> core votes,0, non-core votes,4, down votes, 0
> * NEW, https://review.openstack.org/#/c/33100/ ,'Fixes host stats for
> VMWareVCDriver'
> https://bugs.launchpad.net/nova/+bug/1190515
> core votes,0, non-core votes,5, down votes, 0
> * NEW, https://review.openstack.org/#/c/30628/ ,'Fix VCDriver to pick the
> datastore that has capacity'
> https://bugs.launchpad.net/nova/+bug/1171930
> core votes,0, non-core votes,7, down votes, 0
> * NEW, https://review.openstack.org/#/c/33504/ ,'VMware: nova-compute
> crashes if VC not available'
> https://bugs.launchpad.net/nova/+bug/1192016
> core votes,0, non-core votes,5, down votes, 0
> * NEW, https://review.openstack.org/#/c/40298/ ,'Fix snapshot in
> VMWwareVCDriver'
> https://bugs.launchpad.net/nova/+bug/1184807
> core votes,0, non-core votes,6, down votes, 0
> 
> Needs VMware API expert review:
> * NEW, https://review.openstack.org/#/c/42024/ ,'VMWare: Disabling
> linked clone doesn't cache images'
> https://bugs.launchpad.net/nova/+bug/1207064
> core votes,0, non-core votes,0, down votes, 0
> 
> Needs discussion/work (has -1):
> * NEW, https://review.openstack.org/#/c/37659/ ,'Enhance VMware Hyper
> instance disk usage'
> https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-
> usage
> core votes,0, non-core votes,2, down votes, -1
> * NEW, https://review.openstack.org/#/c/39720/ ,'VMware: Added check for
> datastore state before selection'
> https://bugs.launchpad.net/nova/+bug/1194078
> core votes,0, non-core votes,4, down votes, -1
> * NEW, https://review.openstack.org/#/c/40105/ ,'VMware: use VM uuid for
> volume attach and detach'
> https://bugs.launchpad.net/nova/+bug/1208173
> core votes,1, non-core votes,7, down votes, -1
> * NEW, https://review.openstack.org/#/c/41387/ ,'VMware: Nova boot from
> cinder volume'
> https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-
> support
> core votes,0, non-core votes,2, down votes, -1
> * NEW, https://review.openstack.org/#/c/40245/ ,'Nova support for vmware
> cinder driver'
> https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-
> support
> core votes,0, non-core votes,2, down votes, -1
> * NEW, https://review.openstack.org/#/c/41657/ ,'Fix VMwareVCDriver to
> support multi-datastore'
> https://bugs.launchpad.net/nova/+bug/1104994
> core votes,0, non-core votes,0, down votes, -1
> * NEW, https://review.openstack.org/#/c/30282/ ,'Multiple Clusters using
> single compute service'
> https://blueprints.launchpad.net/nova/+spec/multiple-clusters-
> managed-by-one-service
> core votes,0, non-core votes,2, down votes, -2
> * NEW, https://review.openstack.org/#/c/34903/ ,'Deploy vCenter
> templates'
> https://blueprints.launchpad.net/nova/+spec/deploy-vcenter-templates-
> from-vmware-nova-driver
> core votes,0, non-core votes,2, down votes, -2
> 
> Meeting info:
> * https://wiki.openstack.org/wiki/Meetings/VMwareAPI
> 
> 
> # Shawn Hartsock
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-20 Thread Thomas Maddox
On 8/19/13 8:21 AM, "Sandy Walsh"  wrote:

>
>
>On 08/18/2013 04:04 PM, Jay Pipes wrote:
>> On 08/17/2013 03:10 AM, Julien Danjou wrote:
>>> On Fri, Aug 16 2013, Jay Pipes wrote:
>>>
 Actually, that's the opposite of what I'm suggesting :) I'm suggesting
 getting rid of the resource_metadata column in the meter table and
 using the
 resource table in joins...
>>>
>>> I think there's a lot of scenario where this would fail, like for
>>> example instances being resized; the flavor is a metadata.
>> 
>> I'm proposing that in these cases, a *new* resource would be added to
>> the resource table (and its ID inserted in meter) table with the new
>> flavor/instance's metadata.
>> 
>>> Though, changing the schema to improve performance is a good one, this
>>> needs to be thought from the sample sending to the storage, through the
>>> whole chain. This is something that will break a lot of current
>>> assumption; that doesn't mean it's bad or we can't do it, just that we
>>> need to think it through. :)
>> 
>> Yup, understood completely. The change I am proposing would not affect
>> any assumptions made from the point of view of a sample sent to storage.
>> The current assumption is that a sample's *exact* state at time of
>> sampling would be stored so that the exact sample state could be
>> reflected even if the underlying resource that triggered the sample
>> changed over time.
>> 
>> All I am proposing is a change to the existing implementation of that
>> assumption: instead of storing the original resource metadata in the
>> meter table, we instead ensure that we store the resource in the
>> resource table, and upon new sample records being inserted into the
>> meter table, we check to see if the resource for the sample is the same
>> as it was last time. If it is, we simply insert the resource ID from
>> last time. If it isn't, we add a new record to the resource table that
>> describes the new resource attributes, and we insert that new resource
>> ID into the meter table for that sample...
>
>I'm assuming we wouldn't need a backlink to the older resource?
>
>I'm thinking about how this would work work Events and Request ID's. The
>two most common reports we run from StackTach are based on Request ID
>and some resource ID.
>
>"Show me all the events related to this Request UUID"
>"Show me all the events related to this  UUID"
>
>A new Resource entry would be fine so long as it was still associated
>with the underlying Resource UUID (instance, image, etc). We could get
>back a list of all the Resources with the same UUID and, if needed,
>lookup the metadata for it. This would allow us to see how to the
>resource changed over time.
>
>I think that's what you're suggesting ... if so, yep.
>
>As for the first query "... for this Request ID", we'd have to map an Event
>to many related Resources, since one event could have a related
>instance/image/network/volume/host/scheduler, etc.
>
>These relationships would have to get mapped when the Event is turned
>into Meters. Changing the Resource ID might not be a problem if we keep
>a common Resource UUID. I have to think about that some more.
>
>Would we use timestamps to determine which Resource is the most recent?
>
>
>-S
>

Are we going to be incurring significant performance cost from this?

Let me see if I understand how a query will work for this based on the
current way CM gets billing:

Scenario: Monthly billing for Winston who built 12 machines this month; we
don't want to bill for failed/stalled builds that weren't cleaned up yet
either.

1. Filter Meter table for the time range in the samples to get the
Resources that were updated
2. Because the metadata changes a few times throughout the build process,
we have samples referencing several different metadata states over time
for each instance
3. Because of the metadata over time, we filter the Resource table to
provide distinct resources
4. We then correlate the resulting resources with their aggregate samples
to have the measurements for each resource

A thought for what may make this easier is to apply Jay's idea to a
normalized version of resource_metadata in a separate table that is then
referenced from the samples' resource_metadata attribute and a
latest_metadata column in the Resource table. That way we're not repeating
ourselves and we're not incurring any more complication than we already
have with the current implementation (it would keep the FK to the Resource
table). This way we can easily get to the latest state (hit the Resource
table from the sample) and the associated measurements derived from the
samples. We then only have to deal with metadata over time when it's
important, which seems like a relatively infrequent request to trace, but
still a use case that needs to be satisfied.
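
Sketching that idea with hypothetical SQLAlchemy models, just to make the
shape concrete (not a worked-out schema):

from sqlalchemy import Column, ForeignKey, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class ResourceMetadata(Base):
    # One row per distinct metadata state; samples and resources point here.
    __tablename__ = 'resource_metadata'
    id = Column(Integer, primary_key=True)
    metadata_json = Column(Text)   # serialized metadata blob

class Resource(Base):
    __tablename__ = 'resource'
    id = Column(String(255), primary_key=True)
    # "Latest known state" shortcut so billing-style queries don't have
    # to walk the per-sample history.
    latest_metadata_id = Column(Integer, ForeignKey('resource_metadata.id'))

class Sample(Base):
    __tablename__ = 'meter'
    id = Column(Integer, primary_key=True)
    resource_id = Column(String(255), ForeignKey('resource.id'))
    # Each sample still records which metadata state it was taken under.
    resource_metadata_id = Column(Integer, ForeignKey('resource_metadata.id'))
    counter_volume = Column(Integer)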

I hope I'm not seeing a problem that doesn't exist, but either way I'll
learn something so, thoughts? =]

Cheers!

-Thomas

>
>
>
>> Best,
>> -jay
>> 
>> 
>> ___
>>

Re: [openstack-dev] [Neutron][VPNaaS] Supporting OpenSwan or StrongSwan or Both?

2013-08-20 Thread Paul Michali
Was the original reasoning to use StrongSwan over OpenSwan only because of 
community support? I vaguely recall something mentioned about StrongSwan having 
additional capabilities or something. Can anyone confirm?

As far as which option, it sounds like B or C-2 are the better choices, just 
because of the RHEL support. The two are very similar (from an end-user 
standpoint), so the added doc/help shouldn't be too bad. From a code 
perspective, much of the code can be shared, so the added testing requirements 
should also be minimal.

The only point to make about C-2 is it requires us to either take the extra 
time now to support multiple drivers (we will have to eventually - I'll be 
working on one), or do a refactoring later to support a hierarchy of drivers. I 
brought that point up in the review of the reference driver, and Nachi and I 
talked about this a bit yesterday. We both agreed that we could do the 
refactoring later, to support drivers that are different than the Swan family.

Related to that, I did have some question about multiple drivers...

How do we handle the case where the drivers support different capabilities? For 
example, say one driver supports an encryption mode that the other does not.

Can we reject unsupported capabilities at configuration time? That seems 
cleaner, but I'm wondering how that would be done (I know we'll specify the 
provider, but wondering how we'll invoke driver specific verification routines 
- do we have that mechanism defined?).
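
One shape I could picture for that - purely a sketch, all names made up, not
actual Neutron code - is a per-driver validation hook that gets invoked when
the policy/connection is configured:

class IPsecDriverBase(object):
    # Hypothetical sketch of a per-driver capability check. The algorithm
    # lists below are only illustrative.
    SUPPORTED_ENCRYPTION = set()

    def validate_ipsec_policy(self, policy):
        algo = policy['encryption_algorithm']
        if algo not in self.SUPPORTED_ENCRYPTION:
            raise ValueError("encryption algorithm %s is not supported "
                             "by this driver" % algo)


class OpenSwanDriver(IPsecDriverBase):
    SUPPORTED_ENCRYPTION = set(['3des', 'aes-128', 'aes-256'])


class StrongSwanDriver(IPsecDriverBase):
    SUPPORTED_ENCRYPTION = set(['3des', 'aes-128', 'aes-192', 'aes-256'])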

Regards,

PCM (Paul Michali)

MAIL p...@cisco.com
IRC   pcm_  (irc.freenode.net)
TW   @pmichali

On Aug 19, 2013, at 6:15 PM, Nachi Ueno  wrote:

> Hi folks
> 
> I would like to discuss whether supporting OpenSwan or StrongSwan or Both for
> ipsec driver?
> 
> We choose StrongSwan because of the community is active and plenty of docs.
> However It looks like RHEL is only supporting OpenSwan.
> 
> so we should choose
> 
> (A) Support StrongSwan
> (B) Support OpenSwan
> (C) Support both
>   (C-1) Make StrongSwan default
>   (C-2) Make OpenSwan default
> 
> Actually, I'm working on C-2.
> The patch is still WIP https://review.openstack.org/#/c/42264/
> 
> Besides the patch is small, supporting two driver may burden
> in H3 including docs or additional help.
> IMO, this is also a valid comment.
> 
> Best
> Nachi
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reminder: Project & release status meeting - 21:00 UTC

2013-08-20 Thread Thierry Carrez
Today in the Project & release status meeting, some
FeatureProposalFreezes are upon us. Some early deferrals will be
considered to facilitate the review rush.

Feel free to add extra topics to the agenda:
[1] http://wiki.openstack.org/Meetings/ProjectMeeting

All Technical Leads for integrated programs should be present (if you
can't make it, please name a substitute on [1]). Other program leads and
everyone else is very welcome to attend.

The meeting will be held at 21:00 UTC on the #openstack-meeting channel
on Freenode IRC. You can look up how this time translates locally at:
[2] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130820T21

See you there,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] tarballs of savanna-extra

2013-08-20 Thread Matthew Farrellee
Is there a downside to having it? A positive is it gives a snapshot of 
everything for each release.


I'm not a fan of having a snapshot of the Hadoop swift patches compiled 
into a jar and stored in the repository. I'd prefer that it is hosted 
elsewhere.


Best,


matt

On 08/19/2013 04:37 PM, Sergey Lukjanov wrote:

Hi Matt,

it is not an accident that savanna-extra has no tarballs at
tarballs.o.o, because this repo is used for storing data that is
only needed for some things like building images for the vanilla plugin,
storing the Swift support patch for Hadoop, etc. So, it looks like
we should not package all of them into one heterogeneous tarball.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Aug 20, 2013, at 0:25, Matthew Farrellee  wrote:


Will someone setup a tarballs.os.o release of savanna-extra's master 
(https://github.com/stackforge/savanna-extra), and make sure it gets an 
official release for 0.3?

Best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Oslo and dangerous code merging

2013-08-20 Thread Roman Bogorodskiy
FWIW, I've created a sample project using oslo as a submodule:

https://github.com/novel/oslo-helloworld

I spotted some minor difficulties:

1. As the oslo directory layout is not designed to be used as a submodule, a
symlink has to be created to place the code in the proper location
2. A sys.path hack has to be applied for proper imports
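
The sys.path part boils down to something along these lines (shown
schematically; the submodule path here is just an example):

import os
import sys

# Make the oslo-incubator checkout (added as a git submodule) importable
# before importing openstack.common. The directory name is an example.
_OSLO_SUBMODULE = os.path.join(os.path.dirname(__file__), 'oslo-incubator')
if _OSLO_SUBMODULE not in sys.path:
    sys.path.insert(0, _OSLO_SUBMODULE)

from openstack.common import log  # noqa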


On Thu, Aug 8, 2013 at 2:39 PM, Mark McLoughlin  wrote:

> What do you mean by "dangerous code merging" in the subject? The body of
> your mail doesn't make any reference to whatever "danger" you're seeing.
>
> On Thu, 2013-08-08 at 14:16 +0400, Boris Pavlovic wrote:
> > Hi All,
> >
> > Could somebody answer me, why we are merging oslo code in other projects
> > and don't use
> > git submodules (http://git-scm.com/book/en/Git-Tools-Submodules)
>
> The idea of using submodules has come a few times. I don't have a
> fundamental objection to it, except any time I've seen submodules used
> in a project they've been extremely painful for everyone involved.
>
> I'd be happy to look at a demo of a submodule based system for projects
> to use code from oslo-incubator.
>
> Mark.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review study

2013-08-20 Thread Mark McLoughlin
On Thu, 2013-08-15 at 14:12 +1200, Robert Collins wrote:
> This may interest data-driven types here.
> 
> https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
> 
> Note specifically the citation of 200-400 lines as the knee of the review
> effectiveness curve: that's lower than I thought - I thought 200 was
> clearly fine - but no.

The full study is here:

http://support.smartbear.com/resources/cc/book/code-review-cisco-case-study.pdf

This is an important subject and I'm glad folks are studying it, but I'm
sceptical about whether the "Defect density vs LOC" is going to help us
come up with better guidelines than we have already.

Obviously, a metric like LOC hides some serious subtleties. Not all
changes are of equal complexity. We see massive refactoring patches
(like s/assertEquals/assertEqual/) that are actually safer than gnarly,
single-line, head-scratcher bug-fixes. The only way the report addresses
that issue with the underlying data is by eliding >10k LOC patches.

The "one logical change per commit" is a more effective guideline than
any LOC based guideline:

https://wiki.openstack.org/wiki/GitCommitMessages#Structural_split_of_changes

IMHO, the number of distinct logical changes in a patch has a more
predictable impact on review effectiveness than the LOC metric.

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-20 Thread Flavio Percoco

On 20/08/13 00:15 -0700, Mark Washenberger wrote:


   2) I highly caution folks who think a No-SQL store is a good storage
   solution for any of the data currently used by Nova, Glance (registry),
   Cinder (registry), Ceilometer, and Quantum. All of the data stored and
   manipulated in those projects is HIGHLY relational data, and not
   objects/documents. Switching to use a KVS for highly relational data is
   a terrible decision. You will just end up implementing joins in your
   code...



   +1

   FWIW, I'm a huge fan of NoSQL technologies but I couldn't agree more
   here.



I have to say I'm kind of baffled by this sentiment (expressed here and
elsewhere in the thread.) I'm not a NoSQL expert, but I hang out with a few and
I'm pretty confident Glance at least is not that relational. We do two types of
joins in glance. The first, like image properties, is basically just an
implementation detail of the sql driver. Its not core to the application. Any
NoSQL implementation will simply completely denormalize those properties into
the image record. (And honestly, so might an optimized SQL implementation. . .)

The second type of join, image_members, is basically just a hack to solve the
problem created because the glance api offers several simultaneous implicit
"views" of images. Specifically, when you list images in glance, you are seeing
a union of three views: public images, images you own, and images shared with
you. IMO its actually a more scalable and sensible solution to make these views
more explicit and independent in the API and code, taking a lesson from
filesystems which have to scale to a lot of metadata (notice how visibility is
generally an attribute of a directory, not of regular files in your typical
Unix FS?). And to solve this problem in SQL now we still have to do a
server-side union, which is a bit sad. But even before we can refactor the API
(v3 anyone?) I don't see it as unworkably slow for a NoSQL driver to track
these kinds of views.


You make really good points here but I don't fully agree.

I don't think the issue is actually translating Glance's models to
NoSQL, or NoSQL databases' performance; I'm pretty sure we could benefit in
some areas, though not all of them. To me - and that's what my comment was
referring to - this is more about what kind of data we're actually dealing
with, the guarantees we should provide, and how they are implemented now.

There are a couple of things that would worry me about hypothetical
support for NoSQL, but one that I'd consider very critical is
migrations. Some could ask whether we'd really need them at all -
when talking about NoSQL databases - but we do. Using a schemaless
database wouldn't mean we don't have a schema. Migrations are not
trivial for some NoSQL databases; plus, drivers would most probably
have to have their own implementation.


The bigger concern to me is that Glance seems a bit trigger-happy with indexes.
But I'm confident we're in a similar boat there: performance in NoSQL won't be
that terrible for the most important use cases, and a later refactoring can put
us on a more sustainable track in the long run. 


I'm not worried about this, though. 




All I'm saying is that we should be careful not to swap one set of
problems for another.



My 2 cents: I am in agreement with Jay.  I am leery of NoSQL being a
direct sub in and I fear that this effort can be adding a large workload
for little benefit.


The goal isn't really to replace sqlalchemy completely. I'm hoping I can create
a space where multiple drivers can operate efficiently without introducing bugs
(i.e. pull all that business logic out of the driver!) I'll be very interested
to see if people can, after such a refactoring, try out some more storage
approaches, such as dropping the sqlalchemy orm in favor of its generic engine
support or direct sql execution, as well as NoSQL what-have-you. We don't have
to make all of the drivers live in the project, so it really can be a good
place for interested parties to experiment.


And this is exactly what I'm concerned about. There's a lot of
business logic implemented at the driver level right now which makes
it really difficult (impossible?) to even think about using a NoSQL
database. However, I'm not even sure that taking BL to a higher level
would be the "go-time" for new NoSQL drivers. 


As mentioned already, this might end up in app-level implementations
that shouldn't be there.

Again, I'm not arguing about NoSQL capabilities here - I'm a huge
fan of NoSQL technologies - what I'd question is whether they are the
best tool for this task. This is something that should be evaluated on
a per-module basis, and I obviously don't have complete knowledge of
every module.

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [Nova] Requesting feedback on review #39441

2013-08-20 Thread Kanade, Rohan12
Requesting the community/nova core reviewers to please review and provide
feedback on the patch https://review.openstack.org/#/c/39441/


Rohan Kanade | Senior Software Engineer R&D (Cloud Computing) | NTT DATA Global 
Technology Services Pvt. Ltd. | w. +91.20.6604.1500 x 573 | 
rohan.kan...@nttdata.com |  Learn more at nttdata.com/americas

__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.  If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] compute_node_get_all() slowness by sql-a ORM

2013-08-20 Thread Wang, Shane
Dear stackers,

For the bug https://bugs.launchpad.net/nova/+bug/1212428 raised by Chris, we 
did some further tests.

We have 10K compute nodes in compute_nodes, and each node has 20 stat records.
As we know, by using sql-a ORM, it takes our test code 16 seconds to call 
compute_node_get_all() and get all nodes.
The returned nodes are constructed as a list of dictionaries; each dictionary has a 
'stats' attribute holding the stats of the node, populated via the ORM.

I made a patch to replace the ORM with plain sql-a APIs, and tested it.
Calling compute_node_get_all() then costs about 4 seconds. However, the format of 
the returned nodes is a little bit different from the above: without the ORM, each 
node is represented by 20 records, one per stat record after the join.

Again, I have a patch to use MySQLdb instead of sqlalchemy to fetch all compute 
nodes. It takes about 2 seconds. Running the same SQL query directly in the mysql 
client takes 0.54 seconds.

I don't think storing the stat table as "key":"value" rows is a good idea, because, 
at least at this moment, each node has to aggregate all of its stat records, 
translate them into real_name: real_value with statmap() after retrieving them from 
the db, and translate them back into key:value rows with _prep_stats_dict() before 
saving them to the db.
If we consider removing the ORM some day, the stat format of the query results will 
be a problem: we would need to construct the 'stats' attribute for each node 
ourselves from those 20 records, which still costs time.

I agree with Chris that 'stats' should be saved as JSON at this moment, as mentioned 
in the bug description. I had a similar patch, 
https://review.openstack.org/#/c/38802/, which is currently abandoned.
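
Roughly this shape, i.e. keep the stats on the compute_nodes row as a
serialized blob (a sketch only, not the abandoned patch itself):

import json

from sqlalchemy import Column, Integer, Text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class ComputeNode(Base):
    # Sketch: store stats as one JSON blob on the compute_nodes row
    # instead of ~20 compute_node_stats rows per node.
    __tablename__ = 'compute_nodes'
    id = Column(Integer, primary_key=True)
    stats = Column(Text, default='{}')

def set_stats(node, stats_dict):
    node.stats = json.dumps(stats_dict)

def get_stats(node):
    return json.loads(node.stats or '{}')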

What do you think? Also, do we have a plan to use MySQLdb to access MySQL? I 
know sql-a is common across db backends; would it be possible to create an 
abstraction layer that uses MySQLdb for MySQL and sql-a for other backends in a 
future release?

Thanks.
--
Shane
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-20 Thread Mark Washenberger
So much great stuff to respond to in this snip and response!


On Mon, Aug 19, 2013 at 2:17 AM, Flavio Percoco  wrote:

> On 19/08/13 02:51 -0400, Jay Pipes wrote:
>
>> On 08/19/2013 12:56 AM, Joshua Harlow wrote:
>>
>>> Another good article from an ex-coworker that keeps on making more and
>>> more sense the more projects I get into...
>>>
>>> http://seldo.com/weblog/2011/**08/11/orm_is_an_antipattern
>>>
>>> Your mileage/opinion though may vary :)
>>>
>>
This article looks great--but I think it depends on taking the incredibly
limited / incorrect view of an ORM that has been popularized and that we
currently employ in many OpenStack projects.

In particular, the critical issue is, are you actually using a mapper? Do
you *know* what the mapper pattern is? The key part of the mapper is that
you've got two components, say A and B, that you desperately want to keep
decoupled. Now, A and B need to interact, but they both are very likely to
need to change. And maybe what's worse, they really suck to change together
at the same time. (A great example of A and B is "db schema" and "business
logic".) Since they need to interact, which one is going to depend on the
other? A mapper M solves this problem by depending on both A and B,
allowing the two key modules to continue to evolve independently.

This approach would be amazing for our CD efforts, because it lets you move
one step at a time. In one deploy, you update the schema and mapper, but
keep the application code the same. In the next change, you just change the
application code. And so forth, allowing a solution to the problem of
temporary schema/code/functionality incompatibilities (well for part of the
problem anyway) that happens during a large-scale deployment.

But of course sqlalchemy's declarative models hamstring any such effort
while simultaneously teaching developers entirely the wrong lesson about
mappers! What I mean is that if I'm using sqlalchemy model objects and want
to change a table definition, I have to change a model object and thus much
of my application code. Decoupling went flying out the window. . . sad
times.
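
A toy illustration of the shape I mean - a table definition, a plain domain
object, and a mapper that is the only piece allowed to know about both (all
names made up):

from sqlalchemy import Column, MetaData, String, Table

metadata = MetaData()

# A: the schema, free to change on its own.
image_table = Table(
    'images', metadata,
    Column('id', String(36), primary_key=True),
    Column('name', String(255)),
    Column('status', String(30)),
)

# B: the domain/business object, which knows nothing about the database.
class Image(object):
    def __init__(self, image_id, name, status):
        self.image_id = image_id
        self.name = name
        self.status = status

# M: the mapper, the only place that knows about both A and B.
class ImageMapper(object):
    @staticmethod
    def to_row(image):
        return {'id': image.image_id, 'name': image.name,
                'status': image.status}

    @staticmethod
    def from_row(row):
        return Image(row['id'], row['name'], row['status'])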



>
>> 2) I highly caution folks who think a No-SQL store is a good storage
>> solution for any of the data currently used by Nova, Glance (registry),
>> Cinder (registry), Ceilometer, and Quantum. All of the data stored and
>> manipulated in those projects is HIGHLY relational data, and not
>> objects/documents. Switching to use a KVS for highly relational data is a
>> terrible decision. You will just end up implementing joins in your code...
>>
>>
> +1
>
> FWIW, I'm a huge fan of NoSQL technologies but I couldn't agree more
> here.
>
>
I have to say I'm kind of baffled by this sentiment (expressed here and
elsewhere in the thread.) I'm not a NoSQL expert, but I hang out with a few
and I'm pretty confident Glance at least is not that relational. We do two
types of joins in glance. The first, like image properties, is basically
just an implementation detail of the sql driver. Its not core to the
application. Any NoSQL implementation will simply completely denormalize
those properties into the image record. (And honestly, so might an
optimized SQL implementation. . .)

The second type of join, image_members, is basically just a hack to solve
the problem created because the glance api offers several simultaneous
implicit "views" of images. Specifically, when you list images in glance,
you are seeing a union of three views: public images, images you own, and
images shared with you. IMO its actually a more scalable and sensible
solution to make these views more explicit and independent in the API and
code, taking a lesson from filesystems which have to scale to a lot of
metadata (notice how visibility is generally an attribute of a directory,
not of regular files in your typical Unix FS?). And to solve this problem
in SQL now we still have to do a server-side union, which is a bit sad. But
even before we can refactor the API (v3 anyone?) I don't see it as
unworkably slow for a NoSQL driver to track these kinds of views.
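
For illustration only (Image and ImageMember below are hypothetical models,
not the actual glance query code), the implicit union looks roughly like:

from sqlalchemy import or_

def visible_images_query(session, context):
    # The "shared with you" leg is the one that drags image_members into
    # the query; the other two legs are simple column filters.
    shared_image_ids = session.query(ImageMember.image_id).filter(
        ImageMember.member == context.tenant)
    return session.query(Image).filter(or_(
        Image.is_public == True,           # public images
        Image.owner == context.tenant,     # images you own
        Image.id.in_(shared_image_ids),    # images shared with you
    ))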

The bigger concern to me is that Glance seems a bit trigger-happy with
indexes. But I'm confident we're in a similar boat there: performance in
NoSQL won't be that terrible for the most important use cases, and a later
refactoring can put us on a more sustainable track in the long run.

And then, so I'm not just picking on flaper87...

jbresnah sez:

>> All I'm saying is that we should be careful not to swap one set of
>> problems for another.

> My 2 cents: I am in agreement with Jay.  I am leery of NoSQL being a
> direct sub in and I fear that this effort can be adding a large workload
> for little benefit.

The goal isn't really to replace sqlalchemy completely. I'm hoping I can
create a space where multiple drivers can operate efficiently without
introducing bugs (i.e. pull all that business logic out of the driver!)
I'll be very interested to see i