[openstack-dev] [openstack-manuals] Need some help when mvn clean generate-sources

2013-08-19 Thread Tian, Shuangtai
Hi guys
When doing a build on the openstack-manuals project, there is an error when building 
'openstack-compute-admin', but others (such as openstack-user and docbkx-example) 
are all SUCCESS.
Does anybody know why?
Thanks!


cd openstack-manuals/doc/src/docbkx/openstack-compute-admin
mvn clean generate-sources

[INFO] Scanning for projects...
[INFO]
[INFO] 
[INFO] Building OpenStack Administration Guides 1.0.0-SNAPSHOT
[INFO] 
Downloading: 
http://maven.research.rackspacecloud.com/content/groups/public/org/apache/maven/plugins/maven-clean-plugin/2.3/maven-clean-plugin-2.3.pom
Downloading: 
http://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-clean-plugin/2.3/maven-clean-plugin-2.3.pom
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 2:17.516s
[INFO] Finished at: Mon Aug 19 10:56:57 CST 2013
[INFO] Final Memory: 8M/245M
[INFO] 
[ERROR] Plugin org.apache.maven.plugins:maven-clean-plugin:2.3 or one of its 
dependencies could not be resolved: Failed to read artifact descriptor for 
org.apache.maven.plugins:maven-clean-plugin:jar:2.3: Could not transfer 
artifact org.apache.maven.plugins:maven-clean-plugin:pom:2.3 from/to 
rackspace-research 
(http://maven.research.rackspacecloud.com/content/groups/public/): Connection 
to http://maven.research.rackspacecloud.com refused: Connection timed out - 
[Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException


Best regards,
Tian, Shuangtai (Kenneth)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-19 Thread Jay Pipes

On 08/18/2013 11:07 PM, Robert Collins wrote:

On 19 August 2013 14:22, Jay Pipes jaypi...@gmail.com wrote:


I'm completely with Joshua here - the ORM layer is more often than not
a source of bugs and performance issues.


If used improperly, yep.


http://www.codinghorror.com/blog/2006/06/object-relational-mapping-is-the-vietnam-of-computer-science.html

There is no proper use of an ORM.


I'm not a super-fan of ORMs, Robert. I'm not sure why you're insisting 
on taking me down this road...



We don't use the SQLAlchemy ORM for cross-SQL-DB support - that's a
lower layer. It's the model objects themselves that we use the ORM
for, and we could use SQLAlchemy's lower layers but not the ORM.


Hmmm, not quite... see below.



An alternative I think would be better would be to scrap the use of
the SQLAlchemy ORM; keep using the DB engine abstraction support.


Just keep in mind that the Session and Query objects and their related APIs
are in the SQLAlchemy ORM, not the SQLAlchemy Core.


Ok, so either it's not a bright line, or we'd need to have an
alternative thing - not just a reimplementation either, cause that's
pointless.


All I'm saying is that we should be careful not to swap one set of 
problems for another. I say this because I've seen the Nova data-access 
code develop from its very earliest days, up to this point. I've seen 
the horrors of trying to mask an object approach on top of a 
non-relational data store, witnessed numerous attempts to rewrite the 
way that connection pooling and session handling is done, and in general 
just noticed the tension between the two engineering factions that want 
to keep things agnostic towards backend storage and at the same time 
make the backend storage perform and scale adequately.


I'm not sure why you are being so aggressive about this topic. I 
certainly am not being aggressive about my responses -- just cautioning 
that the existing codebase has seen its fair share of refactoring, some 
of which has been a failure and had to be reverted. I would hate to jump 
into a frenzy to radically change the way that the data access code 
works in Nova without a good discussion.



But sure, ok.

But then I guarantee somebody is gonna spend a bunch of time writing an
object-oriented API to the model objects because the ORM is very useful for
the data modification part of the DB interaction.


!cite - seriously...


? I give an example below... a cautionary tale if you will, about one 
possible consequence of getting rid of the ORM.



Because people will complain about having to do this:

conn = engine.connect()
# instances is the sqlalchemy Table object for instances
inst_ins = instances.insert().values(blah=blah)
ip_ins = fixed_ips.insert().values(blah=blah)
conn.execute(ip_ins)
conn.execute(inst_ins)
conn.close()


This strawman is one way that it might be written. Given that a
growing set of our projects have non-SQL backends, this doesn't look
like the obvious way to phrase it to me.


I'm using the SQLAlchemy Core API above, with none of the SQLAlchemy ORM 
code... which is (I thought), what you were proposing we do? How is that 
a strawman argument? :(



instead of this:

i = Instance(blah=blah)
ip = FixedIp(blah=blah)
i.fixed_ips.append(ip)
session.add(i)
session.commit()

And so you've thrown the baby out with the bathwater and made more work for
everyone.


Perhaps; or perhaps we've avoided a raft of death-by-thousand-cuts
bugs across the project.


Could just as easily introduce the same bugs by radically redesigning 
the data access code without first considering all sides of the problem 
domain.


-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-manuals] Need some help when mvn clean generate-sources

2013-08-19 Thread Andreas Jaeger
On 08/19/2013 08:12 AM, Tian, Shuangtai wrote:
 Hi guys
 When doing a build on the openstack-manuals project, there is an error when building 
 'openstack-compute-admin', but others (such as openstack-user and docbkx-example) 
 are all SUCCESS.
 Does anybody know why?
 Thanks!

Is this reproducible? I just tried and it works fine for me.

 
 cd openstack-manuals/doc/src/docbkx/openstack-compute-admin
 mvn clean generate-sources
 
 [INFO] Scanning for projects...
 [INFO]
 [INFO] 
 
 [INFO] Building OpenStack Administration Guides 1.0.0-SNAPSHOT
 [INFO] 
 
 Downloading: 
 http://maven.research.rackspacecloud.com/content/groups/public/org/apache/maven/plugins/maven-clean-plugin/2.3/maven-clean-plugin-2.3.pom
 Downloading: 
 http://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-clean-plugin/2.3/maven-clean-plugin-2.3.pom
 [INFO] 
 
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 2:17.516s
 [INFO] Finished at: Mon Aug 19 10:56:57 CST 2013
 [INFO] Final Memory: 8M/245M
 [INFO] 
 
 [ERROR] Plugin org.apache.maven.plugins:maven-clean-plugin:2.3 or one of its 
 dependencies could not be resolved: Failed to read artifact descriptor for 
 org.apache.maven.plugins:maven-clean-plugin:jar:2.3: Could not transfer 
 artifact org.apache.maven.plugins:maven-clean-plugin:pom:2.3 from/to 
 rackspace-research 
 (http://maven.research.rackspacecloud.com/content/groups/public/): Connection 
 to http://maven.research.rackspacecloud.com refused: Connection timed out - 
 [Help 1]

It seems that just the clean goal is broken; it looks like a networking problem
downloading the plugin.

Btw., there's an openstack-docs mailing list which can help with any
manuals-related issues,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-19 Thread Jay Pipes

On 08/19/2013 12:56 AM, Joshua Harlow wrote:

Another good article from an ex-coworker that keeps on making more and
more sense the more projects I get into...

http://seldo.com/weblog/2011/08/11/orm_is_an_antipattern

Your mileage/opinion though may vary :)


I don't disagree with most of that article. All good points.

However, I will say a couple things:

1) We can still use the SQLAlchemy ORM module -- the Query and Session 
objects specifically, along with using the SQLAlchemy Model base class 
with no relation() loading at all in the Model classes -- and get good 
performance. We just wouldn't use the ActiveRecord pattern.


2) I highly caution folks who think a No-SQL store is a good storage 
solution for any of the data currently used by Nova, Glance (registry), 
Cinder (registry), Ceilometer, and Quantum. All of the data stored and 
manipulated in those projects is HIGHLY relational data, and not 
objects/documents. Switching to use a KVS for highly relational data is 
a terrible decision. You will just end up implementing joins in your code...
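
A rough, purely illustrative sketch of where that leads -- the dicts below
stand in for a KVS, and the key layout and helper name are made up:

kvs = {
    "instance:1": {"id": 1, "host": "compute-1"},
    "fixed_ip:10": {"id": 10, "instance_id": 1, "address": "10.0.0.5"},
    "fixed_ip:11": {"id": 11, "instance_id": 1, "address": "10.0.0.6"},
}

def instance_with_ips(instance_id):
    # Application-side "join": scan every fixed_ip record for a match,
    # because the store itself cannot express the relationship.
    instance = dict(kvs["instance:%d" % instance_id])
    instance["fixed_ips"] = [
        rec for key, rec in kvs.items()
        if key.startswith("fixed_ip:") and rec["instance_id"] == instance_id
    ]
    return instance

print(instance_with_ips(1))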


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-19 Thread Robert Collins
On 19 August 2013 18:35, Jay Pipes jaypi...@gmail.com wrote:

 http://www.codinghorror.com/blog/2006/06/object-relational-mapping-is-the-vietnam-of-computer-science.html

 There is no proper use of an ORM.


 I'm not a super-fan of ORMs, Robert. I'm not sure why you're insisting on
 taking me down this road...

Sorry, not sure how we ended up here ;)

 All I'm saying is that we should be careful not to swap one set of problems
 for another. I say this because I've seen the Nova data-access code develop
 from its very earliest days, up to this point. I've seen the horrors of
 trying to mask an object approach on top of a non-relational data store,
 witnessed numerous attempts to rewrite the way that connection pooling and
 session handling is done, and in general just noticed the tension between
 the two engineering factions that want to keep things agnostic towards
 backend storage and at the same time make the backend storage perform and
 scale adequately.

Ah! Ok, completely agree: playing flip-flop on problem sets would be a
poor outcome.

 I'm not sure why you are being so aggressive about this topic. I certainly
 am not being aggressive about my responses -- just cautioning that the
 existing codebase has seen its fair share of refactoring, some of which has
 been a failure and had to be reverted. I would hate to jump into a frenzy to
 radically change the way that the data access code works in Nova without a
 good discussion.

I didn't intend to be aggressive - sorry - super sorry in fact. I've
been burnt by months of effort turning around problem codebases where
the ORM was a significant cause of the problems.


 But then I guarantee somebody is gonna spend a bunch of time writing an
 object-oriented API to the model objects because the ORM is very useful
 for
 the data modification part of the DB interaction.


 !cite - seriously...


 ? I give an example below... a cautionary tale if you will, about one
 possible consequence of getting rid of the ORM.

I think what I really meant here is 'you say months, but if we're
writing an object-orientated API surely we'd just use one of the
mapping techniques available in SQLAlchemy..'

 This strawman is one way that it might be written. Given that a
 growing set of our projects have non-SQL backends, this doesn't look
 like the obvious way to phrase it to me.


 I'm using the SQLAlchemy Core API above, with none of the SQLAlchemy ORM
 code... which is (I thought), what you were proposing we do? How is that a
 strawman argument? :(

So what is in my head is that we have two layers:
business logic
storage logic

And the thing I don't like about the ORM approach is that our business
logic objects are storage logic objects - even though we don't use
http://martinfowler.com/eaaCatalog/domainModel.html we can easily
trigger late evaluation when traversing collections. In particular
because we have large numbers of developers who are likely going to
not be holding the entire problem domain in their head; the churn that
results on code and design tends to throw things out again and again
over time. And we have IMO too much business logic in the
db/sqlalchemy/api.py files scattered around.

So, what I'd like to see is something where the storage layer and
logic layer are more thoroughly decoupled: only return plain ol Python
objects from the DB layer; but within that layer I wouldn't object to
an ORM being used; secondly I'd like to make sure we don't end up
making business decisions in the storage layer, because that makes it
harder when porting to a different storage layer - such as the nova
conductor is.

So the business logic layer for adding a fixed IP would be something like:
i = business.Instance.find(blah=blah)
ip = business.FixedIP(blah=blah)
i.fixed_ips.append(ip)
storage.save(i)

i and ip would be plain ol python objects
storage.save would have the same semantics as an RPC call - it could
do a transaction itself, but there's no holding transactions between
calls to save.

This is very close to:

 instead of this:

 i = Instance(blah=blah)
 ip = FixedIp(blah=blah)
 i.fixed_ips.append(ip)
 session.add(i)
 session.commit()

But there is no ORM exposed to the developers working with the storage
API - it's contained.
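
A minimal sketch of that containment, with made-up class names -- the business
layer only ever sees plain Python objects, and whatever ORM (if any) is used
stays hidden inside storage.save():

class Instance(object):              # plain ol' Python object, no ORM base
    def __init__(self, uuid, host):
        self.uuid = uuid
        self.host = host
        self.fixed_ips = []

class FixedIP(object):
    def __init__(self, address):
        self.address = address

class Storage(object):
    def save(self, instance):
        # One self-contained transaction per call -- RPC-like semantics,
        # no transaction state held between calls.
        rows = [("instance", instance.uuid, instance.host)]
        rows += [("fixed_ip", instance.uuid, ip.address)
                 for ip in instance.fixed_ips]
        self._write_in_one_transaction(rows)

    def _write_in_one_transaction(self, rows):
        # Stand-in for the real DB work (SQLAlchemy or otherwise).
        print("BEGIN")
        for row in rows:
            print("INSERT", row)
        print("COMMIT")

storage = Storage()
i = Instance(uuid="abc", host="compute-1")
i.fixed_ips.append(FixedIP("10.0.0.5"))
storage.save(i)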

 And so you've thrown the baby out with the bathwater and made more work
 for
 everyone.


 Perhaps; or perhaps we've avoided a raft of death-by-thousand-cuts
 bugs across the project.


 Could just as easily introduce the same bugs by radically redesigning the
 data access code without first considering all sides of the problem domain.

Totally!

Again, sorry for the tone before, I can only claim a) been burnt in
the past and b) a week or so of reduced sleep thanks to baby :(.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-19 Thread Jay Pipes

On 08/18/2013 10:33 PM, Joe Gordon wrote:

An alternative I think would be better would be to scrap the use of
the SQLAlchemy ORM; keep using the DB engine abstraction support.

+1, I am hoping this will provide noticeable performance benefits while
being agnostic of what DB back-end is being used.  With the way we use
SQLAlchemy being 25x slower than MySQL we have lots of room for
improvement (see http://paste.openstack.org/show/44143/ from
https://bugs.launchpad.net/nova/+bug/1212418).


@require_admin_context
def compute_node_get_all(context):
    return model_query(context, models.ComputeNode).\
        options(joinedload('service')).\
        options(joinedload('stats')).\
        all()

Well, yeah... I suppose if you are attempting to create 115K objects in 
memory in Python (Need to collate each ComputeNode model object and each 
of its relation objects for Service and Stats) you are going to run into 
some performance problems. :)


Would be interesting to see what the performance difference would be if 
you instead had dicts instead of model objects and did something like 
this instead (code not tested, just off top of head...):


# Assume a method to_dict() that takes a Model
# and returns a dict with appropriate empty dicts for
# relationship fields.

qr = session.query(ComputeNode).join(Service).join(Stats)

results = {}

for record in qr:
    node_id = record.ComputeNode.id
    service_id = record.Service.id
    stat_id = record.ComputeNodeStat.id
    if node_id not in results.keys():
        results[node_id] = to_dict(record.ComputeNode)
    if service_id not in results[node_id]['services'].keys():
        results[node_id]['services'][service_id] = to_dict(record.Service)
    if stat_id not in results[node_id]['stats'].keys():
        results[node_id]['stats'][stat_id] = to_dict(record.ComputeNodeStat)

return results

Whether it would be any faster than SQLAlchemy's joinedload...

Besides that, though, probably is a good idea to look at even the 
existence of DB calls that potentially do that kind of massive query 
returning as A Bad Thing...


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-19 Thread Jay Pipes
I'm throwing this up here to get some feedback on something that's 
always bugged me about the model base used in many of the projects.


There's a mixin class that looks like so:

class SoftDeleteMixin(object):
    deleted_at = Column(DateTime)
    deleted = Column(Integer, default=0)

    def soft_delete(self, session=None):
        """Mark this object as deleted."""
        self.deleted = self.id
        self.deleted_at = timeutils.utcnow()
        self.save(session=session)

Once mixed into a concrete model class, the primary join is typically 
modified to include the deleted column, like so:


class ComputeNode(BASE, NovaBase):
    # ...snip...
    service = relationship(Service,
                           backref=backref('compute_node'),
                           foreign_keys=service_id,
                           primaryjoin='and_('
                               'ComputeNode.service_id == Service.id,'
                               'ComputeNode.deleted == 0)')

My proposal is to get rid of the deleted column in the SoftDeleteMixin 
class entirely, as it is redundant with the deleted_at column. Instead 
of doing a join condition on deleted == 0, one would just do the join 
condition on deleted_at is None, which translates to the SQL: AND 
deleted_at IS NULL.
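
A sketch of what that would boil down to, reusing the same Column / DateTime /
timeutils names as the mixin above:

class SoftDeleteMixin(object):
    deleted_at = Column(DateTime)

    def soft_delete(self, session=None):
        """Mark this object as deleted."""
        self.deleted_at = timeutils.utcnow()
        self.save(session=session)

and the primaryjoin above would use the NULL check instead:

    primaryjoin='and_('
        'ComputeNode.service_id == Service.id,'
        'ComputeNode.deleted_at == None)'

which SQLAlchemy renders as "... AND compute_nodes.deleted_at IS NULL".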


There isn't much of a performance benefit -- you're only reducing the 
row size by 4 bytes. But, you'd remove the redundant data from all the 
tables, which would make the normal form freaks like myself happy ;)


Thoughts?

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-19 Thread Jay Pipes
OK, cool. I'm in agreement with your explained storage/logic separation 
below.


Cheers,
-jay

On 08/19/2013 03:12 AM, Robert Collins wrote:

On 19 August 2013 18:35, Jay Pipes jaypi...@gmail.com wrote:


http://www.codinghorror.com/blog/2006/06/object-relational-mapping-is-the-vietnam-of-computer-science.html

There is no proper use of an ORM.



I'm not a super-fan of ORMs, Robert. I'm not sure why you're insisting on
taking me down this road...


Sorry, not sure how we ended up here ;)


All I'm saying is that we should be careful not to swap one set of problems
for another. I say this because I've seen the Nova data-access code develop
from its very earliest days, up to this point. I've seen the horrors of
trying to mask an object approach on top of a non-relational data store,
witnessed numerous attempts to rewrite the way that connection pooling and
session handling is done, and in general just noticed the tension between
the two engineering factions that want to keep things agnostic towards
backend storage and at the same time make the backend storage perform and
scale adequately.


Ah! Ok, completely agree: playing flip-flop on problem sets would be a
poor outcome.


I'm not sure why you are being so aggressive about this topic. I certainly
am not being aggressive about my responses -- just cautioning that the
existing codebase has seen its fair share of refactoring, some of which has
been a failure and had to be reverted. I would hate to jump into a frenzy to
radically change the way that the data access code works in Nova without a
good discussion.


I didn't intend to be aggressive - sorry - super sorry in fact. I've
been burnt by months of effort turning around problem codebases where
the ORM was a significant cause of the problems.



But then I guarantee somebody is gonna spend a bunch of time writing an
object-oriented API to the model objects because the ORM is very useful
for
the data modification part of the DB interaction.



!cite - seriously...



? I give an example below... a cautionary tale if you will, about one
possible consequence of getting rid of the ORM.


I think what I really meant here is 'you say months, but if we're
writing an object-orientated API surely we'd just use one of the
mapping techniques available in SQLAlchemy..'


This strawman is one way that it might be written. Given that a
growing set of our projects have non-SQL backends, this doesn't look
like the obvious way to phrase it to me.



I'm using the SQLAlchemy Core API above, with none of the SQLAlchemy ORM
code... which is (I thought), what you were proposing we do? How is that a
strawman argument? :(


So what is in my head is that we have two layers:
business logic
storage logic

And the thing I don't like about the ORM approach is that our business
logic objects are storage logic objects - even though we don't use
http://martinfowler.com/eaaCatalog/domainModel.html we can easily
trigger late evaluation when traversing collections. In particular
because we have large numbers of developers who are likely going to
not be holding the entire problem domain in their head; the churn that
results on code and design tends to throw things out again and again
over time. And we have IMO too much business logic in the
db/sqlalchemy/api.py files scattered around.

So, what I'd like to see is something where the storage layer and
logic layer are more thoroughly decoupled: only return plain ol Python
objects from the DB layer; but within that layer I wouldn't object to
an ORM being used; secondly I'd like to make sure we don't end up
making business decisions in the storage layer, because that makes it
harder when porting to a different storage layer - such as the nova
conductor is.

So the business logic layer for adding a fixed IP would be something like:
i = business.Instance.find(blah=blah)
ip = business.FixedIP(blah=blah)
i.fixed_ips.append(ip)
storage.save(i)

i and ip would be plain ol python objects
storage.save would have the same semantics as an RPC call - it could
do a transaction itself, but there's no holding transactions between
calls to save.

This is very close to:


instead of this:

i = Instance(blah=blah)
ip = FixedIp(blah=blah)
i.fixed_ips.append(ip)
session.add(i)
session.commit()


But there is no ORM exposed to the developers working with the storage
API - it's contained.


And so you've thrown the baby out with the bathwater and made more work
for
everyone.



Perhaps; or perhaps we've avoided a raft of death-by-thousand-cuts
bugs across the project.



Could just as easily introduce the same bugs by radically redesigning the
data access code without first considering all sides of the problem domain.


Totally!

Again, sorry for the tone before, I can only claim a) been burnt in
the past and b) a week or so of reduced sleep thanks to baby :(.

-Rob




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [oslo.db] Proposal: Get rid of deleted column

2013-08-19 Thread Boris Pavlovic
Hi Jay,


When I started working on unique keys, I tried to use the deleted_at
column, so the answer to why we don't use the deleted_at column can be found
in Devananda's comment on my patch https://review.openstack.org/#/c/16162/ .

Also, I should mention that this is a really huge change and it will take a
lot of time to implement.  E.g. I started working on unique keys at the
beginning of Grizzly and we still haven't finished this work across OpenStack
projects (except Nova, where we have the last patch on review
https://review.openstack.org/#/c/36880/).


Best regards,
Boris Pavlovic
--
Mirantis Inc.







On Mon, Aug 19, 2013 at 11:39 AM, Jay Pipes jaypi...@gmail.com wrote:

 I'm throwing this up here to get some feedback on something that's always
 bugged me about the model base used in many of the projects.

 There's a mixin class that looks like so:

 class SoftDeleteMixin(object):
     deleted_at = Column(DateTime)
     deleted = Column(Integer, default=0)

     def soft_delete(self, session=None):
         """Mark this object as deleted."""
         self.deleted = self.id
         self.deleted_at = timeutils.utcnow()
         self.save(session=session)

 Once mixed in to a concrete model class, the primary join is typically
 modified to include the deleted column, like so:

 class ComputeNode(BASE, NovaBase):
     # ...snip...
     service = relationship(Service,
                            backref=backref('compute_node'),
                            foreign_keys=service_id,
                            primaryjoin='and_('
                                'ComputeNode.service_id == Service.id,'
                                'ComputeNode.deleted == 0)')

 My proposal is to get rid of the deleted column in the SoftDeleteMixin
 class entirely, as it is redundant with the deleted_at column. Instead of
 doing a join condition on deleted == 0, one would instead just do the join
 condition on deleted_at is None, which translates to the SQL: AND
 deleted_at IS NULL.

 There isn't much of a performance benefit -- you're only reducing the row
 size by 4 bytes. But, you'd remove the redundant data from all the tables,
 which would make the normal form freaks like myself happy ;)

 Thoughts?

 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-19 Thread Flavio Percoco

On 18/08/13 18:47 -0400, Jay Pipes wrote:

On 08/18/2013 06:28 PM, Joe Gordon wrote:


On Aug 18, 2013 3:58 PM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:

 On 08/18/2013 03:53 AM, Joshua Harlow wrote:

 I always just liked SQL as the database abstraction layer ;)

 On a more serious note I think novas new object model might be a way
to go but in all honesty there won't be a one size fits all solution. I
just don't think sqlalchemy is that solution personally (maybe if we
just use sqlalchemy core it will be better and eject just the orm layer).


 What is specifically wrong with SQLAlchemy's ORM layer? What would
you replace it with? Why would use SQLAlchemy's core be better?

 I've seen little evidence that SQLAlchemy's ORM layer is the cause
for database performance problems. Rather, I've found that the database
schemas in use -- and in some cases, the *way* that the SQLAlchemy ORM
is called (for example, doing correlated subqueries instead of straight
joins) -- are primary causes for database performance issues.

From what I have seen the issue is both the queries and the ORM layer.
See https://bugs.launchpad.net/nova/+bug/1212418  for details.


Good point.

For the record, I'm not a fan of lazy/eager loading of relations in 
the models themselves, but instead always being explicit about the 
exact data you wish to query for.


It's similar in nature to the SQL best practice of never doing SELECT 
* FROM a table and instead always being explicit about the columns 
you wish to retrieve...




+1

I've seen a couple of cases where this is not being taken under
consideration. I'd like to see some of the lazy loaded relations being
explicitly loaded. 


I think a good rule for this is:

If you know you'll need it, then load it. If you don't know it, then
you're *probably* doing something wrong.
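
As a small, self-contained sketch of that (the glance-ish model names here are
made up; the point is the explicit eager load, not the schema):

from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import joinedload, relationship, sessionmaker

Base = declarative_base()

class Image(Base):
    __tablename__ = 'images'
    id = Column(Integer, primary_key=True)
    members = relationship('ImageMember')

class ImageMember(Base):
    __tablename__ = 'image_members'
    id = Column(Integer, primary_key=True)
    image_id = Column(Integer, ForeignKey('images.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# We know we need the members, so load them explicitly in one query instead
# of relying on a lazy SELECT being fired per image later on.
images = session.query(Image).options(joinedload('members')).all()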

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-19 Thread Boris Pavlovic
Flavio,

Agreed. I'd also like to see other project migrated before pulling
oslo.db out from oslo-incubator


As I wrote before, the oslo.db code is used by Nova, Neutron, Cinder, Ironic,
and Ceilometer. And we already have patches to switch Glance to
it. And we are working on Keystone and Heat.

Why are (Nova, Neutron, Cinder, Ironic and Ceilometer) not enough to say
that the code is OK?


I'd also add that pulling oslo db into its own package means that
projects using Oslo's db code have to be migrated as well. I think
it's a bit late for that. The focus should be on making sure current
code is stable enough for the not-so-far release.


Hm, I really don't see big problems with migrating to the oslo.db lib even at
this moment, because oslo-incubator and oslo.db contain the same code.
Could you explain what problem you see in the migration process? (For me it
is: add one more requirement, remove the openstack/db/.. folder, switch
imports.)



On Mon, Aug 19, 2013 at 1:29 PM, Flavio Percoco fla...@redhat.com wrote:

 On 19/08/13 00:34 -0700, Gary Kotton wrote:

 Hi,

 I have a number of things to say here:

 1.   Great work in getting the DB into the common and ironing out the
 issues


 +1


  2.   As far as I know only Neutron and Nova are making use of the
 common DB
 code. Neutron has been using this since the beginning of H2 (this did not
 resolve all of the issues that we had) and Nova has just
 recently
 upgraded to the latest DB code (this was a few weeks ago).

 3.In general I like the idea of having a separate lib for this
 but have
 a number of reservations regarding the timing and stability:

 a.   I do not think that this has been running long enough in Neutron
 and
 Nova for us to give it a stamp of approval (the common CFG code was at
 least
 one cycle as common code prior to moving into its own lib). I think that
 in
 Neutron we still have a number of issues with load on the DB. I need to
 double
 check on this.


 Agreed. I'd also like to see other project migrated before pulling
 oslo.db out from oslo-incubator



 b.  I think that the beginning of Icehouse is a good time. When we
 moved to
 the CFG library there were a number of hiccups and issues along the way. I
 think that Mark (oslo PTL) can elaborate a little more on this. Timing is
 essential.


 +1

 I'd also add that pulling oslo db into its own package means that
 projects using Oslo's db code have to be migrated as well. I think
 it's a bit late for that. The focus should be on making sure current
 code is stable enough for the not-so-far release.

 Thanks for the hard work!
 FF

 --
 @flaper87
 Flavio Percoco


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Can I Expose the host-aggregate as availability zone?

2013-08-19 Thread sudheesh sk
Hi,

1) Can I expose
the host-aggregate as an availability zone?


2) Is there any way to make the Host Aggregate grow dynamically (with each VM 
creation, add the host to the host aggregate if it's not already there)?

Thanks,
Sudheesh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Help consuming trusts

2013-08-19 Thread Steven Hardy
On Sun, Aug 18, 2013 at 07:02:04PM +0200, Matthieu Huin wrote:
 Hi Steve,
 
 It might be a bit late for this, but here's a script I wrote when 
 experimenting with trusts: 
 https://github.com/mhuin/keystone_trust/blob/master/tests/swift_example.sh
 
 I hope it'll help you.

Thanks for this!!

Exactly what I was looking for and has enabled me to solve my problem (my test 
code was broken).

I've marked this bug invalid:

https://bugs.launchpad.net/keystone/+bug/1213340

Interestingly, my debugging has highlighted a slightly non-obvious issue with
the creation and consumption of a trust which is probably worth mentioning here:

The docs state "A project_id may not be specified without at least one role,
and vice versa.", however /OS-TRUST/trusts *does* allow you to create a trust
with an empty roles list.

This results in 401 responses whenever you try to consume the trust, which is
not exactly obvious until you realize what's happening..

Can I ask if this is deliberate, or is it a bug in the trusts create code?

It seems odd to allow creation of a trust which is seemingly useless and can
never be consumed?
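
For illustration, a rough sketch of the create call -- the IDs, endpoint and
token below are placeholders, and the exact body layout is an assumption here:

import json
import requests

body = {"trust": {
    "trustor_user_id": "TRUSTOR_ID",
    "trustee_user_id": "TRUSTEE_ID",
    "project_id": "PROJECT_ID",
    "impersonation": True,
    # Leave this list empty and the trust is created anyway -- but then
    # every attempt to consume it comes back as a 401.
    "roles": [{"name": "Member"}],
}}

resp = requests.post("http://keystone:5000/v3/OS-TRUST/trusts",
                     headers={"X-Auth-Token": "TRUSTOR_TOKEN",
                              "Content-Type": "application/json"},
                     data=json.dumps(body))
print(resp.status_code, resp.text)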

Thanks all for your help working through this!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-19 Thread Flavio Percoco

On 19/08/13 04:33 -0700, Gary Kotton wrote:
So do you agree with the next points? 


1) In Havana focus on migrating in all projects to oslo.db code


[Gary Kotton] It is worth going for.


+1



2) in IceHouse create and move to oslo.db lib

[Gary Kotton] I am in favor of this pending the stability of the oslo db code
(which is on the right track)




I agree with Gary.

And do you agree that we should start working on the oslo.db lib now? 



[Gary Kotton] I am not sure what the effort for this is, but if this is just a
matter of preparing it all for the start of Icehouse then cool, go for it. I
nonetheless suggest speaking with Mark McLoughlin to try and learn lessons from
the process with the common config module :)



+1 here as well

FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-19 Thread Boris Pavlovic
Flavio,

So could you review please patches in Glance? =)


Best regards,
Boris Pavlovic
--
Mirantis Inc.



On Mon, Aug 19, 2013 at 4:33 PM, Flavio Percoco fla...@redhat.com wrote:

 On 19/08/13 04:33 -0700, Gary Kotton wrote:

 So do you agree with the next points?
 1) In Havana focus on migrating in all projects to oslo.db code


 [Gary Kotton] It is worth going for.


 +1



 2) in IceHouse create and move to oslo.db lib

 [Gary Kotton] I am in favor of this pending the stability of the oslo db
 code
 (which is on the right track)



 I agree with Gary.


  And do you agree that we should start working on the oslo.db lib now?

 [Gary Kotton] I am not sure what the effort for this is, but if this is
 just a
 matter of preparing it all for the start of Icehouse then cool, go for
 it. I
 nonetheless suggest speaking with Mark McLoughlin to try and learn lessons
 from
 the process with the common config module :)


 +1 here as well


 FF

 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal oslo.db lib

2013-08-19 Thread Flavio Percoco

On 19/08/13 16:45 +0400, Boris Pavlovic wrote:
Flavio, 


So could you review please patches in Glance? =)



Yes, I'll sync with Mark and other folks to make sure all doubts are
cleared.


--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-19 Thread Sandy Walsh


On 08/18/2013 04:04 PM, Jay Pipes wrote:
 On 08/17/2013 03:10 AM, Julien Danjou wrote:
 On Fri, Aug 16 2013, Jay Pipes wrote:

 Actually, that's the opposite of what I'm suggesting :) I'm suggesting
 getting rid of the resource_metadata column in the meter table and
 using the
 resource table in joins...

 I think there's a lot of scenario where this would fail, like for
 example instances being resized; the flavor is a metadata.
 
 I'm proposing that in these cases, a *new* resource would be added to
 the resource table (and its ID inserted in meter) table with the new
 flavor/instance's metadata.
 
 Though, changing the schema to improve performance is a good one, this
 needs to be thought from the sample sending to the storage, through the
 whole chain. This is something that will break a lot of current
 assumption; that doesn't mean it's bad or we can't do it, just that we
 need to think it through. :)
 
 Yup, understood completely. The change I am proposing would not affect
 any assumptions made from the point of view of a sample sent to storage.
 The current assumption is that a sample's *exact* state at time of
 sampling would be stored so that the exact sample state could be
 reflected even if the underlying resource that triggered the sample
 changed over time.
 
 All I am proposing is a change to the existing implementation of that
 assumption: instead of storing the original resource metadata in the
 meter table, we instead ensure that we store the resource in the
 resource table, and upon new sample records being inserted into the
 meter table, we check to see if the resource for the sample is the same
 as it was last time. If it is, we simply insert the resource ID from
 last time. If it isn't, we add a new record to the resource table that
 describes the new resource attributes, and we insert that new resource
 ID into the meter table for that sample...
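
A rough sketch of that insert-path check (the table layout and helper names
are made up; "same resource" here means the canonicalized metadata matches):

import json

resources = {}      # (resource_uuid, canonical_metadata) -> resource row id
next_id = [1]

def resource_id_for(resource_uuid, metadata):
    # Reuse the existing resource row if nothing changed, otherwise
    # insert a new row and hand back its id for the meter record.
    key = (resource_uuid, json.dumps(metadata, sort_keys=True))
    if key not in resources:
        resources[key] = next_id[0]   # stand-in for INSERT INTO resource ...
        next_id[0] += 1
    return resources[key]

print(resource_id_for("instance-uuid", {"flavor": "m1.small"}))   # 1
print(resource_id_for("instance-uuid", {"flavor": "m1.small"}))   # 1 (reused)
print(resource_id_for("instance-uuid", {"flavor": "m1.large"}))   # 2 (new row)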

I'm assuming we wouldn't need a backlink to the older resource?

I'm thinking about how this would work with Events and Request IDs. The
two most common reports we run from StackTach are based on Request ID
and some resource ID.

Show me all the events related to this Request UUID
Show me all the events related to this Instance/Image/Network/etc UUID

A new Resource entry would be fine so long as it was still associated
with the underlying Resource UUID (instance, image, etc). We could get
back a list of all the Resources with the same UUID and, if needed,
lookup the metadata for it. This would allow us to see how to the
resource changed over time.

I think that's what you're suggesting ... if so, yep.

As for the first query ... for this Request ID, we'd have to map an Event to
many related Resources since one event could have a related
instance/image/network/volume/host/scheduler, etc.

These relationships would have to get mapped when the Event is turned
into Meters. Changing the Resource ID might not be a problem if we keep
a common Resource UUID. I have to think about that some more.

Would we use timestamps to determine which Resource is the most recent?


-S




 Best,
 -jay
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Concerning get_resources/get_meters and the Ceilometer API

2013-08-19 Thread Sandy Walsh


On 08/19/2013 09:40 AM, Julien Danjou wrote:
 On Mon, Aug 19 2013, Sandy Walsh wrote:
 
 On 08/19/2013 05:08 AM, Julien Danjou wrote:
 On Sun, Aug 18 2013, Jay Pipes wrote:

 I'm proposing that in these cases, a *new* resource would be added to the
 resource table (and its ID inserted in meter) table with the new
 flavor/instance's metadata.

 Ah I see. Considering we're storing metadata as a serialized string
 (whereas it's a dict), isn't there a chance we fail?
 I'm not sure about the idempotence of the JSON serialization on dicts.

 Yeah, using a json blob should only be for immutable data. I'm assuming
 metadata can change so we'd need idempotence. I could easily see two
 pipelines altering metadata fields. Last write wins. :(
 
 No, actually I'm not worried about that, it would work as described by
 Jay. It's just that I'm not sure that we can assert json.dumps(somedict)
 returns always the same string.
 


Gotcha. I think that's fine though. We're going to get that with a
frequently changing resource. So long as we have a common UUID (from the
source system), the database ID can change all it wants.
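
A tiny illustration of that point, and of one way to sidestep it by sorting
keys to get a canonical string for comparison:

import json

a = {"flavor": "m1.small", "image": "cirros"}
b = {"image": "cirros", "flavor": "m1.small"}

# Plain dumps follows whatever key order the dict happens to have, so two
# equal dicts are not guaranteed to serialize to the same string.
print(json.dumps(a) == json.dumps(b))

# Sorting the keys gives a canonical form that is safe to compare or hash.
print(json.dumps(a, sort_keys=True) == json.dumps(b, sort_keys=True))  # True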

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Help consuming trusts

2013-08-19 Thread Dolph Mathews
On Mon, Aug 19, 2013 at 6:06 AM, Steven Hardy sha...@redhat.com wrote:

 On Sun, Aug 18, 2013 at 07:02:04PM +0200, Matthieu Huin wrote:
  Hi Steve,
 
  It might be a bit late for this, but here's a script I wrote when
 experimenting with trusts:
 https://github.com/mhuin/keystone_trust/blob/master/tests/swift_example.sh
 
  I hope it'll help you.

 Thanks for this!!

 Exactly what I was looking for and has enabled me to solve my problem (my
 test code was broken).

 I've marked this bug invalid:

 https://bugs.launchpad.net/keystone/+bug/1213340

 Interestingly, my debugging has highlighted a slightly non-obvious issue
 with
 the creation and consumption of a trust which is probably worth mentioning
 here:

 The docs state A project_id may not be specified without at least one
 role,
 and vice versa., however /OS-TRUST/trusts *does* allow you to create a
 trust
 with an empty roles list.

 This results in 401 responses whenever you try to consume the trust, which
 is
 not exactly obvious until you realize what's happening..

 Can I ask if this is deliberate, or is it a bug in the trusts create code?


That certainly sounds like a bug, given that it directly conflicts with the
documented behavior.



 It seems odd to allow creation of a trust which is seemingly useless and
 can
 never be consumed?


++



 Thanks all for your help working through this!

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] General Question about CentOS

2013-08-19 Thread Jay Buffington
On Fri, Aug 16, 2013 at 2:51 PM, Miller, Mark M (EB SW Cloud - RD -
Corvallis) mark.m.mil...@hp.com wrote:

   Is OpenStack supported on CentOS running Python 2.6?


Oh, I forgot to mention, keystone's py2.6 support seems to currently be
broken because of this bug:
https://bugs.launchpad.net/keystone/+bug/1213284/

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Stats on blueprint design info / creation times

2013-08-19 Thread Daniel P. Berrange
In this thread about code review:

  http://lists.openstack.org/pipermail/openstack-dev/2013-August/013701.html

I mentioned that I thought there were too many blueprints created without
sufficient supporting design information and were being used for tickbox
process compliance only. I based this assertion on a gut feeling I have
from experience in reviewing.

To try and get a handle on whether there is truly a problem, I used the
launchpadlib API to extract some data on blueprints [1].

In particular I was interested in seeing:

  - What portion of blueprints have a URL linking to an associated
design doc,

  - How long the descriptive text was in typical blueprints

  - Whether a blueprint was created before or after the dev period
started for that major release.


The first two items are easy to get data on. On the second point, I redid
line wrapping on description text to normalize the line count across all
blueprints. This is because many blueprints had all their text on one
giant long line, which would skew results. I thus wrapped all blueprints
at 70 characters.
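
A sketch of that normalization step (the real script is linked at [1]; the
helper here is only illustrative):

import textwrap

def blueprint_stats(description, url):
    # Re-wrap at 70 columns so a blueprint written as one giant line is
    # counted on the same footing as one that was already wrapped.
    wrapped = textwrap.fill(description or "", width=70)
    return {"lines": len(wrapped.splitlines()),
            "words": len(wrapped.split()),
            "has_url": bool(url)}

print(blueprint_stats("Add support for live snapshots of running guests. " * 5,
                      None))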

The blueprint creation date vs release cycle dev start date is a little
harder. I inferred the start date of each release, by using the end date
of the previous release. This is probably a little out but hopefully not
by enough to totally invalidate the usefulness of the stats below. Below,
"Early" means created before the start of devel, "Late" means created after
the start of the devel period.

The data for the last 3 releases is:

  Series: folsom
Specs: 178
Specs (no URL): 144
Specs (w/ URL): 34
Specs (Early): 38
Specs (Late): 140
Average lines: 5
Average words: 55


  Series: grizzly
Specs: 227
Specs (no URL): 175
Specs (w/ URL): 52
Specs (Early): 42
Specs (Late): 185
Average lines: 5
Average words: 56


  Series: havana
Specs: 415
Specs (no URL): 336
Specs (w/ URL): 79
Specs (Early): 117
Specs (Late): 298
Average lines: 6
Average words: 68


Looking at this data there are 4 key takeaway points:

  - We're creating more blueprints in every release.

  - Less than 1 in 4 blueprints has a link to a design document. 

  - The description text for blueprints is consistently short
(6 lines) across releases.

  - Less than 1 in 4 blueprints is created before the devel
period starts for a release.


You can view the full data set + the script to generate the
data which you can look at to see if I made any logic mistakes:

  http://berrange.fedorapeople.org/openstack-blueprints/


There's only so much you can infer from stats like this, but IMHO the
stats show that we ought to think about how well we are using blueprints as
design / feature approval / planning tools.


That 3 in 4 blueprints lack any link to a design doc and have only 6 lines of
text description is a cause for concern IMHO. The blueprints should be giving
code reviewers useful background on the motivation of the dev work and any
design planning that took place. While there are no doubt some simple features
where 6 lines of text is sufficient info in the blueprint, I don't think that
holds true for the majority.

In addition to helping code reviewers, the blueprints are also arguably a
source of info for QA people testing OpenStack and for the docs teams
documenting new features in each release. I'm not convinced that there is
enough info in many of the blueprints to be of use to QA / docs people.


The creation dates of the blueprints are also an interesting data point.
If the design summit is our place for reviewing blueprints and 3 in 4
blueprints in a release are created after the summit, that's a lot of
blueprints potentially missing summit discussions. On the other hand many
blueprints will have corresponding discussions on mailing lists too,
which is arguably just as good, or even better than, summit discussions.

Based on the creation dates and terseness of design info, though, I think
there is a valid concern here that blueprints are being created just for
reasons of tickbox process compliance.

In theory we have an approval process for blueprints, but are we ever
rejecting code submissions for blueprints which are not yet approved ?
I've only noticed that happen a couple of times in Nova for things that
were pretty clearly controversial.

I don't intend to suggest that we have strict rules that all blueprints
must be min X lines of text, or be created by date Y. It is important
to keep the flexibility there to avoid development being drowned in
process without benefits.

I do think we have scope for being more rigorous in our review of
blueprints, asking people to expand on the design info associated with
a blueprint. Perhaps also require that a blueprint is actually approved
by the core team before we go to the trouble of reviewing and approving
the code implementing a blueprint in Gerrit.

Regards,
Daniel

[1] http://berrange.fedorapeople.org/openstack-blueprints/blueprint.py
-- 

Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-08-19 Thread Christopher Armstrong
On Fri, Aug 16, 2013 at 1:35 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Zane Bitter's message of 2013-08-16 09:36:23 -0700:
  On 16/08/13 00:50, Christopher Armstrong wrote:
   *Introduction and Requirements*
  
   So there's kind of a perfect storm happening around autoscaling in Heat
   right now. It's making it really hard to figure out how I should
 compose
   this email. There are a lot of different requirements, a lot of
   different cool ideas, and a lot of projects that want to take advantage
   of autoscaling in one way or another: Trove, OpenShift, TripleO, just
 to
   name a few...
  
   I'll try to list the requirements from various people/projects that may
   be relevant to autoscaling or scaling in general.
  
   1. Some users want a service like Amazon's Auto Scaling or Rackspace's
   Otter -- a simple API that doesn't really involve orchestration.
   2. If such a API exists, it makes sense for Heat to take advantage of
   its functionality instead of reimplementing it.
 
  +1, obviously. But the other half of the story is that the API is likely
  be implemented using Heat on the back end, amongst other reasons because
  that implementation already exists. (As you know, since you wrote it ;)
 
  So, just as we will have an RDS resource in Heat that calls Trove, and
  Trove will use Heat for orchestration:
 
  user => [Heat =>] Trove => Heat => Nova
 
  there will be a similar workflow for Autoscaling:
 
  user => [Heat =>] Autoscaling -> Heat => Nova
 

 After a lot of consideration and an interesting IRC discussion, I think
 the point above makes it clear for me. Autoscaling will have a simpler
 implementation by making use of Heat's orchestration capabilities,
 but the fact that Heat will also use autoscaling is orthogonal to that.

 That does beg the question of why this belongs in Heat. Originally
 we had taken the stance that there must be only one control system,
 lest they have a policy-based battle royale. If we only ever let
 autoscaled resources be controlled via Heat (via nested stack produced
 by autoscaling), then there can be only one.. control service (Heat).

 By enforcing that autoscaling always talks to the world via Heat though,
 I think that reaffirms for me that autoscaling, while not really the same
 project (seems like it could happily live in its own code tree), will
 be best served by staying inside the OpenStack Orchestration program.

 The question of private RPC or driving it via the API is not all that
 interesting to me. I do prefer the SOA method and having things talk via
 their respective public APIs as it keeps things loosely coupled and thus
 easier to fit into one's brain and debug/change.


I agree with using only public APIs. I have managed to fit this model of
autoscaling managing a completely independent Heat stack into my brain, and
I am willing to take it and run with it.

Thanks to Zane and Clint for hashing this out with me in a 2-hour IRC
design discussion, it was incredibly helpful :-)

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stats on blueprint design info / creation times

2013-08-19 Thread Thierry Carrez
Daniel P. Berrange wrote:
 In this thread about code review:
 
   http://lists.openstack.org/pipermail/openstack-dev/2013-August/013701.html
 
 I mentioned that I thought there were too many blueprints created without
 sufficient supporting design information and were being used for tickbox
 process compliance only. I based this assertion on a gut feeling I have
 from experiance in reviewing.
 [...]

Nice analysis, Daniel.

One side of this issue is that the blueprints tool no longer matches our
needs (can't have a blueprint that affects multiple projects, can't
discuss in blueprints the same way we do with bugs...).

So I suspect part of the tickbox effect is due to people not getting
enough value from blueprints. They are essential for project management
types (think PTLs or me), but feel like a process tickbox for everyone
else. I hope that StoryBoard will one day fix that for us.

 I do think we have scope for being more rigourous in our review of
 blueprints, asking people to expand on the design info associated with
 a blueprint. Perhaps also require that a blueprint is actually approved
 by the core team before we go to the trouble of reviewing  approving
 the code implementing a blueprint in Gerrit.

The approval process has been simplified lately: if a blueprint is
targeted to a milestone and has a priority set (not Undefined) then it
is considered approved. I agree you could require that the blueprint was
reviewed/prioritized before landing a feature associated with it.

Note that in some cases, some improvements that do not clearly fall
into the bug category are landed without a blueprint link (or a bug
link). So a first step could be to require that a review always
references a bug or a blueprint before it's landed. Then, improve the
quality of the information present in said bug/blueprint.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [ceilometer] Periodic Auditing In Glance

2013-08-19 Thread Sandy Walsh


On 08/16/2013 04:58 PM, Doug Hellmann wrote:
 The notification messages don't translate 1:1 to database records. Even
 if the notification payload includes multiple resources, we will store
 those as multiple individual records so we can query against them. So it
 seems like sending individual notifications would let us distribute the
 load of processing the notifications across several collector instances,
 and won't have any effect on the data storage requirements.


Well, they would. Each .exists would result in an Event record being
stored with the underlying raw json (pretty big) at the very least.

Like Alex said, if each customer has a daily week-long rolling backup
over 100k instances that's 700k .exists records.

We have some ideas we're kicking around internally about alternative
approaches, but right now I think we have to design for 1 glance .exists
per image (worst case) or 1 glance event per tenant (better case) and
hope that deploying per-cell will help spread the load ... but it'll
suck for making aggregated reports per region.

Phil, like Doug said, I don't think switching from per-instance to
per-tenant or anything else will really affect the end result. The
event-meter mapping will have to break it down anyway.


-S

 
 Doug
 
 
 On Thu, Aug 15, 2013 at 11:58 AM, Alex Meade alex.me...@rackspace.com
 mailto:alex.me...@rackspace.com wrote:
 
 I don't know any actual numbers but I would have the concern that
 images tend to stick around longer than instances. For example, if
 someone takes daily snapshots of their server and keeps them around
 for a long time, the number of exists events would go up and up.
 
 Just a thought, could be a valid avenue of concern.
 
 -Alex
 
 -Original Message-
 From: Doug Hellmann doug.hellm...@dreamhost.com
 mailto:doug.hellm...@dreamhost.com
 Sent: Thursday, August 15, 2013 11:17am
 To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 mailto:openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance] [ceilometer] Periodic Auditing
 In Glance
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 Nova generates a single exists event for each instance, and that doesn't
 cause a lot of trouble as far as I've been able to see.
 
 What is the relative number of images compared to instances in a
 typical
 cloud?
 
 Doug
 
 
 On Tue, Aug 13, 2013 at 7:20 PM, Neal, Phil phil.n...@hp.com
 mailto:phil.n...@hp.com wrote:
 
  I'm a little concerned that a batch payload won't align with exists
  events generated from other services. To my recollection, Cinder,
 Trove and
  Neutron all emit exists events on a per-instance basis... a
 consumer would
  have to figure out a way to handle/unpack these separately if they
 needed a
  granular feed. Not the end of the world, I suppose, but a bit
 inconsistent.
 
  And a minor quibble: batching would also make it a much bigger
 issue if a
   consumer missed a notification... though I guess you could
 counteract that
  by increasing the frequency (but wouldn't that defeat the purpose?)
 
  
  
  
   On 08/13/2013 04:35 PM, Andrew Melton wrote:
I'm just concerned with the type of notification you'd send.
 It has to
be enough fine grained so we don't lose too much information.
   
It's a tough situation, sending out an image.exists for each
 image with
the same payload as say image.upload would likely create TONS of
  traffic.
Personally, I'm thinking about a batch payload, with a bare
 minimum of
  the
following values:
   
'payload': [{'id': 'uuid1', 'owner': 'tenant1', 'created_at':
'some_date', 'size': 1},
   {'id': 'uuid2', 'owner': 'tenant2', 'created_at':
'some_date', 'deleted_at': 'some_other_date', 'size': 2}]
   
 That way the audit job/task could be configured to emit in batches, and
 a deployer could tweak the settings so as not to emit too many messages.
I definitely welcome other ideas as well.
  
   Would it be better to group by tenant vs. image?
  
   One .exists per tenant that contains all the images owned by
 that tenant?
  
   -S
  
  
Thanks,
Andrew Melton
   
   
 On Tue, Aug 13, 2013 at 4:27 AM, Julien Danjou jul...@danjou.info wrote:
   
On Mon, Aug 12 2013, Andrew Melton wrote:
   
 So, my question to the Ceilometer community is this,
 does this
  

Re: [openstack-dev] Code review study

2013-08-19 Thread Jay Buffington
On Wed, Aug 14, 2013 at 7:12 PM, Robert Collins robe...@robertcollins.net wrote:

 Note specifically the citation of 200-400 lines as the knee of the review
 effectiveness curve: that's lower than I thought - I thought 200 was
 clearly fine - but no.


This is really interesting. I wish they had explicitly defined
"lines of code". Is that git show | wc -l? Just the new lines that
were added? The sum of the lines changed, removed, and added? You can
get vastly different numbers depending on how you count it.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live-snapshot/cloning of virtual machines

2013-08-19 Thread Shawn Hartsock
For what it's worth... this doesn't seem too bad to me...

I was planning on using this part of the vSphere API:
* 
https://www.vmware.com/support/developer/vc-sdk/visdk400pubs/ReferenceGuide/vim.vm.CloneSpec.html

...to accomplish the clone part of the BP. The API contains a spec section 
where you tell the ESX hypervisor how to handle things like network identity...
* 
https://www.vmware.com/support/developer/vc-sdk/visdk400pubs/ReferenceGuide/vim.vm.customization.IPSettings.html

... I was going to use this to plumb together the live-snapshot bit ...
* 
https://www.vmware.com/support/developer/vc-sdk/visdk400pubs/ReferenceGuide/vim.VirtualMachine.html#createSnapshot

... which includes stuff about how to handle memory.

So, I didn't read this blueprint as especially hard to accomplish in the 
vmwareapi driver. It would just eat more time than I have right now and would 
require a deeper level of understanding of how the vSphere hypervisor suite 
works than most of the other features currently require. I'm fully planning on 
playing with this in Icehouse just to see how it goes; it's probably one of 
the more nifty new features we could add.

Note: these are old features of the API and they are a tad complicated, but 
it's all well within the realm of supported! In fact, in some vSphere shops it's 
standard operating procedure to use the clone feature to scale out an 
application. (Albeit, in production the admins I know personally use clone with 
power-off from a 'template', and the system comes up with a new MAC etc. on 
first boot... cloning from a running system is possible, but I would recommend 
cloning from a powered-off state unless you've got a hot-plug feature in your 
guest OS.)
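
(For anyone curious what those calls look like from Python, here is a very
rough sketch. It uses the pyVmomi-style bindings purely for illustration; this
is not how the nova vmwareapi driver is wired up, and source_vm, folder and
pool are assumed to have been looked up already.)

from pyVmomi import vim

def live_clone_and_snapshot(source_vm, folder, pool, name):
    # Clone spec: where the clone lands and whether it powers on.
    relocate = vim.vm.RelocateSpec(pool=pool)
    clone_spec = vim.vm.CloneSpec(location=relocate, powerOn=True,
                                  template=False)
    clone_task = source_vm.CloneVM_Task(folder=folder, name=name,
                                        spec=clone_spec)

    # memory=True captures the running state of the guest in the snapshot.
    snap_task = source_vm.CreateSnapshot_Task(name='%s-snap' % name,
                                              description='live snapshot',
                                              memory=True, quiesce=False)
    return clone_task, snap_task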



# Shawn Hartsock

- Original Message -
 From: Daniel P. Berrange berra...@redhat.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Monday, August 19, 2013 5:24:59 AM
 Subject: Re: [openstack-dev] [nova] live-snapshot/cloning of virtual machines
 
 On Mon, Aug 19, 2013 at 08:28:58AM +1200, Robert Collins wrote:
  On 17 August 2013 07:01, Russell Bryant rbry...@redhat.com wrote:
  
   Maybe we've grown up to the point where we have to be more careful and
   not introduce
   these kind of features and the maintenance cost of introducing
   experimental features is
   too great. If that is the community consensus, then I'm happy keep the
   live snapshot stuff
   in a branch on github for people to experiment with.
  
   My feeling after following this discussion is that it's probably best to
   keep baking in another branch (github or whatever).  The biggest reason
   is because of the last comment quoted from Daniel Berrange above.  I
   feel that like that is a pretty big deal.
  
  So, reading between the lines here, I guess you're worried that we'd
  let code paths that violate what upstream will support leak into the
  main codepaths for libvirt - and thus we'd end up with a situation
  where we aren't supported by upstream for all regular operations.
 
 Yes, if you perform a live clone of a VM, then you have effectively
 tainted that VM for the rest of its lifetime. From the virt host
 vendors' POV, any unexpected or problematic behaviour you get from
 that VM thereafter will be outside scope of support. The onus would
 be on the openstack sysadmin to demonstrate that the same problem
 occurs without the use of live cloning.
 
 Running a production cloud using a feature that your virt host
 vendor considers unsupported would be somewhat reckless IMHO, unless
 you think your sysadmins have the skills to solve all
 possible problems in that area themselves, which is unlikely for most
 cloud vendors.
 
 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-19 Thread John Bresnahan
 All I'm saying is that we should be careful not to swap one set of
 problems for another. 

My 2 cents: I am in agreement with Jay. I am leery of NoSQL being a
direct sub-in, and I fear that this effort could add a large workload
for little benefit.

A somewhat related post:
http://www.joelonsoftware.com/articles/fog69.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] tarballs of savanna-extra

2013-08-19 Thread Sergey Lukjanov
Hi Matt,

it is not an accident that savanna-extra has no tarballs at tarballs.o.o, 
because this repo is used for storing data that is only needed for things 
like building images for the vanilla plugin, storing the Swift support patch for 
Hadoop, etc. So it looks like we should not package all of it into one 
heterogeneous tarball.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Aug 20, 2013, at 0:25, Matthew Farrellee m...@redhat.com wrote:

 Will someone set up a tarballs.os.o release of savanna-extra's master 
 (https://github.com/stackforge/savanna-extra), and make sure it gets an 
 official release for 0.3?
 
 Best,
 
 
 matt
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday August 20th at 19:00 UTC

2013-08-19 Thread Elizabeth Krumbach Joseph
The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday August 20th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.


-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-19 Thread Boris Pavlovic
Mark,

But for a variety of reasons, I do not consider the general thrust of "use
oslo db code" to be approved. Instead, let's continue to consider features
from oslo db on a case by case basis, and see what the right resolution is
in each case.

Absolutely agree with this point (e.g. we removed shadow tables from our
roadmap after some discussion in other threads).
So we are planning to make all changes using our common approach called
"baby steps" (not one giant patch set).

Btw, I answered your question about the changed conf parameter in the review
(I mean sql_connection to database.connection).
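
(For readers following along, the rename in question is roughly the following;
this is just a hedged sketch using oslo.config, and the default value here is
made up:)

from oslo.config import cfg

CONF = cfg.CONF
# With the oslo db code, the connection string lives under
# [database]/connection rather than the old [DEFAULT]/sql_connection
# (the oslo code keeps the old name as a deprecated alias for a while).
CONF.register_opts([
    cfg.StrOpt('connection',
               default='sqlite:///glance.sqlite',
               secret=True),
], group='database')

print(CONF.database.connection)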


Best regards,
Boris Pavlovic
---
Mirantis Inc.



On Mon, Aug 19, 2013 at 9:33 PM, Mark Washenberger 
mark.washenber...@markwash.net wrote:

 Thanks for refocusing the discussion on your original questions!

 Also thanks for this additional summary. I consider the patches you have
 up for review in glance to have a general direction-level green light at
 this point (though I've got a question on the specifics in the ultimate
 review).

 But for a variety of reasons, I do not consider the general thrust of "use
 oslo db code" to be approved. Instead, let's continue to consider features
 from oslo db on a case by case basis, and see what the right resolution is
 in each case.

 Thanks for your patience and forbearance, hopefully getting in the patches
 you have submitted now will help unblock progress for your team.

 On Mon, Aug 19, 2013 at 3:49 AM, Boris Pavlovic bo...@pavlovic.me wrote:

 Mark,

 Main part of oslo is:
 1) common migration testing
 2) common sqla.models
 3) common hacks around sqla and sqla-migrate
 4) common work around engines and sessions


 All these points are implemented in Glance almost in the same way as in
 Oslo.
 Also, we are able to use only part of this code in Glance, and add some
 other glance-related things on top of this code.

 Our current 2 patches on review do the following:
 1) Copy paste oslo.db code into glance
 2) Use sqla session/engine/exception wrappers
 3) Remove Glance code that covers session/engine/exception

 So I really don't see anything bad in this code:
 1) If you would like to implement other backends => this change won't
 block it
 2) If you would like to make some other sqla utilities or glance-related
 things => this change won't block it
 3) If there are bugs => fix them in oslo and sync => this change won't
 block it

  So I really don't see any reason to block the work on migrating to
 oslo.db code in Glance.


 Best regards,
 Boris Pavlovic
 ---
 Mirantis Inc.




 On Fri, Aug 16, 2013 at 10:41 PM, Mark Washenberger 
 mark.washenber...@markwash.net wrote:

 I would prefer to pick and choose which parts of oslo common db code to
 reuse in glance. Most parts there look great and very useful. However, some
 parts seem like they would conflict with several goals we have.

 1) To improve code sanity, we need to break away from the idea of having
 one giant db api interface
 2) We need to improve our position with respect to new, non SQL drivers
 - mostly, we need to focus first on removing business logic
 (especially authz) from database driver code
 - we also need to break away from the strict functional interface,
 because it limits our ability to express query filters and tends to lump
 all filter handling for a given function into a single code block (which
 ends up being defect-rich and confusing as hell to reimplement)
 3) It is unfortunate, but I must admit that Glance's code in general is
 pretty heavily coupled to the database code and in particular the schema.
 Basically the only tool we have to manage that problem until we can fix it
 is to try to be as careful as possible about how we change the db code and
 schema. By importing another project, we lose some of that control. Also,
 even with the copy-paste model for oslo incubator, code in oslo does have
 some of its own reasons to change, so we could potentially end up in a
 conflict where glance db migrations (which are operationally costly) have
 to happen for reasons that don't really matter to glance.

  So rather than framing this as "glance needs to use oslo common db
  code", I would appreciate framing it as "glance database code should have
  features X, Y, and Z", some of which it can get by using oslo code. Indeed,
 I believe in IRC we discussed the idea of writing up a wiki listing these
 feature improvements, which would allow a finer granularity for evaluation.
 I really prefer that format because it feels more like planning and less
 like debate :-)

  I have a few responses inline below.

 On Fri, Aug 16, 2013 at 6:31 AM, Victor Sergeyev vserge...@mirantis.com
  wrote:

 Hello All.

 Glance cores (Mark Washenberger, Flavio Percoco, Iccha Sethi) have some
  questions about Oslo DB code, and why it is so important to use it instead
  of a custom implementation and so on. As there were a lot of questions, it was
  really hard to answer all of them in IRC. So we decided that the
  mailing list is 

[openstack-dev] [Nova] Interested in a mid-Icehouse-cycle Nova meet-up?

2013-08-19 Thread Russell Bryant
Greetings,

Some OpenStack programs have started a nice trend of getting together in
the middle of the development cycle.  These meetups can serve a number
of useful purposes: community building, ramping up new contributors,
tackling hard problems by getting together in the same room, and more.

I am in the early stages of attempting to plan a Nova meet-up for the
middle of the Icehouse cycle.  To start, I need to get a rough idea of
how much interest there is.

I have very little detail at this point, other than I'm looking at
locations in the US, and that it would be mid-cycle (January/February).

If you would be interested in this, please fill out the following form:

http://goo.gl/RPa6iG

Thanks!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][VPNaaS] Supporting OpenSwan or StrongSwan or Both?

2013-08-19 Thread Nachi Ueno
Hi folks

I would like to discuss whether we should support OpenSwan, StrongSwan, or both
for the ipsec driver.

We chose StrongSwan because the community is active and there are plenty of docs.
However, it looks like RHEL only supports OpenSwan.

so we should choose

(A) Support StrongSwan
(B) Support OpenSwan
(C) Support both
   (C-1) Make StrongSwan default
   (C-2) Make OpenSwan default

Actually, I'm working on C-2.
The patch is still WIP https://review.openstack.org/#/c/42264/

Although the patch is small, supporting two drivers may be a burden
in H3, including docs and additional help.
IMO, this is also a valid point.

Best
Nachi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][VPNaaS] Supporting OpenSwan or StrongSwan or Both?

2013-08-19 Thread Salvatore Orlando
As I said during the meeting, I am happy to support both as long as the
code churn is reasonably contained and the chances of strongswan support
introducing bugs into openswan driver are negligible.

Openswan should be the default solution, in my opinion.

Salvatore


On 20 August 2013 00:15, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 I would like to discuss whether supporting OpenSwan or StrongSwan or Both
 for
 ipsec driver?

 We choose StrongSwan because of the community is active and plenty of docs.
 However It looks like RHEL is only supporting OpenSwan.

 so we should choose

 (A) Support StrongSwan
 (B) Support OpenSwan
 (C) Support both
(C-1) Make StrongSwan default
(C-2) Make OpenSwan default

 Actually, I'm working on C-2.
 The patch is still WIP https://review.openstack.org/#/c/42264/

 Besides the patch is small, supporting two driver may burden
 in H3 including docs or additional help.
 IMO, this is also a valid comment.

 Best
 Nachi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][VPNaaS] Supporting OpenSwan or StrongSwan or Both?

2013-08-19 Thread Nachi Ueno
Hi Salvatore

Thank you for your comment.
I'm adding OpenSwan support as an additional driver, so it is safe for StrongSwan.

Best
Nachi

2013/8/19 Salvatore Orlando sorla...@nicira.com:
 As I said during the meeting, I am happy to support both as long as the code
 churn is reasonably contained and the chances of strongswan support
 introducing bugs into openswan driver are negligible.

 Openswan should be the default solution, in muy opinion.

 Salvatore


 On 20 August 2013 00:15, Nachi Ueno na...@ntti3.com wrote:

 Hi folks

 I would like to discuss whether supporting OpenSwan or StrongSwan or Both
 for
 ipsec driver?

 We choose StrongSwan because of the community is active and plenty of
 docs.
 However It looks like RHEL is only supporting OpenSwan.

 so we should choose

 (A) Support StrongSwan
 (B) Support OpenSwan
 (C) Support both
(C-1) Make StrongSwan default
(C-2) Make OpenSwan default

 Actually, I'm working on C-2.
 The patch is still WIP https://review.openstack.org/#/c/42264/

 Besides the patch is small, supporting two driver may burden
 in H3 including docs or additional help.
 IMO, this is also a valid comment.

 Best
 Nachi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Swift, netifaces, PyPy, and cffi

2013-08-19 Thread Alex Gaynor
So, in what can only be described as extremely embarrassing and wow, I
thought I knew how to use a computer: netifaces appears to work ok under
PyPy! I could have sworn I'd tested it, but apparently not. So, this is no
longer a high priority item for me to get swift on pypy (in fact, +/-
 eventlet and pypy releases, the test suite at least all passes!).

I still think that, long term, ditching netifaces is a good idea since it doesn't
really appear to be maintained, but that's a different issue. Since code
churn scares me, I'm going to stop really pursuing this; the patch will
always be there if anyone else is excited about it, and maybe someday
I'll get the round tuits to do it myself :)

Alex


On Wed, Aug 14, 2013 at 5:17 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:


 On Aug 14, 2013, at 11:12 AM, Jarret Raim jarret.r...@rackspace.com
 wrote:

  I vote for including cffi. We are going to use a cffi lib as part of
 Barbican (key management) anyway, so I'd like to see wider acceptance.

  Jarret


 +1

 cffi rocks

 Vish


   From: Alex Gaynor alex.gay...@gmail.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Wednesday, August 14, 2013 12:12 PM
 To: openst...@nemebean.com, OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Swift, netifaces, PyPy, and cffi

   I just chatted with the Python product owner at Red Hat; he says this
 is going to make its way to the next step later today (this past weekend
 was a Fedora conference), so this should be happening soon.

  Joe: Yup, I'm familiar with that piece (I had lunch with Vish the other
 week and he's the one who suggested Swift as the best place to get started
 with OpenStack + PyPy). For those who don't know I'm one of the core
 developers of PyPy :)

  Alex



 On Wed, Aug 14, 2013 at 9:24 AM, Ben Nemec openst...@nemebean.com wrote:

 On 2013-08-13 16:58, Alex Gaynor wrote:

 One of the issues that came up in this review however, is that cffi is
 not packaged in the most recent Ubuntu LTS (and likely other
 distributions), although it is available in raring, and in a PPA
  
  (http://packages.ubuntu.com/raring/python-cffi and
  https://launchpad.net/~pypy/+archive/ppa?field.series_filter=precise
  respectively).


 As a result of this, we wanted to get some feedback on which direction
 is best to go:

 a) cffi-only approach, this is obviously the simplest approach, and
 works everywhere (assuming you can install a PPA, use pip, or similar
 for cffi)
 b) wait until the next LTS to move to this approach (requires waiting
 until 2014 for PyPy support)
 c) Support using either netifaces or cffi: most complex, and most
 code, plus one or the other dependencies aren't well supported by
 most tools as far as I know.


 It doesn't appear to me that this is available for RHEL yet, although it
  looks like they're working on it:
  https://admin.fedoraproject.org/updates/python-cffi-0.6-4.el6

 That's also going to need to happen before we can do this, I think.

 -Ben


  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  --
 I disapprove of what you say, but I will defend to the death your right
 to say it. -- Evelyn Beatrice Hall (summarizing Voltaire)
 The people's good is the highest law. -- Cicero
 GPG Key fingerprint: 125F 5C67 DFE9 4084
___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
I disapprove of what you say, but I will defend to the death your right to
say it. -- Evelyn Beatrice Hall (summarizing Voltaire)
The people's good is the highest law. -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Handling provider removal

2013-08-19 Thread Eugene Nikanorov
Hi folks,

I'd like to continue the discussion about this.

I think we have the following questions to answer:
1) What should be the workflow of provider removal for the admin?
2) Do we allow 'update' operation on provider attribute?
3) Do we allow removing provider for users?

My take on these:
1) There are two options for the admin. Before restarting neutron-server
with the provider removed from the conf, they should either:
 - use a script to get all resources and filter them by provider. Since the
provider is technically a relationship, this can't be done via the CLI, so in
fact the admin needs to filter pools 'manually' (e.g. with some sort of
script; a minimal sketch follows below). Then disassociate each resource
from its provider.
 - use a special API call that goes over all associations with the provider,
removing them and doing an 'undeploy' operation. I think that's the more
convenient way.

Although the patch that has been on review implied the first way of
removing the provider.

2) I think we need to support it.
As a simplified form for H-3, we could allow only updates from 'no
provider' to 'provider'.

3) I think we need to support it as well, as there could be various reasons
for users to remove a provider.
While there is no provider, the resource is handled by a 'no-op' plugin driver,
whose sole responsibility is to complement the db operations of the plugin.
That also means that no-provider resources are fully operable.
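
(The script sketch mentioned under option 1 above, using python-neutronclient;
credentials and the provider name are placeholders, and the disassociation
step itself is exactly the part that is still open for discussion:)

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://localhost:5000/v2.0/')

doomed_provider = 'provider_being_removed'
stale_pools = [p for p in neutron.list_pools()['pools']
               if p.get('provider') == doomed_provider]

for pool in stale_pools:
    # There is no disassociate call today; this is just where it would go.
    print('pool %s is still bound to provider %s' % (pool['id'],
                                                     doomed_provider))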

Speaking about the patch which is on review, I'm planning to make the following
changes:
- implement association of pools and providers within the update operation
instead of a member action
- implement a 'disassociate' admin-only operation, which will probably be some
call on the 'providers' collection.

What do you think?

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-19 Thread Joshua Harlow
Just a related question,

Oslo 'incubator' db code I think depends on eventlet. This means any code that 
uses the oslo.db code could/would(?) be dependent on eventlet.

Will there be some refactoring there to not require it (useful for projects 
that are trying to move away from eventlet)?

https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/session.py#L248

From: Boris Pavlovic bo...@pavlovic.me
Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: Monday, August 19, 2013 2:12 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

Mark,

But for a variety of reasons, I do not consider the general thrust of use oslo 
db code to be approved. Instead, let's continue to consider features from oslo 
db on a case by case basis, and see what the right resolution is in each case.

Absolutely agree with this point (e.g. we removed shadow tables from our 
roadmap after some discussion in other threads)
So we are planing to make all changes using our common approach called baby 
steps (Not by one giant patch set).

Btw I answered on your question about changed conf parameter in review (I mean 
sql_connection to database.connection).


Best regards,
Boris Pavlovic
---
Mirantis Inc.



On Mon, Aug 19, 2013 at 9:33 PM, Mark Washenberger 
mark.washenber...@markwash.net wrote:
Thanks for refocusing the discussion on your original questions!

Also thanks for this additional summary. I consider the patches you have up for 
review in glance to have a general direction-level green light at this point 
(though I've got a question on the specifics in the ultimate review).

But for a variety of reasons, I do not consider the general thrust of use oslo 
db code to be approved. Instead, let's continue to consider features from oslo 
db on a case by case basis, and see what the right resolution is in each case.

Thanks for your patience and forbearance, hopefully getting in the patches you 
have submitted now will help unblock progress for your team.

On Mon, Aug 19, 2013 at 3:49 AM, Boris Pavlovic 
bo...@pavlovic.me wrote:
Mark,

Main part of oslo is:
1) common migration testing
2) common sqla.models
3) common hacks around sqla and sqla-migrate
4) common work around engines and sessions


All these points are implemented in Glance almost in the same way as in Oslo.
Also we are able to use only part of this code in Glance, and add some other 
things that are glance related over this code.

Our current 2 patches on review do next things:
1) Copy paste oslo.db code into glance
2) Use sqla session/engine/exception wrappers
3) Remove Glance code that covers session/engine/exception

So I really don't see anything bad in this code:
1) If you would like to implement other backends => this change won't block it
2) If you would like to make some other sqla utilities or glance-related things
=> this change won't block it
3) If there are bugs => fix them in oslo and sync => this change won't block it

 So I really don't see any reason to block the work on migrating to oslo.db 
code in Glance.


Best regards,
Boris Pavlovic
---
Mirantis Inc.




On Fri, Aug 16, 2013 at 10:41 PM, Mark Washenberger 
mark.washenber...@markwash.net wrote:
I would prefer to pick and choose which parts of oslo common db code to reuse 
in glance. Most parts there look great and very useful. However, some parts 
seem like they would conflict with several goals we have.

1) To improve code sanity, we need to break away from the idea of having one 
giant db api interface
2) We need to improve our position with respect to new, non SQL drivers
- mostly, we need to focus first on removing business logic (especially 
authz) from database driver code
- we also need to break away from the strict functional interface, because 
it limits our ability to express query filters and tends to lump all filter 
handling for a given function into a single code block (which ends up being 
defect-rich and confusing as hell to reimplement)
3) It is unfortunate, but I must admit that Glance's code in general is pretty 
heavily coupled to the database code and in particular the schema. Basically 
the only tool we have to manage that problem until we can fix it is to try to 
be as careful as possible about how we change the db code and schema. By 
importing another project, we lose some of that control. Also, even with the 
copy-paste model for oslo incubator, code in oslo does have some of its own 
reasons to change, so we could potentially end up in a conflict where glance db 
migrations (which are operationally costly) have to happen for reasons that 
don't really matter to glance.

So rather than framing this as 

Re: [openstack-dev] Code review study

2013-08-19 Thread Michael Davies
On Tue, Aug 20, 2013 at 5:14 AM, Jay Buffington m...@jaybuff.com wrote:

 This is really interesting.  I wish they would have explicitly defined
 lines of code.   Is that git show |wc -l? Just the new lines which
 were added?  The sum of the lines changed, removed and added?  You can
 get vastly different numbers depending on how you count it.

Normally in the literature LOC is defined as non-comment, non-blank
code line deltas, with a few exceptions.

The exceptions normally refer to not counting braces in C-style
languages and other syntactic sugar elements.  Of course in Python we
don't really have these issues to contend with :)

I'd normally include comments and docstrings too, since we review these as well.
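
(A rough sketch of counting added and removed non-blank, non-comment lines for
a commit, purely to illustrate how different the numbers can be from a plain
"git show | wc -l"; this is not what the cited study used:)

import subprocess

def changed_loc(rev='HEAD'):
    # --unified=0 keeps context lines out of the diff entirely.
    diff = subprocess.check_output(['git', 'show', '--unified=0', rev],
                                   universal_newlines=True)
    count = 0
    for line in diff.splitlines():
        # Only +/- lines, skipping the +++/--- file header lines.
        if line.startswith(('+', '-')) and not line.startswith(('+++', '---')):
            body = line[1:].strip()
            if body and not body.startswith('#'):  # crude Python comment check
                count += 1
    return count

print(changed_loc())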

Michael...
-- 
Michael Davies   mich...@the-davies.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] v3 api remove security_groups extension (was Re: security_groups extension in nova api v3)

2013-08-19 Thread Alex Xu

On 2013-08-17 00:14, Vishvananda Ishaya wrote:

On Aug 15, 2013, at 5:58 PM, Melanie Witt melw...@yahoo-inc.com wrote:


On Aug 15, 2013, at 1:13 PM, Joe Gordon wrote:


+1 from me as long as this wouldn't change anything for the EC2 API's security 
groups support, which I assume it won't.

Correct, it's unrelated to the ec2 api.

We discussed briefly in the nova meeting today and there was consensus that 
removing the standalone associate/disassociate actions should happen.

Now the question is whether to keep the server create piece and not remove the 
extension entirely. The concern is about a delay in the newly provisioned 
instance being associated with the desired security groups. With the extension, 
the instance gets the desired security groups before the instance is active (I 
think). Without the extension, the client would receive the active instance and 
then call neutron to associate it with the desired security groups.

Would such a delay in associating with security groups be a problem?


It seems like getting around this would be as simple as:

a. Create the port in neutron.
b. Associate a security group with the port.
c. Boot the instance with the port.

In general I'm a fan of doing all of the network creation and volume creation 
in neutron and cinder before booting the instance. Unfortunately I think this 
is pretty unfriendly to our users. One possibility is to move the smarts into 
the client side (i.e. have it talk to neutron and cinder), but I think that 
alienates all of the people using openstack who are not using python-novaclient 
or python-openstack client.
The API user is a developer too, so it shouldn't be too difficult for them. I 
prefer moving the smarts into the client
side, but I'm open to both ways. I will leave a comment in my patch so 
reviewers can vote for the way they prefer before reviewing it.
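
(A rough sketch of that client-side flow, i.e. the a/b/c steps Vish lists
above, assuming python-neutronclient and python-novaclient; every ID below is
a placeholder:)

from neutronclient.v2_0 import client as neutron_client
from novaclient.v1_1 import client as nova_client

neutron = neutron_client.Client(username='demo', password='secret',
                                tenant_name='demo',
                                auth_url='http://localhost:5000/v2.0/')
nova = nova_client.Client('demo', 'secret', 'demo',
                          'http://localhost:5000/v2.0/')

# a. create the port in neutron, b. with the security group already attached
port = neutron.create_port({'port': {'network_id': 'NET_UUID',
                                     'security_groups': ['SECGROUP_UUID']}})

# c. boot the instance against the pre-built port
server = nova.servers.create(name='test-vm',
                             image='IMAGE_UUID',
                             flavor='FLAVOR_ID',
                             nics=[{'port-id': port['port']['id']}])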


Since we are still supporting v2 this is a possibility for the v3 api, but if 
you can't do basic operations in v3 without talking to multiple services on the 
client side I think it will prevent a lot of people from using it.

It's clear to me that autocreation needs to stick around for a while just to 
keep the apis usable. I can see the argument for pulling it from the v3 api, 
but it seems like at least the basics need to stick around for now.

Vish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][vmware] VMwareAPI sub-team status update 2013-08-19

2013-08-19 Thread Shawn Hartsock

Greetings stackers!

August 22nd is fast approaching. Here are the reviews in flight. We have 5 ready 
for a core reviewer to take a look at, one that needs some attention from someone 
who knows VMware's APIs, and 8 that are in need of work/discussion. I noticed that 
there were some issues with Jenkins earlier; some of you may have been caught in 
that. Don't use 'recheck no bug' until you've read the failures and made sure 
they have nothing to do with your patch.

Ready for core reviewer:
* NEW, https://review.openstack.org/#/c/37819/ ,'VMware image clone strategy 
settings and overrides'
https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
core votes,0, non-core votes,4, down votes, 0
* NEW, https://review.openstack.org/#/c/33100/ ,'Fixes host stats for 
VMWareVCDriver'
https://bugs.launchpad.net/nova/+bug/1190515
core votes,0, non-core votes,5, down votes, 0
* NEW, https://review.openstack.org/#/c/30628/ ,'Fix VCDriver to pick the 
datastore that has capacity'
https://bugs.launchpad.net/nova/+bug/1171930
core votes,0, non-core votes,7, down votes, 0
* NEW, https://review.openstack.org/#/c/33504/ ,'VMware: nova-compute crashes 
if VC not available'
https://bugs.launchpad.net/nova/+bug/1192016
core votes,0, non-core votes,5, down votes, 0
* NEW, https://review.openstack.org/#/c/40298/ ,'Fix snapshot in 
VMWwareVCDriver'
https://bugs.launchpad.net/nova/+bug/1184807
core votes,0, non-core votes,6, down votes, 0

Needs VMware API expert review:
* NEW, https://review.openstack.org/#/c/42024/ ,'VMWare: Disabling linked clone 
doesn't cache images'
https://bugs.launchpad.net/nova/+bug/1207064
core votes,0, non-core votes,0, down votes, 0

Needs discussion/work (has -1):
* NEW, https://review.openstack.org/#/c/37659/ ,'Enhance VMware Hyper instance 
disk usage'
https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
core votes,0, non-core votes,2, down votes, -1
* NEW, https://review.openstack.org/#/c/39720/ ,'VMware: Added check for 
datastore state before selection'
https://bugs.launchpad.net/nova/+bug/1194078
core votes,0, non-core votes,4, down votes, -1
* NEW, https://review.openstack.org/#/c/40105/ ,'VMware: use VM uuid for volume 
attach and detach'
https://bugs.launchpad.net/nova/+bug/1208173
core votes,1, non-core votes,7, down votes, -1
* NEW, https://review.openstack.org/#/c/41387/ ,'VMware: Nova boot from cinder 
volume'
https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
core votes,0, non-core votes,2, down votes, -1
* NEW, https://review.openstack.org/#/c/40245/ ,'Nova support for vmware cinder 
driver'
https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
core votes,0, non-core votes,2, down votes, -1
* NEW, https://review.openstack.org/#/c/41657/ ,'Fix VMwareVCDriver to support 
multi-datastore'
https://bugs.launchpad.net/nova/+bug/1104994
core votes,0, non-core votes,0, down votes, -1
* NEW, https://review.openstack.org/#/c/30282/ ,'Multiple Clusters using single 
compute service'

https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
core votes,0, non-core votes,2, down votes, -2
* NEW, https://review.openstack.org/#/c/34903/ ,'Deploy vCenter templates'

https://blueprints.launchpad.net/nova/+spec/deploy-vcenter-templates-from-vmware-nova-driver
core votes,0, non-core votes,2, down votes, -2

Meeting info:
* https://wiki.openstack.org/wiki/Meetings/VMwareAPI


# Shawn Hartsock

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Replacing Glance DB code to Oslo DB code.

2013-08-19 Thread Ben Nemec
 

On 08/19/13 20:34, Joshua Harlow wrote: 

 Just a related question, 
 
 Oslo 'incubator' db code I think depends on eventlet. This means any code 
 that uses the oslo.db code could/would(?) be dependent on eventlet. 
 
 Will there be some refactoring there to not require it (useful for projects 
 that are trying to move away from eventlet). 
 
 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/session.py#L248
  [1]

Glancing through that file, it looks like the greenthread import is only
used for playing nice with other greenthreads. It should be pretty easy
to make it conditional so we don't require it, but will use it if it's
available.
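
(Something along these lines, presumably; a minimal sketch of the conditional
import, not the actual oslo patch:)

# greenthread.sleep is only there to yield to other greenthreads between
# DB reconnect retries, so fall back to time.sleep when eventlet is absent.
try:
    from eventlet import greenthread
    _sleep = greenthread.sleep
except ImportError:
    import time
    _sleep = time.sleep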

 -Ben 

Links:
--
[1]
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/session.py#L248

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev