Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-18 Thread Dan Smith
Sorry for the delay in responding to this...

   * Moved the _obj_classes registry magic out of ObjectMetaClass and into 
 its own method for easier use.  Since this is a subclass based 
 implementation,
 having a separate method feels more appropriate for a factory/registry
 pattern.

This is actually how I had it in my initial design because I like
explicit registration. We went off on this MetaClass tangent, which buys
us certain things, but which also makes certain things quite difficult.

Pros for metaclass approach:
 - Avoids having to decorate things (meh)
 - Automatic to the point of not being able to create an object type
   without registering it even if you wanted to

Cons for metaclass approach:
 - Maybe a bit too magical
 - Can make testing hard (see where we save/restore the registry
   between each test)
 - I think it might make subclass implementations harder
 - Definitely more complicated to understand

Chris much preferred the metaclass approach, so I'm including him here.
He had some reasoning that won out in the original discussion, although
I don't really remember what that was.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-18 Thread Christopher Armstrong
On Mon, Nov 18, 2013 at 3:00 PM, Dan Smith d...@danplanet.com wrote:

 Sorry for the delay in responding to this...

* Moved the _obj_classes registry magic out of ObjectMetaClass and into
  its own method for easier use.  Since this is a subclass based
 implementation,
  having a separate method feels more appropriate for a
 factory/registry
  pattern.

 This is actually how I had it in my initial design because I like
 explicit registration. We went off on this MetaClass tangent, which buys
 us certain things, but which also makes certain things quite difficult.

 Pros for metaclass approach:
  - Avoids having to decorate things (meh)
  - Automatic to the point of not being able to create an object type
without registering it even if you wanted to

 Cons for metaclass approach:
  - Maybe a bit too magical
  - Can make testing hard (see where we save/restore the registry
between each test)
  - I think it might make subclass implementations harder
  - Definitely more complicated to understand

 Chris much preferred the metaclass approach, so I'm including him here.
 He had some reasoning that won out in the original discussion, although
 I don't really remember what that was.


It's almost always possible to do without metaclasses while losing little
relevant brevity, and improving clarity. I strongly recommend against their
use.

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-18 Thread Jay Pipes

On 11/18/2013 04:58 PM, Christopher Armstrong wrote:

[quoted text snipped]


It's almost always possible to do without metaclasses while losing
little relevant brevity, and improving clarity. I strongly recommend
against their use.


++

-jay




Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-18 Thread Morgan Fainberg
On Mon, Nov 18, 2013 at 1:58 PM, Christopher Armstrong
chris.armstr...@rackspace.com wrote:
[quoted text snipped]


 It's almost always possible to do without metaclasses while losing little
 relevant brevity, and improving clarity. I strongly recommend against their
 use.


I think this is simple and to the point.  ++  Metaclasses have their
places, but they really make it hard to see clearly what is going on
in a straightforward manner.  I would prefer to keep metaclass use
limited (wherever possible), with the exception of abstract base
classes (which are straightforward enough to understand).  I think the
plus of avoiding decorating things isn't really a huge win, and
actually I think it takes clarity away.

--Morgan Fainberg



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-18 Thread Dan Smith
 I think the plus of avoiding decorating things isn't really a huge
 win, and actually I think it takes clarity away.

Hence the (meh) in my list :)

This wasn't really a sticking point when we were getting reviews on the
original base infrastructure, so I'm surprised people are so vehement
now. However, as I said, I prefer explicit registration, so I'm fine
with changing it in nova, and it sounds like the consensus here on the
list would affect how it goes into oslo anyway.

Let's give Chris a chance to see this before we get too settled, but it
sounds otherwise unanimous :)

--Dan



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-14 Thread Clayton Coleman
With no feedback on remotability so far (we can touch base at F2F next week), 
here's a representative sample of the object model with the sqlalchemy 
implementation approach, without some of the bits that would enable 
remotability (primarily the ability to rehydrate an object with the correct 
sqlalchemy metadata). [1]

  [1] 
https://github.com/smarterclayton/solum/commit/d9d83aac0fc98663f1daa9914edeb52faf0cc37d

Highlights:

  Imported nova object code, made the following changes to 
solum/objects/base.py:

  * Used DomainObject instead of Object; also suggested were OsloObject and 
OSObject.  I like DomainObject, but then I like Fowler, what can I say...

  * DomainObject has the metaclass, but is a subclass of AbstractDomainObject, 
which contains all of the core mixin logic.  This allows subclasses that
have their own metaclasses to avoid metaclass hijinks, by using
AbstractDomainObject and objects.base.register_obj_class directly.


https://github.com/smarterclayton/solum/blob/d9d83aac0fc98663f1daa9914edeb52faf0cc37d/solum/objects/base.py#L344

  * Moved the _obj_classes registry magic out of ObjectMetaClass and into 
its own method for easier use.  Since this is a subclass based 
implementation,
having a separate method feels more appropriate for a factory/registry
pattern.


https://github.com/smarterclayton/solum/blob/d9d83aac0fc98663f1daa9914edeb52faf0cc37d/solum/objects/base.py#L103

  * Takes a dependency on the rpc serializer; debating whether this is
good or bad (not everyone using objects needs rpc?)


  solum/objects/__init__.py

  * solum.objects.load() allows the framework to load a configured set of 
implementation subclasses that can change. Nova doesn't have
subclasses of its objects yet, so this seemed like the simplest possible
change without introducing a new magic factory.  Called by 
prepare_service() when the API is started up, after config is parsed.

  * Also provides solum.objects.registry.Application, which abstracts knowing 
about the implementation subclass in play.

  * solum.objects.new_schema() and transition_schema() are global helpers
for determining whether you should be reading newly introduced/changed 
fields from the schema, or whether you should only be writing them.
I plan to have a better example in test tomorrow.


https://github.com/smarterclayton/solum/blob/d9d83aac0fc98663f1daa9914edeb52faf0cc37d/solum/objects/__init__.py
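A minimal sketch of how those global helpers might look. This is an assumed implementation for illustration (the real solum/objects/__init__.py may differ); set_schema_mode is a made-up name standing in for config-driven startup.

```python
# Hypothetical sketch of solum.objects' schema-state helpers. The mode
# would really come from config at service startup; here it's a module
# global for illustration.

_schema_mode = 'old'  # one of 'old', 'transition', 'new'


def set_schema_mode(mode):
    """Set at startup (e.g. by prepare_service after config parsing)."""
    global _schema_mode
    if mode not in ('old', 'transition', 'new'):
        raise ValueError('unknown schema mode: %s' % mode)
    _schema_mode = mode


def new_schema():
    """True once code may read newly introduced/changed fields."""
    return _schema_mode == 'new'


def transition_schema():
    """True while new fields should only be written, not yet read."""
    return _schema_mode == 'transition'
```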


  Application object interface (fields, docs, version) in 
solum/objects/application.py


  Specific sqlalchemy implementation solum/objects/sqlalchemy/__init__.py

  * As simple as I could make it - simply calls register_obj_class in its
load()


https://github.com/smarterclayton/solum/blob/d9d83aac0fc98663f1daa9914edeb52faf0cc37d/solum/objects/sqlalchemy/__init__.py


  Common SQLAlchemy code in solum/objects/sqlalchemy/models.py

  * SolumBase mixin abstracts all the common CRUD that would be part of the 
base solum/objects/application.py interfaces.  Subclasses (actual model
objects) would override as necessary.

  * Abstracts save() logic that must set new schema transition elements (copy
fields when transitioning)


https://github.com/smarterclayton/solum/blob/d9d83aac0fc98663f1daa9914edeb52faf0cc37d/solum/objects/sqlalchemy/models.py#L89


https://github.com/smarterclayton/solum/blob/d9d83aac0fc98663f1daa9914edeb52faf0cc37d/solum/objects/sqlalchemy/models.py
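The copy-fields-on-save behavior described above might look roughly like this. It is a sketch, not the actual models.py code; the column rename ('name' to 'display_name') and the inlined transition_schema stub are illustrative assumptions.

```python
def transition_schema():
    # Stand-in for the global helper in solum/objects/__init__.py.
    return True


class SolumBase(object):
    """Sketch of the common-CRUD mixin described above."""

    def save(self, session):
        if transition_schema():
            # During the write-only phase of a column rename, populate
            # the new column from the old one so readers on the old
            # schema and readers on the new schema stay consistent.
            self.display_name = self.name
        session.add(self)
        session.flush()
```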

  
  Application implementation in solum/objects/sqlalchemy/application.py

  * I don't have examples of transition schema, but you could use an if 
statement in the class definition:

  id = Column(Integer, primary_key=True)
  if objects.new_schema():
      name = Column(String, nullable=True)

This would allow you to also set sqlalchemy synonym calls when renaming 
columns for each version of the schema.  I would like to make it more 
testable, but if you assume a restart between each old -> transitioning -> new 
step, the static initialization could be associated with a schema check call 
in load() that blocks service initialization if the incorrect mode or schema 
is present.


https://github.com/smarterclayton/solum/blob/d9d83aac0fc98663f1daa9914edeb52faf0cc37d/solum/objects/sqlalchemy/application.py


The rest of the code needs to be rebased against Russell's import changeset, 
and dbsync isn't necessary (it needs to be replaced with alembic when we hit 
that point).



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-14 Thread Monty Taylor


On 11/01/2013 02:21 PM, Clayton Coleman wrote:
 
 
 - Original Message -
 I was blindly assuming we want to pull in eventlet support, with the
 implicit understanding that we will be doing some form of timeslicing and
 async io bound waiting in the API... but would like to hear others weigh
 in before I add the monkey_patch and stub code around script startup.

 I'm not so sure that bringing in eventlet should be done by default. It
 adds complexity and if most/all of the API calls will be doing some call
 to a native C library like libmysql that blocks, I'm not sure there is
 going to be much benefit to using eventlet versus multiplexing the
 servers using full OS processes -- either manually like some of the
 projects do with the workers=N configuration and forking, or using more
 traditional multiplexing solutions like running many mod_wsgi or uwsgi
 workers inside Apache or nginx.


 What about callouts to heat/keystone APIs?

 Sure, it's possible to do that with eventlet. It's also possible to do
 that with a queueing system. For example, the API server could send an
 RPC message to a queue and a separate process could work on executing
 the individual tasks associated with a particular application build-pack.
 
 I guess I was asking because event patterns are generally more
 memory-efficient than multiprocess ones, assuming that the underlying
 codebase isn't fighting the event system at every step.  Are your concerns
 with eventlet based on that mismatch (bugs, problems with eventlet across the
 various projects and libraries that OpenStack uses), or more that you believe
 we should start, at the very beginning, with the pattern of building
 everything as a distributed ordered task flow, since we know at least some of
 our interactions are asynchronous?  There are at least a few network IO
 operations that will be synchronous to our flows - while they are not likely
 to be a large percentage of the API time, they may definitely block threads
 for a period of time.

 The design of Zuul [1], especially with the relatively recent addition
 of Gearman and the nodepool orchestrator [2] that the openstack-infra
 folks wrote would, IMO, be a worthy place to look for inspiration, since
 Solum essentially is handling a similar problem domain -- distributed
 processing of ordered tasks.

 Best,
 -jay
 
 Familiar with Gearman, will look through nodepool/Zuul.

BTW - I was just brainstorming the other day that we might want to have
a few of our things- nodepool might be one of them - start to take
advantage of taskflow as well.

In general though, I think we can all agree that zuul can handle a
pretty high load. :)



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-13 Thread Clayton Coleman


- Original Message -
 +1 for ease of iteration till we decide on the model
 (and so not worry about remotability right now :)).
 
 Just to verify that I understand the proposed strawman commit,
 it will use nova/object/* like hierarchy to define Solum specific domain
 objects
 in which specific methods would be exposed whose implementation will use
 sqlalchemy ORM calls internally.

Yeah, base abstract class / interface / field definitions, with a subclass 
implementation that is accessed via a factory/lookup table.  I had started on 
that track but wanted to get the remotability feedback early (I don't like 
mucking around with the sqlalchemy metaclasses early on unless it's something 
we view as critical).  The remotable path would separate out the ORM model and 
then do the same translation that happens today in nova/ironic with 
_from_db_object, where we create two objects and then copy them back and forth 
a bunch.  The first implementation could be converted to the second; I don't 
see the second being something we'd convert to the first, since it's more 
work.  I've also included examples of live schema update with the three states 
(old schema; new schema but write-only; new schema and no access to the old 
schema) for various types of changes (rename, split columns, add subtable 
relationship).
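The _from_db_object translation pattern used in nova/ironic is roughly this shape. This is a simplified sketch under the assumption of a plain field-copy loop; the field names are illustrative.

```python
class Application(object):
    """Plain domain object; carries no sqlalchemy session state."""
    fields = ('id', 'name')

    @staticmethod
    def _from_db_object(obj, db_obj):
        # Copy each declared field off the ORM row onto the domain
        # object, so only plain attributes get serialized over RPC.
        for field in obj.fields:
            setattr(obj, field, getattr(db_obj, field))
        return obj
```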

I could do both to compare but figured we could argue out remotability via ML.

 
 Sounds good to me.
 
 As long as we are able to discuss and debate the perceived advantages of the
 object approach
 (that it makes handling versioned data easier, and that it allows using sql
 and non-sql
  backends) we should be good.
 
 Btw, thanks for sending across link to the F1 paper.
 
 Regards,
 - Devdatta
 
 
 -Original Message-
 From: Clayton Coleman ccole...@redhat.com
 Sent: Wednesday, November 13, 2013 12:29pm
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Cc: devdatta kulkarni devdatta.kulka...@rackspace.com
 Subject: Re: [openstack-dev] [Solum] Some initial code copying for
 db/migration
 
 - Original Message -
  
  The abstraction will probably always leak something, but in general it
  comes
  down to isolating a full transaction behind a coarse method.  Was going to
  try to demonstrate that as I went.
  
 
 I've almost got something ready for feedback and review - before I do that I
 wanted to follow up on remotability and its relative importance:
 
 Is remotability of DB interactions a prime requirement for all OpenStack
 services? Judging by what I've seen so far, it's primarily to ensure that DB
 passwords are isolated from the API code, with a small secondary benefit of
 being able to scale business logic independently.  Are there other reasons?
 
 For DB password separation, it's never been a huge concern to us
 operationally - do others have strong enough opinions either way to say that
 it continues to be important vs. not?
 
 For the separated scale behavior, at the control-plane scale-outs we suspect
 we'll have (2-20?), does separating the API and a background layer provide
 benefit?

 The other items that have been mentioned that loosely couple with
 remotability are versioned upgrades, but we can solve those in a combined
 layer as well with an appropriately chosen API abstraction.
 
 If remotability of DB calls is not a short- or medium-term objective, then I
 was going to put together a strawman commit that binds domain object
 implementation subclasses to the sqlalchemy ORM, but with the granular
 create/save/update calls called out and enforced.  If it is a short/medium
 objective, we'd use the object field list to copy from the ORM, with the
 duplicate object creation that entails.  The former is easier to iterate on
 as we develop to decide on the model; the latter is easier to make remotable
 (we can't easily have the SQL ORM state inside the object that is being
 serialized).  There's an argument that the latter enforces stronger code
 guarantees as well (there's no way for people to use tricky ORM magic
 methods added to the object, although that's a bit less of an issue with
 sqlalchemy than with other models).
 
 Thoughts?
 
 
 



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-11 Thread Clayton Coleman


- Original Message -
  1) Using objects as an abstraction for versioned data:
 This seems like a good direction overall, especially from the
 point-of-view
 of backwards compatibility of consumers of the object. However, after
 looking through some
 of the objects defined in nova/objects/, I am not sure if I understand
 how
 this works. Specifically, it is not clear to me how might the consumer
 of the
 object be able to query different versions of it at runtime.
 
 The object registry is handled by the metaclass in objects/base.py. It
 automatically collects objects that inherit from NovaObject, and allows
 multiple versions of the same object to exist. We don't have anything
 that needs to specifically ask the registry directly for "foo object,
 version X", so there's no interface for doing that right now. We do,
 however, have incoming objects over RPC that need to be re-hydrated,
 with an "is this a compatible version?" check. We also have the ability to
 downlevel an object using the obj_make_compatible() method. We are
 planning to always roll the conductor code first, which means it can
 take the newest version of an object from the schema (in whatever state
 it's in) and backlevel it to the version being asked for by a remote RPC
 client.

For places where we may not have an RPC isolation layer, it's similar - the 
code knows what version of the schema to ask for, and the object abstraction 
hides the details of converting between older and newer.

We probably need to map out the scenarios where multiversion is enabled for 
live upgrade - that's the most critical place where you need to ask for 
specific versions.  The Google F1 live schema change paper has a good summary 
of the core issues with live schema migration [1] (and a great diagram on page 
9), which apply to generic SQL databases as well.  There are five distinct 
phases:

  1) new code is available that can read the old schema and the new schema, but 
continues to read the old schema
  2) additive elements of the new schema are enabled
  3) new code begins copying/deleting data as it's read / updated, and a 
background process is converting the rest of the data
  4) new code starts reading/querying the new fields
  5) old schema data is dropped once all code is reading the new schema

The new code has to know whether it should read the new or old schema - it 
can't read the new schema (query by new column names, by updated data, etc.) 
until all of the data is copied and reads are switched over in #4.  That could 
be a config value, something in memory triggered by an admin, etc.
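As a sketch, the read-path gate for a column rename might look like this. The phase names and column names are made up for illustration; in practice the current phase would come from a config value or an admin-triggered flag, as suggested above.

```python
# The five phases above, in order. 'read_new' corresponds to step 4,
# when code may start reading/querying the new fields.
PHASES = ('old', 'additive', 'copying', 'read_new', 'drop_old')


def read_name(row, phase):
    """Read the renamed column only once phase 4 (read_new) is reached."""
    if PHASES.index(phase) >= PHASES.index('read_new'):
        return row['display_name']
    return row['name']
```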

 
  2) Using objects as an abstraction to support different kinds of backends
 (SQL and non-SQL backends):
 - Again, a good direction overall. From implementation point-of-view
 though
 this may become tricky, in the sense that the object layer would need to
 be
 designed with just the right amount of logic so as to be able to work
 with either
 a SQL or a non-SQL backend. It will be good to see some examples of how
 this might
 be done if there are any existing examples somewhere.
 
 We don't have any examples of using a non-SQL backend for general
 persistence in Nova, which means we don't have an example of using
 objects to hide it. If what NovaObject currently provides is not
 sufficient to hide the intricacies of a non-SQL persistence layer, I
 think it's probably best to build that on top of what we've got in the
 object model.

The abstraction will probably always leak something, but in general it comes 
down to isolating a full transaction behind a coarse method.  Was going to try 
to demonstrate that as I went.

[1] 
http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/pubs/archive/41376.pdf



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-08 Thread devdatta kulkarni
There are several different points being raised here,
all very interesting. Trying to separate them below.

1) Using objects as an abstraction for versioned data:
   This seems like a good direction overall, especially from the point-of-view
   of backwards compatibility for consumers of the object. However, after
   looking through some of the objects defined in nova/objects/, I am not sure
   that I understand how this works. Specifically, it is not clear to me how
   the consumer of the object might be able to query different versions of it
   at runtime.


2) Using objects as an abstraction to support different kinds of backends
   (SQL and non-SQL backends):
   - Again, a good direction overall. From implementation point-of-view though
   this may become tricky, in the sense that the object layer would need to be
   designed with just the right amount of logic so as to be able to work with 
either 
   a SQL or a non-SQL backend. It will be good to see some examples of how this 
might 
   be done if there are any existing examples somewhere.


3) From Solum's point-of-view, the concern is around the potential downtime
   that may be incurred in the API layer because of breaking object model
   changes, and so, investigating how to design this correctly right from the
   start.
   - This is a valid concern. We will have to design for this at some point
   anyway. Doing it up front might be good for understanding how the decision
   to use versioned objects would tie in with the API layer implemented in
   Pecan+WSME. I don't have much experience with either, so would love to
   hear about it from those who do.


Best Regards,
Devdatta


-Original Message-
From: Clayton Coleman ccole...@redhat.com
Sent: Thursday, November 7, 2013 8:26pm
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: Dan Smith d...@danplanet.com
Subject: Re: [openstack-dev] [Solum] Some initial code copying for db/migration

- Original Message -

  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L420

  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L43
 
 This API and these models are what we are trying to avoid exposing to
 the rest of nova. By wrapping these in our NovaObject-based structures,
 we can bundle versioned data and methods together which is what we need
 for cross-version compatibility and parity for the parts of nova that
 are not allowed to talk to the database directly.
 
 See the code in nova/objects/* for the implementations. Right now, these
 just call into the db_api.py, but eventually we want to move the actual
 database implementation into the objects themselves and hopefully
 dispense with most or all of the sqlalchemy/* stuff. This also provides
 us the ability to use other persistence backends that aren't supported
 by sqlalchemy, or that don't behave like it does.
 
 If you're going to be at the summit, come to the objects session on
 Thursday where we'll talk about this in more detail. Other projects have
 expressed interest in moving the core framework into Oslo so that we're
 all doing things in roughly the same way. It would be good to get you
 started on the right way early on before you have the migration hassle
 we're currently enjoying in Nova :)
 
 --Dan
 

The summit session was excellent - next step for me is to look through what the 
right abstraction is going to be for objects that keeps the db details properly 
isolated and the API surface on /objects suitably coarse (in line with the long 
discussion in Nova about non-SQL backends, the consensus of which is that the 
domain object model needs to abstract whole interaction flows, vs granular 
steps).  I'll try to have some example code to debate after I get back from 
summit.

Even assuming Solum has a fairly small persistence model, in the long run I 
believe it's fair to say that the ability to perform live upgrades will become 
critical for all operators.  One of the side effects of supporting potentially 
millions of applications (at the high end, and not an unreasonable estimate for 
hosted environments) is that any period of downtime at the API level will 
prevent users from making deployments, which is a direct line-of-business 
concern.  Designing around live upgrades - specifically, the requirement that 
code must be aware of two versions of a schema at the same time - implies that 
the domain model must be able to be aware of those versions on a per-object 
basis.  For reference, [1] and [2] contain some of the Nova discussion, and 
Nova in Icehouse is going to be moving this way.  I'd prefer (it's important 
to Red Hat) to design for that from the beginning and be working towards that 
end.

Do folks have additional questions or concerns about my exploration of a 
versioned domain object model from day one?  Are there others who would like to 
embrace quick and dirty and explicitly ignore

Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-08 Thread Dan Smith
 1) Using objects as an abstraction for versioned data:
This seems like a good direction overall, especially from the point-of-view
of backwards compatibility of consumers of the object. However, after 
 looking through some
of the objects defined in nova/objects/, I am not sure if I understand how
this works. Specifically, it is not clear to me how might the consumer of 
 the 
object be able to query different versions of it at runtime.

The object registry is handled by the metaclass in objects/base.py. It
automatically collects objects that inherit from NovaObject, and allows
multiple versions of the same object to exist. We don't have anything
that needs to specifically ask the registry directly for "foo object,
version X", so there's no interface for doing that right now. We do,
however, have incoming objects over RPC that need to be re-hydrated,
with an "is this a compatible version?" check. We also have the ability to
downlevel an object using the obj_make_compatible() method. We are
planning to always roll the conductor code first, which means it can
take the newest version of an object from the schema (in whatever state
it's in) and backlevel it to the version being asked for by a remote RPC
client.
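The backlevel step works along these lines. This is a simplified sketch, not the real NovaObject implementation; the class, versions, and 'description' field are made up for illustration.

```python
class AppObject(object):
    """Sketch: version 1.1 added a 'description' field over 1.0."""
    VERSION = '1.1'

    def __init__(self, **fields):
        self.fields = dict(fields)

    def obj_make_compatible(self, target_version):
        # Strip fields the older version doesn't know about before the
        # conductor hands the object to a down-level RPC client.
        if target_version == '1.0':
            self.fields.pop('description', None)
        return self
```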

 2) Using objects as an abstraction to support different kinds of backends
(SQL and non-SQL backends):
- Again, a good direction overall. From implementation point-of-view though
this may become tricky, in the sense that the object layer would need to be
designed with just the right amount of logic so as to be able to work with 
 either 
a SQL or a non-SQL backend. It will be good to see some examples of how 
 this might 
be done if there are any existing examples somewhere.

We don't have any examples of using a non-SQL backend for general
persistence in Nova, which means we don't have an example of using
objects to hide it. If what NovaObject currently provides is not
sufficient to hide the intricacies of a non-SQL persistence layer, I
think it's probably best to build that on top of what we've got in the
object model.

 3) From Solum's point-of-view, the concern around the potential downtime
that may be incurred in the API-layer because of breaking object model 
 changes,
and so, investigating how to design this correctly right from the start.
- This is a valid concern. We will have to design for this at sometime or 
 other anyways.
Doing this first up might be good with regards to understanding how the 
 decision of
using versioned objects would tie with the API layer implemented in 
 Pecan+WSME.
I don't have much experience with either, so would love to hear about it 
 from those who do.

The ironic guys were mentioning that they'd like to have something a
little more native for WSME integration. I too am ignorant here, but I
think it sounded like they wanted some general way to take an object and
transform it into what gets exposed to the API client. Perhaps just a
pattern of standard methods on such objects would be sufficient. For
nova, maybe:

  class NovaAPIViewableThingy(NovaObject):
      def obj_api_view(self, context):
          if context.is_admin():
              return show_admin_stuff()
          else:
              return show_usery_stuff()
?

--Dan



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-07 Thread Clayton Coleman
- Original Message -

  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L420

  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L43
 
 This API and these models are what we are trying to avoid exposing to
 the rest of nova. By wrapping these in our NovaObject-based structures,
 we can bundle versioned data and methods together which is what we need
 for cross-version compatibility and parity for the parts of nova that
 are not allowed to talk to the database directly.
 
 See the code in nova/objects/* for the implementations. Right now, these
 just call into the db_api.py, but eventually we want to move the actual
 database implementation into the objects themselves and hopefully
 dispense with most or all of the sqlalchemy/* stuff. This also provides
 us the ability to use other persistence backends that aren't supported
 by sqlalchemy, or that don't behave like it does.
 
 If you're going to be at the summit, come to the objects session on
 Thursday where we'll talk about this in more detail. Other projects have
 expressed interest in moving the core framework into Oslo so that we're
 all doing things in roughly the same way. It would be good to get you
 started on the right way early on before you have the migration hassle
 we're currently enjoying in Nova :)
 
 --Dan
 

The summit session was excellent - next step for me is to look through what the 
right abstraction is going to be for objects that keeps the db details properly 
isolated and the API surface on /objects suitably coarse (in line with the long 
discussion in Nova about non-SQL backends, the consensus of which is that the 
domain object model needs to abstract whole interaction flows, vs granular 
steps).  I'll try to have some example code to debate after I get back from 
summit.

Even assuming Solum has a fairly small persistence model, in the long run I 
believe it's fair to say that the ability to perform live upgrades will become 
critical for all operators.  One of the side effects of supporting potentially 
millions of applications (at the high end, and not an unreasonable estimate for 
hosted environments) is that any period of downtime at the API level will 
prevent users from making deployments, which is a direct line-of-business 
concern.  Designing around live upgrades - specifically, the requirement that 
code must be aware of two versions of a schema at the same time - implies that 
the domain model must be able to be aware of those versions on an object basis. 
 For reference, [1] and [2] contain some of the Nova discussion, and Nova in 
icehouse is going to be moving this way.  I'd prefer (it's important to Red 
Hat) to design for that from the beginning and be working towards that end.

Do folks have additional questions or concerns about my exploration of a 
versioned domain object model from day one?  Are there others who would like to 
embrace quick and dirty and explicitly ignore this issue until we have a Solum 
prototype running?  

[1] https://etherpad.openstack.org/p/NovaIcehouseSummitUpgrades
[2] https://etherpad.openstack.org/p/NovaIcehouseSummitObjects
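To make the "code must be aware of two versions of a schema at the same time" requirement concrete, here is a rough sketch loosely modeled on Nova's versioned-object idea. All names are hypothetical; the real mechanism in Nova is richer, but the shape is: an object knows its own version and can downgrade its serialized form for an older peer:

```python
class VersionedObject:
    """Sketch: an object that can downgrade its payload for older readers."""
    VERSION = "1.1"
    fields = ("name", "description")  # 'description' was added in 1.1

    def __init__(self, **kwargs):
        self.data = kwargs

    def obj_to_primitive(self, target_version=None):
        """Serialize, dropping fields the target version doesn't know about."""
        prim = dict(self.data)
        if target_version == "1.0":
            # 1.0 peers predate 'description'; drop it so the wire form
            # matches their schema and they can still deserialize it.
            prim.pop("description", None)
        return {"version": target_version or self.VERSION, "data": prim}
```

During a rolling upgrade, new code serving old code sends the 1.0 form; once everything is upgraded, the full 1.1 form flows, and the API never has to go down for the schema change.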



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-02 Thread Russell Bryant
On 11/01/2013 05:50 PM, Michael Still wrote:
 On Sat, Nov 2, 2013 at 3:30 AM, Russell Bryant rbry...@redhat.com wrote:
 
 I also would not use migrate.  sqlalchemy-migrate is a dead upstream and
 we (OpenStack) have had to inherit it.  For new projects, you should use
 alembic.  That's actively developed and maintained.  Other OpenStack
 projects are either already using it, or making plans to move to it.
 
 This is something I wanted to dig into at the summit in fact, mostly
 because I'm not sure I agree... Sure migrate is now an openstack
project, but so is oslo and we're happy to use that. So I don't think
 it being abandoned by the original author is a strong argument.

I think it is.  If someone else is actively developing and maintaining
something that serves our needs, we should absolutely be using it
instead of something we have to maintain ourselves.  That leaves us to
focus on what's specific to OpenStack.  I mean, it's pretty much the
reason we have thousands of reusable software projects out there...

 It's not clear to me what alembic gives us that we actually want...
 Sure, we could have a non-linear stream of migrations, but we already
 do a terrible job of having a simple linear stream. I don't think
 adding complexity is going to make the world any better to be honest.

Let's assume it stays linear.  The above argument is still enough to
convince me to move.

 These are the kind of issues I wanted to discuss in the nova db summit
 session if people are able to come to that.

Sounds good.

-- 
Russell Bryant



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-02 Thread Russell Bryant
On 11/01/2013 12:46 PM, Clayton Coleman wrote:
 I was also going to throw in migrate as a dependency and put in the glue
 code for that based on common use from ironic/trove/heat.  That'll pull in
 a few openstack common and config settings.  Finally, was going to add a
 solum-dbsync command a la the aforementioned projects.  No schema will be
 added.

 I also would not use migrate.  sqlalchemy-migrate is a dead upstream and
 we (OpenStack) have had to inherit it.  For new projects, you should use
 alembic.  That's actively developed and maintained.  Other OpenStack
 projects are either already using it, or making plans to move to it.

 
 Thanks, did not see it in the projects I was looking at, who's the 
 canonical example here?
 

It looks like at least ceilometer and neutron are using alembic right now.

-- 
Russell Bryant



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Russell Bryant
On 11/01/2013 11:14 AM, Clayton Coleman wrote:
 - Original Message -
 Noorul Islam K M noo...@noorul.com writes:

 Adrian Otto adrian.o...@rackspace.com writes:

 Team,

 Our StackForge code repo is open, so you may begin submitting code for
 review. For those new to the process, I made a will page with links to
 the repo and information about how to contribute:

 https://wiki.openstack.org/wiki/Solum/Contributing


 1. .gitreview file is missing, so I submitted a patch

 https://review.openstack.org/#/c/54877

 
 Once all the gitreview stuff is cleaned up I was going to do some purely 
 mechanical additions.
 
 I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:
 
 solum/db/api.py
   manager abstraction for db calls
 solum/db/sqlalchemy/api.py
   sqlalchemy implementation

I wouldn't just copy this layout, personally.

We should look at getting some of the nova object work into
oslo-incubator.  It provides a nice object model to abstract away the
database API details.  You really don't want to be returning sqlalchemy
models to the rest of the code base if you can get away with it.

If we were starting the Nova database integration work from scratch
today, I'm not sure we'd have db.api and db.sqlalchemy.api.  It seems
like it would make more sense to add the db.api equivalents to our
objects, and sub-class them to add specific database support.

 I was also going to throw in migrate as a dependency and put in the glue code 
 for that based on common use from ironic/trove/heat.  That'll pull in a few 
 openstack common and config settings.  Finally, was going to add a 
 solum-dbsync command a la the aforementioned projects.  No schema will be 
 added.

I also would not use migrate.  sqlalchemy-migrate is a dead upstream and
we (OpenStack) have had to inherit it.  For new projects, you should use
alembic.  That's actively developed and maintained.  Other OpenStack
projects are either already using it, or making plans to move to it.

-- 
Russell Bryant



Re: [openstack-dev] [Solum] Some initial code copying for db/migration (was: Stackforge Repo Ready)

2013-11-01 Thread Clayton Coleman
- Original Message -
 
 Once all the gitreview stuff is cleaned up I was going to do some purely
 mechanical additions.
 
 I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:
 
 solum/db/api.py
   manager abstraction for db calls
 solum/db/sqlalchemy/api.py
   sqlalchemy implementation
 
 I was also going to throw in migrate as a dependency and put in the glue code
 for that based on common use from ironic/trove/heat.  That'll pull in a few
 openstack common and config settings.  Finally, was going to add a
 solum-dbsync command a la the aforementioned projects.  No schema will be
 added.
 
 Objections?
 

I was blindly assuming we want to pull in eventlet support, with the implicit 
understanding that we will be doing some form of timeslicing and async io bound 
waiting in the API... but would like to hear others weigh in before I add the 
monkey_patch and stub code around script startup.



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Clayton Coleman


- Original Message -
 On 11/01/2013 11:14 AM, Clayton Coleman wrote:
  - Original Message -
  Noorul Islam K M noo...@noorul.com writes:
 
  Adrian Otto adrian.o...@rackspace.com writes:
 
  Team,
 
  Our StackForge code repo is open, so you may begin submitting code for
  review. For those new to the process, I made a wiki page with links to
  the repo and information about how to contribute:
 
  https://wiki.openstack.org/wiki/Solum/Contributing
 
 
  1. .gitreview file is missing, so I submitted a patch
 
  https://review.openstack.org/#/c/54877
 
  
  Once all the gitreview stuff is cleaned up I was going to do some purely
  mechanical additions.
  
  I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:
  
  solum/db/api.py
manager abstraction for db calls
  solum/db/sqlalchemy/api.py
sqlalchemy implementation
 
 I wouldn't just copy this layout, personally.
 
 We should look at getting some of the nova object work into
 oslo-incubator.  It provides a nice object model to abstract away the
 database API details.  You really don't want to be returning sqlalchemy
 models to the rest of the code base if you can get away with it.
 
 If we were starting the Nova database integration work from scratch
 today, I'm not sure we'd have db.api and db.sqlalchemy.api.  It seems
 like it would make more sense to add the db.api equivalents to our
 objects, and sub-class them to add specific database support.

Is what you're referring to different than what I see in master:

  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L420
  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L43

?  My assumption was that the db.api manager would be handling that 
translation, and we would define db.api as returning object models, vs 
sqlalchemy models (even if initially they looked similar).  Would the 
abstraction for each model be split into different classes then (so that there 
would be one implementation per model, per backend)?  What about cross model 
operations?

If I describe the model used in other projects as:

  manager class
translates retrieval requests into impl-specific objects
saves impl-specific objects
handles coarse multi object calls

  API
#fetch_somethings(filter)
#save_something

would you say that your model is:

  abstract model class
has methods that call out to an implementation (itself a subclass?) and 
returns subclasses of the abstract class

  Something
#fetch(filter)
#save

SqlAlchemySomething
  #fetch(filter)
  #save

?
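If it helps the discussion, the two shapes being contrasted might be sketched like this. All names are purely illustrative, not any project's real API; the difference is whether the abstraction boundary is a manager module or the model class itself:

```python
# Pattern A: manager/API module. One manager per backend; it returns
# plain data, and callers never construct backend types themselves.
class SqlAlchemyApi:
    """db.api style: a manager that translates requests into impl objects."""
    def __init__(self):
        self._rows = {1: {"id": 1, "name": "thing-one"}}

    def fetch_somethings(self, name):
        return [r for r in self._rows.values() if r["name"] == name]


# Pattern B: the model itself is the abstraction; a backend-specific
# subclass implements retrieval and returns instances of itself.
class Something:
    """Abstract model: fetch/save dispatch to a registered subclass."""
    registry = None  # set to the concrete class in use

    def __init__(self, id, name):
        self.id, self.name = id, name

    @classmethod
    def fetch(cls, id):
        return cls.registry._fetch(id)


class SqlAlchemySomething(Something):
    _rows = {1: ("thing-one",)}

    @classmethod
    def _fetch(cls, id):
        return cls(id, *cls._rows[id])


Something.registry = SqlAlchemySomething
```

Under Pattern B, cross-model operations become either methods on whichever object "owns" the flow, or coarse classmethods, which is part of what the question above is probing.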

 
  I was also going to throw in migrate as a dependency and put in the glue
  code for that based on common use from ironic/trove/heat.  That'll pull in
  a few openstack common and config settings.  Finally, was going to add a
  solum-dbsync command a la the aforementioned projects.  No schema will be
  added.
 
 I also would not use migrate.  sqlalchemy-migrate is a dead upstream and
 we (OpenStack) have had to inherit it.  For new projects, you should use
 alembic.  That's actively developed and maintained.  Other OpenStack
 projects are either already using it, or making plans to move to it.
 

Thanks, did not see it in the projects I was looking at, who's the canonical 
example here?



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Jay Pipes

On 11/01/2013 12:33 PM, Clayton Coleman wrote:

- Original Message -


Once all the gitreview stuff is cleaned up I was going to do some purely
mechanical additions.

I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:

solum/db/api.py
   manager abstraction for db calls
solum/db/sqlalchemy/api.py
   sqlalchemy implementation

I was also going to throw in migrate as a dependency and put in the glue code
for that based on common use from ironic/trove/heat.  That'll pull in a few
openstack common and config settings.  Finally, was going to add a
solum-dbsync command a la the aforementioned projects.  No schema will be
added.

Objections?



I was blindly assuming we want to pull in eventlet support, with the implicit 
understanding that we will be doing some form of timeslicing and async io bound 
waiting in the API... but would like to hear others weigh in before I add the 
monkey_patch and stub code around script startup.


I'm not so sure that bringing in eventlet should be done by default. It 
adds complexity and if most/all of the API calls will be doing some call 
to a native C library like libmysql that blocks, I'm not sure there is 
going to be much benefit to using eventlet versus multiplexing the 
servers using full OS processes -- either manually like some of the 
projects do with the workers=N configuration and forking, or using more 
traditional multiplexing solutions like running many mod_wsgi or uwsgi 
workers inside Apache or nginx.


Best,
-jay




Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Joshua Harlow
I think there is a summit topic about what to do about a good 'oslo.db'
(not sure if it got scheduled?)

I'd always recommend reconsidering just copying what nova/cinder and a few
others have for their db structure.

I don't think that has turned out so well in the long term (a 6000+ line
file is not so good).

As for a structure that might be better, in taskflow I followed more of
how ceilometer does their db api. It might work for you.

- https://github.com/openstack/ceilometer/tree/master/ceilometer/storage
- https://github.com/stackforge/taskflow/tree/master/taskflow/persistence/backends

I also have examples of alembic usage in taskflow, since I also didn't
want to use sqlalchemy-migrate for the same reasons russell mentioned.

- https://github.com/stackforge/taskflow/tree/master/taskflow/persistence/backends/sqlalchemy

Feel free to bug me about questions.

On 11/1/13 9:46 AM, Clayton Coleman ccole...@redhat.com wrote:



- Original Message -
 On 11/01/2013 11:14 AM, Clayton Coleman wrote:
  - Original Message -
  Noorul Islam K M noo...@noorul.com writes:
 
  Adrian Otto adrian.o...@rackspace.com writes:
 
  Team,
 
  Our StackForge code repo is open, so you may begin submitting code for
  review. For those new to the process, I made a wiki page with links to
  the repo and information about how to contribute:
 
  https://wiki.openstack.org/wiki/Solum/Contributing
 
 
  1. .gitreview file is missing, so I submitted a patch
 
  https://review.openstack.org/#/c/54877
 
  
  Once all the gitreview stuff is cleaned up I was going to do some purely
  mechanical additions.
  
  I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:
  
  solum/db/api.py
manager abstraction for db calls
  solum/db/sqlalchemy/api.py
sqlalchemy implementation
 
 I wouldn't just copy this layout, personally.
 
 We should look at getting some of the nova object work into
 oslo-incubator.  It provides a nice object model to abstract away the
 database API details.  You really don't want to be returning sqlalchemy
 models to the rest of the code base if you can get away with it.
 
 If we were starting the Nova database integration work from scratch
 today, I'm not sure we'd have db.api and db.sqlalchemy.api.  It seems
 like it would make more sense to add the db.api equivalents to our
 objects, and sub-class them to add specific database support.

Is what you're referring to different than what I see in master:

  
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L420
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L43

?  My assumption was that the db.api manager would be handling that
translation, and we would define db.api as returning object models, vs
sqlalchemy models (even if initially they looked similar).  Would the
abstraction for each model be split into different classes then (so that
there would be one implementation per model, per backend)?  What about
cross model operations?

If I describe the model used in other projects as:

  manager class
translates retrieval requests into impl-specific objects
saves impl-specific objects
handles coarse multi object calls

  API
#fetch_somethings(filter)
#save_something

would you say that your model is:

  abstract model class
has methods that call out to an implementation (itself a subclass?)
and returns subclasses of the abstract class

  Something
#fetch(filter)
#save

SqlAlchemySomething
  #fetch(filter)
  #save

?

 
  I was also going to throw in migrate as a dependency and put in the glue
  code for that based on common use from ironic/trove/heat.  That'll pull in
  a few openstack common and config settings.  Finally, was going to add a
  solum-dbsync command a la the aforementioned projects.  No schema will be
  added.
 
 I also would not use migrate.  sqlalchemy-migrate is a dead upstream and
 we (OpenStack) have had to inherit it.  For new projects, you should use
 alembic.  That's actively developed and maintained.  Other OpenStack
 projects are either already using it, or making plans to move to it.
 

Thanks, did not see it in the projects I was looking at, who's the
canonical example here?





Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Clayton Coleman


- Original Message -
 On 11/01/2013 12:33 PM, Clayton Coleman wrote:
  - Original Message -
 
  Once all the gitreview stuff is cleaned up I was going to do some purely
  mechanical additions.
 
  I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:
 
  solum/db/api.py
 manager abstraction for db calls
  solum/db/sqlalchemy/api.py
 sqlalchemy implementation
 
  I was also going to throw in migrate as a dependency and put in the glue
  code
  for that based on common use from ironic/trove/heat.  That'll pull in a
  few
  openstack common and config settings.  Finally, was going to add a
  solum-dbsync command a la the aforementioned projects.  No schema will be
  added.
 
  Objections?
 
 
  I was blindly assuming we want to pull in eventlet support, with the
  implicit understanding that we will be doing some form of timeslicing and
  async io bound waiting in the API... but would like to hear others weigh
  in before I add the monkey_patch and stub code around script startup.
 
 I'm not so sure that bringing in eventlet should be done by default. It
 adds complexity and if most/all of the API calls will be doing some call
 to a native C library like libmysql that blocks, I'm not sure there is
 going to be much benefit to using eventlet versus multiplexing the
 servers using full OS processes -- either manually like some of the
 projects do with the workers=N configuration and forking, or using more
 traditional multiplexing solutions like running many mod_wsgi or uwsgi
 workers inside Apache or nginx.
 

What about callouts to heat/keystone APIs?



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Jay Pipes

On 11/01/2013 01:39 PM, Clayton Coleman wrote:



- Original Message -

On 11/01/2013 12:33 PM, Clayton Coleman wrote:

- Original Message -


Once all the gitreview stuff is cleaned up I was going to do some purely
mechanical additions.

I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:

solum/db/api.py
manager abstraction for db calls
solum/db/sqlalchemy/api.py
sqlalchemy implementation

I was also going to throw in migrate as a dependency and put in the glue
code
for that based on common use from ironic/trove/heat.  That'll pull in a
few
openstack common and config settings.  Finally, was going to add a
solum-dbsync command a la the aforementioned projects.  No schema will be
added.

Objections?



I was blindly assuming we want to pull in eventlet support, with the
implicit understanding that we will be doing some form of timeslicing and
async io bound waiting in the API... but would like to hear others weigh
in before I add the monkey_patch and stub code around script startup.


I'm not so sure that bringing in eventlet should be done by default. It
adds complexity and if most/all of the API calls will be doing some call
to a native C library like libmysql that blocks, I'm not sure there is
going to be much benefit to using eventlet versus multiplexing the
servers using full OS processes -- either manually like some of the
projects do with the workers=N configuration and forking, or using more
traditional multiplexing solutions like running many mod_wsgi or uwsgi
workers inside Apache or nginx.



What about callouts to heat/keystone APIs?


Sure, it's possible to do that with eventlet. It's also possible to do 
that with a queueing system. For example, the API server could send an 
RPC message to a queue and a separate process could work on executing 
the individual tasks associated with a particular application build-pack.


The design of Zuul [1], especially with the relatively recent addition 
of Gearman and the nodepool orchestrator [2] that the openstack-infra 
folks wrote would, IMO, be a worthy place to look for inspiration, since 
Solum essentially is handling a similar problem domain -- distributed 
processing of ordered tasks.


Best,
-jay

[1] https://github.com/openstack-infra/zuul
[2] https://github.com/openstack-infra/nodepool



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Clayton Coleman


- Original Message -
  I was blindly assuming we want to pull in eventlet support, with the
  implicit understanding that we will be doing some form of timeslicing and
  async io bound waiting in the API... but would like to hear others weigh
  in before I add the monkey_patch and stub code around script startup.
 
  I'm not so sure that bringing in eventlet should be done by default. It
  adds complexity and if most/all of the API calls will be doing some call
  to a native C library like libmysql that blocks, I'm not sure there is
  going to be much benefit to using eventlet versus multiplexing the
  servers using full OS processes -- either manually like some of the
  projects do with the workers=N configuration and forking, or using more
  traditional multiplexing solutions like running many mod_wsgi or uwsgi
  workers inside Apache or nginx.
 
 
  What about callouts to heat/keystone APIs?
 
 Sure, it's possible to do that with eventlet. It's also possible to do
 that with a queueing system. For example, the API server could send an
 RPC message to a queue and a separate process could work on executing
 the individual tasks associated with a particular application build-pack.

I guess I was asking because event patterns are generally more efficient for 
memory than multiprocess, assuming that the underlying codebase isn't fighting 
the event system at every step.  Are your concerns with eventlet based on that 
mismatch (bugs, problems with eventlet across the various projects and 
libraries that OpenStack uses) or more that you believe we should start, at the 
very beginning, with the pattern of building everything as a distributed 
ordered task flow since we know at least some of our interactions are 
asynchronous?  There are at least a few network IO operations that will be 
synchronous to our flows - while they are not likely to be a large percentage 
of the API time, they may definitely block threads for a period of time.

 
 The design of Zuul [1], especially with the relatively recent addition
 of Gearman and the nodepool orchestrator [2] that the openstack-infra
 folks wrote would, IMO, be a worthy place to look for inspiration, since
 Solum essentially is handling a similar problem domain -- distributed
 processing of ordered tasks.
 
 Best,
 -jay

Familiar with Gearman, will look through nodepool/Zuul.




Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Clayton Coleman

- Original Message -
 I think there is a summit topic about what to do about a good 'oslo.db'
 (not sure if it got scheduled?)

Will look.

 
 I'd always recommend reconsidering just copying what nova/cinder and a few
 others have for their db structure.
 
 I don't think that has turned out so well in the long term (a 6000+ line
 file is not so good).
 
 As for a structure that might be better, in taskflow I followed more of
 how ceilometer does their db api. It might work for you.
 
 - https://github.com/openstack/ceilometer/tree/master/ceilometer/storage
 -

The Connection / Model object paradigm in Ceilometer was what I was assuming 
was recommended and was where I was mentally starting (it's similar but not 
identical to trove, ironic, and heat).  The ceilometer model is what I would 
describe as a resource manager class (Connection) that hides implementation (by 
mapping Sqlalchemy to the Model* objects).  So storage/base.py | 
storage/models.py define a rough domain model.  Russell, is that what you're 
advocating against (because of the size of the eventual resource manager class)?

Here's a couple of concrete storage interaction patterns

  simple application/component/sensor persistence with clean validation back to 
REST consumers
traditional crud, probably 3-8 resources over time will follow this pattern
best done via object model type interactions and then a direct persist 
operation

  elaborate a plan description for the application (yaml/json/etc) into the 
object model
will need to retrieve specific sets of info from the object model
typically one way
may potentially involve asynchronous operations spawned from the initial 
request to retrieve more information

  translate the plan/object model into a HEAT template
will need to retrieve specific sets of info from the object model
typically one way

  create/update a HEAT stack based on changes
likely will set the stack id into the object model
might return within milliseconds or seconds

  provision source code repositories
might return within milliseconds or minutes

  provision DNS
this can take from within milliseconds to seconds, and DNS is likely only 
visible to an API consumer after minutes.

  trigger build flows
this may take milliseconds to initiate, but minutes to complete

The more complex operations are likely separate pluggable service 
implementations (read: abstracted) that want to call back into the object model 
in a simple way, possibly via methods exposed specifically for those use cases.

I *suspect* that Solum will never have the complexity Nova does in persistence 
model, but that we'll end up with at around 20 tables in the first 2 years.  I 
would expect API surface area to be slightly larger than some projects, but not 
equivalent to keystone/nova by any means.

 https://github.com/stackforge/taskflow/tree/master/taskflow/persistence/backends
 
 I also have examples of alembic usage in taskflow, since I also didn't
 want to use sqlalchemy-migrate for the same reasons russell mentioned.
 
 - https://github.com/stackforge/taskflow/tree/master/taskflow/persistence/backends/sqlalchemy
 
 Feel free to bug me about questions.

Thanks



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Dan Smith
   https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L420
   
 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L43

This API and these models are what we are trying to avoid exposing to
the rest of nova. By wrapping these in our NovaObject-based structures,
we can bundle versioned data and methods together which is what we need
for cross-version compatibility and parity for the parts of nova that
are not allowed to talk to the database directly.

See the code in nova/objects/* for the implementations. Right now, these
just call into the db_api.py, but eventually we want to move the actual
database implementation into the objects themselves and hopefully
dispense with most or all of the sqlalchemy/* stuff. This also provides
us the ability to use other persistence backends that aren't supported
by sqlalchemy, or that don't behave like it does.

If you're going to be at the summit, come to the objects session on
Thursday where we'll talk about this in more detail. Other projects have
expressed interest in moving the core framework into Oslo so that we're
all doing things in roughly the same way. It would be good to get you
started on the right way early on before you have the migration hassle
we're currently enjoying in Nova :)
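For anyone following along without the Nova tree handy, the arrangement described here is roughly this shape. It is a heavily simplified, hypothetical sketch, not Nova's real code (which lives in nova/objects/ and nova/db/): the object wraps the db_api calls today, so the db layer can later be moved inside the object without touching callers:

```python
# Simplified stand-in for nova/db/api.py -- the layer being hidden.
_DB = {}

def db_instance_get(ctxt, uuid):
    return _DB[uuid]

def db_instance_update(ctxt, uuid, values):
    _DB.setdefault(uuid, {}).update(values)
    return _DB[uuid]


class Instance:
    """Object wrapper: the rest of the tree talks to this, never db_api."""
    def __init__(self, ctxt, uuid):
        self._ctxt, self.uuid = ctxt, uuid
        self._data = {}
        self._changes = {}

    def __setitem__(self, key, value):
        # Track dirty fields so save() only persists what changed.
        self._changes[key] = value

    @classmethod
    def get_by_uuid(cls, ctxt, uuid):
        obj = cls(ctxt, uuid)
        obj._data = db_instance_get(ctxt, uuid)
        return obj

    def save(self):
        # Today this delegates to db_api; later the persistence backend
        # can move inside the object without changing any caller.
        db_instance_update(self._ctxt, self.uuid, self._changes)
        self._changes = {}
```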

--Dan



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Michael Still
On Sat, Nov 2, 2013 at 3:30 AM, Russell Bryant rbry...@redhat.com wrote:

 I also would not use migrate.  sqlalchemy-migrate is a dead upstream and
 we (OpenStack) have had to inherit it.  For new projects, you should use
 alembic.  That's actively developed and maintained.  Other OpenStack
 projects are either already using it, or making plans to move to it.

This is something I wanted to dig into at the summit in fact, mostly
because I'm not sure I agree... Sure, migrate is now an OpenStack
project, but so is Oslo, and we're happy to use that. So I don't think
its being abandoned by the original author is a strong argument.

It's not clear to me what alembic gives us that we actually want...
Sure, we could have a non-linear stream of migrations, but we already
do a terrible job of having a simple linear stream. I don't think
adding complexity is going to make the world any better, to be honest.
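For readers following along: the "non-linear stream" refers to alembic ordering migrations as a revision graph, where each revision names its parent instead of relying on numbered files. A toy sketch of that idea (not alembic's actual internals):

```python
# Toy model of alembic's revision-graph ordering (illustrative only).
# Each revision records its down_revision (parent); the base revision's
# parent is None. Ordering comes from walking parent links, not from
# sequential file numbers as in sqlalchemy-migrate.

def upgrade_path(revisions, target):
    """Walk parent links from target back to the base, then reverse
    to get the order in which migrations must be applied."""
    path = []
    rev = target
    while rev is not None:
        path.append(rev)
        rev = revisions[rev]
    return list(reversed(path))
```

With a linear history this reduces to exactly what migrate does; the graph form only matters once branches exist.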

These are the kind of issues I wanted to discuss in the nova db summit
session if people are able to come to that.

Michael

-- 
Rackspace Australia



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Clayton Coleman


- Original Message -

  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L420

  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L43
 
 This API and these models are what we are trying to avoid exposing to
 the rest of nova. By wrapping these in our NovaObject-based structures,
 we can bundle versioned data and methods together which is what we need
 for cross-version compatibility and parity for the parts of nova that
 are not allowed to talk to the database directly.
 
 See the code in nova/objects/* for the implementations. Right now, these
 just call into the db_api.py, but eventually we want to move the actual
 database implementation into the objects themselves and hopefully
 dispense with most or all of the sqlalchemy/* stuff. This also provides
 us the ability to use other persistence backends that aren't supported
 by sqlalchemy, or that don't behave like it does.
 
 If you're going to be at the summit, come to the objects session on
 Thursday where we'll talk about this in more detail. Other projects have
 expressed interest in moving the core framework into Oslo so that we're
 all doing things in roughly the same way. It would be good to get you
 started on the right way early on before you have the migration hassle
 we're currently enjoying in Nova :)
 

Good idea, I'll dig through the code on the plane :) 
