On 08/01/2013 11:40 AM, Dolph Mathews wrote:

On Wed, Jul 31, 2013 at 5:00 AM, Henry Nash <hen...@linux.vnet.ibm.com> wrote:

    Hi Adam,

    Wanted to just give you more detail on the issue I keep pressing
    on for your change (https://review.openstack.org/#/c/36731/).

    For extensions which create their own "private" tables, I totally
    get it.  I'd like, however, to understand what happens for a more
    complex extension.  Let's imagine an (only-partially) hypothetical
    example of an extension that does (all of) the following:

    1) It adds or changes the use of some columns in existing core
    tables, and has migrations and code that goes along with that.
    2) It adds a new "private" table, and has all the code to handle that


I see the need for quotation marks here as a big red flag: what exactly is the use case we're solving for? The quotes imply to me that we're simply moving code around within the git repo as a refactor, with no real gain.

The code in question was a table that linked endpoints to projects. This won't fly, as endpoints and projects are in separate backends. Theoretically, both could come from a backing store other than SQL. The solution to this particular problem is:

As an extension, put it in a table with values that point to the public IDs for endpoints and projects.
xor
Make the table part of the assignments backend, and give it a fk constraint to projects.
xor
Make the table part of the catalog backend and give it a fk constraint to endpoints.

As it is going in as an extension, it has to be the first option.
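
To make the first option concrete, here is a minimal sketch using sqlalchemy declarative models (the table and column names are illustrative, not an actual schema): the extension stores only the public IDs as plain strings, with no foreign keys into the catalog or assignment backends, so it works even when those backends are not SQL.

import sqlalchemy as sql
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class ProjectEndpoint(Base):
    """Extension-private link table (illustrative names)."""
    __tablename__ = 'project_endpoint'
    # Store public IDs only: the referenced endpoint and project may
    # live in non-SQL backends, so deliberately no sql.ForeignKey().
    endpoint_id = sql.Column(sql.String(64), primary_key=True)
    project_id = sql.Column(sql.String(64), primary_key=True)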




If the goal is to allow an extension to be backed by some sql store that is not shared by the rest of keystone (which is, I thought, how this conversation started?), then I'm not clear on how separate migration repos would be the first step in that direction. If the extension owns its own sql store, and therefore its own sqlalchemy engine, only then does it make sense (to me) to bundle the extension with its own migration repo. That would make "private" tables /actually/ private, given that they'd be configurable with unique sqlalchemy connection strings, etc.
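
Roughly, that end state might look like this (the idea of a per-extension connection option and the URLs below are hypothetical):

import sqlalchemy

# Keystone's shared engine, built from the [sql] connection string in
# keystone.conf, versus a private engine owned by the extension.
core_engine = sqlalchemy.create_engine('sqlite:///keystone.db')
extension_engine = sqlalchemy.create_engine('sqlite:///endpoint_filter.db')

# The extension's tables would be created and migrated only against
# extension_engine, so they stay private to the extension's store.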

What it does is split the migration version for the extension from the rest of Keystone. We could drop the Endpoint table altogether, but the endpoint-filtering plugin would still work. Core tables don't care about extensions.
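
For concreteness: sqlalchemy-migrate, which keystone uses for migrations, records each repository's version as its own row in the migrate_version table, so the core repo and an extension repo can be upgraded independently against the same database. A minimal sketch, with illustrative paths:

from migrate.versioning import api as versioning_api

# Illustrative connection string and repo paths.
DB_URL = 'sqlite:///keystone.db'
CORE_REPO = 'keystone/common/sql/migrate_repo'
EXTENSION_REPO = 'keystone/contrib/endpoint_filter/migrate_repo'

# Each repo gets its own row in the migrate_version table (keyed by
# repository id), so the two upgrades below are versioned
# independently; both repos must first be placed under version
# control with versioning_api.version_control().
versioning_api.upgrade(DB_URL, CORE_REPO)
versioning_api.upgrade(DB_URL, EXTENSION_REPO)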

It will also greatly reduce the rebase overhead of database migration patches.

My patch does run the migrations for all extensions, but it does not have to do so. It might make sense, either as part of this patch or as a follow-on, to provide command line switches to show the set of extensions that have repos, and to show the versions of those extensions in the database pointed to by the connection string. There is another blueprint for supporting multiple SQL sources, and, with that, we could potentially map extensions to different databases than the core repo. We can also add a command line parameter that explicitly specifies the extension for which to run the migrations.
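
A sketch of what those switches might report, assuming an extension "has a repo" when it ships a migrate_repo directory (the paths and helper names below are hypothetical):

import os

from migrate.versioning import api as versioning_api

CONTRIB_DIR = 'keystone/contrib'  # illustrative location of extensions


def extension_repos():
    # An extension "has a repo" if it ships a migrate_repo directory.
    for name in sorted(os.listdir(CONTRIB_DIR)):
        repo = os.path.join(CONTRIB_DIR, name, 'migrate_repo')
        if os.path.isdir(repo):
            yield name, repo


def show_versions(connection_string):
    # Report each extension's current migration version in the
    # database pointed to by the connection string.
    for name, repo in extension_repos():
        version = versioning_api.db_version(connection_string, repo)
        print('%s: %s' % (name, version))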


    3) New APIs etc. to create new REST calls to drive the extension

    It is part 1) in the above that I am trying to understand how we
    would implement in this new model.  What I am imagining is that
    the best way to do 1) is to break (at least part of it) out of
    the extension and make it a core patch.  This would cover
    modifications to core columns and changing any core code to make
    sure that such changes were benign to the rest of core (and
    indeed any other extensions).  Migrations for this part of the
    schema change would be in the core repo.  Our new extension would
    then build on this, have its private new table in its own repo and
    any unique code in contrib.  Is that how you imagined this working?

    This hypothetical example is, of course, not too far from reality
    - the recent change I did for inherited roles
    (https://review.openstack.org/#/c/35986/) is an example that comes
    close to the above - and it would seem to me that it would be much
    safer (from a code dependency point of view) to have the DB
    changes done separately and integrated into core, with the
    extension, in this case, just using the advantages of the new
    schema to provide its functionality.

    Henry






--

-Dolph


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
