Hi Jay,

Comments below...

On 7/24/2017 8:01 AM, Jay Pipes wrote:
+Dan Smith

Good morning Mike :) Comments inline...

On 07/23/2017 08:05 PM, Michael Bayer wrote:
On Sun, Jul 23, 2017 at 6:10 PM, Jay Pipes <[email protected]> wrote:
Glad you brought this up, Mike. I was going to start a thread about this.
Comments inline.

On 07/23/2017 05:02 PM, Michael Bayer wrote:
Well, besides that point (which I agree with), that is attempting to change
an existing database schema migration, which is a no-no in my book ;)

OK this point has come up before and I always think there's a bit of
an over-broad kind of purism being applied (kind of like when someone
says, "never use global variables ever!" and I say, "OK so sys.modules
is out right ?" :)  ).

I'm not being a purist. I'm being a realist :) See below...

I agree with "never change a migration" to the extent that you should
never change the *SQL steps emitted for a database migration*. That
is, if your database migration emitted "ALTER TABLE foobar foobar
foobar" on a certain target databse, that should never change. No
problem there.

However, what we're doing here is adding new datatype logic for the
NDB backend, which is necessarily different; not to mention that NDB
requires more manipulation of constraints to make certain changes
happen.  To make all that work, the *Python code that emits the SQL
for the migration* needs to have changes made, mostly (I would say
completely) in the form of new conditionals for NDB-specific concepts.
In the case of the datatypes, the migrations will need to refer to
a SQLAlchemy type object that's been injected with the ndb-specific
logic when the NDB backend is present; I've made sure that when the
NDB backend is *not* present, the datatypes behave exactly the same as
before.
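
To make that concrete, here's a minimal sketch of the kind of type
injection I mean (the class name, the size cutoff, and the _is_ndb
flag are illustrative assumptions, not the actual oslo.db API):

    # Sketch only -- names and the size cutoff are assumptions, not
    # the real oslo.db implementation.
    from sqlalchemy import String, Text
    from sqlalchemy.types import TypeDecorator

    class AutoString(TypeDecorator):
        """A String that degrades to TEXT under NDB, whose row-size
        limit large VARCHAR columns would otherwise exceed."""

        impl = String
        cache_ok = True

        def load_dialect_impl(self, dialect):
            # Hypothetical marker set when the "ndb" flag is enabled;
            # real detection in oslo.db may differ.
            is_ndb = getattr(dialect, "_is_ndb", False)
            if is_ndb and (self.impl.length or 0) > 4095:
                return dialect.type_descriptor(Text())
            # Every other backend sees exactly the same type as before.
            return dialect.type_descriptor(self.impl)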

No disagreement here.

So basically, *SQL steps do not change*, but *Python code that emits
the SQL steps* necessarily has to change to accommodate the case when
the "ndb" flag is present, because these migrations have to run on
brand new ndb installations in order to create the database. If Nova
and others did the initial "create database" without using the
migration files and instead used a create_all(), things would be
different, but that's not how things work (and also it is fine that
the migrations are used to build up the DB).
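
For concreteness, the shape such a change takes in a
sqlalchemy-migrate-style script might be something like the sketch
below; is_ndb() is a stand-in for whatever detection helper oslo.db
ends up providing:

    # Sketch of an existing migration gaining an ndb conditional. The
    # SQL emitted for InnoDB/PostgreSQL stays exactly what it was;
    # only the new ndb branch differs. is_ndb() is hypothetical.
    from sqlalchemy import Column, Integer, MetaData, String, Table

    def is_ndb(migrate_engine):
        # Assumption: some reliable way to detect an NDB deployment.
        return getattr(migrate_engine, "_ndb_enabled", False)

    def upgrade(migrate_engine):
        meta = MetaData()
        kwargs = {}
        if migrate_engine.name == "mysql":
            kwargs["mysql_engine"] = ("NDBCLUSTER"
                                      if is_ndb(migrate_engine)
                                      else "InnoDB")
        foobar = Table("foobar", meta,
                       Column("id", Integer, primary_key=True),
                       Column("name", String(255)),
                       **kwargs)
        foobar.create(migrate_engine)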

So, I see your point here, but my concern is that if we *modify* an existing schema migration, one that has already been tested to properly apply a schema change on MySQL/InnoDB and PostgreSQL, by adding NDB-specific code, we introduce the potential for bugs where users report that the same migration works sometimes but fails other times.

I don't think the testing issues should be a concern here, because I've been working to make sure the tests work with both InnoDB and NDB. It's a pain, but again, we are only talking about a handful of services. The bottom line is that if you are not using NDB, these changes have zero effect on your setup.


I would much prefer to *add* a brand-new schema migration that, from a certain point onward, converts the entire InnoDB schema to an NDB-compatible one. That way, we isolate the NDB changes to a single schema migration and can point users to it in case bugs arise. This is the reason we add a number of "placeholder" schema migration files every release: to handle situations such as these.
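
Roughly, such a conversion migration might look like the sketch below; the table list and the detection check are illustrative only:

    # Sketch: one migration, taking over a placeholder slot, that
    # converts an InnoDB schema to NDB and is a no-op everywhere else.
    from sqlalchemy import text

    TABLES = ("instances", "block_device_mapping")  # illustrative subset

    def upgrade(migrate_engine):
        if migrate_engine.name != "mysql":
            return
        # Assumption: some reliable way to detect an NDB deployment.
        if not getattr(migrate_engine, "_ndb_enabled", False):
            return  # InnoDB/PostgreSQL deployments stay untouched
        with migrate_engine.connect() as conn:
            for name in TABLES:
                # Table names can't be bound parameters, hence the
                # string interpolation.
                conn.execute(text("ALTER TABLE %s ENGINE=NDBCLUSTER"
                                  % name))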

The only problem with this approach is that it assumes you are starting from InnoDB, which is not the use case here. This is for new installations, or ones that started out on NDB, so the base schema scripts themselves have to work against NDB from the start.

I understand that Oracle wants to support older versions of OpenStack in their distribution, and that's totally cool with me. But the proper way, IMHO, to do this kind of thing is to take one of the placeholder migrations and use it as the NDB-conversion migration. I would posit that, since Oracle will need to keep a not-insignificant amount of Python code in their distribution fork of Nova in order to bring in the oslo.db and Nova NDB support, it will actually be *easier* for them to maintain a *separate* placeholder schema migration for all NDB conversion work than to change an existing schema migration with a new patch.

And this is the whole point of the work I'm doing: getting it upstream so that others can benefit and so that we don't have to waste cycles maintaining custom code. Instead, we do all of the work upstream, which will enable our customers to upgrade more easily from one release to another. FYI, we have been using NDB since version 2 of our product; we are working on version 4 right now.


All the best,
-jay


--

Octave J. Orgeron | Sr. Principal Architect
Certified Enterprise Architect
Oracle Linux OpenStack
Mobile: +1-720-616-1550
500 Eldorado Blvd. | Broomfield, CO 80021
