On Jan 15, 2014, at 8:07 AM, Dimitris Theodorou <[email protected]> wrote:

> I've been working with the tometadata() function and explicitly specifying
> schemas, and it looks like it will work. However I am stuck at the following
> issue: When copying a table to a new schema, any copied foreign keys will
> point to a table within the same new schema. I am in a situation where some
> of my table's foreign keys point to tables in the shared schema (the schema
> that contains common tables). Thus, tometadata() fails with a
> NoReferencedTableError when these keys are first accessed. Trying to work
> around this one.

The ideal case would be if we had hooks in the copy() process such that you
could intercept when it copies each ForeignKeyConstraint. But that hook isn't
there.

This is going to sound awful, but I think you might try implementing a version
of tometadata() yourself which ensures that the "schema" argument is passed,
or not, to each ForeignKeyConstraint object, based on rules that are specific
to your actual tables; perhaps a particular set of table names is known as the
"schema-less" names. If you had a function
"lookup_correct_schema(some_table_name)", you could inject it like this:

from sqlalchemy import Table, Index, ForeignKeyConstraint

def _get_table_key(name, schema):
    if schema is None:
        return name
    else:
        return schema + "." + name

def tometadata(table, metadata, schema):
    key = _get_table_key(table.name, schema)
    if key in metadata.tables:
        return metadata.tables[key]

    args = []
    for c in table.columns:
        args.append(c.copy(schema=schema))
    new_table = Table(
        table.name, metadata, schema=schema,
        *args, **table.kwargs
    )
    for c in table.constraints:
        if isinstance(c, ForeignKeyConstraint):
            # route the referenced table's name through your own lookup, so a
            # copied FK can keep pointing at a table in the shared schema
            constraint_schema = lookup_correct_schema(
                c.elements[0].column.table.name)
        else:
            constraint_schema = schema
        new_table.append_constraint(
            c.copy(schema=constraint_schema, target_table=new_table))

    for index in table.indexes:
        # skip indexes that would be generated
        # by the 'index' flag on Column
        if len(index.columns) == 1 and \
                list(index.columns)[0].index:
            continue
        Index(index.name,
              unique=index.unique,
              *[new_table.c[col] for col in index.columns.keys()],
              **index.kwargs)
    return table._schema_item_copy(new_table)

I'm not sure what other way the current hooks could help with this, but this
technique at least puts you right in the center of it.


> On Tuesday, January 14, 2014 11:22:36 PM UTC+1, Michael Bayer wrote:
>
> On Jan 14, 2014, at 4:56 PM, Dimitris Theodorou <[email protected]>
> wrote:
>
>> Tried your approach but I can't get past the inspector wall.
>>
>> I set the search_path using connection.execute(), then call
>> context.configure(connection, #etc..). In the diff between the metadata and
>> the inspector results, my metadata will always contain schema-less tables
>> (due to setting search_path in advance), while the inspector will either
>> (with include_schemas=True) report tables with schemas, which always ends
>> up with the "Detected removed table" result, or (with include_schemas=False)
>> report only the tables in the 'public' schema, which are useless.
>> Any pointers on configuring the inspector to disregard schemas or use the
>> existing search_path on the connection?
>
> OK, I know why that is: the engine is detecting the "default schema" when it
> first connects, then the inspector actually takes schema into account and
> uses it in the information schema queries, so it's not going to find things.
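
To make that behavior concrete, here is a minimal sketch (the connection URL
and the "client1" schema name are placeholders, not anything from this
thread):

from sqlalchemy import create_engine, inspect

# placeholder URL, just to illustrate the behavior described above
engine = create_engine("postgresql://user:pass@localhost/mydb")
inspector = inspect(engine)

# with no schema argument, the Inspector queries the default schema the
# dialect detected when the engine first connected (typically "public");
# a search_path set afterwards on a connection is not consulted
print(inspector.get_table_names())

# tables in a per-client schema are only found when the schema is explicit
print(inspector.get_table_names(schema="client1"))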
>
> OK well, there is a ticket to make what you want possible at the connection
> level; it has a patch attached but needs more work. That would allow you to
> say conn = connection.execution_options(default_schema='foo'), but that's not
> ready yet. There are ways you can get this using execution events too, e.g.
> you can intercept the inspector queries and inject a different schema name,
> but that will be intricate.
>
> So let's go back to the metadata. You can make a copy of a given MetaData
> object for a new schema, so maybe do it this way:
>
> meta = my_original_metadata
> meta_schemax = MetaData()
>
> for table in meta.tables.values():
>     table.tometadata(meta_schemax, schema="schemax")
>
> so then you'll have another MetaData (or you can actually lump them all into
> one MetaData() if you want) with multiple copies of the same Table with a
> different schema. Then if you do autogen with include_schemas, that would
> do the entire thing in one pass (optionally using the “include_object” hook
> to limit what schemas/tables are examined).
>
> Sorry for the reversal here; if I were just trying it out I'd probably settle
> on something sooner.
>
>
>> On Tuesday, January 14, 2014 5:40:22 PM UTC+1, Michael Bayer wrote:
>> This is a hard problem, but my notion is that since you are setting up the
>> “schema” by manipulating search_path, you should be making sure this is set
>> to what it needs to be before autogenerate runs.
>>
>> If you want to autogenerate for three schemas, you'd modify your env.py to
>> run “context.run_migrations()” three times, each time setting “search_path”
>> on the connection ahead of time to the appropriate value. That is, I think
>> this should be approached like a “multiple database” problem, where you'd
>> also have an alembic_version table in each schema separately.
>>
>> I would not get into setting “schema” on the MetaData object, as it is more
>> portable just to have it set up on the connection.
>>
>>
>> On Jan 14, 2014, at 9:16 AM, Dimitris Theodorou <[email protected]>
>> wrote:
>>
>>> My progress so far has been to configure my MetaData.schema before
>>> importing my declarative objects (which is going to associate ALL my
>>> tables with that schema), and then filter on the tables that I know belong
>>> to that particular schema. However, that only adds a limited subset of the
>>> tables to the created version script; I want to add all modifications in
>>> one script.
>>>
>>> On Tuesday, January 14, 2014 3:09:55 PM UTC+1, Dimitris Theodorou wrote:
>>> Hi,
>>>
>>> Also posted this question at
>>> http://stackoverflow.com/questions/21109218/alembic-support-for-multiple-postgres-schemas,
>>> not sure what the best place to continue is.
>>>
>>> My problem is the following:
>>>
>>> I have one SQLAlchemy model with no schema specifications. In my database,
>>> though, I duplicate several tables over multiple schemas, which correspond
>>> to different application users; each of these schemas contains a subset of
>>> the model's tables. The schema is set at application run-time to the proper
>>> value based on the logged-in user, with
>>> session.execute("SET search_path TO client1,shared") for example. I also
>>> have a shared schema which contains some common tables; the rest of the
>>> tables are duplicated over the multiple per-user schemas.
>>>
>>> I want to use Alembic's --autogenerate to migrate all my schemas.
>>> The default --autogenerate behavior is to detect multiple schemas in the
>>> database that do not exist in the model, and it ends up deleting those
>>> schemas and re-creating every table in the default schema.
>>>
>>> I would really like to use --autogenerate, though, with the proper plumbing
>>> to set the schemas correctly. Any suggestions on if/how Alembic's API can
>>> do this?
>>>
>>> Regards,
>>> Dimitris
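
A rough env.py-style sketch of the MetaData-copy approach suggested above; the
tenant schema names, the helper function name, and the include_object filter
are illustrative assumptions rather than anything prescribed in the thread:

from sqlalchemy import MetaData
from alembic import context

def run_autogen_for_all_schemas(connection, target_metadata):
    # hypothetical env.py helper: "target_metadata" is the application's
    # schema-less MetaData, "connection" is the usual env.py connection,
    # and the tenant schema names are placeholders
    tenant_schemas = ["client1", "client2"]

    # copy every model table into a single MetaData, once per tenant schema
    autogen_metadata = MetaData()
    for schema in tenant_schemas:
        for table in target_metadata.tables.values():
            table.tometadata(autogen_metadata, schema=schema)

    def include_object(object_, name, type_, reflected, compare_to):
        # compare only the schemas being managed here, so other schemas in
        # the database aren't reported as removed tables
        if type_ == "table":
            return object_.schema in tenant_schemas
        return True

    context.configure(
        connection=connection,
        target_metadata=autogen_metadata,
        include_schemas=True,
        include_object=include_object,
    )
    with context.begin_transaction():
        context.run_migrations()

Note that the stock tometadata() used in this sketch still carries the
shared-schema foreign key problem discussed at the top of this message; the
hand-rolled tometadata() above could be substituted where shared tables are
involved.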
