On Fri, 20 May 2011 00:52:28 -0400, Michael Bayer <[email protected]>
wrote:
>
> On May 19, 2011, at 5:24 PM, Faheem Mitha wrote:
>> Unfortunately, that is not true. (So I guess just leaving the
>> structure alone and switching dbs will not work.) There are 4
>> possible different database layouts. Also, there can be multiple
>> schemas in each database.
> So you have a model with a set of tables, the tables are split
> amongst multiple schemas, say schemas A, B, and C. You then have
> four types of databases, each of which have the identical set of
> table designs, except the actual names of *just the schemas*,
> i.e. A, B, and C, randomly change.
Well, let me try to be clear here, for the record. I have a bunch of
schemas across databases. Let us assume that all schemas are in one
database and that there are k schemas. Each of these k schemas contains
a dataset of 9 tables. Now, this collection of 9 tables varies in
structure - there are 4 different possible structures/layouts for
these 9 tables (confusingly, these layouts are also called schemas),
but the table names used in all the different layouts are the same,
say A, B, C, ..., I. So, there are a total of 9k tables. Most of the
tables are the same across layouts; just a couple of them differ. So,
my application may switch from one set of tables to another set of
tables. I hope it is not quite as ridiculous as it sounds.
Unfortunately, I haven't had the help of a database professional on
this project.
> That seems ridiculous. I would absolutely name the schemas
> consistently. Or name the tables distinctly in the schemas so that
> search_path could be used. Very unfortunate that PG doesn't support
> synonyms.
Well, the code for handling these different layouts is essentially the
same, so I've mostly used the same routines throughout, and
encapsulated the differences using object orientation. This would be
more difficult if I started using different names for the different
tables.
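Roughly, the encapsulation looks something like this (all names here
are invented for illustration; the real classes hold the SQLAlchemy
table definitions for each layout):

```python
class BaseLayout(object):
    """Routines shared by all four layouts of the 9-table dataset."""

    # Tables A through I; most are identical in every layout.
    table_names = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']

    def definition(self, name):
        return 'common definition of %s' % name


class LayoutTwo(BaseLayout):
    """One of the four layouts; only a couple of tables differ."""

    def definition(self, name):
        if name in ('B', 'C'):
            return 'layout-two variant of %s' % name
        return BaseLayout.definition(self, name)
```

The calling code works with a BaseLayout instance throughout and
doesn't care which variant it actually has.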
Not sure what you mean by "name the schemas consistently".
> Since I know you aren't going for that, there's some other Python
> tricks all of which are more complex, or interfere with how mapper()
> works in such a way that I can't guarantee ongoing compatibility,
> than just wiping out everything with clear mappers and re-mapping.
> I would keep the MetaData for each set of tables in a dictionary and
> pull the appropriate set of tables out for each use, send them into
> mapper(). I'd create the three copies of each Table from the
> original one using table.tometadata(schema='newschema').
> # runs only once
> metadatas = {
>     'one':MetaData(),
>     'two':MetaData(),
>     'three':MetaData(),
>     'four':MetaData(),
> }
>
> # runs only once, per table
> def table(name, *args, **kw):
>     t = Table(name, metadatas['one'], *args, **kw)
>     t.tometadata(metadatas['two'], schema='two')
>     t.tometadata(metadatas['three'], schema='three')
>     t.tometadata(metadatas['four'], schema='four')
Wow, never heard of tometadata before. Google thought I might be
searching for metadata. :-)
At http://readthedocs.org/docs/sqlalchemy/rel_0_6_6/core/schema.html I
found this definition of tometadata:
*****************************************************************
tometadata(metadata, schema=<symbol 'retain_schema'>)
Return a copy of this Table associated with a different MetaData.
E.g.:
# create two metadata
meta1 = MetaData('sqlite:///querytest.db')
meta2 = MetaData()
# load 'users' from the sqlite engine
users_table = Table('users', meta1, autoload=True)
# create the same Table object for the plain metadata
users_table_2 = users_table.tometadata(meta2)
***********************************************************
Does tometadata just retrieve the new table by name? Not completely
clear from the context.
Not sure what is happening here. The idea is to get the equivalent of
t from a different metadata, yes? So wouldn't this be like
def table(name, number, *args, **kw):
    t = Table(name, metadatas['one'], *args, **kw)
    # Return the table corresponding to t from metadatas[number]
    return t.tometadata(metadatas[number], schema=number)
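To check my understanding, here is a small experiment (no engine
needed, just two MetaData objects; note that tometadata was renamed
to to_metadata in SQLAlchemy 1.4, so I allow for either spelling):

```python
from sqlalchemy import Column, Integer, MetaData, Table

meta1 = MetaData()
meta2 = MetaData()

t = Table('users', meta1, Column('id', Integer, primary_key=True))

# tometadata was renamed to to_metadata in SQLAlchemy 1.4; use
# whichever spelling this version provides.
copy = getattr(t, 'to_metadata', None) or t.tometadata
t2 = copy(meta2, schema='two')

# So tometadata does not retrieve anything by name - it *copies* the
# Table definition into the target MetaData, registering it there
# under the schema-qualified key 'two.users'.
assert t2 is not t
assert t2.schema == 'two'
assert 'two.users' in meta2.tables
```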
> then to map a class:
> mapper(cls, metadatas['two'].tables['some_table'])
This would need to loop over all tables, yes? Here tables is some
other dictionary? I think I'm missing some context. Off the top of my
head, something like

for cls, name in zip(classnames, tablenames):
    mapper(cls, metadatas['two'].tables[name])

perhaps?
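Or, spelling the whole suggestion out (a sketch; the A/B classes are
invented, and I've left the actual mapper() call as a comment since
that is the part I'm unsure about - in 1.4+ it would be
registry().map_imperatively instead of the classic mapper()):

```python
from sqlalchemy import Column, Integer, MetaData, Table

# One MetaData per layout, as suggested.
metadatas = {key: MetaData() for key in ('one', 'two', 'three', 'four')}

def table(name, *args, **kw):
    # Define against layout 'one', then copy into the other three.
    t = Table(name, metadatas['one'], *args, **kw)
    for key in ('two', 'three', 'four'):
        # tometadata was renamed to to_metadata in SQLAlchemy 1.4.
        (getattr(t, 'to_metadata', None) or t.tometadata)(metadatas[key], schema=key)

table('A', Column('id', Integer, primary_key=True))
table('B', Column('id', Integer, primary_key=True))

class A(object): pass
class B(object): pass

# Pull each Table out of the chosen layout's MetaData; note that
# schema-qualified tables are keyed as 'schema.name' in .tables.
for cls, name in [(A, 'A'), (B, 'B')]:
    tbl = metadatas['two'].tables['two.%s' % name]
    # mapper(cls, tbl)  # classic mapping, as in the suggestion
    assert tbl.schema == 'two'
```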
Regards, Faheem
--
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to
[email protected].
For more options, visit this group at
http://groups.google.com/group/sqlalchemy?hl=en.