I have a couple of tables that I want to reflect. The first is a data table 
where one of the columns is a foreign key to the second table.

If I used SQLAlchemy declarative_base, a query might look something like 
this:

session.query(Client.name, Suburb.label).join(Suburb)
# In the Client class there is an attribute:
#     suburb_id = Column(Integer, ForeignKey(Suburb.id))

However, this foreign key is not declared in the database schema (we're using 
Postgres 9.2). We do know that every column named like something_id is a 
foreign key, so I've been declaring them as such in my SQLAlchemy definitions.
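For reference, the hand-written declarative models look roughly like this 
(names simplified, and "client" here stands in for one of the per-client data 
tables):

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Suburb(Base):
    __tablename__ = 'suburb'
    id = Column(Integer, primary_key=True)
    label = Column(String)

class Client(Base):
    __tablename__ = 'client'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # The database itself has no FK on this column; declaring one here
    # anyway lets SQLAlchemy derive the join condition in
    # query(...).join(Suburb) on its own.
    suburb_id = Column(Integer, ForeignKey(Suburb.id))
```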


My problem is that although we have a fixed set of property tables (suburb, 
country, join_date, ...), each per-client data table can have a different 
set of columns.

This hasn't been much of a problem so far, since we only have a few *types* of 
client data table and the combinations have been limited. However, I'd like 
to cater for changes in the future.

If I reflect a table using SQLAlchemy, the resulting Table object will not 
have the ForeignKey columns it would have if I defined it manually. Is there 
a way to add these in after reflection?
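Something along these lines is what I'm hoping exists — this is a sketch 
against an in-memory SQLite database standing in for our Postgres schema, 
and I'm not sure Table.append_constraint is the sanctioned way to do it:

```python
from sqlalchemy import (MetaData, Table, ForeignKeyConstraint,
                        create_engine, text)

# In-memory SQLite stand-in for the real Postgres schema; note the
# deliberate absence of a FOREIGN KEY on client_table_1.suburb_id.
engine = create_engine('sqlite://')
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE suburb (id INTEGER PRIMARY KEY, label TEXT)"))
    conn.execute(text(
        "CREATE TABLE client_table_1 (name TEXT, suburb_id INTEGER)"))

metadata = MetaData()
suburb = Table('suburb', metadata, autoload_with=engine)
client_table_1 = Table('client_table_1', metadata, autoload_with=engine)

# Bolt the missing foreign key onto the already-reflected table, so
# the join condition can be derived without spelling it out each time.
client_table_1.append_constraint(
    ForeignKeyConstraint(['suburb_id'], ['suburb.id']))
```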

Or is my only option to use reflected tables with explicit join conditions? 
Something like:

client_table_1 = Table('client_table_1', metadata, autoload=True,
                       autoload_with=engine, schema='client_1')
session.query(client_table_1.c.name, Suburb.label).join(
    Suburb, client_table_1.c.suburb_id == Suburb.id)
# Explicit joins only from now on
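The other approach I've seen mentioned in the reflection docs is overriding 
individual columns at reflection time, so the ForeignKey is there from the 
start. A sketch (again using an in-memory SQLite stand-in, so the schema= 
argument from above is dropped):

```python
from sqlalchemy import (Column, ForeignKey, Integer, MetaData, Table,
                        create_engine, text)

# In-memory SQLite stand-in; the real schema lives in Postgres and has
# no FOREIGN KEY declared on client_table_1.suburb_id.
engine = create_engine('sqlite://')
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE suburb (id INTEGER PRIMARY KEY, label TEXT)"))
    conn.execute(text(
        "CREATE TABLE client_table_1 (name TEXT, suburb_id INTEGER)"))

metadata = MetaData()
suburb = Table('suburb', metadata, autoload_with=engine)

# Only suburb_id is overridden; every other column is still reflected
# from the database as usual.
client_table_1 = Table(
    'client_table_1', metadata,
    Column('suburb_id', Integer, ForeignKey('suburb.id')),
    autoload_with=engine,
)
```

But that still means knowing the something_id columns per table up front, 
which is what I was hoping reflection would save me from.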


-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.