On Jan 15, 2013, at 6:48 PM, Hetii wrote:

> Welcome.
>
> I have a somewhat rare scenario: I need to be able to work with over 1500
> remote databases, and I cannot change this fact.
>
> Reflecting all of them is not possible, because it consumes too much time
> and resources, so I need to generate a model for them.
>
> Of course this process also takes a long time, and depends on the dialect
> inspector implementation. For example, getting a single table's column
> definitions emits one query in the MySQL dialect, but four in PostgreSQL.
>
> Even when I dump all of them into a declarative base model, it is still a
> huge amount of data that needs to be parsed and loaded.
>
> I want to ask whether it is possible to share table/column definitions
> across different database models, to reduce the amount of resources used.
>
> If someone has an idea how to organize/optimize the structure for such an
> application model, please share :)
Are the schemas in all 1500 databases identical? In that case you only need to reflect once into a MetaData object, and then you can share that object with as many engines/connections as you want. The MetaData and your model are tied to a particular schema design, not to a database connection.

If the schemas are *not* identical, then we might do some reductionist thinking. Your app would have SQL queries that work against "all" of the databases, suggesting that it only cares about a "common denominator" of table/column definitions. In that case the schema you reflect from database #1 is still usable; you just wouldn't refer to any of the tables/columns that aren't common to all schemas.

If the schemas are completely different, then there's no sharing to be had anyway.

--
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
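A minimal sketch of the "reflect once, share the MetaData across engines" approach described above. The `users` table and in-memory SQLite engines are hypothetical stand-ins for the 1500 remote databases; with identical schemas, only the first database ever needs to be inspected:

```python
from sqlalchemy import MetaData, create_engine, select

# Hypothetical stand-ins for two remote databases with identical schemas.
engines = [create_engine("sqlite://") for _ in range(2)]
for eng in engines:
    with eng.begin() as conn:
        conn.exec_driver_sql(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"
        )
        conn.exec_driver_sql("INSERT INTO users (name) VALUES ('alice')")

# Reflect exactly once, against the first database only.
metadata = MetaData()
metadata.reflect(bind=engines[0])
users = metadata.tables["users"]

# The same Table object can then be queried against every engine;
# no per-database reflection is needed.
for eng in engines:
    with eng.connect() as conn:
        rows = conn.execute(select(users)).fetchall()
```

The key point is that reflection populates a plain in-memory MetaData collection; nothing in it is bound to the connection it was reflected from, so the per-database cost drops to just opening a connection.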
