On Jun 7, 2006, at 6:57 PM, Mike Orr wrote:

> Hi everyone.  I took another stab at my chemical database program with
> SQLAlchemy 0.2.2 and got the foreign-character and other issues
> straightened out.  My question now is, what does it mean that a
> DynamicMetaData can be bound to multiple engines?  Does that mean
> simultaneously, or switching from one to another?  I've got an API
>

It has a thread-local "engine" property.  When you say
dynamicmetadata.connect('something'), it's connected to that engine
within the current thread only.  Other threads have to do their own
connect(), or else they will get a not-connected error.  When you call
connect() again, it connects to a new engine within the current
thread, and keeps the previous engine stored internally for when you
connect() to it again.
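
For example, here's a minimal sketch of that behavior, assuming the
0.2-era DynamicMetaData API (the sqlite URLs are just placeholders):

        import threading
        from sqlalchemy import DynamicMetaData

        meta = DynamicMetaData()

        def worker(url):
            # connect() binds the shared metadata to an engine for this
            # thread only; other threads each see their own binding
            meta.connect(url)
            # ... bound operations in this thread now use that engine ...

        threading.Thread(target=worker, args=('sqlite:///one.db',)).start()
        threading.Thread(target=worker, args=('sqlite:///two.db',)).start()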

The wisdom of the thread-local variable is that one thread in an
application can only do one thing at a time, so your code is not
going to write to two different databases at once, as long as you
properly connect() within each thread before doing anything.

The DynamicMetaData class itself is not very deep.  When a Table is
associated with a DynamicMetaData, all the "bound" operations use the
"engine" attribute on the DynamicMetaData object.

> Is there a way to have the unbound metadata in the global variable,
> and then copy it to a bound metadata in the constructor?  Then I could
> have self._meta instead of self._engine.  I see how you can copy a
> table to another metadata but not how you can clone the whole
> metadata.  Or is there a better way to do this?
>

Yes, you should probably treat the MetaData object as your
"contextual" object rather than the Engine, if you want to be able to
switch it around.  You really only need one DynamicMetaData object
for the whole thing; there's not much need to switch around at
all.  But if you want a Table to bind itself to another MetaData
object, you can use the tometadata() method on Table.
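
Roughly like this (BoundMetaData and the URL here are just for
illustration):

        from sqlalchemy import BoundMetaData

        meta2 = BoundMetaData('sqlite:///other.db')
        users2 = users.tometadata(meta2)  # a copy of 'users', attached to meta2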

> By the way, the reason I'm inserting one by one rather than making a
> list of all 6112 records and inserting them at once is I get a "max
> allowable packet size exceeded" exception from MySQL.  Should
> SQLAlchemy perhaps chunk the request into reasonable packet sizes?  I
> could do it myself with say 20 or 50 records at a time, but that means
> doing busywork tasks the library could handle.

Maybe.  SQLAlchemy is trying to be simple and stick to the business
of generating SQL statements, not so much making arbitrary decisions
and optimizing the performance of your application.  Splitting a
list of 6112 elements into groups of 50 and calling execute() that
many times is not so hard, and it seems easier to do that than to
learn a new set of switches and options.  Something like:

        for i in range(0, len(data), 50):
                statement.execute(data[i:i+50])
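
where, for example, the pieces could be set up like this (hypothetical
names, matching the earlier sketch):

        # each dict is one row's worth of bind parameters
        data = [{'name': 'compound %d' % i} for i in range(6112)]
        statement = users.insert()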

The executemany call is not even used with most DBs except MySQL at
the moment, since SQLAlchemy wants to get a total count of rows
affected, which you can't do with an executemany().  (It doesn't
bother with MySQL since the correct rows-affected count is not
available with MySQL's Python driver in any case.)  This is a side
effect of the ORM, and should probably be fixed so that the ORM
doesn't use executemany(); then executemany() would work in the
normal way, with the added documentation that you can't expect to get
a rows-affected count or be able to call last_inserted_ids() when you
use it.
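
To illustrate the underlying issue at the DBAPI level (this sketch
uses the stdlib sqlite3 module purely as an example driver):

        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.execute("CREATE TABLE t (x INTEGER)")
        cur = conn.executemany("INSERT INTO t (x) VALUES (?)",
                               [(i,) for i in range(5)])
        # per the DBAPI spec, rowcount may be undefined (-1) after an
        # executemany(), which is why relying on it is a problem
        print(cur.rowcount)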



