In case this helps...

This reminds me slightly of some RFID work I did years ago.  We had a lot 
of reads coming in from different units, several reads per unit per second.

I found the best way to handle writes was to access the db directly, 
while keeping the ORM on the "read" side.
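
A minimal sketch of that split, using an in-memory SQLite db; the 
table, column, and tag names here are invented for the demo:

```python
# Writes go through plain Core/text statements on the hot path;
# reads go through the ORM where convenience matters more than speed.
from sqlalchemy import Column, Integer, String, create_engine, text
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Read(Base):
    __tablename__ = "reads"
    id = Column(Integer, primary_key=True)
    tag = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# hot path: direct insert, no ORM unit-of-work overhead
with engine.begin() as conn:
    conn.execute(text("INSERT INTO reads (tag) VALUES (:tag)"),
                 {"tag": "E200-1234"})

# elsewhere: read back through the ORM
with Session(engine) as session:
    rows = session.query(Read).all()
```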

I recall a few things from that experience.  Some of them might be 
applicable to you:

* Doing a "select, else insert" pattern was faster than "insert all" and 
just letting the db constraint catch errors.  There were still a lot of 
exceptions from concurrent transactions, though.  This is probably 
dependent on the database/version/size/etc., but it's worth profiling.
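
A rough sketch of the pattern with stdlib sqlite3; the tags table and 
tag_id column are invented for the demo:

```python
# Check for the row first and only insert when it's missing, instead
# of inserting blind and catching the unique-constraint error.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (tag_id TEXT PRIMARY KEY)")

def record_tag(conn, tag_id):
    row = conn.execute(
        "SELECT 1 FROM tags WHERE tag_id = ?", (tag_id,)
    ).fetchone()
    if row is None:
        conn.execute("INSERT INTO tags (tag_id) VALUES (?)", (tag_id,))
        return True   # newly inserted
    return False      # already present
```

Note this can still race with another transaction between the select 
and the insert, which matches the "still a lot of exceptions" part.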

* We had a low number of "devices", so we just cached their data in the 
app's memory.  Dropping that extra select helped a bit.
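
One way to do that caching, load the small, stable table once at 
startup so the hot path never selects it; names are illustrative:

```python
# Cache the devices table into a dict keyed by primary key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE devices (device_id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO devices VALUES (?, ?)",
                 [(1, "dock-north"), (2, "dock-south")])

# load once; refresh only if the devices actually change
DEVICES = {row[0]: row[1]
           for row in conn.execute("SELECT device_id, name FROM devices")}

def device_name(device_id):
    return DEVICES[device_id]   # no db round trip per read
```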

* Our transaction got pared down to look like this:

     begin;
     select record;
     insert record;
     commit;

* I think someone eventually migrated the db interaction to a stored 
procedure.

* We kept an in-memory array of recently seen transactions.  It was just 
a simple list with new elements inserted at index 0 and the tail 
truncated.  We checked that list first, and only hit the database if the 
transaction wasn't in it.
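
A tiny version of that "recently seen" list; the size limit and names 
are made up:

```python
# Newest ids go in at index 0 and the tail is truncated, so the list
# stays bounded and the most recent tags are checked first.
RECENT_MAX = 1000
recent = []

def seen_recently(tag_id):
    if tag_id in recent:
        return True
    recent.insert(0, tag_id)
    del recent[RECENT_MAX:]   # keep only the newest RECENT_MAX entries
    return False              # caller falls through to the db check
```

A set or deque would scale better for large windows, but a short plain 
list matches what we actually ran.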

Our performance greatly increased, and our frustrations decreased, as we 
moved more logic out of the database and into Python.

On another RFID project, the lead engineer I worked with had a similar 
approach; to keep performance up, though, he recorded data into an 
in-memory database and did periodic flushes to the real database after 
the data was "smoothed" out (to compensate for known hardware issues).
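
A rough sketch of that staging-then-flush idea using two sqlite 
connections; the schema and the "smoothing" step (collapse duplicate 
tags, keep the latest timestamp) are placeholders for whatever your 
hardware needs:

```python
# Stage reads in a fast in-memory db, then periodically smooth and
# flush them to the "real" database in one batch.
import sqlite3

mem = sqlite3.connect(":memory:")    # fast staging area
real = sqlite3.connect(":memory:")   # stands in for the real db here
for c in (mem, real):
    c.execute("CREATE TABLE reads (tag TEXT, ts REAL)")

def flush(mem, real):
    # "smooth": one row per tag, keeping the latest timestamp
    rows = mem.execute(
        "SELECT tag, MAX(ts) FROM reads GROUP BY tag"
    ).fetchall()
    real.executemany("INSERT INTO reads VALUES (?, ?)", rows)
    mem.execute("DELETE FROM reads")
    return len(rows)

mem.executemany("INSERT INTO reads VALUES (?, ?)",
                [("t1", 1.0), ("t1", 2.0), ("t2", 1.5)])
flushed = flush(mem, real)
```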

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.