>
> I'm going to try calling engine.dispose() in the child after the fork,
> which should invalidate all the connections in the pool and force it to
> make new ones, and see how that works out.
>

Update: seems to work.
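
For anyone curious, a minimal sketch of the fork-then-dispose pattern looks
something like the following. It's written against current SQLAlchemy, and the
engine URL, the spawn_worker() helper, and the dummy query are placeholders for
illustration, not anything from my actual setup:

    import os
    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql://user:pass@localhost/jobs")  # placeholder URL

    def spawn_worker():
        pid = os.fork()
        if pid == 0:
            # child: throw away the connections inherited from the parent's
            # pool so the child opens fresh sockets instead of sharing them
            engine.dispose()
            with engine.connect() as conn:
                conn.execute(text("SELECT 1"))  # forces a brand-new connection
            os._exit(0)
        else:
            os.waitpid(pid, 0)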

The combination of polymorphic mapping and the processing module actually
makes a pretty nifty job dispatch engine. Each type of job goes into the
persistent queue table as a subclass of a Job class, each subclass with its
own .run() method. When the child wakes up, it reconstitutes the job object
using session.load(Job, idjob), and polymorphic loading hands back the right
class of job. Just call .run() on the new instance and it's off and running.
I get about 20 child processes/sec here on a 256MB VMware guest running
Ubuntu, including the fork(), the child's database reconnect time, the job
object fetch, and two SQL update operations per child.
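
A rough sketch of that polymorphic job table idea is below. It's written
against the modern declarative API, with session.get() standing in for the
session.load() call mentioned above, and the table, column, and subclass
names are invented for illustration:

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import declarative_base, Session

    Base = declarative_base()

    class Job(Base):
        __tablename__ = "job"
        idjob = Column(Integer, primary_key=True)
        job_type = Column(String(50))
        __mapper_args__ = {
            "polymorphic_on": job_type,
            "polymorphic_identity": "job",
        }

        def run(self):
            raise NotImplementedError

    class EmailJob(Job):
        __mapper_args__ = {"polymorphic_identity": "email"}

        def run(self):
            print("sending mail for job", self.idjob)

    class ReportJob(Job):
        __mapper_args__ = {"polymorphic_identity": "report"}

        def run(self):
            print("building report for job", self.idjob)

    def run_job_in_child(engine, idjob):
        # in the forked child: drop the inherited pool connections, then
        # load the job polymorphically and run it
        engine.dispose()
        with Session(engine) as session:
            job = session.get(Job, idjob)  # comes back as EmailJob, ReportJob, etc.
            job.run()

Single-table inheritance keeps every job type in the one queue table, with
job_type as the discriminator that polymorphic loading uses to pick the
right subclass.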
