On Tue, Jun 3, 2014 at 8:08 PM, Marko Rauhamaa <ma...@pacujo.net> wrote:
> Chris Angelico <ros...@gmail.com>:
>> Okay, but how do you handle two simultaneous requests going through
>> the processing that you see above? You *MUST* separate them onto two
>> transactions, otherwise one will commit half of the other's work.
> I will do whatever I have to. Pooling transaction contexts
> ("connections") is probably necessary. Point is, no task should ever
> block. I deal with analogous situations all the time; in fact, I'm
> dealing with one as we speak.
Rule 1: No task should ever block.
Rule 2: Every task will require the database at least once.
Rule 3: No task's actions on the database should damage another task's
state. (Separate transactions.)
Rule 4: Maximum of N concurrent database connections, for any given value of N.
The only solution I can think of is to have a task wait (without
blocking) for a database connection to become available. That's a lot
of complexity, and you know what? It comes to exactly the same thing
as blocking database queries would: your throughput is bounded by your
database.
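Here's a minimal sketch of what that "wait without blocking" scheme
looks like under asyncio, with the pool simulated by plain strings and
all names hypothetical: at most N connections exist, and each task
awaits one from a queue instead of blocking a thread.

```python
# Sketch only: Rule 4 (max N connections) enforced with asyncio.Queue.
# "Connections" are stand-in strings; the query is a sleep.
import asyncio

N = 2  # maximum concurrent database connections

async def worker(pool: asyncio.Queue, task_id: int, log: list) -> None:
    conn = await pool.get()        # waits, without blocking the loop
    try:
        await asyncio.sleep(0.01)  # stand-in for a query in its own transaction
        log.append((task_id, conn))
    finally:
        pool.put_nowait(conn)      # return the connection to the pool

async def main() -> list:
    pool: asyncio.Queue = asyncio.Queue()
    for i in range(N):
        pool.put_nowait(f"conn-{i}")
    log: list = []
    # Five tasks compete for two connections; none ever blocks a thread.
    await asyncio.gather(*(worker(pool, t, log) for t in range(5)))
    return log

log = asyncio.run(main())
```

Note that the end result is exactly the claim above: the five tasks
still serialize on the two connections, so the database remains the
throughput limit.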
It's the same with all sorts of other resources. What happens if your
error logging blocks? Do you code everything, *absolutely everything*,
around callbacks? Because ultimately, it adds piles and piles of
complexity and inefficiency, and it still comes back to the same
thing: stuff can make other stuff wait.
That's where threads are simpler. You do blocking I/O everywhere, and
the system deals with the rest. It has its limitations, but it sure is
simple.
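For contrast, the threaded counterpart of the same pool (again with
hypothetical names and stand-in connections): each task just makes
plain blocking calls, and a thread-safe queue enforces the limit of N
connections with no callbacks at all.

```python
# Sketch only: Rule 4 with threads and blocking I/O. queue.Queue is
# thread-safe, so it doubles as the connection pool.
import queue
import threading
import time

N = 2
pool: queue.Queue = queue.Queue()
for i in range(N):
    pool.put(f"conn-{i}")

log = []
log_lock = threading.Lock()

def worker(task_id: int) -> None:
    conn = pool.get()        # simply blocks until a connection is free
    try:
        time.sleep(0.01)     # stand-in for a blocking query
        with log_lock:
            log.append((task_id, conn))
    finally:
        pool.put(conn)       # return the connection

threads = [threading.Thread(target=worker, args=(t,)) for t in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Same throughput ceiling as the async version, but the waiting is
expressed as ordinary blocking calls rather than a callback or
coroutine machinery.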