On Tue, May 18, 2010 at 10:19:32AM -0500, Black, Michael (IS) scratched on the wall:

> Interesting...but that logic means that later processes might get
> their results before earlier ones.
Yes, but that's true regardless. We're talking about parallel
operation on what may or may not be MP/MC systems. Most of this is at
the whim of the OS scheduler; since we're not using queued locks or
mutexes, it's just a free-for-all grab.

> You'll get fairer resolution of busy contention with a fixed timeout.
> Just do 10ms 50 times. That way the first guy in should get the
> first results.

Not at all. Connection A has a lock. Connection B needs it, fails to
get it, sleeps. Connection A releases the lock. Connection C comes in,
finds the lock open, grabs it. Connection B wakes, fails to get the
lock, sleeps. Without queued locks, you'll get mixed ordering no
matter how you do it. You're also assuming the timers are extremely
accurate. That's not likely.

There is also a pretty gray area for "fair" here. In most
free-for-all systems, "fair" means "anything that currently wants the
lock has an equal chance of getting it." It doesn't matter who has
been waiting longest. That's the exact opposite of a queued locking
system, which is typically a FIFO (First-Request, First-Grant, or
Longest-Wait, First-Grant).

When fighting for a resource, linear or exponential back-offs are
usually a very Good Idea. They're used all over the place in
networking protocols (as are exponential ramp-ups). Especially in a
single-core environment, it is better to back off a bit and actually
allow the current holder to finish its work than to incessantly grab
the CPU and pound on the lock, slowing down the holder and causing
more and more processes to back up behind you as you all spend more
and more CPU time trying to grab the lock. With a back-off, the
chance of a pile-up is reduced.
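To make the idea concrete, here's a minimal sketch of a linear
back-off retry loop in Python. The `try_lock` callable is a
hypothetical stand-in for whatever lock-acquisition attempt you're
actually making (e.g. a statement that can return SQLITE_BUSY); it's
not part of any real API.

```python
import time

def acquire_with_backoff(try_lock, base_ms=1, max_total_ms=500):
    """Retry try_lock() with a linear back-off.

    Sleeps base_ms, 2*base_ms, 3*base_ms, ... between attempts, and
    gives up once the total sleep time would exceed max_total_ms.
    try_lock is a hypothetical callable returning True when the lock
    was grabbed.  Returns True on success, False if we bailed out.
    """
    attempt = 1
    total_ms = 0
    while True:
        if try_lock():
            return True
        wait_ms = attempt * base_ms           # linear back-off: 1, 2, 3, ...
        if total_ms + wait_ms > max_total_ms:
            return False                      # bail out, as with SQLITE_BUSY
        time.sleep(wait_ms / 1000.0)          # yield the CPU to the holder
        total_ms += wait_ms
        attempt += 1
```

After n failed attempts the cumulative sleep is base_ms * n(n+1)/2,
which is the 1 + 2 + 3 + ... arithmetic mentioned in this thread
(about 55 ms of total sleep after 10 attempts at a 1 ms base). Note
that OS sleep granularity is typically far coarser than 1 ms, so the
real waits will be longer than the nominal ones.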
Although the deviation might be a bit bigger, on average you'll still
get similar response times (if the CPU is busy, the CPU is busy...),
if not better, because back-offs tend to avoid pile-ups (or at least
limit the damage done by them), and your CPU will still be processing
database requests rather than context switching between a ton of
connections that are just waiting for a lock.

  -j

> What I am doing basically is this: 1 + 2 + 3 + 4 ... n, so on the 2 try it
> is 3 millisecondes, on the 10 try it is 55 millseconds, once the total gets
> to about 500 milliseconds, it bailes. I believe I actually start where n is
> 9 and goes until n is 18.
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

-- 
Jay A. Kreibich < J A Y @ K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs. We have
 a protractor." "I'll go home and see if I can scrounge up a ruler and
 a piece of string."  --from Anathem by Neal Stephenson