James G. Sack (jim) wrote:

> I'm sure it sounds more complicated than it is.. but it just /feels/
> fragile!

It is. It's called double-checked locking, and it fails except under specific guarantees about the underlying read/write consistency (i.e., the memory model).

In addition, the election process is subject to "livelock": if two processes each try to get elected at the same time, they can starve each other indefinitely.
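A deterministic toy simulation of that starvation (entirely my own construction, not the poster's protocol): two candidates announce in lockstep, each sees the other's announcement, and each politely withdraws, so no round ever produces a winner:

```python
def run_election(rounds=10):
    """Lockstep simulation of a livelocked two-candidate election.

    Returns the winner's index, or None if nobody was elected.
    """
    wants = [False, False]
    for _ in range(rounds):
        wants[0] = wants[1] = True       # both announce simultaneously
        saw_other = (wants[1], wants[0])  # each reads the other's flag
        if saw_other[0]:
            wants[0] = False             # candidate 0 backs off
        if saw_other[1]:
            wants[1] = False             # candidate 1 backs off
        if wants[0] and not wants[1]:
            return 0
        if wants[1] and not wants[0]:
            return 1
    return None                          # livelock: everyone kept yielding
```

Both processes are busy the whole time, yet no progress is made, which is exactly what distinguishes livelock from deadlock. The standard escape is to break the symmetry, e.g. with randomized back-off before retrying.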

Now, it sounds like his throughput is fairly low, so these problems are unlikely to show up in practice.

This is the main problem with concurrency--"close enough" isn't.

Most problems in programming let you decompose the problem. Occasionally you need to tear out an algorithm and replace it with one that's asymptotically faster, but that's usually a localized, manageable change.

With concurrency, if you get the wrong architecture, there is no evolving. Once you hit the throughput wall, you often have to tear it *all* up and start from scratch.

Most people think the main problems with concurrency are deadlock and inconsistency. They really aren't--inconsistency is usually fixed by just extending the lock, and deadlock can generally be analyzed and managed (although it isn't always easy--see "priority inversion" for an example).
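To illustrate "extending the lock" (my example, not from the thread): a check-then-act sequence done under two separate lock acquisitions leaves a window for another thread; covering both steps with one acquisition closes it:

```python
import threading

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

    def withdraw_racy(self, amount):
        # Check and act under *separate* acquisitions: another thread
        # can withdraw in between, letting the balance go negative.
        with self.lock:
            ok = self.balance >= amount
        if ok:
            with self.lock:
                self.balance -= amount
        return ok

    def withdraw_safe(self, amount):
        # "Extend the lock": one acquisition covers check AND act.
        with self.lock:
            if self.balance >= amount:
                self.balance -= amount
                return True
            return False
</imports>```

The fix costs nothing structurally; you just hold the lock across the whole compound operation instead of each piece.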

Livelock tends to be the more complex issue. Fortunately, most people aren't really pushing their systems very hard and never encounter it.

-a

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
