> I spent a few weekends studying that paper earlier this year in order to
> see if anything could be stolen for Python; my general impression was "not
> easily" at the least. One notable feature of the presented concept was
> that when code would otherwise block, they *rolled it back* to the last
> nonblocking execution point. In a sense, they ran the code backwards,
> albeit by undoing its effects. They then suspend execution until there's a
> change to at least one of the variables read during the forward execution,
> to avoid repeated retries.
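The block-and-retry behaviour quoted above can be sketched in Python. This is a toy, not a real STM: the names `TVar`, `Retry`, and `atomically` are my own invention, and a real implementation would also buffer and undo writes, which this read-only sketch omits. Each variable carries a version counter, so a transaction that cannot proceed sleeps until some variable it read is written, then re-runs.

```python
import threading
import time

class TVar:
    """Toy transactional variable: a version counter lets blocked
    transactions sleep until a variable they read is written."""
    _cond = threading.Condition()          # shared wake-up channel

    def __init__(self, value):
        self.value = value
        self.version = 0

    def write(self, value):
        with TVar._cond:
            self.value = value
            self.version += 1
            TVar._cond.notify_all()        # wake any blocked transactions

class Retry(Exception):
    """Raised by a transaction that cannot proceed yet."""

def atomically(txn):
    """Run txn; on Retry, discard its effects and sleep until one of
    the TVars it read changes, then run it again from the start."""
    while True:
        read_log = {}                      # TVar -> version at read time
        def read(tv):
            read_log[tv] = tv.version
            return tv.value
        try:
            return txn(read)
        except Retry:
            with TVar._cond:
                TVar._cond.wait_for(
                    lambda: any(tv.version != v
                                for tv, v in read_log.items()))

slot = TVar(None)

def take(read):
    """Return the slot's value, or block (retry) while it is empty."""
    v = read(slot)
    if v is None:
        raise Retry
    return v

# A producer fills the slot after a moment; the consumer blocks, then wakes.
threading.Thread(target=lambda: (time.sleep(0.1), slot.write(42))).start()
value = atomically(take)
```

Note that the predicate is checked before waiting, so a write that lands between the failed read and the `wait_for` call is not lost.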
I haven't spent the weekends on the paper yet (though it looks like that's what it would take), but my impression was that they were talking about lock-free techniques such as the ones used in Java 5. Basically, you start a write operation "in the background" without locking the data structure, so reads can continue while the calculation takes place but before the result is committed. When the result is ready, an atomic "test and write" operation determines whether any other task has modified the value in the meantime, and if not, commits the new value. If another task did modify the value, the calculation begins anew.

That was my take, but I haven't studied everything about STM yet, so I'm probably missing something. The one thing about this paper is that it seems to be an orthogonal perspective to anything about concurrency that *I* have seen before.

> Oddly enough, this paper actually demonstrates a situation where having
> static type checking is in fact a solution to a non-trivial problem! It
> uses static type checking of monads to ensure that you can't touch
> untransacted things inside a transaction.

Yes, because of some of my diatribes against static checking, people get the impression that I think it's just a bad thing. What I'm really trying to get across is that "static type checking as the solution to all problems" is a bad idea, and that the cost is often much greater than the benefit. But if there really is a clear payoff, then I'm certainly not averse to it.

In general, I really *do* like to be told when something has gone wrong -- I think there's a huge benefit in that. But if I can learn about it at runtime rather than compile time, that is often a reasonable solution. So with concurrency, I would like to know when I do something wrong, but if I'm told at runtime, that's OK with me -- as long as I'm told.
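The optimistic "test and write" loop described earlier (what Java 5 exposes as `AtomicReference.compareAndSet`) can be sketched in Python. Python has no user-level compare-and-swap instruction, so this toy `AtomicRef` (an invented name) emulates one with a lock held only for the brief compare step; the update function itself runs with no lock held at all.

```python
import threading

class AtomicRef:
    """Toy atomic reference: the lock guards only the compare step,
    standing in for a hardware compare-and-swap instruction."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        return self._value

    def compare_and_set(self, expected, new):
        """Commit `new` only if the value is still `expected`."""
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

def lock_free_update(ref, fn):
    """Optimistic update: compute from a snapshot, try to commit with
    compare-and-set, and recompute if another task won the race."""
    while True:
        snapshot = ref.get()            # read without locking
        candidate = fn(snapshot)        # "background" calculation
        if ref.compare_and_set(snapshot, candidate):
            return candidate            # nobody interfered; committed

counter = AtomicRef(0)
result = lock_free_update(counter, lambda v: v + 1)
```

Readers call `get()` freely the whole time; only a writer that loses the race pays, by redoing its calculation against the fresh value.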
Bruce Eckel    http://www.BruceEckel.com    mailto:[EMAIL PROTECTED]
Contains electronic books: "Thinking in Java 3e" & "Thinking in C++ 2e"
Web log: http://www.artima.com/weblogs/index.jsp?blogger=beckel
Subscribe to my newsletter: http://www.mindview.net/Newsletter
My schedule can be found at: http://www.mindview.net/Calendar
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev