On Jun 30, 2009, at 11:50 PM, Krukow wrote:
> I like the pragmatics of :min-history, and I believe it would be
> sufficient for many scenarios. However, I suspect we are now moving
> closer to the situation that Cliff Click was predicting [1] where as a
> programmer you need more precise knowledge about the STM
> implementation to understand the behavior and tune its performance.


While that may be true for the time being, I think Rich's original
response still holds water: usually correctness is more important
than performance, and correctness is the real win with STM.
It's a nascent technology, but that doesn't mean it's useless. Most
people would probably rather utilize all four or eight of their cores
at half the theoretical efficiency, with no loss of correctness, than
build an old-fashioned threading model, with all the bugs and
headaches that entails, trying to get closer to that dream of 100%
utilization. This is fast-food parallelism. It isn't as optimal or as
hard-won, but from a cost vs. benefit perspective the choice is
obvious, especially compared to single-threaded programming.

STM might be new, but I think the analogy to garbage collection is
valid. We don't have all the best STM implementation algorithms worked
out yet; this is certainly an active research area. But even back when
GC was extremely new it was already a win for LOC and correctness. Over
time the technology got better and the performance question has
mostly faded away, and it's going to be the same with STM, but only if
the focus is on correctness first and performance second.

I say mostly faded away because there will always be applications
where GC cannot be used, simply because it makes the system less
predictable or because it incurs its own costs: realtime systems and
extremely small-scale embedded devices, for example. But most systems
are not realtime, and GC is quite prevalent, even though realtime
performance characteristics seem generally desirable; they just aren't
worth the tradeoff in desktop software. Lots of software will benefit
from STM even if the performance gains are minimal, even if it turns
out to be provably impossible to push performance past half that of
classically threaded programs.

Often there is a sound theoretical reason for throwing out an idea,
but the problem it predicts occurs so infrequently in the wild that
working on the idea is a net win anyway. Ethernet is a good example of
a technology that shouldn't work well in the wild but does. A stronger
example is Microsoft's Terminator project, which, in attempting to
construct termination proofs for programs through static analysis,
flies in the face of CS's most famous impossibility proof, the halting
problem. It turns out the problem is important enough that even the
partial, imperfect solutions we are limited to would be useful in
practice (as long as you don't try to run it on itself, of course ;)

http://research.microsoft.com/en-us/um/cambridge/projects/terminator/

The rest of that original thread is a great read, by the way, and  
thanks for bringing it up!

—
Daniel Lyons

