I'm a bit out of my depth here, but if we take this reasonable assumption as 
valid for a moment:

assumption: The massively multicore future will NOT have a single memory bus 
accessible by all cores - or at least not without massive inefficiency if you 
use it as the primary memory bus.


Then, doesn't it stand to reason that parallelizing in the small is doomed 
in any case? Parallelizing in the large - i.e. firing up one 
thread^H^H^H^Hcore for each incoming request to a webserver, or running an 
IDE's 'clean project' by giving each core one file to parse - would be easy 
in a world where each core has to pay heavily to communicate with other 
cores. Parallelizing in the small, on the other hand, seems almost 
insurmountably difficult under such a restriction.
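To make "in the large" concrete: a minimal sketch of the one-file-per-core 
'clean project' style, using a plain thread pool. All the names here (and the 
fake parse step, which just counts characters) are hypothetical - the point is 
only that each task is independent, and the single synchronization point is 
collecting the results at the end.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CoarseGrainedParse {
    // Stand-in for a real parser; "parsing" here just counts characters.
    static int parse(String file) {
        return file.length();
    }

    public static void main(String[] args) throws Exception {
        List<String> files = List.of("Foo.java", "Bar.java", "Baz.java");
        ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        // One independent task per file: no shared mutable state while running,
        // so cores never need to talk to each other mid-task.
        List<Future<Integer>> results = new ArrayList<>();
        for (String f : files) {
            results.add(pool.submit(() -> parse(f)));
        }

        // The only cross-core communication: gathering results at the end.
        int total = 0;
        for (Future<Integer> r : results) {
            total += r.get();
        }
        pool.shutdown();
        System.out.println(total); // prints 24
    }
}
```

Note that the expensive part (cross-core communication) happens exactly once 
per task, which is why this scales fine even if inter-core traffic is costly.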

Also, to those saying that functional is still useful: you're turning the 
argument on its head. The point is this: even accepting the assumption that 
parallelizing your code at the fine-grained level requires FP techniques, 
there's no need to learn FP just because of the 'threat' of multi-core - 
the speedups you could get are roughly on the scale of non-algorithmic 
optimization, and long experience has already taught us not to make 
decisions about code structure based on that kind of speedup factor.

-- 
You received this message because you are subscribed to the Google Groups "The 
Java Posse" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/javaposse?hl=en.
