A notion struck me earlier today. I used to agree in principle with the party 
line espoused by just about every advocate of Scala, Clojure, and adding 
functional features to Java, even people like Brian Goetz: the future is 
multicore, and (this is the part I might not agree with anymore) apps need 
to take advantage of it.

Do they? What are we talking about here? Take a hypothetical box with 
64 cores. A bunch of those will be dedicated to running system-level 
maintenance jobs; the kernel will want one or two to manage processes and 
streamline disk caches and the like. Your music player and chat programs and 
such will gladly take four-ish. You also always lose some efficiency in the 
process of going multicore, so as a purely practical matter, even highly 
efficient parallelizing "in the small" (i.e. using ParallelArray) will net 
you maybe, at best, a 40x speedup.

This isn't actually relevant for the vast majority of computer apps. Take 
regexp matching as an example: a Thompson/NFA engine will easily run 
anywhere from 500x to astronomically faster on a complex regexp that 
triggers heavy backtracking in a naive implementation, while for simple 
regexps it just doesn't matter, because they finish in nanoseconds anyway, 
parallelized or not. 40x is either irrelevant (because it's so fast you 
barely notice) or irrelevant (because due to algorithmic complexity the app 
isn't finishing no matter how nicely it's parallelized). Sure, 40x in the 
abstract is nothing to sneeze at, but it's not nearly as important as it 
initially sounds. Or at least, that's what I thought earlier today.
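To see that asymmetry directly: java.util.regex is a backtracking engine, and a toy pattern with nested quantifiers (the classic pathological example, nothing real-world) shows how constant-factor speedups stop mattering once the algorithm is exponential:

```java
import java.util.regex.Pattern;

public class RegexCost {
    public static void main(String[] args) {
        // java.util.regex is a backtracking engine; nested quantifiers
        // make it try exponentially many ways to carve up the input
        // before it can report "no match". A Thompson/NFA engine answers
        // the same question in linear time.
        Pattern evil = Pattern.compile("(a+)+b");

        // A matching input is found instantly.
        System.out.println(evil.matcher("aaaab").matches()); // true

        // 20 'a's and no 'b': roughly half a million backtracks already.
        // Add ten more 'a's and this line takes seconds; twenty more and
        // it effectively never finishes. No 40x speedup rescues that.
        System.out.println(evil.matcher("aaaaaaaaaaaaaaaaaaaa").matches()); // false
    }
}
```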

Secondly, even without extensive "in the small" parallelization (which in 
Java is annoying, because you really need closures, and possibly language 
primitives, to do parallelized foreaches and such), you can very easily 
parallelize in the large, especially in the cases where speed is actually 
going to be relevant:

 - Web servers can easily parallelize across requests, especially if the 
framework is programmed with this in mind. For example, with appropriately 
configured transactions on an MVCC database such as postgres, using the 
'RetryException' mechanic, hundreds of threads can happily read from and 
write to the same part of the same table without too many issues. This is 
already happening today.

 - Something like Eclipse or NetBeans can easily parallelize its 'parse 
ASTs' phase. Eclipse in fact already splits a source file up into parts 
(for example, one method declaration is one part) and then parses each part 
separately. Parallelizing this requires nothing more than perhaps some help 
with a queue from java.util.concurrent; ParallelArray, closures, and other 
multi-core "in the small" language tooling aren't required at all to 
implement such a thing. At most you'd add a message bus, so that one thread 
can keep reading source files from disk and publishing them as message 
blobs for other threads to pick up and run with, again without any new 
language features.

 - Encoding/decoding media, and encryption algorithms, will always remain 
highly niche jobs where the rocket-science nature of hand-parallelizing the 
code just isn't a problem.
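The 'RetryException' mechanic from the web-server bullet boils down to a small loop. A minimal sketch (the names `withRetry` and `ConflictException` are mine; with real postgres over JDBC the collision shows up as an SQLException with SQLState "40001", serialization_failure):

```java
import java.util.concurrent.Callable;

public class Retry {
    // Stand-in for the serialization failure a real driver would report;
    // with postgres/JDBC you'd check SQLException.getSQLState() == "40001".
    public static class ConflictException extends Exception {}

    // Run a transaction body; if a concurrent writer got there first,
    // just run it again. MVCC makes this safe: the losing transaction's
    // half-done work is rolled back before the retry.
    public static <T> T withRetry(Callable<T> txBody, int maxAttempts)
            throws Exception {
        ConflictException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return txBody.call();
            } catch (ConflictException e) {
                last = e; // lost the race; retry from the top
            }
        }
        throw last; // gave up after maxAttempts collisions
    }
}
```

Hundreds of threads hammering the same table each just loop here until their commit wins; no new language features involved.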
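And the queue-based parse pipeline from the Eclipse bullet really is just a few lines of java.util.concurrent, closures or not. A sketch with made-up names, where the 'parse' step is faked by wrapping the chunk in a string:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ParallelParse {
    private static final String EOF = "\u0000EOF"; // end-of-input marker

    // One producer publishes source chunks (stand-ins for method
    // declarations) on a queue; a pool of workers pulls them off and
    // "parses" each one independently.
    public static List<String> parseAll(List<String> chunks, int workers)
            throws InterruptedException {
        final BlockingQueue<String> bus = new LinkedBlockingQueue<String>();
        final Queue<String> results = new ConcurrentLinkedQueue<String>();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            String chunk = bus.take();
                            if (chunk.equals(EOF)) {
                                bus.put(EOF); // hand the marker to the next worker
                                return;
                            }
                            results.add("AST(" + chunk + ")"); // fake 'parse'
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        for (String chunk : chunks) bus.put(chunk); // the producer side
        bus.put(EOF);
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return new ArrayList<String>(results);
    }
}
```

Note there isn't a closure in sight, just an anonymous Runnable and a BlockingQueue, which is exactly the point.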


Am I missing something, or have we been vastly overestimating the impact the 
'multi-core future' will have on programming language design?

-- 
You received this message because you are subscribed to the Google Groups "The 
Java Posse" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/javaposse?hl=en.
