Hi All,
I've been observing some wailing and gnashing of teeth lately in
various programming communities around what to do about the rise of
multi-core processors.
For example, the Python folks recently had a heated debate on
Artima about something called the Global Interpreter Lock, or GIL,
that in some way limits the concurrency potential of CPython. I
don't quite understand the details, but there sure seemed to be
some fear that if Python didn't make multi-threading easier, it
might become less relevant.
Brian Goetz has brought up that we may have frog-boiled ourselves
into a bad situation by adopting the model of shared state with
locks in Java. In general the shared state/locks model makes
concurrent programs difficult to reason about, and in particular
this approach to concurrency isn't composable: you can't safely
combine different modules without understanding the details of what
they do with locks and how those locks will interact.
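To make that composability point concrete, here's a little sketch of my own (the Account and Transfers names are just made up for illustration): each object below is perfectly thread-safe on its own, but composing two of them under nested locks creates a deadlock hazard that neither object's author could have warned you about.

```java
// Each Account is individually thread-safe: every method holds the
// object's own monitor, so no caller can corrupt its balance.
class Account {
    private long balance;

    Account(long initial) { balance = initial; }

    synchronized void deposit(long amount)  { balance += amount; }
    synchronized void withdraw(long amount) { balance -= amount; }
    synchronized long balance()             { return balance; }
}

class Transfers {
    // A correct-looking composition of two thread-safe objects. But if
    // one thread runs transfer(a, b, x) while another concurrently runs
    // transfer(b, a, y), each can acquire its first lock and then block
    // forever waiting for the other's: a deadlock that is invisible
    // unless you know how both modules use their locks internally.
    static void transfer(Account from, Account to, long amount) {
        synchronized (from) {
            synchronized (to) {
                from.withdraw(amount);
                to.deposit(amount);
            }
        }
    }
}
```

Run single-threaded it works fine, which is exactly the trap: the bug only appears under a particular interleaving of lock acquisitions.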
The Pragmatic Programmers recently published a book on Erlang, which
got a lot of people talking about Erlang. Erlang uses a shared-nothing
model, with message passing between lightweight "processes," each of
which acts as an actor. Those processes can be implemented as threads,
I assume, or can be distributed. One interesting thing about Erlang is
that it tries to unify the remote and local models, as far as I can
tell. It's not that they haven't read "A Note on Distributed
Computing." I think that instead of trying to make remote nodes look
like local ones, they may treat local ones as being as unreliable as
remote ones.
I've been involved with a language called Scala lately, which has an
Erlang-like actors library. On the mailing list they keep talking
about issues with implementing remote actors. I don't yet
understand those details either, but I keep getting this weird
feeling that wheel reinvention is going on. They seem to be talking
about how to solve problems that Jini addressed almost 10 years ago.
So here's my question. I get the feeling that the trend to multi-core
architectures represents a disruptive technology shift that will
shake up the software industry to some extent. Does River have
something to offer here? If you expect the chips your software runs
on to have multiple cores, and maybe you don't know how many until
your program starts running, you'll want to organize your software so
it distributes processing across those cores dynamically. Isn't
JavaSpaces a good way to do that?
I think what it might mean is that you treat another core on the same
box running a worker thread the same as a worker thread across the
network. That way you have a uniform programming model, and when you
run out of cores, you just add more boxes and you get more worker
nodes. So it would be the opposite of the approach critiqued by the
Note. Yes, you would use objects through a uniform interface, and
whether that object is implemented locally or remotely would be an
implementation detail of the object. But what you'd assume is not
that the thing is local (a thread on another core of the same box)
but remote.
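Here's a runnable stand-in for what I mean (my own sketch, not River code). In a real deployment the two queues below would be a single JavaSpace, with workers doing take() on Task entries and write() on Result entries; here they're in-process queues so the sketch runs on one box. The point is that the worker loop is identical either way: a worker on another core and a worker on another machine look the same to the master.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Master/worker in the JavaSpaces style, with BlockingQueues standing in
// for the space. Tasks go out, results come back; nobody shares state.
class WorkerPool {
    static final class Task {
        final int n;
        Task(int n) { this.n = n; }
    }
    static final class Result {
        final int square;
        Result(int square) { this.square = square; }
    }

    private final BlockingQueue<Task> tasks = new LinkedBlockingQueue<>();
    private final BlockingQueue<Result> results = new LinkedBlockingQueue<>();

    // Start one worker per core we discover at runtime; with a real
    // JavaSpace, more workers could just as well join from other boxes.
    void start() {
        int cores = Runtime.getRuntime().availableProcessors();
        for (int i = 0; i < cores; i++) {
            Thread w = new Thread(() -> {
                try {
                    while (true) {
                        Task t = tasks.take();                  // "take" a task entry
                        results.put(new Result(t.n * t.n));     // "write" a result entry
                    }
                } catch (InterruptedException e) {
                    // interrupted: worker exits
                }
            });
            w.setDaemon(true);
            w.start();
        }
    }

    // The master writes one task per number and collects the results,
    // without knowing or caring which worker (or which core) did each one.
    int sumOfSquares(int upTo) throws InterruptedException {
        for (int n = 1; n <= upTo; n++) tasks.put(new Task(n));
        int sum = 0;
        for (int i = 0; i < upTo; i++) sum += results.take().square;
        return sum;
    }
}
```

When you run out of cores, nothing in the master changes; you'd just point more workers at the same space.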
Anyway, I was curious what everyone here thought. It might be a way
to position River in people's minds, and give it a marketing story.
Bill
----
Bill Venners
President
Artima, Inc.
http://www.artima.com