Distributed programming is essentially a bunch of little sequential
programs that interact, which is basically how people cooperate in the
real world. I think that is by far the most intuitive of any
concurrent programming model, though it's still a significant
conceptual shift from the traditional monolithic imperative program.

The Erlang people seem to say that a lot. The thing they omit to say,
though, is that it is very, very difficult in the real world!
Consider managing a team of ten people. Getting them to be ten times as
productive as a single person is extremely difficult -- virtually
impossible, in fact.

That's only part of the reasoning behind all of the little programs in Erlang. One of the more important aspects is the concept of supervisor trees, where you have processes that monitor* other processes. In the event that a child process fails, the parent process will try to perform a simpler version of what needs to occur until it is successful.
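For the curious, a minimal supervisor sketch using OTP's supervisor behaviour might look like this (the child module name `worker` is my invention, and the restart limits are arbitrary):

```erlang
%% Minimal supervisor sketch. The `worker` module is hypothetical;
%% it just needs to export start_link/0.
-module(my_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% Restart a crashed child; if it crashes more than 5 times
    %% in 10 seconds, give up and crash this supervisor too, so
    %% *its* parent can try something simpler.
    SupFlags = #{strategy => one_for_one,
                 intensity => 5,
                 period => 10},
    Child = #{id => worker,
              start => {worker, start_link, []},
              restart => permanent},
    {ok, {SupFlags, [Child]}}.
```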

The other aspect is the concept of failing fast. It is assumed that a process that fails does not know how to resolve the issue, therefore it should just stop running and allow the parent process to do the right thing.
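That fail-fast idea can be sketched with process links: the child makes no attempt to recover (the `exit(boom)` here stands in for any unexpected error), and the parent, having trapped exits, is notified and decides what to do:

```erlang
%% "Let it crash" in miniature.
process_flag(trap_exit, true),              % turn exit signals into messages
Child = spawn_link(fun() -> exit(boom) end), % child just dies
receive
    {'EXIT', Child, Reason} ->
        %% the parent, not the child, does the right thing here
        io:format("child died: ~p~n", [Reason])
end.
```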

If you build your software the Erlang way, then you implicitly build software that is multi-core friendly. How well it uses multiple cores depends on the software that is written; however, I believe that Erlang is supposed to be better than most other languages at obtaining something close to linear scaling across cores. Not 100% sure, though.

Does this mean that I believe distributed programming is easy in Erlang? Well, that depends on what you're doing, but I will say that being able to spawn functions on different machines is dirt simple. Doing it efficiently...well, that's where I think the programmer needs to know what they're doing.
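For what it's worth, "dirt simple" looks something like this. The node name `'worker@otherhost'` is made up; the assumption is that the remote node is up, shares this node's cookie, and has the relevant code loaded:

```erlang
%% spawn/4 runs Module:Function(Args) on the named remote node
%% and gives us back a pid we can message like any local process.
Pid = spawn('worker@otherhost', io, format, ["hello from afar~n", []]).
```

Making that *efficient* (where the data lives, how much gets copied between nodes, what happens when the network hiccups) is the part that still takes a knowledgeable programmer.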

Casey

* The monitoring is built into the language itself.
