Hi Brandon,
On 26 Jan 2014, at 19:01, Brandon Allbery wrote:
> On Sun, Jan 26, 2014 at 1:43 PM, Tim Watson <watson.timo...@gmail.com> wrote:
> In Erlang, I can rpc/send *any* term and evaluate it on another node. That
> includes functions of course. Whether or not we want to be quite that general
> is another matter, but that is the comparison I've been making.
>
> Note that Erlang gets away with this through being a virtual machine
> architecture; BEAM is about as write-once-run-anywhere as it gets, and the
> platform specifics are abstracted by the BEAM VM interpreter. You just aren't
> going to accomplish this with a native compiled language, without encoding as
> a virtual machine yourself (that is, the AST-based mechanisms).
Yeah, I do realise this. Of course we're not really trying to reproduce the BEAM,
but what we /do/ want is to be able to exchange messages between nodes that are
not running the same executable. The proposal does appear to address this
requirement, at least to some extent. There may be complementary (or better)
approaches. I believe Carter is going to provide some additional details about
his work in this area at some point.
Anything that reduces the amount of Template Haskell required to work with
Cloud Haskell is a "good thing (tm)" IMO. Not that I mind using TH, but the
programming model is currently quite awkward from the caller's perspective,
since you've got to (a) create a Static/Closure out of potentially complex
chunks of code, which often involves creating numerous top-level wrapper APIs,
and (b) fiddle around with the remote-table (both in the code that defines
remote-able thunks *and* in the code that starts a node wishing to operate on
them).
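For context, here is a rough sketch of what (a) and (b) look like today with
distributed-process. The `sayHello` wrapper and the module name are purely
illustrative, and the exact import locations may differ between versions:

    {-# LANGUAGE TemplateHaskell #-}
    module Worker where

    import Control.Distributed.Process
    import Control.Distributed.Process.Closure (remotable)
    import Control.Distributed.Process.Node    (initRemoteTable)
    import Control.Distributed.Static          (RemoteTable)

    -- The remote-able code has to sit behind a top-level wrapper so that
    -- Template Haskell can build a Static for it.
    sayHello :: String -> Process ()
    sayHello name = say ("hello, " ++ name)

    -- (a) TH generates the Static/Closure plumbing for the listed names.
    remotable ['sayHello]

    -- (b) Every node that wants to run those closures must fold the
    -- generated table into the RemoteTable it passes to newLocalNode.
    myRemoteTable :: RemoteTable
    myRemoteTable = __remoteTable initRemoteTable

The caller then has to write something like
spawn nid ($(mkClosure 'sayHello) "world") instead of just passing
sayHello "world" around, which is exactly the awkwardness I mean.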
Also note that this problem isn't limited to sending code around the network.
Just sending arbitrary *data* between nodes is currently discouraged (though
not disallowed) because the receiving program *might* not understand the types
you're sending it. This is very restrictive and the proposal does, at the very
least, allow us to safely serialise, send and receive types that both programs
"know about" by virtue of having been linked to the same library/libraries.
But yes - there are certainly constraints and edge cases aplenty here. I'm not
entirely sure whether we'd need to change the (binary) encoding of raw messages
in distributed-process, for example, in response to this change. Currently we
serialise a pointer (i.e., the pointer to the fingerprint for the type that's
being sent), and I can imagine that not working properly across different nodes
running on different architectures, etc.
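Very roughly - and this is *not* the actual distributed-process wire format,
just a sketch of the kind of framing I mean - the message is prefixed with a
type fingerprint that both ends have to interpret identically:

    import           Data.Binary          (Binary, decode, encode)
    import qualified Data.ByteString.Lazy as BL
    import           Data.Word            (Word64)

    -- Stand-in for a type fingerprint; GHC's real Fingerprint is two Word64s.
    type Fp = (Word64, Word64)

    -- Prefix the encoded payload with the sender's fingerprint for its type.
    frame :: Binary a => Fp -> a -> BL.ByteString
    frame fp x = encode fp `BL.append` encode x

    -- Only decode the payload if the fingerprint matches what we expect;
    -- this is where nodes on different architectures/builds could disagree.
    unframe :: Binary a => Fp -> BL.ByteString -> Maybe a
    unframe expected bytes
      | actual == expected = Just (decode payload)
      | otherwise          = Nothing
      where
        actual  = decode (BL.take 16 bytes)  -- two Word64s = 16 bytes
        payload = BL.drop 16 bytes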
> Perhaps you should consider fleshing out ghc's current bytecode support to be
> a full VM?
After discussing this with Simon M, we concluded there was little point in
doing so. The GHC RTS is practically a VM anyway, and there's probably not that
much value to be gained by shipping bytecode around. Besides, as you put it,
the AST-based mechanisms allow for this anyway (albeit with some coding
required on the part of the application developer) and Carter (and others)
assure me that the mechanisms required to do this kind of thing already exist.
We just need to find the right way to take advantage of them.
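For what it's worth, a minimal sketch of the "AST-based" approach as I
understand it: rather than shipping compiled code, both programs link against a
small expression type plus its interpreter, and only the (serialisable) AST
travels over the wire. The `Expr` type below is invented purely for
illustration:

    {-# LANGUAGE DeriveGeneric #-}
    import Data.Binary  (Binary)
    import GHC.Generics (Generic)

    -- A tiny expression language that both sides know how to interpret.
    data Expr
      = Lit Int
      | Add Expr Expr
      | Mul Expr Expr
      deriving (Show, Generic)

    instance Binary Expr

    -- The receiving node evaluates whatever AST it is sent, e.g. the sender
    -- ships Data.Binary.encode (Add (Lit 1) (Mul (Lit 2) (Lit 3))) and the
    -- receiver runs eval on the decoded value.
    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b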
> Or perhaps an interesting alternative would be a BEAM backend for ghc.
>
I've talked to a couple of people who want to try this. I'm intrigued, but
have other things to focus on. :)
Cheers,
Tim