On Wed, 17 Jun 1998, Hans Aberg wrote:
>   But I found it rather difficult to implement this style with POSIX (Java)
> threads: It is hard to guarantee that the computations do not hang. What
> I needed was to be able to guarantee that certain sequences in the
> implementation cannot be halted in the middle, but that is not possible
> with pre-emptive threads I think.

I think SPJ's excellent paper on concurrent Haskell answers this issue
(and all my prior questions about IPC).

But it does lead to other questions:
1. Which implementations support concurrent Haskell, and what modules
do I have to import to use it?  Hugs claims to support it, but I can't
figure out how to engage it.
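Concretely, I'd imagine the interface looks something like this minimal
sketch (untested; I'm assuming GHC's `Control.Concurrent` module naming,
which may differ in Hugs or older releases):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Fork a child thread and synchronise with it through an MVar.
demo :: IO String
demo = do
  box <- newEmptyMVar                 -- an empty synchronising cell
  _ <- forkIO (putMVar box "hello from the child thread")
  takeMVar box                        -- blocks until the child writes

main :: IO ()
main = demo >>= putStrLn
```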

2. Are Haskell processes/threads preemptive or cooperative?  Java's
failure to specify this has produced code that hits race conditions
only on some platforms.  Also, if you are using system threads, you
get the additional benefit that some operating systems will
automatically distribute them across CPUs.
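My understanding from the paper is that MVars should make results
deterministic regardless of the scheduling policy.  A sketch of what
I mean (again assuming GHC's module names; `modifyMVar_` serves as a
lock around the increment):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Monad (forM_, replicateM_)

-- Two threads each bump a shared counter 1000 times.  Because every
-- increment goes through the MVar, the final value is deterministic
-- (2000) under either preemptive or cooperative scheduling.
bumpTwice :: IO Int
bumpTwice = do
  counter <- newMVar (0 :: Int)
  done    <- newEmptyMVar
  forM_ [1 :: Int, 2] $ \_ -> forkIO $ do
    replicateM_ 1000 (modifyMVar_ counter (return . (+ 1)))
    putMVar done ()
  takeMVar done                       -- wait for both workers
  takeMVar done
  takeMVar counter

main :: IO ()
main = bumpTwice >>= print
```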

3. Do concurrent Haskell child processes/threads survive the completion
of main?  In Java, only threads explicitly marked as daemons survive
after main completes.
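If, as I suspect, child threads do not survive main (i.e. there is no
daemon/non-daemon distinction), then presumably main has to synchronise
on its children explicitly, along these lines (a guess, not verified
against any implementation):

```haskell
import Control.Concurrent

-- main blocks on an MVar so the child is guaranteed to finish;
-- without the final takeMVar, the child could be killed mid-run
-- the moment main returns.
childRan :: IO Bool
childRan = do
  finished <- newEmptyMVar
  _ <- forkIO $ do
         putStrLn "child: working"
         putMVar finished True
  takeMVar finished

main :: IO ()
main = childRan >>= print
```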

4. The paper discusses a distributed implementation.  I am
currently writing code that will be distributed across ~100 machines.
Can distributed Haskell support this?  Does it allow machines access to
the local environment (e.g. a local file store for each server)?  How
does it handle failure of individual machines?  How do you pass lazy
data structures around?  Where would I find out more?

5. Has anyone used concurrent Haskell to write an HTTPd (or DBMS)?  If
so, how does it perform and where can I find it?  An HTTPd that is
itself distributed across a network would be much more elegant and
manageable than our current hacks like TCP/IP redirectors and DNS
round-robin.

6. How is Glasgow Parallel Haskell different from concurrent
Haskell?  My ignorant guess is that par is synchronous while forkIO is
asynchronous, and that referential transparency means that the compiler
can parallelize a lot of functions without any hints from the
programmer.  Is that correct?
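To spell out my guess: `par` seems to be a pure annotation that merely
*sparks* evaluation of its first argument, never changing the value of
the expression, whereas forkIO creates an explicit IO thread.  Something
like this parallel Fibonacci sketch (the `par`/`pseq` combinators live
in `GHC.Conc` in GHC, and in `Control.Parallel` of the parallel
package; I haven't measured any speedup):

```haskell
import GHC.Conc (par, pseq)

-- `par` sparks evaluation of x in parallel while pseq forces y first;
-- the result is identical to the sequential fib, only the timing may
-- differ -- quite unlike forkIO's explicit threads.
pfib :: Int -> Int
pfib n
  | n < 2     = n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = pfib (n - 1)
    y = pfib (n - 2)

main :: IO ()
main = print (pfib 10)
```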

7. It seems like you could use concurrent Haskell to write a very
elegant and clean Linda-style parallel system in Haskell.  How much
conversation is there between the Yale Haskell people and the Yale
Linda people?
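For instance, a toy single-machine tuple space seems to fall out of
MVars almost directly.  A sketch of what I have in mind (the names
`outT`/`inT` are my own, after Linda's out/in; a real implementation
would block properly instead of polling with yield):

```haskell
import Control.Concurrent (yield)
import Control.Concurrent.MVar

-- A toy tuple space: a bag of tuples behind an MVar.  outT drops a
-- tuple in; inT removes and returns the first tuple matching a
-- predicate, retrying (with yield) until one appears.
type TupleSpace t = MVar [t]

newTS :: IO (TupleSpace t)
newTS = newMVar []

outT :: TupleSpace t -> t -> IO ()
outT ts t = modifyMVar_ ts (return . (t :))

inT :: TupleSpace t -> (t -> Bool) -> IO t
inT ts match = do
  found <- modifyMVar ts $ \bag ->
    case break match bag of
      (pre, hit : post) -> return (pre ++ post, Just hit)
      (_, [])           -> return (bag, Nothing)
  case found of
    Just t  -> return t
    Nothing -> yield >> inT ts match

main :: IO ()
main = do
  ts <- newTS
  outT ts ("job" :: String, 42 :: Int)
  (_, n) <- inT ts ((== "job") . fst)
  print n
```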

-Alex-

___________________________________________________________________
S. Alexander Jacobson                   i2x Media  
1-212-697-0184 voice                    1-212-697-1427 fax


