Thanks for the very interesting references.  I would say, thanks for  
all the pain points, but that would sound ghoulish.

I have some rambling observations, presented as-is with no warranty,  
express or implied...

Regarding LambdaVM:

I think running the "mini interpreter" in the JVM makes good sense.   
With scoped continuations, you could run a mix of stack-ful and thunk- 
ful calls.  Basically, run normal stack-ful calls where the call is
likely to be non-blocking, and when this fails (perhaps due to hitting
an arbitrary control-stack depth limit), snapshot the continuation and throw the
next "Code" thunk down to the mini interpreter.  Note that the  
standard case of "return nextCode()" from "Code.exec" is equivalent to  
snapshotting a null continuation and throwing "nextCode" to the mini  
interpreter; the "return" can be viewed as a strength-reduced throw  
through zero intermediate stack frames.
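
To make that concrete, here is a minimal Java-flavored sketch of the
mini-interpreter loop and the strength-reduced "return" (the names are
mine, not LambdaVM's):

    // One step of reduced code: do some work, then either finish
    // (return null) or hand back the next thunk to execute.
    interface Code {
        Code exec(Machine m);
    }

    final class Machine {
        Object result;  // a finished computation parks its value here

        // The "mini interpreter" is just a trampoline loop.  A Code.exec
        // body that ends in "return nextCode" is the strength-reduced
        // throw: it reaches this loop through zero intermediate frames.
        Object run(Code start) {
            Code c = start;
            while (c != null) {
                c = c.exec(this);
            }
            return result;
        }
    }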

The LambdaVM paper claims that global variables cannot be registerized
by any "JIT on the planet"; in fact HotSpot can do this partially.
Assuming that the Code.exec calls can be inlined, the machine code will
probably look like "compute the argument value in register T23, store it
to global G1, keep using the value in T23, do not reload G1".  The store
buffer fills up, but there are no dependent loads.
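
In source form, the pattern I would expect is something like this
(hypothetical names, with stand-ins for the real computation):

    final class GlobalDemo {
        static Object G1;  // stands in for one of the VM's globals

        static Object step() {
            Object t = computeArg();  // value lands in a register, say T23
            G1 = t;                   // store to the global: store-buffer
                                      // traffic, nothing more
            return use(t);            // consumes the register copy;
                                      // HotSpot need not reload G1
        }

        static Object computeArg() { return new Object(); }  // stand-in
        static Object use(Object x) { return x; }             // stand-in
    }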

The desire for the GC to tension links through LazyThunk.ind fields is  
interesting.  I have seen several use cases for lazy data-structure
normalization (to be carried out by the GC).  Another one is string
compaction, something like what the Icon GC did.  Currently, JVMs suffer
from strings whose underlying char arrays contain unused positions.   
Not sure how to design this.
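
For reference, "tensioning" here means shortening chains of evaluated
thunks.  A sketch of the transformation the GC would apply to every
reference it traces (the ind field is from the LambdaVM write-up, the
rest is mine):

    class LazyThunk {
        Object ind;  // null until evaluated, then the reduced value

        // What tensioning does: skip past evaluated thunks so the chain
        // collapses and the thunk objects themselves become garbage.
        static Object tension(Object ref) {
            while (ref instanceof LazyThunk
                   && ((LazyThunk) ref).ind != null) {
                ref = ((LazyThunk) ref).ind;
            }
            return ref;
        }
    }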

The thunk state change is an interesting case to optimize.  I think it  
would be optimal, in a JVM-like system, to be able to change the  
thunk's type and "ind" field at the same time, in a transaction of  
some sort, especially if the hardware supports two-reference atomic  
updates.  This sort of thing may be a use case for what I call "split  
classes" where the object's underlying class field can be overloaded  
with different (but compatible) values.
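
Lacking hardware two-reference atomics, the closest JVM-expressible
approximation I can think of is to pack both pieces into one immutable
state object and swap it with a single CAS.  A sketch with hypothetical
names ("split classes" would fold this state into the class word
instead):

    import java.util.concurrent.atomic.AtomicReference;

    final class Thunk {
        // The "type" (evaluated or not) and the ind field travel
        // together, so one CAS changes both in a single transaction.
        static final class State {
            final boolean evaluated;
            final Object ind;
            State(boolean evaluated, Object ind) {
                this.evaluated = evaluated;
                this.ind = ind;
            }
        }

        private final AtomicReference<State> state =
            new AtomicReference<State>(new State(false, null));

        Object force() {
            State s = state.get();
            if (s.evaluated) {
                return s.ind;
            }
            Object value = compute();
            // publish the type change and the ind update atomically;
            // if another thread won the race, use its result instead
            state.compareAndSet(s, new State(true, value));
            return state.get().ind;
        }

        private Object compute() {
            return "reduced value";  // stand-in for the real reduction
        }
    }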

Regarding CAL:

CAL manages thunk tensioning by having the mutator do it routinely at  
use points (accessor code in the using object, see 7.2).  This  
requires ubiquitous copies of the tensioning code; it's a sort of  
"field reference" aspect that has to be cut in everywhere.

It's interesting to me that CAL works hard to customize methods to  
take primitive types when possible (section 6).  For better or worse,  
JVMs support unboxed primitives, and there will probably always be  
some performance gain to statically removing box/unbox operations in  
generated bytecode.  It's for this reason that the JSR 292 APIs  
integrate primitive types at all points.
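
The payoff looks roughly like this (hypothetical names): a generic boxed
entry point plus a customized primitive one, so statically-typed call
sites never touch a box:

    final class PlusFn {
        // generic entry point: arguments and result are boxed
        static Object apply(Object a, Object b) {
            return Integer.valueOf(applyInt(((Integer) a).intValue(),
                                            ((Integer) b).intValue()));
        }

        // specialized entry point: no box/unbox on the hot path
        static int applyInt(int a, int b) {
            return a + b;
        }
    }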

Regarding Haskell.NET:

They gave it up because the platform is strict and doesn't give much
help emulating non-strict nodes.  The CAL document shows what it looks like
when you push through anyway (on the JVM, another strict platform).   
You can make it work, but you have to invent a large set of shadow  
types, like the Java wrapper types.  I wonder what it would look like  
to add the right "hooks" to the Java wrappers.  Probably unworkable,  
since they are all final (monomorphic implementation).  But perhaps
interface injection could be used to introduce an extra "evaluate myself
as myself" method and a "get my reduced type" method to the standard
wrappers.  Maybe there is some value in making a parallel set of  
wrapper interfaces, retroactively implemented by the standard wrappers.
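
A sketch of what such a parallel interface might look like (all names
hypothetical); with interface injection, java.lang.Integer itself could
implement it, but today you would need an adapter:

    interface Evaluable<T> {
        T evaluate();            // "evaluate myself as myself"
        Class<T> reducedType();  // "get my reduced type"
    }

    // adapter standing in for a retroactive implementation on Integer
    final class EvaluableInteger implements Evaluable<Integer> {
        private final Integer boxed;
        EvaluableInteger(Integer boxed) { this.boxed = boxed; }
        public Integer evaluate() { return boxed; }  // already reduced
        public Class<Integer> reducedType() { return Integer.class; }
    }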

It's very cool how C# and F# are (as it appears to me) doubling down on
the original platform investment in generators, supporting LINQ and
asynchronous agents.

The thing I like the most in F# is the computation expressions, which
let you build asynchronous workflows (the ones Don Syme talks about).
To me it looks like a start towards doing domain-specific languages on
borrowed host-language syntax with static typing.  (As opposed to Ruby/
Groovy/Python, where you get DSLs with dynamic typing.)  You can build
agent machines in them, which is important, but that is probably just
the beginning of the interesting patterns that can be expressed as DSLs.
Perhaps you could represent a lazy variant of F# (or Scala) as a
computation expression, with enough "hooks" in the controlling type for
reinterpreting each type of subexpression.
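
Java can't borrow syntax the way computation expressions do, but the
"controlling type" shape can be sketched as a builder whose methods
reinterpret each kind of subexpression (hypothetical names, specialized
to a lazy computation type):

    interface Lazy<A> { A force(); }
    interface Fun<A, B> { B apply(A a); }

    final class LazyBuilder {
        // hook for "return": lift a value without evaluating anything
        <A> Lazy<A> ret(final A value) {
            return new Lazy<A>() {
                public A force() { return value; }
            };
        }

        // hook for "bind": sequence two steps, delayed until forced
        <A, B> Lazy<B> bind(final Lazy<A> m, final Fun<A, Lazy<B>> rest) {
            return new Lazy<B>() {
                public B force() { return rest.apply(m.force()).force(); }
            };
        }
    }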

-- John

On Jul 3, 2009, at 2:49 PM, Jon Harrop wrote:

> Don Syme described in an interview with Sadek Drobi some of the  
> lessons he
> learned from their work on Haskell.NET:
>
>  http://www.infoq.com/interviews/F-Sharp-Don-Syme

On Jun 30, 2009, at 6:10 AM, Tom Davies wrote:

> On Jun 30, 9:28 pm, Patrick Wright <[email protected]> wrote:
>> See also PDFs linked at bottom of WP entry, in particular "CAL for
>> Haskell Programmers".
>
> This document (not linked from the Wikipedia page) is probably worth
> looking at
> http://resources.businessobjects.com/labs/cal/cal_runtime_internals.pdf

On Jun 29, 2009, at 4:48 PM, Neil Bartlett wrote:

> Take a look at the "things that suck" in the following description of
> LambdaVM, another effort to compile Haskell to the JVM by Brian
> Alliet:
>
>   http://wiki.brianweb.net/LambdaVM/Implementation

