Re: n+k patterns

1993-05-18 Thread kff



Lennart Augustsson <[EMAIL PROTECTED]> writes:

> PS. I'd like to start the "Ban n+k patterns"-movement, any followers?

Count on me! I guess this makes it an organized movement ... two persons
(movement) that know about each other (organized).

Cheers,
Karl-Filip




Re: Cyclic Types

1993-02-27 Thread kff

It is not clear to me how we would construct models for cyclic types
(then again, I'm only just starting to learn how to do it for acyclic
types :-). That is: what do the domains for these types look like?
worrying about this is that when I do abstract interpretation over
structured datatypes (such as lists, trees, ASTs), I need to construct
finite, abstract domains that model the infinite, concrete domains.
I do this from the type (i.e. type expression) associated with the
domain, and I have always used the finite, acyclic property of the
type expressions to guarantee that I get finite abstract domains that
really model the underlying concrete ones. The same seems to be true
of all abstract interpretations that construct their domains from
types, but maybe techniques meant for untyped languages can be used.

On a more cheerful, less please-don't-do-that note, could the type
checker perhaps transform the types of the program, perhaps by
introducing new algebraic datatypes, back to an acyclic form? So that
I can deal with this in the type checker and forget about it later?
Like, in the case of Guy's example:

data FooTree a b  =  Leaf b | Node a (FooTree a b) a (FooTree a b)

Now, let's say we use this in such a way that the type checker finds
the type:

  mu a . [FooTree a Int]  -- Ints in the leaves

Then the type checker could (for the benefit of later passes) break
the cycle by creating the new algebraic datatype A defined by:

data A = A_Constructor ![FooTree A Int]

where the "!" indicates that the constructor A_Constructor is strict
in its argument. This means that we don't actually have to implement
the constructor at run-time, so the introduction of this type carries
no performance penalty. 

Now, of course, we could find the type

mu a . [FooTree a Float]

somewhere else during typechecking and construct a corresponding type,
but a better idea would be to try to construct the most general cycle
breaking algebraic datatype instead. In this case (I think) it would
be:

data A b = A_Constructor ![FooTree (A b) b]

Then the first type would be (A Int) and the second (A Float).
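To make the idea concrete, here is a small self-contained sketch of the cycle-broken type in use. Note one assumption on my part: I use a newtype for A, since in modern Haskell a newtype constructor has exactly the zero-run-time-cost property described above for a strict constructor; the function names (unA, leaves, example) are mine, invented for illustration.

```haskell
-- A sketch of the proposed cycle-breaking transformation, following the
-- FooTree example from this message. Names other than FooTree and
-- A_Constructor are invented here for illustration.

data FooTree a b = Leaf b | Node a (FooTree a b) a (FooTree a b)

-- The generated knot-tying type: a newtype wrapper around the recursive
-- occurrence, so the constructor really does cost nothing at run time.
newtype A b = A_Constructor [FooTree (A b) b]

unA :: A b -> [FooTree (A b) b]
unA (A_Constructor ts) = ts

-- A small value of type A Int: a list containing one tree whose Node
-- labels are themselves values of type A Int.
example :: A Int
example =
  A_Constructor
    [Node (A_Constructor []) (Leaf 1) (A_Constructor []) (Leaf 2)]

-- Count the leaves reachable through the cycle-broken type.
leaves :: A Int -> Int
leaves (A_Constructor ts) = sum (map go ts)
  where
    go (Leaf _)         = 1
    go (Node l t1 r t2) = leaves l + go t1 + leaves r + go t2

main :: IO ()
main = print (leaves example)  -- prints 2
```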

In order for this to work, the typechecker would have to transform the
program both by generating these new type declarations and by
inserting the appropriate constructors in the code (in this case
A_Constructor). Right off the top of my head (which is where the idea
came from) I don't know if this is possible; there might be some
subtle reason why the type system with cyclic types is in some sense
stronger than the acyclic system. But since this technique can in
general be used to code a model for the untyped lambda calculus in
e.g. Haskell, it seems likely to me that it would work.

Cheers,

 Karl-Filip


Make the cyclic case acyclic, and the acyclic case fast!




Re: What's the most efficient way to build an array?

1992-08-10 Thread kff


On building more than one array at a time:

You could introduce references into the language. This would
make it possible to write code that manipulates more than one
array at a time. However, problems may arise if those references
were to escape the continuation-passing region of the code.
These can be addressed by the type system, I think. I first
heard about this from one of Paul Hudak's students (can't recall
his name, though) who was working on something he called 'heap
transformers', which was essentially references + continuations.
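For what the idea could look like in practice: the sketch below uses what later became standard Haskell's ST monad, where mutable references and arrays are threaded through a state token and the type system (via the rank-2 type of runST) prevents them from escaping. This is my illustration of the "references + escape prevented by types" idea, not the original heap-transformer work.

```haskell
import Control.Monad.ST
import Data.STRef
import Data.Array.ST
import Data.Array

-- Build two arrays at once, sharing one mutable counter. The state
-- token 's' ties all three mutable objects to the same runST region,
-- so none of them can leak out un-frozen.
twoArrays :: (Array Int Int, Array Int Int)
twoArrays = runST $ do
  a <- newArray (0, 4) 0 :: ST s (STArray s Int Int)
  b <- newArray (0, 4) 0 :: ST s (STArray s Int Int)
  r <- newSTRef 0
  mapM_ (\i -> do
           n <- readSTRef r
           writeArray a i n        -- first array: the counter itself
           writeArray b i (n * n)  -- second array: its square
           writeSTRef r (n + 1))
        [0 .. 4]
  aF <- freeze a
  bF <- freeze b
  return (aF, bF)

main :: IO ()
main = print (elems (fst twoArrays), elems (snd twoArrays))
-- prints ([0,1,2,3,4],[0,1,4,9,16])
```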

Cheers,
  Karl-Filip


Make the frequent case fast, and the fast case frequent!




Re: What's the most efficient way to build an array?

1992-08-09 Thread kff


Now, *there's* an interesting question ...

Personally, as an implementor, I'd vote for the second version,
i.e. I think that future Haskell compilers are more likely to
do away with the thunks than to do update-in-place analysis. Why?
Because if we really want sequentialization of the accesses and
reuse of storage, we can (soon) use array transformers (CPS),
linear type systems, or perhaps array monads to express this at
the source level. On the other hand, the optimizations that are
needed in order to eliminate the thunks in the second version,
namely driving (related to partial evaluation and Wadler's
deforestation) and dependency analysis (or index analysis, or
alias analysis), are either very useful for other purposes
or well known from the parallelizing Fortran compiler world.
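The two versions under discussion aren't quoted in this message, so as an illustration of why the thunks matter, here is the classic self-referential lazy array: each cell is a thunk that reads other cells of the same array, and it is exactly this kind of dependency that driving or index analysis would have to untangle to find an evaluation order that lets the thunks be eliminated.

```haskell
import Data.Array

-- Memoized Fibonacci via a lazily built array. Cell k is a thunk
-- referring to cells k-1 and k-2 of the very array being defined;
-- laziness resolves the dependencies at demand time.
fibs :: Int -> Array Int Integer
fibs n = memo
  where
    memo = listArray (0, n) (map fib [0 .. n])
    fib 0 = 0
    fib 1 = 1
    fib k = memo ! (k - 1) + memo ! (k - 2)

main :: IO ()
main = print (fibs 10 ! 10)  -- prints 55
```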

#ifdef NIT_PICKING

Isn't the type signature you give incorrect? The parameters
m and n must have type a (where (a,a) is the index type of
the arrays).

#endif 

My SEK 0.02 ...

Karl-Filip


Make the frequent case fast, and the fast case frequent!