On Tue, Sep 09, 2008 at 08:50:02AM -0700, Larry Wall wrote:
: At the moment the design of Perl 6 (unlike certain FP languages) is
: that any dependence on the *degree* of laziness is erroneous, except
: insofar as infinite lists must have *some* degree of laziness in order
: not to use up all your memory immediately.  But any finite list can
: do any amount of batching it pleases, so there's no foolproof way to
: tell a lazy finite list from an eager list.  Modifying the inputs to
: a lazy list is basically asking for undefined behavior.
: This seems to me like the most multicore-friendly default, though
: perhaps not the most programmer friendly.  It seems to me that any
: programming involving cyclic lazy feeds must somehow have an "eager"
: in it somewhere to prevent indeterminacy, but I don't know how to
: write a compiler that enforces that offhand.

Perhaps there is some middle ground.  I was thinking about the
relationship of iterators with reactive programming this morning, and
it occurred to me that we tend to think of iterators as having
two states when perhaps they have more.  In the imperative mindset,
when you ask an iterator if it has more elements, it either says "yes" or
"no".  But in the reactive mindset, the two valid answers are "yes" and
"I dunno yet; hang on...".

If we are to combine these paradigms, then iterators have at least three
valid answers:

    No, there is nothing to return here, ever (see "EOF").
    Yes, there's something here that is easy to return, and if you ask
        me nicely I will return it immediately without blocking.
    I don't know if there's anything to return because I'd have to ask
        someone else, and that is likely to block.
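
To make the three answers concrete, here's a minimal sketch in Python (not Perl 6; the class and method names are hypothetical, just for illustration): an iterator wrapper that can be asked, without blocking, whether its next value is easy (already buffered), hard (would require pulling from the source, possibly blocking), or gone forever.

```python
from enum import Enum, auto
from collections import deque

class Peek(Enum):
    DONE = auto()   # "No, there is nothing to return here, ever" (EOF)
    EASY = auto()   # a value is on hand; next() will not block
    HARD = auto()   # I'd have to ask someone else; next() may block

class BufferedIter:
    """Hypothetical three-state iterator: wraps a source and reports
    whether the next value is easy, hard, or exhausted."""
    def __init__(self, source):
        self._source = iter(source)
        self._buffer = deque()
        self._done = False

    def peek(self):
        if self._buffer:
            return Peek.EASY
        if self._done:
            return Peek.DONE
        return Peek.HARD

    def next(self):
        if self._buffer:
            return self._buffer.popleft()
        try:
            return next(self._source)   # may block on a slow source
        except StopIteration:
            self._done = True
            raise

    def push(self, value):
        self._buffer.append(value)      # pre-computed values become easy
```

The point of `peek` is that a list context could consult it to decide which values to treat by value semantics and which to leave behind the reactive boundary.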

So when you put something into a list context, some of the values
will be considered "easy", and some will be considered "hard".
The basic question is whether we treat those the same or differently
from a referential point of view.  Obviously there's an argument for
consistency on the one hand, but on the other, it's rather like the
difference between value semantics and object semantics, which is
another paradigm-crossing idea.

A given array may have some values that are easy and some that are
hard.  The easy ones are the values that have already been calculated,
presumably, plus maybe any values that the first iterator spec says are
easy (and if those are all easy, go on to the next iterator spec).

My basic point is that it's easy to arrange single assignment semantics
for the easy values, and hard for the hard values.  The thing that
makes it harder for the hard values is that you have to avoid having
two different iterators asking the same sub-iterator for values.

Suppose we have

    my @a = 1..10;      # assume easy
    my @b = =$socket;   # assume hard

    for @a,@b {...}

If the for list sees @b as hard and just copies @b's iterator into its
own list for lazy evaluation, then it's going to end up iterating the
socket without loading @b.  Contrariwise if @b is eagerly evaluated it
might clobber the other iterator in the for list.  So those iterators
must be kept unified.  It's not enough to copy out the iterators
from the array; access to the hard elements of the array must still
be modulated by the array container.  A snapshot of the array container
will not do in this case.

So I think when you ask to interpolate an array into a list, the array
must return an iterator that refers to itself, not an iterator that
merely snapshots its values.
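
Here's one way that container-mediated access might look, sketched in Python with hypothetical names: every iterator reads through the array container, which memoizes each value it pulls from the hard feed, so two consumers never race for the same element of the socket and both see identical contents.

```python
class LazyArray:
    """Hypothetical sketch: an array backed by a hard feed.  All
    iterators refer back to the container rather than snapshotting
    or stealing its feed, so access to the hard elements stays
    unified through the container."""
    def __init__(self, feed):
        self._feed = iter(feed)
        self._cache = []        # the "easy" realized prefix
        self._done = False

    def _realize(self, i):
        # Pull from the shared feed until index i is cached (or EOF).
        while not self._done and len(self._cache) <= i:
            try:
                self._cache.append(next(self._feed))
            except StopIteration:
                self._done = True

    def __iter__(self):
        # The iterator holds an index into the container, not a copy
        # of the feed, so it cannot clobber a sibling iterator.
        i = 0
        while True:
            self._realize(i)
            if i >= len(self._cache):
                return
            yield self._cache[i]
            i += 1
```

With this arrangement, two iterators over the same array both start at the first element; the second `next` call on either one gets a cached value rather than stealing a fresh one from the feed.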

And I guess the fundamental underlying constraint is that a list cannot
be considered immutable unless its feeds can be considered immutable,
at least in some kind of idempotent sense.  This conflicts with the
whole point of reactive programming, which is that you have to react
because you don't know what's coming.

This seems like it's close to the fundamental difficulty we're trying
to solve here.  And maybe the solution is much like the value/object
distinction, where lists get to *decide* for themselves where they
switch from easy eager immutable semantics to hard lazy reactive semantics.
And if we're pretty clear about the boundary between those, it
could work out much like the boundary between DFAable regexes and
side-effectful regexes as defined in S05.  And maybe it's even the
same boundary in some theoretical sense.

As a first shot at that definition, I'll submit:

    1 .. $n     # easy
    1 .. *      # hard

On the other hand, I can argue that if the first expression is easy,
then the first $n elements of 1..* should also be considered easy, and
it's not hard till you try to get to the *.  :)
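
That "easy prefix" view is easy to demonstrate in Python terms (an illustrative analogy only, not the Perl 6 semantics): a finite range can be realized eagerly, an infinite one can't, yet any prefix already pulled from the infinite range is exactly as easy as the finite list.

```python
import itertools

n = 5
finite   = list(range(1, n + 1))    # 1 .. $n : all easy, realized at once
infinite = itertools.count(1)       # 1 .. *  : hard past whatever's pulled

# Realizing a prefix of the infinite range makes that prefix easy;
# only the attempt to reach the * stays hard.
prefix = list(itertools.islice(infinite, n))
assert prefix == finite
```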

It could also be that I'm confusing things here, of course, and that
1..* is something easy and immutable that nevertheless cannot be
calculated eagerly.  More to think about...
