On Tue 28 Sep, Kevin Hammond wrote:
> >No, I like the current rules for dealing with overlapping patterns just fine.
> >But there's a difference between the rules used to disambiguate overlapping
> >patterns in the 'syntactic sugar', (which are conceptually sequential,
> >serial or whatever) and the semantics of the resulting case expressions
> >(which really are serial).
>
> Serial, but still not sequential! That is, they define an outcome but not
> an order (unless you're referring to the operational semantics of the STG
> machine). The compiler will rearrange the order of case-expressions if
> it's safe to do so, for example.
Are we talking about Haskell compilers here?
For example, (&&) could be written as:

a && b = case a of
           True  -> case b of
                      True  -> True
                      False -> False
           False -> False

or, transformed by changing the order of the case expressions:

a && b = case b of
           True  -> case a of
                      True  -> True
                      False -> False
           False -> False
but my understanding of Haskell semantics is that this transformation is
illegal, because in the first form above a will certainly be evaluated but
b may not be, whereas in the second b will certainly be evaluated but a
may not be.
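The asymmetry is easy to observe in GHC directly (a small runnable sketch;
`undefined` stands in for _|_ here):

```haskell
-- The left-biased (&&) never demands its second argument when the first
-- is False, so swapping the arguments changes which expression diverges.
main :: IO ()
main = do
  print (False && undefined)    -- prints False: b is never evaluated
  -- print (undefined && False) -- would diverge: a is always demanded
```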
> >By this I mean you could keep the current rules for disambiguation, but
> >do the resulting un-ambiguous pattern matching with something other than
> >case expressions.
>
> Or just reorder the case expressions (or do them in parallel) if you like,
> and if it's safe.
Again this statement has been qualified by 'if it's safe'.
I don't understand what rules the compiler is using to make
this determination. Are they consistent with Haskell semantics?
> >So (a && b) = (b && a) is invalid
>
> Not exactly: it's valid if and only if both a and b are needed (in which case
> False && _|_ = _|_, so the asymmetry is eliminated)! This is the basis for
> a number of theorems.
I see what you're getting at here. There are contexts where if a or b fails
then the whole expression fails, so if the expression also contains (a && b)
it can safely be replaced by (b && a). The trouble with these transformations
is that they are a property of things 'outside' &&, not 'inside' &&,
so they aren't generally applicable (though they are valid in the proper
context, I suppose).
> The semantics of _|_ is fundamental to non-strict languages (and isn't just
> operational!).
Huh? I agree this is fundamental, but are you saying that all non-strict
languages have the _same_ semantics with respect to _|_? I assume that we
require programs to behave in a manner consistent with the language
semantics, so those semantics governing _|_ have to be consistent with
some form of operational model. But if we change the operational model,
we change the semantics too, don't we?
Actually, I was wondering if someone could spell out exactly what
'non-strict semantics' are.
Is there only one kind of non-strict semantics?
Is conventional 'lazy' (normal order) reduction consistent with
non-strict semantics? (I guess so)
Is strict evaluation consistent with non-strict semantics? (I guess not)
Are parallel reduction and parallel pattern matching also consistent
with non-strict semantics? (I can't guess)
If so, then it seems that non-strict semantics is ambiguous regarding
expressions like (_|_ && False), which doesn't seem like a very
satisfactory situation to me.
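To make that concrete, here is a small GHC sketch (it uses exceptions to
observe _|_, which is of course an operational trick, not a denotational
one): the standard left-biased (&&) gives _|_ for (_|_ && False), whereas a
hypothetical symmetric 'parallel and' would be free to return False:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Standard (&&) is left-biased, so (_|_ && False) is _|_.
-- A symmetric "parallel and" would instead return False whenever either
-- argument is False; it is not definable as an ordinary sequential
-- Haskell function, which is exactly where the ambiguity lies.
main :: IO ()
main = do
  r <- try (evaluate (undefined && False)) :: IO (Either SomeException Bool)
  case r of
    Left _  -> putStrLn "_|_ && False diverged (left-biased semantics)"
    Right b -> print b
```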
Regards
--
Adrian Hey