On 07-May-1998, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> 
> Adrian Hey writes:
> 
> > Consider the following function..
> 
> > demo1 :: (a -> b) -> Either a c -> Either b c
> > demo1 f (Left  a) = Left (f a)
> > demo1 _ (Right c) = Right c
> 
> > My first question is, can programmers safely assume that the
> > compiler is sufficiently smart to avoid duplicating the expression
> > (Right c) in the event that the second match above succeeds? 
> 
> > One might reasonably hope that the run time implementation of this
> > function will simply return the functions second argument in this
> > case.
> 
> Sadly I don't think one can assume this at all, and for a fairly
> good reason.  The argument (Right c :: Either a c) and the result
> (Right c :: Either b c) need not have the same runtime
> representation.

You can't be _certain_ that they will have the same runtime representations,
but I think this is likely to be true for most implementations.
Otherwise, how can the compiler generate (efficient) code for a polymorphic
function that takes an argument of type `Either t1 t2'?
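
To make that concrete, here is a minimal sketch (my own example, not
from any particular compiler) of why a uniform representation is
usually wanted: a polymorphic consumer like the hypothetical
`isRight' below is compiled once, so every `Either t1 t2' value it
might be given has to look the same at runtime, typically a tag plus
a pointer to a boxed payload.

    -- Hypothetical example: compiled once for all choices of t1 and
    -- t2, so Left/Right cells need a uniform layout (e.g. a tag plus
    -- a pointer to the boxed payload) regardless of those types.
    isRight :: Either t1 t2 -> Bool
    isRight (Left  _) = False
    isRight (Right _) = True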

In Mercury, such values do have the same runtime representation,
and the Mercury compiler will perform this common subexpression
elimination in those cases.

> In a strict language like ML, for example, we might
> well have
> 
>       sizeof (Either a c) = sizeof tag + max (sizeof a, sizeof c)
> 
>       sizeof (Either b c) = sizeof tag + max (sizeof b, sizeof c)
> 
> and there is no reason for these to be the same.

If you're not doing destructive update,
the size of the type isn't going to matter --
all you need is for the data to have the
right representation.
For example, sizeof(Either a c) might be 8, and sizeof(Either b c)
might be 12, but sizeof(Right c) would be the same in either
case (and <= 8).
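
As a toy model (an assumption of mine, not any implementation's
actual internals), think of the uniform boxed layout like this:
every payload lives behind a pointer, so an Either cell is just a
tag plus one pointer-sized field, whichever types it was built at.

    -- Toy model of a uniform boxed layout (not real internals):
    data Payload    = Payload              -- any boxed heap object
    data EitherCell = LeftCell  Payload    -- tag "Left"  + one pointer
                    | RightCell Payload    -- tag "Right" + one pointer
    -- Both cells are the same size, and that size doesn't depend on
    -- the payload's type, so a Right cell built at type `Either a c'
    -- is already a perfectly good value at type `Either b c'.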

Indeed, if you have a discriminated union of things of different
sizes, then it is an important optimization to be able to allocate
less than the maximum size for a type when a particular data item
doesn't need it.  No current Haskell implementation does destructive
update optimization, and I'm not aware of any plans to add it to any
of the existing implementations.  But even for implementations that
do perform that kind of optimization, I think the benefit of the
space-saving optimization mentioned above would probably outweigh
any increased potential for destructive update that might be gained
by always allocating the full size, even for data that doesn't
occupy it.  So I think it unlikely that an implementation would
adopt such a strategy.
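
For instance, here's a hypothetical type (my own example, not from
the discussion above) whose constructors naturally want different
amounts of space:

    -- Under the usual boxed layout, a Leaf cell needs only a tag
    -- and one pointer, while a Node cell needs a tag and three, so
    -- always allocating at the Node size would waste two words on
    -- every Leaf.
    data Tree a = Leaf a
                | Node (Tree a) a (Tree a)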

-- 
Fergus Henderson <[EMAIL PROTECTED]>  |  "I have always known that the pursuit
WWW: <http://www.cs.mu.oz.au/~fjh>  |  of excellence is a lethal habit"
PGP: finger [EMAIL PROTECTED]        |     -- the last words of T. S. Garp.

