Tom Pledger:
> Florian Hars writes:
>  > efficiency improving no-ops like (fst (head x), snd (head x)): tail
>  > x !" (By the way, can anyone explain this section to me?)
..
> The (fst (head x), snd (head x)) : tail x construction assures the run
> time system that a parser for zero or more `a's is infallible

And this extra assurance tells us that we are not exactly talking about 
no-ops. Let me simplify the example a bit:

f :: (a,b) -> c
f x = .. x ..

g :: (a,b) -> (a,b)
g x = (fst x, snd x)

Both f and f.g operate on pairs - this seems to be guaranteed by the 
type system. But Haskell's type system only makes a slightly weaker 
guarantee for f, also referred to as partial correctness: if f's parameter 
is ever successfully computed, it will be a pair. In practice, the 
evaluation of f's parameter might fail to terminate, or terminate 
exceptionally (outside the expected type, by throwing an exception).

Under these conditions, g is not quite a no-op: it takes a parameter that
will not normally be anything other than a pair and returns a result
that will (almost) definitely be a pair (almost, because the construction
of the result could still run out of resources). This still doesn't safeguard 
against resource problems or other exceptional conditions, but it does
safeguard against non-termination: if g's parameter fails to evaluate
successfully (is equivalent to bottom), then the components of g's 
result will not become available, but g's result will still be a pair.
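To make this concrete, here is a small runnable sketch (assuming GHC; the name `bottom` and the use of Control.Exception are my additions, not part of the original example). Matching on (g bottom) succeeds even though bottom itself never evaluates to a pair:

```haskell
import Control.Exception (SomeException, evaluate, try)

g :: (a, b) -> (a, b)
g x = (fst x, snd x)

-- a value of pair type that never becomes a pair
bottom :: (Int, Int)
bottom = error "not a pair after all"

main :: IO ()
main = do
  -- g bottom is a pair of two unevaluated thunks, so this
  -- pattern match succeeds without ever touching bottom:
  case g bottom of (_, _) -> putStrLn "g bottom matched (_,_)"
  -- matching bottom directly forces it, raising the error:
  r <- try (evaluate (case bottom of (_, _) -> ()))
         :: IO (Either SomeException ())
  case r of
    Left _  -> putStrLn "matching bottom itself failed"
    Right _ -> putStrLn "unexpectedly succeeded"
```

The components of (g bottom) are still unavailable, of course; only the pair structure is promised up front.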

This, in turn, can have an effect on the body of f: if some computation
in f's body (that is needed for the result) depends only on the structure 
of its parameter x, then the evaluation of (f x) will depend on the
evaluation of x, at least far enough to reveal the structure of x. 

In contrast, the evaluation of (f (g x)), under the same dependencies,
will not require the evaluation of x, because g guarantees the structure
of x to be that of a pair. This helps to decouple the evaluation of f from
that of its parameter, without otherwise changing the dataflow. Taking 
a more concrete example:

x :: (a,b) -- not quite a pair, but not anything else, either ;-)
x = x 

f :: (a,b) -> IO ()
f x = case x of { (_,_) -> print "hooray!" }

i :: (a,b) -> (a,b)
i x = x

g :: (a,b) -> (a,b)
g x = (fst x, snd x)

mainA = f (i x)

mainB = f (g x)

This should demonstrate the difference between no-ops and the
kind of constructions used for program optimizations. Both i and g
are identities, as far as their types can tell us, but they differ on
values at the borderline between promised structure and computed
structure, if that makes any sense.
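The example above can be packaged as a complete program (a sketch, assuming GHC; only mainB is actually run, because mainA would force x = x and never return -- GHC's runtime typically aborts such a loop with a <<loop>> exception):

```haskell
-- the type is fixed to (Int,Int) here just to keep the sketch monomorphic
x :: (Int, Int)
x = x

f :: (a, b) -> IO ()
f p = case p of (_, _) -> print "hooray!"

i, g :: (a, b) -> (a, b)
i y = y                  -- plain identity: matching (i x) forces x
g y = (fst y, snd y)     -- promises a pair before x is inspected

mainA, mainB :: IO ()
mainA = f (i x)          -- would never print: forcing x diverges
mainB = f (g x)          -- prints "hooray!": g x is already a pair

main :: IO ()
main = mainB
```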

Statically, the type (a,b) promises a pair without any guarantee that
a pair will be computed. Operationally, (g x) promises a pair before 
checking whether x actually evaluates to a pair. If it does, g is just 
an identity, but if it doesn't, g delivers more information than it gets. 
This is the source of the decoupling effect.

If we change f slightly, 

f' :: (Show a,Show b) => (a,b) -> IO ()
f' x = case x of { (a,b) -> do {print "hooray!";print a;print b} }

(and restrict x's type, to (Bool,Bool) say,) we get an example of 
how this decoupling of f' and x, via g, can influence resource usage. 
In (f' x), nothing can be done in the body of f' before x evaluates to
a pair, so everything in f' has to be put on hold until the evaluation of
x starts to deliver. In (f' (g x)), on the other hand, parts of the body 
can be executed, and their resources reclaimed, before the rest of 
the body demands the evaluation of x.
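A sketch of this effect, again assuming GHC, and substituting error for non-termination so the failure is observable rather than a hang: (f' x) prints nothing before failing, while (f' (g x)) gets as far as the first print before the demand for the components hits bottom.

```haskell
import Control.Exception (SomeException, try)

x :: (Bool, Bool)
x = error "x never becomes a pair"

g :: (a, b) -> (a, b)
g p = (fst p, snd p)

f' :: (Show a, Show b) => (a, b) -> IO ()
f' p = case p of (a, b) -> do print "hooray!"; print a; print b

main :: IO ()
main = do
  -- forcing x fails immediately; nothing in the body runs:
  _ <- try (f' x)     :: IO (Either SomeException ())
  -- g x matches (a,b) at once; "hooray!" is printed, then
  -- print a demands fst (g x), which finally forces x and fails:
  _ <- try (f' (g x)) :: IO (Either SomeException ())
  putStrLn "done"
```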

Others have mentioned that old equation: algorithm = logic + control.
One could say that non-no-ops, such as g above, allow us to add
control information without changing a working logic. 

My personal opinion on that matter is that FPLs enable us to focus 
on the logic aspect, but that we have neglected to provide good 
support for the other aspect of working programs. Working 
heap-profilers and work on operational semantics that permit the 
prediction of resource usage are good signs, showing that some 
researchers are working on this second aspect. 

What I'm not so sure about is how to make use of these results.
Rewriting programs to modify their control aspects mixes control
and logic aspects again. As Manuel said, that is better than not having 
the choice at all. But I would prefer to specify both aspects
separately and then specify how they are to be combined into a
working program.

> Regards,
> a working Informix-4GL programmer  :-)

Ha! Living proof that there is hope for real-world programmers :-)

In rephrasing your answer, I didn't quite use all the technical terms 
you expected, so perhaps there's hope for us academics as well?-)


I don't want to get into the propagandistic aspects of this thread,
but earlier instances of this discussion (yes, there have been a few)
left one important message behind: 

* stop discussing why FP isn't used by others for all their projects, 
* start using it yourself for the kind of projects for which it is useful
* (by now, this includes quite a large set, and it is still growing).

I did not learn Perl, for instance, because others were discussing
its qualities, but because I had some problems for which it seemed
suited. And I continued using it not because I thought it was an 
elegant language, but because it was suited for those problems 
and kept developing to keep up with programmers' practical needs.
I only use it for some kinds of problems, and it is certainly not the
only language I use. So why should I expect others to use FPLs 
for everything, or to drop all other languages they find useful?

In this sense,
Claus

-- we could also adopt that old advertising wisdom:
If you like what you can do with FPLs, tell your friends.
If you have any complaints about FPLs, tell their implementers.


