On Nov 24, 2009, at 9:46 AM, Andrew Walenstein wrote:

On 23 Nov 2009, at 04:23, Richard O'Keefe wrote:
functional or logic programming.  Since they didn't seem to be
familiar with the fairly wide gap between a typical first-year
model of how an imperative language works and what _really_ happens
(e.g., apparently non-interfering loads and stores can be
reordered both by the compiler and the hardware, loads from main
memory can be 100 times slower than loads from L1 cache, &c),

Why is what "really" happens relevant?

Person X has a mental model of phenomenon Y.
Person X says "I find Y intuitive".
X's model is WRONG: it observably fails to match reality
in ways that lead X to make mistakes.
Of what value is X's claim of intuitiveness then?

As I understand it, the heart and soul of the "imperative"
paradigm is mutable variables and assignment statements.
Anecdotally, the model most textbooks I've seen present
and the model most students seem to acquire (variable =
location, one thing happens after another in the order
commanded by the program) observably fail to match
reality.  It's notorious that a program which works when compiled
with plain 'gcc' may fail when compiled with 'gcc -O4'.
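
Here is the kind of thing I mean, sketched in C.  The function name
is mine, and exactly what you see depends on the compiler version
and flags, but the general pattern is well known:

        #include <limits.h>
        #include <stdio.h>

        /* The "intuitive" model: n + 1 wraps around to a negative
           number, so the comparison detects overflow.  The C standard
           makes signed integer overflow undefined, so an optimising
           compiler is entitled to simplify the test to "return 0". */
        int about_to_overflow(int n)
        {
            return n + 1 < n;
        }

        int main(void)
        {
            printf("%d\n", about_to_overflow(INT_MAX));
            return 0;
        }

Compiled with plain 'gcc' this typically prints 1; with optimisation
turned up it typically prints 0.  Neither output is a compiler bug;
the "variables are locations and one thing happens after another"
model was simply wrong about what the program means.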

Indeed, one correspondent in the other mailing list made
the explicit claim that the imperative paradigm is
intuitive for people because it matches what the hardware
is doing, which it doesn't.
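
As a sketch of what the hardware (and the compiler) actually permit,
here is my own illustration.  It deliberately contains a data race,
which is the whole point; the names mean nothing:

        #include <pthread.h>
        #include <stdio.h>

        /* The naive model says: whatever the interleaving, at least
           one of the two stores happens before the corresponding
           load, so r1 and r2 cannot both end up 0.  Real hardware has
           store buffers and real compilers reorder plain accesses, so
           on a multicore machine both can be 0.  (A correct program
           would use locks or atomics; the race is deliberate here.) */
        int x, y, r1, r2;

        void *t1(void *arg) { x = 1; r1 = y; return arg; }
        void *t2(void *arg) { y = 1; r2 = x; return arg; }

        int main(void)
        {
            int both_zero = 0;
            for (int i = 0; i < 100000; i++) {
                pthread_t a, b;
                x = y = r1 = r2 = 0;
                pthread_create(&a, NULL, t1, NULL);
                pthread_create(&b, NULL, t2, NULL);
                pthread_join(a, NULL);
                pthread_join(b, NULL);
                if (r1 == 0 && r2 == 0) both_zero++;
            }
            printf("both zero in %d runs out of 100000\n", both_zero);
            return 0;
        }

On the simple "one memory, one step at a time" model the count must
be 0.  Whether you actually catch a nonzero count depends on the
machine and on how often the two threads happen to overlap, but
nothing in the realistic model entitles you to rely on 0; compile
with 'gcc -pthread' if you want to try.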

In my mind it is perfectly acceptable to talk about an abstract machine and language for programming it and ask useful questions about its usability without worrying about implementation.

Of course, AS LONG AS THE IMPLEMENTATION RESPECTS THE SEMANTICS OF
THE ABSTRACT MACHINE.  If we are talking about 'commodity' languages
like Fortran, C, C++, Java, then the *actual* implementations that
people use do NOT respect the model that (anecdotally) most programmers
seem to think they do.  (The implementations of Java and Ada, as far as
I'm aware, _do_ match the official models. I'm not complaining of errors
in the implementations.)

Let's take an example.  The JavaScript 'with' statement.

Suppose someone has the following model.

        Within the scope of 'with (x.y.z)' a reference to a variable
        w is treated as x.y.z.w iff w is a property of x.y.z.

That's a nice simple mental model, and it will get you a long way.
But it's WRONG.  Consider
        var y = {x: 1};
        with (y) {y = []; x}
At the time x is referenced, y.x does not exist, so the model
above predicts that x is looked up as an ordinary variable, which
does not exist here.  What actually happens is that x resolves to
the x property of the *former* value of y, the object captured
when the 'with' began.  I chose this example because I've seen
this very model of JavaScript's 'with' statement presented.

This is math and comp sci's hard-won benefit of creating abstraction boundaries. If we create an artificial minimal purely functional language without state, and an imperative-paradigm language with it, wouldn't the "which paradigm is more intuitive" question arguably come into sharper focus?

There are clearly at least four things that could be compared:
        pure lazy functional language
        pure strict functional language
        simple imperative language with *simple* memory model
        otherwise simple imperative language with *realistic* memory model

More directly, what "really" happens when a functional or logic program is running is that somewhere underneath is a distributed finite state machine fully operating within the imperative paradigm.

If you pierce the abstraction boundary you lose the privilege of hyping the advantages of the paradigm.

"The abstraction boundary" has to be so placed as to include all the
effects that the programmer could observe while staying within the
strict bounds of the language as explained to them.

Case in point: waving my hands a bit, the Fortran standards say that
passing overlapping variables to a subprogram isn't legal, so a
Fortran compiler can handle
        subroutine rcopy(dst, src, len)
            integer, intent(in) :: len
            real, intent(out)   :: dst(1:len)
            real, intent(in)    :: src(1:len)
            dst(1:len) = src(1:len)
        end
under the assumption that dst and src don't overlap.  It can load
src in blocks of 8, say, and store them in blocks of 8, which just
won't work if dst and src overlap.  If they do, and things go wrong,
it's the programmer's fault, by definition.  So "loads and stores
can be arbitrarily shuffled as long as the result would be correct
for non-overlapping variables" is on the *wrong* side of the
abstraction boundary for Fortran.
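
For what it's worth, C99's 'restrict' strikes the same bargain; a
rough C analogue of rcopy (my own sketch, not part of the original
example) looks like this:

        /* The 'restrict' qualifiers are the programmer's promise
           that dst and src do not overlap, so the compiler may load
           and store in whatever blocks it likes.  Call it as
           rcopy(a + 1, a, n) and whatever goes wrong is, by
           definition, the caller's fault. */
        void rcopy(float *restrict dst, const float *restrict src, int len)
        {
            for (int i = 0; i < len; i++)
                dst[i] = src[i];
        }

The promise, like Fortran's, sits on the programmer's side of the
abstraction boundary: the compiler never checks it, it just assumes it.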

It's precisely the fact that people claim that imperative programming
is intuitive without being able to articulate a model that *correctly*
predicts the behaviour of even quite small pieces of code that got
me worked up enough to start this thread in the first place.
Studying languages that were simplified to the point where the
simplistic models worked would not, I think, help to clarify the
situation we actually find ourselves in.
