On Nov 24, 2009, at 10:23 PM, Lindsay Marshall wrote:

One thing that seems relevant to me here is that several of the examples given to show the "non-intuitiveness" (whatever that means) of languages are what I would class as exceptions: if the compiler re-orders my (correct) code and makes it perform in a way that I did not intend then that is a bug in the compiler; if you have to define the language in such a way that such things are allowable then your language is flawed. Languages should support the programmer, not the machine!

The tricky point there is "[your] (correct) code".
Code that you *THINK* is correct according to a mistaken model
might be correct according to an accurate model or it might not.

The kinds of changes I was speaking of are ones that are explicitly
allowed by the language standards, and code that "breaks" (or not)
depending on exactly what the compiler does is defined by those
standards as incorrect code.

Some examples from C:

        f(getchar(), getchar());
                What order are the arguments evaluated in?
        a[i] = b[i++];
                What is assigned to what?
        long long x = -1; ... x = 0;
                Assuming that the initialisation completes, and
                there are no other assignments to x, under what
                circumstances might a value other than 0 or -1
                be observed?
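
To make the first two concrete, here is a minimal sketch (a compilable
C program of my own devising; the function name f comes from the
example above, everything else is mine) whose output may legitimately
differ between conforming compilers.  As for the third: on a machine
where a long long store is two word stores, another thread can observe
a torn value that is half of -1 and half of 0.

        #include <stdio.h>

        static int f(int a, int b)
        {
            printf("first argument %d, second argument %d\n", a, b);
            return 0;
        }

        int main(void)
        {
            /* Unspecified: the two getchar() calls may be evaluated
               in either order, so which character lands in which
               parameter is up to the compiler. */
            f(getchar(), getchar());

            /* Undefined: i is read and modified with no intervening
               sequence point, so a conforming compiler may do
               anything at all with this statement. */
            int a[2] = {0, 0}, b[2] = {10, 20}, i = 0;
            a[i] = b[i++];
            printf("a[0]=%d a[1]=%d i=%d\n", a[0], a[1], i);
            return 0;
        }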

As for rearranging code, this is not something that compiler
writers do on a whim.  It would be MUCH easier to write
compilers that don't.  I'm trying to write a compiler for a
transport-triggered architecture.  Instruction scheduling is
probably going to make a factor of 4 difference to performance.
Eliminating store instructions will have a huge impact, because
on the target machine a store is 40 times slower than an add.
Even on conventional machines, you can easily get a factor of
2 in performance by _not_ implementing a naive memory model.
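
As a sketch of what "not naive" buys (my own illustration, with a
made-up function name): absent volatile or atomics, a compiler may
keep an accumulator in a register for a whole loop and store it once
at the end, turning n loads and n stores into n adds and one store.

        /* Hypothetical example: nothing here forces *sum to memory
           on each iteration, so an optimising compiler may hold it
           in a register and write it back once after the loop.
           Code that expected another thread to watch *sum change
           step by step was never correct code in the first place. */
        void accumulate(int *sum, const int *data, int n)
        {
            for (int i = 0; i < n; i++)
                *sum += data[i];
        }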

One of the big concerns in the process leading up to the Ada 95
standard was how to strike the balance in allowing optimisation.
(Any language that has exceptions is peculiarly vulnerable to
optimisation problems.  Technically speaking, there are flow edges
from almost _everywhere_ to exception handling.)  Forbidding
optimisation would basically have killed the language.  (Yes, there
are plenty of languages where this is not so; see Perl for an
example.)
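
C has no exceptions, but setjmp/longjmp shows the same tension in
miniature.  A small sketch (my own; might_fail is a made-up name):

        #include <setjmp.h>
        #include <stdio.h>

        static jmp_buf env;

        static void might_fail(void)
        {
            longjmp(env, 1);   /* the "exception": control leaves abruptly */
        }

        int main(void)
        {
            /* Without volatile the optimiser may keep x in a register,
               and a non-volatile local modified after setjmp has an
               indeterminate value once longjmp returns here (C11
               7.13.2.1).  The qualifier forces the stores the
               optimiser would otherwise elide: the flow edge from
               might_fail() back to setjmp constrains the compiler. */
            volatile int x = 0;
            if (setjmp(env) == 0) {
                x = 1;
                might_fail();
            } else {
                printf("x = %d\n", x);
            }
            return 0;
        }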

HOW a language should support the programmer depends on what the
programmer is trying to do.  BCPL, for example, allowed the
construction of compilers on and for some rather small machines.
Fortran remains the language of choice for developing seriously
fast numerical code, and allowing heavy-duty optimisation is one
of the means necessary to support the programmer who is trying to
do that.


"intuitive" is either an empty description or one that is so highly personalised as to be meaningless. It falls far more into marketing than science.

Please, let's not argue about words.  Let's argue about semantics.
The fact of the matter is that people do have strong feelings about
what is intuitive and what is not, and that these feelings are not
idiosyncratic but widespread.  That doesn't mean they are universal.
It doesn't mean that they are not culture-bound.  But if we can
have studies of why people choose one political party or another,
we can have studies of why people report that one programming
approach is more "intuitive" than another.

One thing that maybe should be studied is what mental models developers
DO use for programming.  I remember hearing about an experiment to
study people's understanding of assignment statements, and I'm pretty
sure I heard about it here.  There are probably a lot of papers I
should read, sigh.  Suggestions?



