On Nov 24, 2009, at 1:20 AM, Derek M Jones wrote:

Richard,

Does anyone know whether there's any empirical evidence either way
for the hypothesis
   programmers find a programming language or paradigm
   "intuitive" to the degree that it resembles what they
   learned first
?

First of all, your question suggests that there is a
one-size-fits-all answer for programming languages.

I don't believe it does.  I certainly don't *believe* it, and
just as certainly don't *practice* it, and never intended to
suggest it.
Remember, it's the other people in the debate who were suggesting
that the "imperative paradigm" is intuitive for everybody.

Come to think of it, there may well be a form of selection bias here.
If educational institutions start by teaching imperative (F, Pascal,
C) or OO-imperative (Java) languages, then only the people who find
such languages easy to learn will go on to further study.

Experienced users find languages 'natural' or 'intuitive' because
they have had plenty of practice using them.

A very plausible suggestion.  Now we have three reasons for the
alleged superior 'intuitiveness' of imperative/OO-imperative languages:
        - selection bias
        - first language effect
        - practice effect

The following experiment showed that developers' knowledge of
binary operator precedence correlates with occurrences in
source code:

www.knosof.co.uk/cbook/accu06.html

I read that paper some time back.

By the way, you might want to revise the web version to fix
the spelling mistakes, e.g., "parenthesIS" used when
"parenthesEs" is meant.

To my thinking the graphs in figure 1 show no effect of experience.
That means that interesting as the experiment was, it may not bear
on the question of whether there is empirical evidence that the
'imperative' paradigm is most intuitive.

The graph on the left of figure 1 _hinted_ that the population
might be heterogeneous (one group answering about 100 problems
and the other group about 175).  Extracting the numbers from
the graph, here's a histogram:

<<inline: hist.jpg (histogram of the number of problems answered)>>

It was extremely puzzling that "%" scored so badly compared
with "*" and "/".  Could there be interference from % formats?

That study asked some questions that needed to be asked, but
I think it has to be considered a pilot study; given the
possible heterogeneity of the population, I don't regard it
as having answered them yet.

Not that I don't appreciate what a lot of work it was.

The high error rate (33%) raises another question.
How can these people be *experienced* developers if they are
answering basic operator precedence questions at a level that
wouldn't get them a pass in a first year lab?  Can it be that
the problems they were given aren't related to the kinds of
operator precedence issues they meet in practice, or that the
average number of operators per statement these days is
below one?  Would there be a substantial difference in
operator precedence skill level between High Performance
programmers (the people writing for supercomputers) and web
programmers (people using PHP, say)?  Or is the state of the
world's software infrastructure even worse than I had dreamed?
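
By "basic" I mean questions at roughly this level (my own
made-up example, not one taken from the study):

    #include <stdio.h>

    int main(void)
    {
        int a = 1, b = 2, c = 2;
        /* "+" binds tighter than "&", so this is
           (a + b) & c = 3 & 2 = 2, not a + (b & c) = 3 */
        printf("%d\n", a + b & c);
        return 0;
    }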

Thanks for providing your problems.  I meant to give them to
some 3rd year students this year, but forgot.  (I asked them to
read patents instead.  Produced a classful of rabid patent-haters.)

The following experiment showed that developers used variable
names to make precedence decisions:
www.knosof.co.uk/cbook/accu07.html

I hadn't seen this one.  I've just printed part 1 off and will read
it.  Looks interesting.  The link "Part 2 _download_ pdf" gives me
a page not found error.
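
From the abstract I take the idea to be that identifier names
can suggest an intended grouping, something like this (my own
invented illustration, not an example from the paper):

    double start_miles = 100.0, hours = 2.0, speed = 60.0;

    /* the names hint that hours * speed forms a distance, so a
       reader may "see" start_miles + (hours * speed) whatever
       the precedence rules actually say */
    double total_miles = start_miles + hours * speed;

    /* the same shape with meaningless names offers no such cue */
    double v1 = 1, v2 = 2, v3 = 3;
    double v0 = v1 + v2 * v3;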

