Alan Blackwell wrote:
1) What is the measure of 'intuitive' that you propose to use?
Asking for the measure is sage advice, I'd say, for a PhD candidate, but
the question that first flashes in my mind is whether intuition -- and
intuition in programming -- is a phenomenon that can be identified
regardless of what measures one intends to use to snap its picture. The
literature I've bumped into seems to confine "intuition" to the
category of processes of knowing without conscious inference. Back to
measurement for a moment, such a phenomenon is -- in the spirit of
Ericsson and Simon's hypothesis about verbal protocols -- not something
that one can "directly" observe through protocols, but perhaps through
brain sensing (fMRI, etc.). This would place it in a similar realm of
phenomena as Rasmussen's skill- or rule-based behaviour
(non-introspectable, fast, parallel), or Klein's recognition-based
responses. And since intuition has a "feel" to many, it might be
interesting also to relate it to Norman's Visceral level of emotional
response.
Taken to heart, this definition of intuitive would imply that no
programmer will be able to (honestly) explain why they find the
computational model intuitive -- unless they happen to have a model for
how they manage to think without being able to introspect those thought
processes! This brings up the point that the raging argument may well
not be about intuitive computational models / languages at all, but
rather whether they are familiar to more people, or whether various
groups of humans have mental software that is better or worse at
thinking about these models / languages. "Intuitive" in that case is a
code word for "I have a lay understanding of cognition, so I don't know
why I find it easier".
On 23 Nov 2009, at 04:23, Richard O'Keefe wrote:
Another mailing list I'm on just had a bunch of people shouting
that imperative programming was obviously more intuitive than
functional or logic programming. Since they didn't seem to be
familiar with the fairly wide gap between a typical first-year
model of how an imperative language works and what _really_ happens
(e.g., apparently non-interfering loads and stores can be
reordered both by the compiler and the hardware, loads from main
memory can be 100 times slower than loads from L1 cache, &c),
I found myself wondering if what they _really_ meant is "I learned
a simple model early on and find anything else different."
Why is what "really" happens relevant? In my mind it is perfectly
acceptable to talk about an abstract machine and language for
programming it and ask useful questions about its usability without
worrying about implementation. This is the hard-won benefit that
mathematics and computer science gain from creating abstraction
boundaries. If we create an artificial
minimal purely functional language without state, and an
imperative-paradigm language with it, wouldn't the "which paradigm is
more intuitive" question arguably come into sharper focus? Even if we
never got around to implementing a compiler for either language?
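To make the contrast concrete, here is a minimal sketch of my own (not from the original posts, and the function names are invented for illustration): the same computation, summing squares, written once in a state-free functional style and once in an imperative, mutation-based style. The "which paradigm is more intuitive" question is then about which mapping from problem to program text readers find easier to follow, independent of any compiler:

```python
from functools import reduce

def sum_squares_functional(xs):
    # Purely functional style: no mutation, just expression
    # composition folding the list into a single value.
    return reduce(lambda acc, x: acc + x * x, xs, 0)

def sum_squares_imperative(xs):
    # Imperative style: an explicit accumulator variable,
    # destructively updated one step at a time.
    total = 0
    for x in xs:
        total += x * x
    return total

# Both describe the same abstract computation.
assert sum_squares_functional([1, 2, 3]) == 14
assert sum_squares_imperative([1, 2, 3]) == 14
```

Neither version says anything about caches or instruction reordering; both are claims about an abstract machine, which is exactly the level at which the usability question is posed.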
More directly, what "really" happens when a functional or logic program
is running is that somewhere underneath is a distributed finite state
machine fully operating within the imperative paradigm. If you pierce
the abstraction boundary you lose the privilege of hyping the advantages
of the paradigm.
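As a toy illustration of that point (my sketch, not part of the original posts): even a map that is "pure" from the caller's point of view is typically realized underneath by a loop mutating state. The purity lives at the abstraction boundary, not below it:

```python
def pure_map(f, xs):
    # The caller-visible contract is purely functional:
    # same inputs give the same output, with no observable mutation.
    result = []              # ...but underneath sits an imperative buffer,
    for x in xs:             # filled by a stateful loop,
        result.append(f(x))  # one destructive update at a time.
    return result

assert pure_map(lambda x: x + 1, [1, 2, 3]) == [2, 3, 4]
```

Andrew's point is that this layering is fine so long as one argues about the paradigm at the boundary, rather than peeking below it only when convenient.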
Andrew