Hi,
This is an interesting debate.
I recently ran a study in which relatively novice programmers learned
micro visual programming languages (VPLs) in order to perform various
comprehension tasks. They were given two pre-tests from the Kit of
Factor-Referenced Cognitive Tests: the paper folding test and the
pathfinding test. They were also asked to give a detailed account of
their prior programming experience. I was interested in seeing which
of these measures, if any, correlated with their performance.
My hunch was that the paper folding test, which has been used in
previous visual programming experiments, is mainly a measure of the
ability to perform mental 3-D manipulations, and that didn't strike me
as especially relevant to visual programming. I wondered whether the
pathfinding test might be more relevant, given that visual programs
often involve tracing spaghetti-like arcs.
Given the small size and limited scope of the study, it would be
rather unwise to generalise, but the best correlate of performance
turned out to be the subjects' prior programming experience, despite
the limited range of that experience. This held both for the number of
languages they reported knowing and for the ratings they gave of how
well they knew each language.
There wasn't really any correlation between the measures of visual and
spatial ability and program comprehension (this was also the case in a
previous study on VPLs that I carried out). It would be interesting to
see whether this finding applies equally to other situations (program
construction, more expert programmers, etc.).
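For anyone curious how such correlations might be checked, here is a
minimal sketch in Python (not the analysis I actually ran); the
per-subject scores below are invented purely for illustration, and
with samples as small as mine a rank-based measure like Spearman's rho
is probably a safer companion to Pearson's r:

    # Illustrative sketch only: invented scores, not the study's data.
    from scipy.stats import pearsonr, spearmanr

    # Hypothetical per-subject measures
    paper_folding = [12, 9, 15, 7, 11, 14, 8, 10]     # pre-test score
    pathfinding   = [22, 18, 25, 15, 20, 24, 16, 19]  # pre-test score
    n_languages   = [1, 0, 3, 0, 2, 3, 1, 1]          # languages known
    comprehension = [55, 40, 78, 35, 62, 80, 42, 50]  # task score

    for name, scores in [("paper folding", paper_folding),
                         ("pathfinding", pathfinding),
                         ("languages known", n_languages)]:
        r, p = pearsonr(scores, comprehension)
        rho, p_s = spearmanr(scores, comprehension)
        print(f"{name}: Pearson r={r:.2f} (p={p:.3f}), "
              f"Spearman rho={rho:.2f} (p={p_s:.3f})")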
Anyway, this seems to me to illustrate the point Alan was making: it's
not a "can/can't learn" distinction but a "have learnt/have yet to
learn" one. Given the steep initial learning curve associated with
learning to program, it doesn't seem surprising that even a few
months' extra experience could produce large differences in
performance.
Cheers,
Judith