How about someone doing something on understanding these programs? I always
found that experiments on understanding programs were much easier than
ones on writing programs. Far less inter-subject variability. Now that
you have suitable materials, it might be possible for someone to have
a go at that? People like Judith Good and Paul Brna have worked in
this area.
As you point out, maintaining programs is a big part of their life-
cycle, so results on understanding would be valuable. And part of the
scaling issue for constructing big programs is the need to understand
what has already been created (even if by yourself).
As for the high variability and the apparent bi-modality: Keith
Stenning and others have argued that successful problem solving
depends heavily on the external representation adopted for the
problem. He and Richard Cox did a big study on students doing
scholastic aptitude problems, which are well normed I understand, and
showed very convincingly that only students who chose an appropriate
representation solved a given problem. I can't remember whether they
were able to get several problems per student and show that the same
students were perfectly successful on other occasions when they did
choose an appropriate representation.
So I would suggest, as a possible explanation, that the two humps may
not be because some of your participants were in any way inferior, but
that they unluckily set off on the wrong track and never recovered.
Did you debrief them about strategies, or collect their working notes,
or anything like that? If so, you could check that out.
On 16 Feb 2011, at 18:46, Meredydd Luff wrote:
I did a pilot study along those lines a couple of years ago,
attempting to compare performance with the Actor model, Transactional
Memory and traditional mutex locking. (Not quite the precise
distinctions you're looking for, but at least in a similar ballpark.)
17 subjects attempted a simple unstructured grid problem in an
afternoon - measuring time taken, NCLOC, and subjective responses on a
post-task questionnaire.
The upshot was those metrics turned out to be pretty much completely
uncorrelated (r^2 < 0.07 in all cases), and the only results to reach
anything close to statistical significance on any metric were (1) that
people completed tasks faster on the second trial, and (2) that people
claimed to prefer the novel paradigms to mutex locking in the
after-task questionnaires. The former is crushingly obvious, and the
latter is subject to a number of obvious biases.
(I also noticed an odd bifurcation in finishing times - subjects
either finished the task in half the time allotted or didn't manage it
at all, but this is irrelevant to your question :) )
The paper is entitled "Empirically Investigating Parallel Programming
Paradigms: A Null Result", and is available online at:
To my knowledge, at the time of writing, there was no published work
more recent than or more applicable to your query than the supercomputing
studies I cited in the Related Work section. (There has been a more
recent study of TM vs SMTL in an undergraduate-assignment setting - I
don't have the reference but could grub it up for you if you wanted.)
On Wed, Feb 16, 2011 at 4:10 PM, Russel Winder wrote:
Prompted by various discussions elsewhere, I am on the search for
experimental results and/or people doing or about to do experiments.
The questions all relate to the models of parallel software:
shared-memory multithreading, Actor Model, Dataflow Model,
Communicating Sequential Processes (CSP), and data parallelism.
Question 1 is: is synchronous message passing easier for programmers
to work with than asynchronous message passing?
Question 2 is: are the case classes of the Actor Model easier for
programmers to work with than the select statement of the Dataflow
Model?
There are more but those are the two "biggies".
There is a lot of advocacy research out there, from people who know
precious little about psychology, building up various "known facts"
about what is and is not better from a cognitive perspective. Some of
them base this on observed anecdotal evidence, which gives it some
legitimacy; others are simply peddling their own beliefs.
So real experimental evidence from people who know what they are doing
would be most welcome.
Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net
41 Buckmaster Road m: +44 7770 465 077 xmpp:
London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder