Thanks! So many references! While I can certainly start going
through them, since you are the experts I would really appreciate a
"start from this" indication. Otherwise, because my present need is
very urgent, I risk taking into account only a small fraction of these
works, and maybe not the ones most relevant to my present interest
(grouping students into two similar groups with respect to their
aptitude for learning computer programming).
Thanks
Stefano
Stefano Federici
-------------------------------------------------
Università degli Studi di Cagliari
Facoltà di Scienze della Formazione
Dipartimento di Scienze Pedagogiche e Filosofiche
Via Is Mirrionis 1, 09123 Cagliari, Italia
-------------------------------------------------
Cell: +39 349 818 1955 Tel.: +39 070 675 7815
Fax: +39 070 675 7113
========================================================
Dear all,
Please, please let's not re-invent the wheel -- or perhaps reiterate
our own ignorance. We actually know very little about true indicators
of programming aptitude. There are some correlations with spatial
reasoning, and some with accurate articulation of process in one's
native language. The point is that programming is not 'one thing' -
it's a complex, composite interaction of skills. There have been a
number of relevant studies (of varying quality) in the past decade
alone, not to mention the proprietary instruments developed by
personnel services in industry. Rountree, Rountree & Robins did a
good literature review a few years ago. Here are a few relevant
pointers (there is almost certainly more recent work, too):
Rountree, N., Rountree, J., Robins, A. & Hannah, R. (2004) Interacting
factors that predict success and failure in a CS1 course. SIGCSE
Bulletin, 36(4), 101-104.
Rountree, N., Rountree, J. & Robins, A. (2002) Predictors of
success and failure in a CS1 course. SIGCSE Bulletin, 34(4).
Fix, V., Wiedenbeck, S. & Scholtz, J. (1993) Mental
representations of programs by novices and experts. Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems.
McCracken, M., Almstrum, V., Diaz, D., Guzdial, M., Hagan, D.,
Kolikant, Y. B.-D., Laxer, C., Thomas, L., Utting, I. & Wilusz, T. (2001) A
multinational, multi-institutional study of assessment of programming
skills of first-year CS students. Proceedings of ITiCSE.
Cantwell Wilson, B. & Shrock, S. (2001) Contributing to success in an
introductory computer science course: a study of twelve factors. SIGCSE
Symposium.
Daniel, G. & Cox, K. (2003) Computing courses: testing for
student aptitude. Web Tools Newsletter.
http://webtools.cityu.edu.hk/news/newsletter/aptitude.htm
Mayer, R. E. (1989) The psychology of how novices learn computer
programming. In E. Soloway & J. C. Spohrer (Eds.) Studying the Novice
Programmer (pp. 129-159). Hillsdale, NJ: Lawrence Erlbaum.
Roddan, M. (2002) The Determinants of Student Failure and Attrition
in First Year Computer Science.
http://www.psy.gla.ac.uk/~steve/localed/roddenpsy.pdf
As part of the BRACE project (http://www.cs.otago.ac.nz/brace/), a
whole collection of CS Ed researchers looked into this and conducted a
multi-institution study. (The list above is from the reading list for
BRACE.) We used four diagnostic tasks:
i) The Biggs Study Process Questionnaire (Biggs et
al, 2001). The revised questionnaire assesses deep and surface
approaches to learning in a given context.
ii) The Paper Folding Test (VZ-2) is from the ETS
Kit of Referenced Tests for Cognitive Factors (Ekstrom et al, 1976).
The test is designed to measure visualisation and spatial reasoning.
iii & iv) The description of a phone book search and
a sketch-map giving directions across campus: two commonplace
examples to convey programming concepts and make them relevant to
students (drawing on the work of Paul Curzon, 2002). The tasks assess
students' ability to articulate a simple and familiar search and
decision strategy accurately.
Here are pointers to some of the resultant publications:
Simon, Cutts, Q., Fincher, S., Haden, P., Robins, A., Sutton, K.,
Baker, B., Box, I., de Raadt, M., Hamer, J., Hamilton, M., Lister, R.,
Petre, M., Tolhurst, D., Tutty, J. (2006) The ability to articulate
strategy as a predictor of programming skill. Australian Computer
Science Communications, 28(5):181-188. ISSN 1445-1336.
Simon, Fincher, S., Robins, A., Baker, B., Box, I., Cutts, Q., de
Raadt, M., Haden, P., Hamer, J., Hamilton, M., Lister, R., Petre, M.,
Sutton, K., Tolhurst, D., Tutty, J. (2006). Predictors of success in a
first programming course. Australian Computer Science Communications,
28(5):189-196. ISSN 1445-1336.
de Raadt, M., Hamilton, M., Lister, R., Tutty, J., Box, I., Cutts,
Q., Fincher, S., Haden, P., Petre, M., Robins, A., Simon, Sutton, K.,
Tolhurst, D., Baker, B., Hamer, J. (2006). Do map drawing styles of
novice programmers predict success in programming? A multi-national,
multi-institutional study. Australian Computer Science Communications
28(5):213-222. ISSN 1445-1336.
Please don't overlook the work published in CS Ed conferences such as
SIGCSE, ACER, ICER and ITiCSE.
Best wishes,
Marian
Prof. Marian Petre,
Director of Research,
Royal Society Wolfson Research Merit Award Holder
Computing Department
Open University
Milton Keynes MK7 6AA
UK
phone: +44 1908 65 33 73
fax: +44 1908 65 21 40
CRC website: crc.open.ac.uk
Petre website: mcs.open.ac.uk/mp8/
On 18/03/2011 17:00, Thomas Green wrote:
How deeply do you want to go into this?
1) If you're trying to set up balanced groups for a study, then you
only need to know about factors that will give a sizeable noise
level if they are not balanced across groups. That's what I thought
you wanted to do, am I right?
2) If you want to know what the state of knowledge is about factors
that might, possibly, have some relationship to learning to program,
even if only a small one, then it's a whole different question. I
would recommend looking at work by Mark Eisenstadt on 'everyday
programming' (or some similar title), at a big review by John Pane,
and at a whole heap of material on Logo. But that's a big big review
issue.
If you're sticking with (1), you can stop worrying so much. A few
years ago Jarinee Chatatrichart found that, of a very large number of
possible factors that she studied, the only one with a significant
contribution was whether people had used Lego blocks when they were
little. And even that didn't have much effect.
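(As a practical aside, not from the thread: once you have some pre-test score for each student, a common way to form two similar groups is matched-pair assignment. A minimal Python sketch, with illustrative names throughout:

```python
# Matched-pair assignment: rank students by a pre-test score, then
# split each adjacent pair at random between the two groups, so the
# groups end up with very similar score distributions.
import random

def matched_pair_split(scores, seed=0):
    """scores: dict mapping student -> aptitude score.
    Returns two lists of students with closely matched scores."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    ranked = sorted(scores, key=scores.get, reverse=True)
    group_a, group_b = [], []
    for i in range(0, len(ranked), 2):
        pair = ranked[i:i + 2]
        rng.shuffle(pair)              # randomise within each matched pair
        group_a.append(pair[0])
        if len(pair) == 2:
            group_b.append(pair[1])
    return group_a, group_b
```

Because each adjacent pair is split between the groups, the group means can differ by at most half the typical within-pair gap, which is usually far below the noise level Thomas mentions.)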
Much better to worry about whether you've designed the experiment
right. For example, how good is the interface? If there's something
horrible in it, then the interference from that will drown every
other effect. Really good experimenters, like Patricia Wright, used
to run at least two pilot studies before starting the main study, to
ensure that all the shallow problems were ironed out.
Also, how good are the instructions? Make SURE that people can
understand them. Get people to read them and explain them back to
you. Anything they find hard, REWRITE IT.
So that's what I recommend you do. Run two people in each condition of
your study, then TALK TO THEM and ask what they found hard. Then FIX
IT. Then do it again until they stop complaining about little things
that you hadn't intended to be problems.
Thomas
On 18 Mar 2011, at 16:45, Stefano Federici wrote:
I see. But don't you think that, among those people who don't know
anything about programming, someone who is very good at punctuation
could perform better at programming? I'm thinking of the classic
Logo example for drawing a square:
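(The classic Logo square is `REPEAT 4 [FORWARD 100 RIGHT 90]`. As a rough illustration, not part of the original message, the same turtle walk can be simulated in plain Python so the path can be inspected without a graphics display:

```python
# Simulate the Logo turtle for REPEAT 4 [FORWARD 100 RIGHT 90]:
# track position and heading, and record each corner visited.
import math

def draw_square(side=100):
    """Trace the turtle's path for a square; return the visited corners."""
    x, y, heading = 0.0, 0.0, 0.0      # start at the origin, facing east
    corners = [(x, y)]
    for _ in range(4):                 # REPEAT 4 [...]
        x += side * math.cos(math.radians(heading))  # FORWARD side
        y += side * math.sin(math.radians(heading))
        heading -= 90                  # RIGHT 90 (clockwise turn)
        corners.append((round(x, 6), round(y, 6)))
    return corners
```

After the four FORWARD/RIGHT steps the turtle is back at its starting point, which is the point of the example: a tiny, strictly punctuated program whose structure must be articulated exactly.)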
73 Huntington Rd, York YO31 8RL
01904-673675
http://homepage.ntlworld.com/greenery/
--
The Open University is incorporated by Royal Charter (RC 000391), an
exempt charity in England & Wales and a charity registered in Scotland
(SC 038302).