I too have been thinking about the issue and the responses but,
because of my memory research background, I do not conceptualize
the situation in traditional psychometric terms (e.g., I don't believe in "g"
and related concepts; instead, I try to think in terms of organized
knowledge structures).  This means that a student's performance on
questions such as "When did Wundt open the first academic psychology
laboratory?" and "What is priming?" may or may not be correlated,
depending upon how thoroughly the student studied the material
(i.e., a dedicated-practice model in which I assume that a student
studies the material on a week-by-week basis and then tries to
integrate it before exam time -- yes, I know, I
live in a fantasy world).  One then has to ask why one would get
high correlations across items/questions in different domains:
is it due to some reified construct such as "g", or to the fact that
the person has good time-management, perseverance, and
organizational skills, which will affect responses to all items?
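To make that concrete, here is a rough simulation sketch (Python; the
parameters and the "diligence" weighting are invented purely for
illustration) of how a shared study-diligence factor can produce
correlations across items from unrelated content domains without
appealing to "g":

import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 200, 20

# Assumed model: each student's chance of answering any item correctly
# reflects a shared "diligence" factor plus item-specific knowledge.
diligence = rng.normal(0, 1, n_students)             # shared factor
knowledge = rng.normal(0, 1, (n_students, n_items))  # item-specific

w = 0.7                                              # weight of shared factor
ability = w * diligence[:, None] + (1 - w) * knowledge
p_correct = 1 / (1 + np.exp(-ability))               # logistic link
responses = rng.binomial(1, p_correct)               # 0/1 item scores

# The average inter-item correlation rises with w, even though the
# items nominally tap different content domains.
corr = np.corrcoef(responses.T)
mean_r = corr[np.triu_indices(n_items, k=1)].mean()
print(f"mean inter-item r = {mean_r:.2f}")

Raising w (i.e., letting time management, perseverance, and the like
matter more) drives the inter-item correlations up without any single
underlying ability.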

Also, we know a fair amount about the factors that affect memory:
for example, recognition is a more sensitive measure than recall,
concrete/high-imageability words are better remembered than
abstract/low-imageability words, and so on.  As a task for
teachers, take a single fact/concept/problem that you want
to use on a test and then manipulate how it is presented, to
vary the "difficulty" of understanding the question and, if it is
not purely a memory process, the strategy needed to determine
the solution.

Think about the Wason selection task and why some versions
of the task are difficult while other versions are not (i.e., the
deep structure of the problem remains the same but the surface
structure varies).  It also highlights how "positive knowledge"
(such as the use of modus ponens) is easier to use, and generally
preferred, compared with "negative knowledge" (such as the use of
modus tollens).

It might also be worth looking at an old article by Steven Schwartz
on how problem representation affects problem solving; see:
http://psycnet.apa.org/journals/xge/91/2/347/

-Mike Palij
New York University
[email protected]



-------------- Original Message ---------------
On Sat, 05 May 2012 22:56:09 -0700, Jim Clark wrote:
Hi
Carol's query and people's responses got me thinking about an issue I've
noticed with tests over the years.  Every once in a while I have to put my item
results into SPSS to correct an error on the answer key.  But when I run
reliabilities, I generally get relatively low values for alpha given the number
of items ... e.g., .68 or so for a 50-item test (with 2 items removed because
of perfect performance).  With far fewer items (fewer than 20), I can easily get
.80 or higher when measuring psychological traits.
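(As an aside, alpha itself is simple enough to compute outside SPSS; here
is a minimal Python sketch, assuming the item scores are in an
n-students-by-n-items matrix of 0/1 entries -- the file name below is just
a placeholder:)

import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_students x n_items) matrix of item scores."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of test totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

scores = np.loadtxt("item_scores.csv", delimiter=",")  # 0/1 entries
print(round(cronbach_alpha(scores), 2))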

I wondered if item difficulty, which several people mentioned, might be a
factor.  So, across the 48 items in the above example, I correlated difficulty
(proportion correct) with the item-total correlation from the Cronbach's alpha
output.  R-squared was about 8% for a quadratic relationship.  And it was clear
in the scattergram that item-total rs were highest in the middle and approached
0 at either end.  But there was still a lot of variation in the middle.
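(In case anyone wants to try the same thing on their own item file, here is
a rough Python sketch of that analysis; I'm assuming the corrected item-total
correlation, i.e., each item correlated with the total of the remaining items,
which I believe is what the SPSS reliability output reports:)

import numpy as np

def difficulty_vs_item_total(scores):
    """scores: (n_students x n_items) 0/1 matrix, zero-variance items removed."""
    n_items = scores.shape[1]
    difficulty = scores.mean(axis=0)                 # proportion correct per item
    total = scores.sum(axis=1)
    item_total_r = np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]   # item vs. rest
        for j in range(n_items)
    ])
    coeffs = np.polyfit(difficulty, item_total_r, deg=2)         # quadratic fit
    residuals = item_total_r - np.polyval(coeffs, difficulty)
    r_squared = 1 - residuals.var() / item_total_r.var()
    return difficulty, item_total_r, r_squared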

So it appears (perhaps obvious?) that measuring knowledge in a course is quite
different from measuring psychological traits, which should have some
implications, I guess, for how we design our tests, such as how we sample from
the material in the course.  What would happen, for example, if I had enough
items to get reliabilities by chapters or sections?  Would I find better alphas
at that level?
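(Something like the following, splitting the item columns by chapter, would
be one way to check; the file name and the chapter-to-column assignments
below are just placeholders:)

import numpy as np

def cronbach_alpha(scores):
    # same alpha formula as in the sketch above
    k = scores.shape[1]
    return (k / (k - 1)) * (1 - scores.var(axis=0, ddof=1).sum()
                            / scores.sum(axis=1).var(ddof=1))

scores = np.loadtxt("item_scores.csv", delimiter=",")   # full 0/1 item matrix
chapters = {"ch1": slice(0, 12), "ch2": slice(12, 24),  # invented ranges
            "ch3": slice(24, 36), "ch4": slice(36, 48)}

for name, cols in chapters.items():
    print(name, round(cronbach_alpha(scores[:, cols]), 2))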

Take care
Jim


James M. Clark
Professor of Psychology
204-786-9757
204-774-4134 Fax
[email protected]

>>> Carol DeVolder <[email protected]> 04-May-12 11:34 pm >>>
Hi,
As I sit here trying to do anything but grade or write exams, a thought
occurred to me. Often, when one constructs an exam over several chapters,
the questions are mixed up so that those over the same chapter aren't
grouped together. Is this really necessary? It seems that it merely serves
to add one more layer of confusion to the process. Or am I the only one who
does this?
Carol
