A couple of points:

(1) I mostly teach smaller classes these days and organize questions on
exams by text chapter/reading or thematically.  I also include page
numbers for each item so that when students get the exam back, they
know where to look in the text/reading/notes for the material relevant to
the question.  The occasional student has told me that knowing what page
the material was on was helpful.

(2)  I have not used item "difficulty" to select items, but I have
used two other criteria: (a) whether I covered the material in class and
(b) the importance of the material regardless of whether it was covered
in class (i.e., students were told to focus on material in the text or
readings).  I try to avoid "know everything in the chapter" types of
recommendations (though I am tempted to say this in statistics classes).

-Mike Palij
New York University
[email protected]


-------------------  Original Messages ----------
On Fri, 04 May 2012 14:10:57 -0700, Claudia Stanny wrote:

When I have a large class and create multiple versions of the exam, I
randomize the questions on the multiple forms, which mixes up the questions
across chapters.

For smaller classes, I keep questions from each chapter together.

I didn't notice a difference in average class performance in the large
classes when the questions were mixed rather than blocked (as they had
been before I started creating multiple forms).

In spite of this evidence that it might not matter, I still like the idea
of maintaining the context of a topic and keeping related questions
together.  (It also helps me detect when I have questions that are too
similar or a question that provides the answer for another question.)

Another test construction question:

I tried selecting questions based on item difficulty/type (as identified in
a publisher's test bank or based on my judgment and previous class
performance for questions I write myself).  I tried to select questions so
that 50% were fact-based with the remainder a combination of conceptual and
application questions.  I also decided to ensure that 50% of the questions
were considered "easy," about 40% "moderate," and no more than 10%
"difficult."   Has anyone tried structuring the test questions in this way?

The item analyses on my exam items this term have been quite interesting.
What are your thoughts about using data from an item analysis to redefine
item difficulty?


On Fri, May 4, 2012 at 3:34 PM, Carol DeVolder <[email protected]> wrote:

>
> Hi,
> As I sit here trying to do anything but grade or write exams, a thought
> occurred to me. Often, when one constructs an exam over several chapters,
> the questions are mixed up so that those over the same chapter aren't
> grouped together. Is this really necessary? It seems that it merely serves
> to add one more layer of confusion to the process. Or am I the only one who
> does this?
> Carol
>
