From: David A. Bandel <[EMAIL PROTECTED]>

: More to the point, why?
:
: If the objective is to help folks teach the test, I question that.  If
: students are taught Linux, they will have no problem with the exam.  If
: they are just taught to pass the test (because that's what the course or
: book was designed to do), they'll have trouble with Linux.
:
: What problem are we solving here?  If it's just to classify it for
: ourselves, we just need to flesh out two or three outlines and choose.
: If the objective is to make it easier for learning centers and authors,
: I think they should each be doing this for themselves.

The objective of the objectives? ;-)  I think the issues are:

* making intuitive sense to test-takers when we provide more detailed
  area or sub-area score feedback on the score report printed after
  testing
* being as easy as possible for sys admins (and item writers) to
  grasp quickly
* reflecting some job-related deeper structure rather than surface
  characteristics
* making it easy for authors and trainers to teach

I'm tempted to say that those are in priority order... but I think they are
all different facets of the same goal.

And let me be clear about the last bullet: if we chose a dumb grouping,
trainers and authors might have to loop back and forth through the material
or through the objectives...  There is no reason to make them do this, and if
that were how we grouped, we would almost certainly not be meeting the other
bullets either.  But that doesn't mean we would change the content in any way
just to make the material easier to teach.  The content is driven by our job
analysis research, which should be accurately reflected in these objectives.

Make sense?

Now, I'll admit I haven't studied the very detailed posts that have been
made today...  maybe we already have good groupings.  I will take a look
soon.

-Alan
