Stan Brown wrote:

> John Kane <[EMAIL PROTECTED]> wrote in sci.stat.edu:
> >So you are saying that getting the right answer is not important?
>
> No, of course it's important. But getting the right answer for the
> wrong reasons is bad, since one may not be so lucky next time when,
> say, calculating a 99% confidence interval for the tensile strength
> of a set of steel cables destined to be used on a bridge.

Very true, and I was being deliberately provocative. However, I still cannot
see penalizing someone for getting the right answer, no matter how it was
arrived at.

> > If one can not see the work then one assumes blind luck?
>
> Blind luck happens surprisingly often. Also common is two errors
> that cancel each other. We cannot count on students being lucky or
> making two equal and opposite errors in the real world.

Err, do you have the stats? I really think you may be right in some cases, but
assuming that random errors cancel often enough to give correct answers...
Hmm. Mind you, I have done exactly that: the wrong algorithm, but it worked for
a number of examples in high school :(

>
>
> >However it would seem important to make this explicit.
>
> As indeed I have done, in the course syllabus distributed at first
> meeting, numerous times in class, and at the top of nearly every
> exam paper. "You must show your work" in various phrasings.
>
> >   While I can see the
> >point I am definitely not in agreement unless of course the test says
> >explicitly that  "The answer is not all that important.  Just show how you
> >would approach the problem and why"  In that case I am totally in
> >agreement.
>
> I don't think I ever said the answer is not important; if I did say
> so I didn't mean to. The right answer is important, but after all
> the purpose of an exam is to demonstrate mastery of the subject, and
> a bare answer with no supporting work doesn't really do that, does
> it?
>

I quite honestly think that the exam must say "the majority of the marks are
given for the process, not the answer" to make this acceptable. Given the
chance of miscalculations, this is very reasonable. If so, then I have not the
slightest objection. "Tell me how" is not the same as "What is".

>
> And this brings me back to why I actually posted my query: how do we
> evaluate students who do poorly on exams but may in fact be able to
> do well in real-world situations where they must use the material?
> Saying it another way, students AA and BB both answered questions
> poorly on an exam. Perhaps one (or both) may be quite likely to
> apply correct statistical techniques correctly in the real world.
> How do we know? How can we do a better job of evaluating students
> than merely setting and marking written timed exams?

Yes, there are ways, at least at senior levels, but they are probably not
economically feasible. Let them consult for local companies and then judge the
results. Close, very close supervision is required, however!


 ------------------
John Kane
The Rideau Lakes, Ontario Canada



