I already suggested this (having a practical component complement the
written test), and it was quashed by the LPI psychometrician (I forget the
name... sorry), who indicated that written tests, if implemented properly,
can be better measurements of a candidate's competency than a practical
implementation exam combined with a written one. I know that position seems
to fly in the face of field pragmatics/logic, not to mention the success of
Cisco's CCIE and Red Hat's RHCE programs (I took the latter, and it was
quite challenging), but nevertheless, that's where that debate ended. It
seems the more legitimate roadblocks would be expense, logistics, and
institutional inclination. These latter factors, especially the last,
probably make a written format the more likely, and thus appropriate,
format for the LPI and CompTIA Linux+ initiatives.

Stephen Holcomb
Sr. Managing Partner - Technology Education Integration and Consulting
Next Generation Education Services (NEXES)
[EMAIL PROTECTED]

----- Original Message -----
From: "David Weeks" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, February 13, 2001 8:38 AM
Subject: Re: It's that time again... Job Analysis Survey


> Weighing in for the first time, I'd be interested in assisting with these
> certification exams.
>
> I've long been a critic of certifications.  They do not guarantee
> competence in a live environment, and too many computer science graduates
> hit the market, and all it wants to know is whether they are "certified."
>
> The operative phrase above is "live environment."  We need to test
> applicants on their ability to specify, implement, operate, and maintain a
> real computing environment.  If our applicants can't do that, then they're
> no better than the CNEs and MCSEs of ill repute.
>
> This can be done.
>
> What do you think?
>
> David Weeks
>
>
> On Sun, 11 Feb 2001, you wrote:
> > From: Michael Dowling <[EMAIL PROTECTED]>
> > To: <[EMAIL PROTECTED]>
> > Sent: Sunday, February 11, 2001 2:23 PM
> > Subject: Re: It's that time again... Job Analysis Survey
> >
> >
> > : Why don't the exams get chosen dynamically from a large pool of
> > : questions?  If there were something on the order of 50 times as many
> > : total questions as are actually used, then the questions would not
> > : need to be kept secret at all.
> > :
> > : Has this idea been considered?
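
For what it's worth, the mechanics of drawing a per-candidate form from a
pool are the easy part; the hard part, as discussed below, is writing and
validating the items.  A minimal sketch of blueprint-driven form assembly
(all names and numbers here are illustrative; this is not LPI's actual
tooling or process):

```python
# Hypothetical sketch of dynamic exam assembly from an item pool, assuming
# each pool entry is tagged with the exam objective it covers and a
# "blueprint" dict says how many items each objective contributes per form.
import random

def assemble_form(pool, blueprint, seed=None):
    """pool: iterable of (objective, item_id) pairs;
    blueprint: {objective: item_count}.  Returns one randomized form."""
    rng = random.Random(seed)
    by_objective = {}
    for objective, item_id in pool:
        by_objective.setdefault(objective, []).append(item_id)
    form = []
    for objective, count in blueprint.items():
        # Sample without replacement so an item appears at most once per form.
        form.extend(rng.sample(by_objective[objective], count))
    rng.shuffle(form)
    return form

# Toy pool: 10 items per objective, drawing 3 + 2 items per form.
pool = [("boot", "b%d" % i) for i in range(10)] + \
       [("filesystems", "f%d" % i) for i in range(10)]
form = assemble_form(pool, {"boot": 3, "filesystems": 2}, seed=1)
print(len(form))  # 5
```

Each candidate (or each seed) gets a different form, while every form still
matches the same objective weighting.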
> >
> > I have forwarded your question along with some comments to LPI's staff
> > for discussion.  There are psychometric tools for doing something like
> > this, and LPI initially tabled the idea in favor of just getting exams up
> > and running.  I have asked if now is a good time to reconsider.  This
> > methodology is actually used by Novell and was at one time used by
> > Microsoft (I don't know why, but they quietly stopped about 2 years
> > ago... anyone know why, or whether they've resumed?)
> >
> > But LPI's use would be a good deal less ambitious than your idea.  I
> > think it is impractical to have a pool 50 times larger than the test.
> >
> > The first issue with having such a large pool is: where would we come up
> > with, essentially, 49 more exams' worth of questions?  Note that many
> > more items are written than are used.  These items have to be edited and
> > tested (see below).  Creating quality exam questions is a significant
> > bottleneck in producing exams.  If it were that easy, we would have been
> > done a long time ago.
> >
> > (Although if someone has a creative idea... I'm all ears).
> >
> > And then, having this enormous corpus of items, we would need to pilot
> > test them (have people answer them in a fashion as similar as possible to
> > that of actual test-takers, but not give them scores, at least not until
> > later).  We did do this for the 101 and 102 exams, and I think many
> > people found the "beta period," during which they were waiting for
> > scores, onerously long.  If we had 50 times the questions, we would need
> > either (A) people to sit an exam that was 50 times longer or (B) 50 times
> > more test-takers.  I think both of these are infeasible.
> >
> > Finally, the way LPI sets cut-scores involves judgments about "minimally
> > qualified test-takers" passing individual items.  If the pool were 50
> > times larger, we would either need to include 50 times more items
> > (infeasible) or sample items (possible, but less "air-tight").  Or devise
> > a different method.
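
To make that procedure concrete: the judgment process described here
resembles a modified Angoff standard-setting study, where each judge
estimates the probability that a minimally qualified test-taker answers
each item correctly, and the raw cut score is the average of the per-judge
totals.  A minimal sketch with invented numbers (not LPI's actual ratings
or implementation):

```python
# Angoff-style cut-score sketch.  Judge names and probabilities below are
# made up purely for illustration.
def angoff_cut_score(ratings):
    """ratings: {judge_name: [p_correct for each item]} -> raw cut score."""
    per_judge_totals = [sum(probs) for probs in ratings.values()]
    return sum(per_judge_totals) / len(per_judge_totals)

ratings = {
    "judge_a": [0.9, 0.6, 0.7],  # this judge's estimates total 2.2
    "judge_b": [0.8, 0.5, 0.8],  # this judge's estimates total 2.1
}
print(round(angoff_cut_score(ratings), 2))  # 2.15
```

With a pool 50 times larger, every judge would have to rate 50 times as
many items, which is exactly the scaling problem described above.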
> >
> > Possibly there are other challenges.  It would be totally cool if we
> > could manage it.
> >
> > Note that the test items would still need to be secret.  If I read your
> > note correctly, you are saying that with such a large pool we could
> > publish the items and not fear (because there would be too many to
> > memorize).  But I don't think that would ever happen.  I can elaborate on
> > the many, many reasons why I think that would be a bad idea.
> >
> > So again, I'm eager to hear anyone's creative innovations.  LPI may well
> > pursue a less ambitious approach someday.  Thanks for bringing this topic
> > back up.
> >
> > -Alan
> >
> >
> > --
> > This message was sent from the lpi-examdev mailing list.
> > Send `unsubscribe lpi-examdev' in the subject to [EMAIL PROTECTED]
> > to leave the list.
>

