On Feb 6, 2008 3:09 PM, Alan McKinnon <[EMAIL PROTECTED]> wrote:
> On Wednesday 06 February 2008, G. Matthew Rice wrote:
> > [EMAIL PROTECTED] writes:
> > > I'm a little bit confused when it comes to fill in the blank
> > > questions on the LPI 102 exam.

> > I'm the person who heads up the exam development at LPI.  If you can
> > contact me directly, I am more than happy to go over the items in
> > question to see if the wording can be corrected to be more explicit.

> If there are still some vague questions in the pool that you saw, please
> give Matt full details and help LPI fix this oversight.

Matthew,

I can't imagine you would expect us to comment on or e-mail you about
any one question in particular, especially where wording and ambiguity
are involved! I put a great deal of effort into commenting and making
suggestions during the exam using the 'comment on this question'
feature. In fact, I nearly exhausted my test time because of the amount
of commenting I did; I would estimate I commented on well over half of
the exam questions I received. The test program crashed during my last
exam, though. Can I assume those comments were still received?

The only documentation I can find roughly outlines that these comments
were evaluated during the beta period. I am unable to find any
documentation on your current process for evaluating exam question
comments.

Could you please let us know how the in-exam comments are evaluated,
by whom, and by what process? Are they printed and evaluated by a
group or a single person, or merely stored for optional review? Is there
any sort of open/closed, issue-style tracking system for these comments?

I'd also like to take this opportunity to express my disagreement with
the unbalanced mix of questions against which my exam score was
evaluated, particularly regarding the innd software package. If I recall
correctly, I received more than twice as many questions on innd as on
sendmail within the same exam category. A simple survey of monster.com
for 'sendmail' returns 205 results, while 'usenet' returns 5 and innd
none. How did any psychometric assessment result in innd being weighted
more heavily than sendmail, which is unarguably more common in the
workplace?

Is this simply the rare chance of random yet categorized selection from
a pool of questions producing an uneven distribution of exam subjects,
particularly in the assessment of software knowledge? Or is the
disproportionate weighting of uncommon subject matter intentional, as
documented in FAQ section 2.14, which describes the deliberate placement
of obscure questions to keep the failure rate high enough to validate
the professionalism of those who pass? Or is it a combination of both?

Finally (last question!), who were or are the members of the
psychometric staff vaguely referenced on your website? I am looking
for documentation along the lines of this:
http://www.bsdcertification.org/index.php?NAV=News&Item=pr029

Thank you for your time, Matthew; I know these questions are a mouthful.
I patiently await your reply!

Jeff Quast
_______________________________________________
lpi-discuss mailing list
[email protected]
http://list.lpi.org/cgi-bin/mailman/listinfo/lpi-discuss