On Wed, May 16, 2007 at 11:58:39PM +0200, Sander van Vugt wrote:
> Just proctored the 101 and 102 last weekend for 13 guys in the
> Netherlands. Amongst them was a true Linux expert, who commented "I'm
> sure I've failed". His - IMHO correct - remark: why should I know the
> difference between blah -d and blah -D if in the real world I would
> do blah --help, find out how it works, and use the command
> appropriately within 30 seconds? I couldn't agree with him more,
> especially after listening to some of the example questions these
> people had been given. Please allow me to elaborate a little.

I agree entirely.  They are dumb questions.  They are just as bad as
computer courses that expect you to memorize the argument order of
certain system calls so you can write perfect source code on a written
test, and that penalize you for missing a ';' somewhere.  That doesn't
show whether you know what you are doing or not.  Being able to fix
the code when the compiler tells you it isn't right is at least as
important.

> "hands on" certifications seem to make so much more sense these days.
> Take the Novell CLP/CLE or the Red Hat RHCE exams, these measure real
> working knowledge of real working systems. I know, this isn't possible
> because we (LPI) want to be able to take exams everywhere, even if no
> infrastructure is available, and do it for a reasonable price as well.
> So that's a dead end. 

I personally liked the way Brainbench does their tests.  You get a
certain number of seconds to answer each question, and you can go look
up the man page or whatever else you have handy to find the answer.
If you know what you are doing and how to solve problems, you can look
up how to do something with a specific command in 10 seconds.  That is
what I would do if I had to do it for real on a system, so why
shouldn't the test work the same way?  Being a written test with no
references available is, in my opinion, the biggest flaw in LPI at
this time.
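
For instance (tar here is just an arbitrary example command), the
whole lookup amounts to:

    tar --help | less    # skim the option summary
    man tar              # or go straight to the manual page

Ten seconds of that on a real system answers any question about
options.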

> Is there any alternative that would work? Well, maybe there is. Have
> any of you ever taken a Microsoft test? In their more advanced tests,
> they use scenarios and try to measure real-world knowledge. For
> example, let's say we want to write a question that measures
> knowledge of the tar command. You can go in two directions:
> 
> 1) A user wants to make a compressed backup of his home directory. What
> command would he use?
> 
> a.    tar -zfvx .
> b.    tar -cz .
> c.    tar -czf blah .
> 
> The other way is to make it a tiny scenario in which we measure not
> knowledge of options but the ability to use the command (I know the
> question is lame; it's the idea that counts):
> 
> 2) A user has problems making an archive of his home directory.
> Every time he tries to do so, he gets an error message: "Cowardly
> refusing to create an empty archive". What is the most likely cause
> of this error?
> 
> a.    he is trying to tar a directory that is empty
> b.    he has forgotten the -f option that specifies the file he wants
> to write the backup to
> c.    he has forgotten a specification like . at the end of the
> command to indicate what exactly should go into the archive.
> 
> I know, asking questions in this way makes LPIC-1 easier than it is
> now. That fits with the thought Matt raised recently, that maybe the
> level should be somewhat lower. If we *really* want LPIC-1 to be a
> junior-level admin certification, we shouldn't ask about options no
> one ever uses; we should ask about things that are used in real life.
> Small scenario questions are so much more real.
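
(As an aside, with arbitrary file names: the invocation the first
question is fishing for is option c,

    # create a gzip-compressed archive of the current directory
    tar -czf blah .

and GNU tar prints the error from the second question precisely when
it is told to create an archive but given nothing to put in it:

    tar -czf blah
    # tar: Cowardly refusing to create an empty archive

so the intended answer there is c as well.)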

Scenario questions are also harder to come up with, and if you want
them to be really good you have to accept written answers, which
suddenly means marking requires someone with actual knowledge and
becomes subjective in some cases.

I, for example, don't see the point of a question like "how do you do
X with sendmail" given that my real-life answer would be "don't use
sendmail in the first place; use something with a good security record
and a sane configuration syntax."

> To finish, a fact that shocked me. As a self test, I took some of the
> practice exams you can find on the internet. I'm not new to Linux:
> I've been using it since 1993, I hold all relevant Linux
> certifications, and I passed LPIC-1 a long time ago. I'm also a
> trainer, spending at least a week a month preparing junior admins for
> their jobs as Linux admins, and I've been doing that for more than
> seven years now. Guess what my results were? Yes, I failed. Either I
> am a very stupid person / drank too much beer the evening before, or
> something really is wrong.

Well, I only barely passed the LPI 101 test I took at a Linux show
some years ago.  Not long before that I took the Brainbench Linux test
and scored among the top 15 people who had ever taken it worldwide.
Most people who know me would say I know a lot about Linux and Linux
administration, and that I am very good at figuring out how to solve
problems.  The LPI test reminded me of some of the worst CS tests I
have ever seen, with questions that have no business being asked in
the setting in which they were asked.  All they test is whether you
can memorize various command syntaxes, not whether you can actually
solve real problems.

--
Len Sorensen