On 2019-04-19 11:33, Alan McKinnon wrote:
On 2019/04/19 10:04, Simone Piccardi wrote:
On 16/04/19 14:45, Mark Clarke wrote:
I would suggest that it's not an either/or approach. We could have a
part that is multiple choice and a practical part. The practical part
doesn't have to be under exam conditions. It could be a task like
"write a bash script that does x" or some other assignment. The student
is given 2 days to do the task and submit the script/assignment, and
the testing can be automated.
And how do you avoid having the student getting "help" from a friend?
That's an excellent point.
Another is how an automated tester will account for every variation
the candidate might produce. Perhaps a candidate needs to
validate an IP address (sensible) and naturally uses Python with
netaddr. Automated testing is likely to fail, and the assignment,
whilst correct, is marked wrong. Now manual intervention is needed, and
that means salaries. The cost of an exam just multiplies many times.
There is a whole movement and industry around automated testing. It
drives the continuous development and delivery world. In fact, manual
checking is considered to be fraught with issues, both in marking papers
and, especially, in software/system testing: lack of consistency,
accusations of bias and incompetence, and so on. If you want to
make it complicated, you can. Of course the questions should limit what
you can and can't do, but in any event you test the outcome, not the way
one got there.
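To make the outcome-only point concrete, here is a minimal sketch of such a grader. Everything in it is invented for illustration (the task, the file handling, the sample submissions are assumptions, not anything from this thread): it runs whatever executable the candidate submitted against known inputs and compares stdout only, so a shell script and a Python solution are marked identically.

```python
import os
import stat
import subprocess
import tempfile

def grade(script_path, cases):
    """Outcome-only grading: run the submitted executable on each input
    and compare its stdout to the expected answer. The source code is
    never inspected, so any language or library can pass."""
    for args, expected in cases:
        result = subprocess.run([script_path, *args],
                                capture_output=True, text=True)
        if result.returncode != 0 or result.stdout.strip() != expected:
            return False
    return True

def save_submission(source):
    """Write a candidate's script to a temp file and mark it executable."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        f.write(source)
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    return path

# Hypothetical task: "print the sum of the two integer arguments".
cases = [(["2", "3"], "5"), (["10", "-4"], "6")]

# Two different-but-correct submissions: one shell, one Python.
bash_sub = save_submission("#!/bin/sh\necho $(($1 + $2))\n")
py_sub = save_submission(
    "#!/usr/bin/env python3\nimport sys\n"
    "print(int(sys.argv[1]) + int(sys.argv[2]))\n")

print(grade(bash_sub, cases), grade(py_sub, cases))  # both should pass
```

The design choice this illustrates is exactly the one argued above: because only the observable result is checked, the netaddr-vs-regex question never arises, and no manual marker is needed to recognise an unanticipated but correct approach.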
--
Mark Clarke
📱 +2711-781 8014
🌐 www.JumpingBean.co.za
_______________________________________________
lpi-examdev mailing list
[email protected]
https://list.lpi.org/mailman/listinfo/lpi-examdev