Dominique Hazael-Massieux wrote:
On Thursday, 12 November 2009 at 17:35 +0100, Marcos Caceres wrote:
On the other hand, automated test generation can generate a large number
of test cases and is less prone to human errors. But, at the same time,
it cannot test some things that are expressed only in the prose. For example, a
UA must not fire Storage events when first populating the preferences
attribute. That requirement is impossible to express in IDL.
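As an illustration, a manual test for that prose requirement might be sketched roughly as below. The mock widget object and the `initialPopulation` flag are inventions for the sketch (the real `widget.preferences` interface is defined by the Widget Interface spec and takes no such flag); the point is only that the pass/fail condition lives in event-firing behaviour, which no IDL-derived test can check.

```javascript
// Sketch of a manual conformance test: a conformant UA must not fire
// "storage" events while first populating widget.preferences.
// Everything here is a stand-in mock, not the real widget API.

function createMockWidget() {
  const listeners = [];
  const store = {};
  const preferences = {
    // initialPopulation is a hypothetical flag marking the UA's
    // internal "populate from the config document" step.
    setItem(key, value, { initialPopulation = false } = {}) {
      store[key] = String(value);
      // A conformant implementation suppresses events during the
      // initial population step (the prose requirement under test).
      if (!initialPopulation) {
        listeners.forEach((fn) => fn({ type: "storage", key }));
      }
    },
    getItem(key) {
      return key in store ? store[key] : null;
    },
  };
  return {
    preferences,
    addEventListener(type, fn) {
      if (type === "storage") listeners.push(fn);
    },
  };
}

// The test itself: record every storage event, simulate the UA
// populating preferences at initialization time, then assert that
// no events were observed.
function runNoStorageEventTest() {
  const widget = createMockWidget();
  const fired = [];
  widget.addEventListener("storage", (e) => fired.push(e));

  // UA populates preferences from the configuration document.
  widget.preferences.setItem("skin", "dark", { initialPopulation: true });
  widget.preferences.setItem("lang", "en", { initialPopulation: true });

  return fired.length === 0 ? "PASS" : "FAIL";
}

console.log(runNoStorageEventTest()); // PASS for the conformant mock
```

A later, ordinary call such as `widget.preferences.setItem("skin", "light")` would fire the event as usual, which is the behaviour the IDL alone can describe; only the suppression during initialization needs a hand-written test like this.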

I completely agree that manual tests bring a lot of value, but I think it
would be unwise to refuse automated tests that express exactly what the
spec expresses — in particular, they can be extremely useful to detect
bugs in the WebIDL defined in the specs, bugs that are extremely
unlikely to be detected through manual testing.

Like I said, we are certainly not rejecting automated testing; we (well, I) are just not at that stage yet. I completely agree with you that it will help us find more potential bugs in the IDL itself.

In other words, I don’t see why manually and automatically created tests
are mutually exclusive, and I see very clearly how they can complement
each other.

I did not mean to imply that they are. They are certainly complementary (even for P&C, I refined the ABNF by using the abnfgen tool, which helped me find a lot of errors, so I certainly know the value that automated test generation brings).

Kind regards,
Marcos
