I have looked at some of the recent posts, and get the sense that the test
scripts are ad hoc.  It seems to me that we ought to adopt a more formal
methodology, develop a test strategy, and base the majority of the
initial scripts on testing for compliance with the RFCs.  Additional scripts can be
based upon specific needs, e.g., accommodating real-world compatibility with
specific applications.  We should have documented scripts that explain what
the standard is, what any special deviation(s) might be, what is being
tested, and why.

For these "telnet-based" protocols, I believe that we can (and should)
leverage regular expressions, rather than just plain canned strings for
comparing.  That way we can better handle response strings, and use data
from matched groups within our following commands.  I am also of a mind that
we should look at using BSF as a key component of our testing framework.
That would let us leverage simple templates as well as more complex scripts,
including those adapted from other suites.  Harmeet mentioned Python, which
can be used with BSF: http://www.lonsteins.com/articles/jython-bsf.html.
Done properly, this becomes a very useful general purpose test bed,
customized for a given protocol by additional beans and scripts.
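
To illustrate the regular-expression idea, here is a minimal Java sketch
(the class name, the greeting string, and the POP3/APOP framing are just
assumptions for the example).  A POP3 greeting carries a timestamp that
changes on every connection, so an exact canned-string comparison cannot
work there, but a pattern can both verify the shape of the response and
capture the timestamp for reuse in a following APOP command (RFC 1939).
This uses JDK 1.4's java.util.regex; Jakarta ORO or Regexp would do
equally well on older JDKs:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class GreetingPatternExample {
        public static void main(String[] args) {
            // Hypothetical greeting; the timestamp differs on every run,
            // so comparing against a canned string cannot work here.
            String greeting =
                "+OK POP3 server ready <1896.697170952@pop.example.org>";

            // Match the general shape of the response and capture the
            // timestamp rather than demanding an exact string.
            Pattern p = Pattern.compile("^\\+OK .*(<[^>]+>)\\s*$");
            Matcher m = p.matcher(greeting);
            if (!m.matches()) {
                throw new AssertionError("Unexpected greeting: " + greeting);
            }

            // The captured group can be folded into the next command we
            // send, e.g. as the timestamp half of an APOP digest.
            String timestamp = m.group(1);
            System.out.println("captured timestamp: " + timestamp);
        }
    }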
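
To make the BSF idea concrete, here is a minimal sketch of driving a
Jython test fragment from Java through BSF (it assumes bsf.jar and
jython.jar on the classpath; the bean name, the script text, and the
literal greeting are just placeholders).  In a real test bed the declared
bean would be a live protocol session, and the scripts would be loaded
from files, including scripts adapted from other suites:

    import org.apache.bsf.BSFException;
    import org.apache.bsf.BSFManager;

    public class ScriptedGreetingCheck {
        public static void main(String[] args) throws BSFException {
            BSFManager manager = new BSFManager();

            // Expose a value to the script; a real test bed would declare
            // a protocol session bean rather than a literal greeting.
            manager.declareBean("greeting",
                "+OK POP3 server ready <1896.697170952@pop.example.org>",
                String.class);

            // A tiny Jython fragment, inlined for brevity; real test
            // scripts would live in their own files.
            String script =
                  "import re\n"
                + "m = re.match(r'\\+OK .*(<[^>]+>)', greeting)\n"
                + "assert m is not None, 'unexpected greeting: ' + greeting\n"
                + "print 'captured timestamp:', m.group(1)\n";

            // Hand the fragment to the Jython engine registered with BSF.
            manager.exec("jython", "greeting-check", 0, 0, script);
        }
    }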

Lastly, we needn't take a "not invented here" approach to this sort of
external testing.  I am sure that we can find existing tests to
supplement those that we build.

Thoughts?

        --- Noel

