Hi,

One of the most frustrating parts of developing Cocoon is the lack of
a test suite. Since I started working with Cocoon, I've seen a feature
work fine one day and be completely broken in the next day's CVS
snapshot. As a result I'm trying to come up with a framework for
automated regression testing, and perhaps performance testing as
well.

I've looked at the Cactus framework for server-side unit testing
(http://jakarta.apache.org/cactus), and it seems like a good start. I
also have some ideas on how Ant could be extended so that tests can be
written directly in its XML build file.

The Cactus and Ant based approaches are two different methodologies
for writing tests. Cactus allows the tests to be written in Java,
using an extension of JUnit.
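
Roughly, a Cactus test might look like the sketch below. The class and
test names are made up, and the exact Cactus API may vary between
versions, so treat this only as an illustration:

import org.apache.cactus.ServletTestCase;

public class SimpleCocoonTest extends ServletTestCase {

    public SimpleCocoonTest(String name) {
        super(name);
    }

    // This method runs inside the servlet container, so the implicit
    // "request" and "config" objects provided by Cactus are available.
    public void testContainerSetup() throws Exception {
        assertNotNull(config.getServletContext());
        assertNotNull(request.getRequestURI());
    }
}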

The Ant based idea I have allows one to write tests directly using Ant
tasks. It looks to me like an easier approach for people who don't
know how to write Java, and it also suits people who want a more
scripting-like approach to writing tests.

The two methodologies do not exclude each other. A person could
implement the tests using either Cactus or Ant, depending on how
complex the test is or on personal preference.

I will not describe how one can use Cactus here. Please refer to its
Web site (http://jakarta.apache.org/cactus) for more information on
it.

The Ant based approach requires some extension tasks to be
defined. Most of them seem easy to implement; only one of them is more
difficult.

I've attached an example build.xml for Ant that could be used to
describe some tests and performance tests, so that you get an idea of
what I'm talking about. My current idea is to have the times taken to
perform the tests written into a database, with a timestamp, so that
we can see how the performance improves/degrades over long periods of
time.
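
To sketch what the logging could look like (the table and column names
below are placeholders, since the database schema is still an open
question), a timed test could record its result through plain JDBC
along these lines:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class TestResultLogger {

    // driverClass and jdbcUrl would come from the <database> definition.
    public void log(String driverClass, String jdbcUrl,
                    String testName, long durationMillis) throws Exception {
        Class.forName(driverClass);
        Connection conn = DriverManager.getConnection(jdbcUrl);
        try {
            // Hypothetical table: test_results(name, duration_ms, run_at)
            PreparedStatement stmt = conn.prepareStatement(
                "INSERT INTO test_results (name, duration_ms, run_at) VALUES (?, ?, ?)");
            stmt.setString(1, testName);
            stmt.setLong(2, durationMillis);
            stmt.setTimestamp(3, new Timestamp(System.currentTimeMillis()));
            stmt.executeUpdate();
            stmt.close();
        } finally {
            conn.close();
        }
    }
}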

Below is a list of Ant extension tasks:

<database> - describes the database used to store the results.

<test> - defines a new test. Each test has a name and a flag
indicating whether the time taken to execute it should be logged in
the database. The assumption is that some tests are plain functional
tests, while others are really performance measurements.

<url-get> - makes an HTTP GET request to a URL

<url-post> - makes an HTTP POST request to a URL

<check> - checks an XPath expression against the XML document returned
by a <url-get> or <url-post>.

<iterate> - creates a "for" loop so we can do very simple loops

<spawn> - creates a given number of threads and executes the tasks
specified as children. This could be used to simulate multiple clients
connecting to a URL.
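
For illustration, a <spawn> task could be built on top of Ant's Task
API roughly as follows. The class name and nested-task handling are
hypothetical; only the Task and TaskContainer interfaces are real Ant
API:

import java.util.Vector;

import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;
import org.apache.tools.ant.TaskContainer;

public class SpawnTask extends Task implements TaskContainer {

    private int threads = 1;
    private Vector nestedTasks = new Vector();

    // Maps to the threads="..." attribute.
    public void setThreads(int threads) {
        this.threads = threads;
    }

    // Called by Ant for each task nested inside <spawn>.
    public void addTask(Task task) {
        nestedTasks.addElement(task);
    }

    public void execute() throws BuildException {
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    // Each simulated client runs the nested tasks in order.
                    for (int j = 0; j < nestedTasks.size(); j++) {
                        ((Task) nestedTasks.elementAt(j)).perform();
                    }
                }
            });
            workers[i].start();
        }
        try {
            for (int i = 0; i < threads; i++) {
                workers[i].join();
            }
        } catch (InterruptedException e) {
            throw new BuildException(e);
        }
    }
}

In practice each thread would probably need its own copies of the
nested tasks, since Ant tasks are not generally thread-safe.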

With the exception of <check>, all the tasks seem to be easy to
implement.
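
To sketch the XPath part of <check> (the simple-value case; comparing
an XML fragment against the selected node is what makes the task
harder), something along these lines could work. The JAXP XPath API is
used here purely for illustration, and the class and method names are
made up:

import java.io.StringReader;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XPathCheck {

    // Returns true if the node selected by the XPath expression has the
    // expected string value in the given XML document.
    public boolean check(String xml, String select, String expected) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document doc = factory.newDocumentBuilder()
            .parse(new InputSource(new StringReader(xml)));

        XPath xpath = XPathFactory.newInstance().newXPath();
        String actual = xpath.evaluate(select, doc);
        return expected.equals(actual);
    }
}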

My current plan is to use Cactus (in its Ant mode) as the framework to
drive the tests, with the Ant-based methodology described above used
in the tests' build.xml driver. Cactus provides some nice features
which are worth reusing.

I would appreciate any thoughts you might have on this.

Greetings,
-- 
Ovidiu Predescu <[EMAIL PROTECTED]>
http://orion.rgv.hp.com/ (inside HP's firewall only)
http://sourceforge.net/users/ovidiu/ (my SourceForge page)
http://www.geocities.com/SiliconValley/Monitor/7464/ (GNU, Emacs, other stuff)


# This sets the default database to be used during the testing

## TODO: What is the database schema?

<database driver="org.hsqldb.Driver" url="hsqldb:localhost:9802" id="test-results"/>

<property name="url" value="http://localhost:8080/cocoon"/>

# the `name' attribute of <test> sets the name of the test
#
# the `time' attribute of <test> is a boolean indicating whether the
# time taken to execute the test should be logged in the database.
#
# <url-get> sends a GET request to a URL. The result returned by the
# server is assumed to be XML, and can be tested using the
# <check> element embedded inside <url-get>.
#
# <url-post> sends a POST request to a URL. Otherwise it behaves
# similarly to <url-get>.
#
# The <check> element allows checking the returned XML document using
# simple XPath expressions. The value of the <check> element, either a
# simple value or an XML fragment, is checked against the selected
# XPath node.

## TODO: Check whether XML fragments that contain namespaces are
## correctly handled by Ant's SAX1 model.

<test name="xsp-reloading" time="yes">
  <filter token="header" value="123">
    <copy file="${src.dir}/xsp/simple.xsp" tofile="${dest.dir}/xsp/simple.xsp"/>
  </filter>
  <url-get href="${url}/xsp/simple.xsp">
    <check select="/html/body/h1[2]" required="yes">123</check>
  </url-get>
  <filter token="header" value="abc">
    <copy file="${src.dir}/xsp/simple.xsp" tofile="${dest.dir}/xsp/simple.xsp"/>
  </filter>
  <url-get href="${url}/xsp/simple.xsp">
    <check select="/html/body/h1[2]" required="yes">abc</check>
  </url-get>
</test>

# This is the warmup phase for the echoXMLServlet, a servlet that
# returns whatever was sent in the POST request. Such a servlet can be
# implemented in Cocoon as a pipeline composed of a StreamGenerator
# and an XMLSerializer.

<test name="echo-xml-servlet-warmup">
  <echo message="Warming up the echo XML servlet"/>
  <iterate name="i" start="0" end="50" increment="1">
    <url-post href="${url}/test/echo" content="contents/sample.xml"/>
  </iterate>
</test>

# This is the real load test for the echoXMLServlet

## TODO: How to log the number of concurrent threads (clients) in the
## database? (This is related to the database schema.)

<test name="echo-xml-servlet" time="yes" depends="echo-xml-servlet-warmup">
  <iterate name="i" start="0" end="100" increment="1">
    <spawn threads="${i}">
      <url-post href="${url}/test/echo" content="contents/sample.xml"/>
    </spawn>
  </iterate>
</test>



