On Jan 23, 2007, at 10:41 AM, Ethan Mallove wrote:

After our first experiments with MTT I'd like to ask a
couple of questions. I guess if I put them here, the
answers will go to the list archive as well. Maybe I can
write a wiki page for the first steps with MTT, too.

This leads me to the first question: what's the
login/password for the wiki? And how about the central
database for results? Though those answers should probably
not go to the list.

There's anonymous read-only access to the wiki.

https://svn.open-mpi.org/trac/mtt/wiki

Also, if you have an SVN account with Open MPI, you should be able to login to the wiki and write to the pages, submit tickets, etc. I honestly don't remember if we set you up with an SVN account -- did we?

You need an HTTP account for submitting results to the
database. Usernames are organization names (e.g., cisco,
sun, lanl, etc.). What do you want as your username?
"tu-dresden" or "zih"?

These are just suggestions, btw. Feel free to pick whatever username you want for your organization, but we try to make them related to the actual organization name (e.g., mine is "cisco").

So far I'm using text output only. How can I set up my own
database? There are some PHP files in the MTT tarball. Do
they initialize the database, tables, and so on? Is there
any script for this? Does it require MySQL or PostgreSQL,
or will either of the two work?

Postgres, but the server side is already taken care of.
There's a centralized database set up at Indiana University,
and the MTT test results can be viewed here:
http://www.open-mpi.org/mtt.

We *could* tell you how to set up your own database at Dresden (that's the eventual goal -- let anyone who wants to host their own MTT database), but it would be great if you could use our central database to do more testing of OMPI.
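
For reference, pointing MTT at the central database instead of (or in addition to) your text output is just a matter of the Reporter section in your INI file. A rough sketch -- the exact field names and submit URL here are from my memory of the sample template INI files, so double check them against samples/ompi-core-templates.ini; the username/password are the HTTP account mentioned above:

[Reporter: IU database]
module = MTTDatabase
mttdatabase_realm = OMPI
mttdatabase_url = https://www.open-mpi.org/mtt/submit/
mttdatabase_username = tu-dresden
mttdatabase_password = <your HTTP password>

You can presumably keep your existing text-output Reporter section alongside it if you still want local text results.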


When I wrote my own test scripts for a simple test
application, I followed the example INI files provided,
some pieces of documentation, and the Perl code. Is there
"real" documentation about all the bells and whistles? If
not, could we write one, or at least provide a step-by-step
guide for beginners?

The wiki is a good starting point. There are links to a
bunch of stuff on the MTT home wiki page here:

https://svn.open-mpi.org/trac/mtt/wiki

The closest thing to a Beginner's Guide to using the MTT
client would be:

https://svn.open-mpi.org/trac/mtt/wiki/MTTOverview
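
If it helps, the short version of getting started is: copy one of the sample INI files, edit it for your site, and point the client at it. Something like the following (option names are from memory; "client/mtt --help" lists the real ones):

client/mtt --file my-site.ini --scratch /tmp/mtt-scratch --verbose

The MTTOverview page above walks through what each phase in the INI file does.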


When I want to check whether a test run was successful or
not, which way would you suggest: doing it with bash
commands placed in the INI file, or writing new Perl
modules that are specific to my application? The trivial
test case does a bit of both, doesn't it?


B. new Perl modules :)

But the test suites that are listed in the
samples/ompi-core-templates.ini file already have modules in
place to Get (e.g., svn export), Build (e.g., make), Run
(mpirun), and Analyze (prepare test results for the
database).
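
To make that concrete, here's roughly what the Get and Build sections for one of those suites look like. This is modeled on the samples, and the SVN module and parameter names in particular are from memory, so treat them as placeholders and check the template INI for the real ones:

[Test get: mysuite]
module = SVN
svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/mysuite

[Test build: mysuite]
test_get = mysuite
module = Shell
shell_build_command = make all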

If you add a *new* test suite to ompi-tests, the existing
modules should be able to do all of the above (with the
exception of Analyze, if the test outputs performance data).
You might just need to add some special conditions in the
[Test run: foo] section of the INI file to indicate, e.g.,
that some tests are *supposed* to fail, that some tests need
a longer timeout, etc.
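
For example, something along these lines in the run section (the field names and the &-style funclets are how I remember them from the samples -- double check against samples/ompi-core-templates.ini):

[Test run: foo]
test_build = foo
module = Simple
np = 4
# allow up to 10 minutes per test before MTT gives up on it
timeout = 600
# count the run as passed only if the command exited normally with status 0
pass = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 0))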


Furthermore, there is a caching scheme for test results
between successive runs. MTT avoids tests that were run
before. Where does it store this information? How can I
explicitly clear it?


The cached data is stored as serialized Perl variables
down in the scratch directory.

The --force option "forces" MTT to run everything,
regardless of whether it's been run before.

Or you can whack the scratch directory and then all the cached data is also whacked.
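
In other words, something like this (assuming your scratch dir is /tmp/mtt-scratch):

rm -rf /tmp/mtt-scratch
client/mtt --file my-site.ini --scratch /tmp/mtt-scratch

or keep the scratch dir and just add --force to re-run everything.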


As one of our current goals, I'm integrating VampirTrace
into the MTT test cycle. The most straightforward way
would be to have a test application tarball which includes
the VampirTrace sources, wouldn't it? Would it be feasible
to have two download clauses in a single test-get phase and
two successive build clauses in a single test-build phase?
Could this work with the current macros?


I don't know of a user who's done two fetches/compiles in a
single section yet. Why do you need to download/build twice
for a single test suite?  (There's nothing preventing this,
just wondering.) I think what you want would look something
like this:

[Test get: VampirTrace]
url1 = http://www.foo.com
url2 = http://www.bar.com
module = Tarball

[Test build: VampirTrace]
module = Shell
shell_build_command = <<EOT
make this
make that
EOT

Note: all of the test suites currently set up to be used
with MTT are fetched with "svn export" out of the ompi-tests
SVN repository (except the trivial tests, which have their
test sources embedded in MTT).  We'd need to create a
Test/Get/Tarball.pm module if you don't want VampirTrace
added to ompi-tests.

This shouldn't be a problem. And we'll need to add support for the multiple downloads, but that should not be hard at all. Let us know what you need and we can do it.

--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems
