Do you have access to the ompi-tests repository?
What happens if you do this command outside of MTT?
$ svn export https://svn.open-mpi.org/svn/ompi-tests/trunk/onesided
You could also try using "http", instead of "https" in
your svn_url.
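
For reference, here is a rough sketch of what that change might look like in the Test get section. The section name and fields below follow the sample ompi-core-template.ini (this is an assumption -- adjust to match your own .ini):

```
# Hypothetical sketch: Test get section using "http" instead of "https".
# Field names follow the sample ompi-core-template.ini; adapt as needed.
[Test get: onesided]
module = SVN
svn_url = http://svn.open-mpi.org/svn/ompi-tests/trunk/onesided
```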
-Ethan
On Wed, Nov/14/2007 02:04:27PM, Karol Mroz wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hello everyone...
>
> I've been trying to get MTT set up to run tests on the SCTP BTL Brad
> Penoff and I (over at UBC) have developed. While I'm able to obtain a
> nightly tarball, build and install the Open MPI middleware, I'm unable
> to pull any tests for the various svn repositories. Currently I've tried
> pulling IBM and Onesided tests as shown in the sample
> ompi-core-template.ini file.
>
> Here is the output I see from the console when running with --verbose:
> - --
> *** MTT: ./mtt -f ../samples/ompi-core-template-kmroz.ini --verbose
> *** Reporter initializing
> *** Reporter initialized
> *** MPI get phase starting
> >> MPI get: [mpi get: ompi-nightly-trunk]
> Checking for new MPI sources...
> No new MPI sources
> *** MPI get phase complete
> *** MPI install phase starting
> >> MPI install [mpi install: gcc warnings]
> Installing MPI: [ompi-nightly-trunk] / [1.3a1r16706] / [gcc warnings]...
> Completed MPI install successfully
> Installing MPI: [ompi-nightly-trunk] / [1.3a1r16682] / [gcc warnings]...
> Completed MPI install successfully
> Installing MPI: [ompi-nightly-trunk] / [1.3a1r16723] / [gcc warnings]...
> Completed MPI install successfully
> *** MPI install phase complete
> *** Test get phase starting
> >> Test get: [test get: onesided]
> Checking for new test sources...
> - --
>
> As you can see, MTT seems to hang on 'Checking for new test sources.'
>
> I will attach a copy of the .ini file in hopes that someone may be able
> to point me in the right direction.
>
> Thanks in advance.
>
>
> ompi-core-template-kmroz.ini:
> - ---
> # Copyright (c) 2006-2007 Cisco Systems, Inc. All rights reserved.
> # Copyright (c) 2006-2007 Sun Microsystems, Inc. All rights reserved.
> #
>
> # Template MTT configuration file for Open MPI core testers. The
> # intent for this template file is to establish at least some loose
> # guidelines for what Open MPI core testers should be running on a
> # nightly basis. This file is not intended to be an exhaustive sample
> # of all possible fields and values that MTT offers. Each site will
> # undoubtedly have to edit this template for their local needs (e.g.,
> # pick compilers to use, etc.), but this file provides a baseline set
> # of configurations that we intend you to run.
>
> # OMPI core members will need to edit some values in this file based
> # on your local testing environment. Look for comments with "OMPI
> # Core:" for instructions on what to change.
>
> # Note that this file is artificially longer than it really needs to
> # be -- a bunch of values are explicitly set here that are exactly
> # equivalent to their defaults. This is mainly because there is no
> # reliable form of documentation for this ini file yet, so the values
> # here comprise a good set of what options are settable (although it
> # is not a comprehensive set).
>
> # Also keep in mind that at the time of this writing, MTT is still
> # under active development and therefore the baselines established in
> # this file may change on a relatively frequent basis.
>
> # The guidelines are as follows:
> #
> # 1. Download and test nightly snapshot tarballs of at least one of
> #    the following:
> #    - the trunk (highest preference)
> #    - release branches (highest preference is the most recent release
> #      branch; lowest preference is the oldest release branch)
> # 2. Run all 4 correctness test suites from the ompi-tests SVN
> #    - trivial, as many processes as possible
> #    - intel tests with all_tests_no_perf, up to 64 processes
> #    - IBM, as many processes as possible
> #    - IMB, as many processes as possible
> # 3. Run with as many different components as possible
> #    - PMLs (ob1, dr)
> #    - BTLs (iterate through sm, tcp, whatever high speed network(s) you
> #      have, etc. -- as relevant)
>
> #======================================================================
> # Overall configuration
> #======================================================================
>
> [MTT]
>
> # OMPI Core: if you are not running in a scheduled environment and you
> # have a fixed hostfile for what nodes you'll be running on, fill in
> # the absolute pathname to it here. If you do not have a hostfile,
> # leave it empty. Example:
> # hostfile = /home/me/mtt-runs/mtt-hostfile
> # This file will be parsed and will automatically set a valid value
> # for _max_np() (it'll count