On Thu, Sep/20/2007 01:55:49PM, Jelena Pjesivac-Grbovic wrote:
> I already have that - but it did not generate anything when I call it
> a second time (it generates output the first time when I execute the
> mtt client with all sections). (I even tried adding --section reporter
> to the command line)

You did the right thing by adding "reporter" to --section. Try this:

  $ client/mtt ... --section 'run.*skampi reporter' ...

If that still does not work, could you run with --debug and reply with
the output?

-Ethan
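(For concreteness, plugging Ethan's suggested --section value into Jelena's
original invocation would look something like the sketch below; everything
except the --section argument is taken verbatim from her command quoted
further down in this thread:)

  $ ./client/mtt --scratch /home/pjesa/mtt/scratch \
      --file /home/pjesa/mtt/collective-bakeoff/samples/ompi-core-perf-testing_openmpi.ini \
      --verbose --print-time --section 'run.*skampi reporter'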
> On Thu, 20 Sep 2007, Jeff Squyres wrote:
>
> > Yes, you can add another [reporter] section to the INI file to have
> > it give you a report locally. For example, something like this would
> > give you a summary table at the end:
> >
> > [Reporter: text file]
> > module = TextFile
> >
> > textfile_filename =
> >
> > # User-defined report headers/footers
> > textfile_summary_header = <<EOT
> > hostname: &shell("hostname")
> > uname: &shell("uname -a")
> > who am i: &shell("who am i")
> > EOT
> >
> > textfile_summary_footer =
> > textfile_detail_header =
> > textfile_detail_footer =
> >
> > textfile_textwrap = 78
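(A filled-in variant of Jeff's snippet, for illustration only: the output
path below is a made-up example, and the remaining fields are copied from
his example above, with the footer/detail fields left empty as he shows:)

  [Reporter: text file]
  module = TextFile

  # Hypothetical output location; any writable path should do
  textfile_filename = /tmp/mtt-skampi-summary.txt

  textfile_summary_header = <<EOT
  hostname: &shell("hostname")
  uname: &shell("uname -a")
  EOT
  textfile_summary_footer =
  textfile_detail_header =
  textfile_detail_footer =

  textfile_textwrap = 78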
> > On Sep 20, 2007, at 12:40 PM, Jelena Pjesivac-Grbovic wrote:
> >
> >> Thanks,
> >> I have another question about MTT reporting. I am not sure what
> >> part I am missing.
> >> I executed tests on 8 nodes (including MPI Get, MPI Install, and all
> >> other phases) and then I wanted to run a 36-node test using something
> >> like:
> >>
> >> ./client/mtt --scratch /home/pjesa/mtt/scratch \
> >>   --file /home/pjesa/mtt/collective-bakeoff/samples/ompi-core-perf-testing_openmpi.ini \
> >>   --verbose --print-time --section 'run;skampi'
> >>
> >> The test runs but I receive no output/report. Is there some additional
> >> flag to specify that I want the output/report at the end?
> >>
> >> Here is the stdout of the test:
> >> --------
> >> ** MTT: ./client/mtt --scratch /home/pjesa/mtt/scratch --file
> >> /home/pjesa/mtt/collective-bakeoff/samples/ompi-core-perf-testing_openmpi.ini
> >> --verbose --print-time --section run;skampi
> >> *** Reporter initializing
> >> *** Reporter initialized
> >> *** MPI get phase starting
> >> *** MPI get phase complete
> >>>> Phase: MPI Get
> >> Started: Thu Sep 20 11:41:42 2007
> >> Stopped: Thu Sep 20 11:41:42 2007
> >> Elapsed: 00:00:00
> >> Total elapsed: 00:00:00
> >> *** MPI install phase starting
> >> *** MPI install phase complete
> >>>> Phase: MPI Install
> >> Started: Thu Sep 20 11:41:42 2007
> >> Stopped: Thu Sep 20 11:41:42 2007
> >> Elapsed: 00:00:00
> >> Total elapsed: 00:00:00
> >> *** Test get phase starting
> >> *** Test get phase complete
> >>>> Phase: Test Get
> >> Started: Thu Sep 20 11:41:42 2007
> >> Stopped: Thu Sep 20 11:41:42 2007
> >> Elapsed: 00:00:00
> >> Total elapsed: 00:00:00
> >> *** Test build phase starting
> >> *** Test build phase complete
> >>>> Phase: Test Build
> >> Started: Thu Sep 20 11:41:42 2007
> >> Stopped: Thu Sep 20 11:41:42 2007
> >> Elapsed: 00:00:00
> >> Total elapsed: 00:00:00
> >> *** Run test phase starting
> >>>> Test run [skampi]
> >>>> Running with [ompi-nightly-v1.2] / [1.2.4rc1r16161] / [ompi/gnu-standard]
> >> Using MPI Details [ompi] with MPI Install [ompi/gnu-standard]
> >> Total of 1 tests to run in this section
> >> Test: skampi, np=36, variant=1: Passed
> >> ....
> >> Test: skampi, np=36, variant=70: Passed
> >> ### Test progress: 1 of 1 section tests complete (100%)
> >> *** Run test phase complete
> >>>> Phase: Test Run
> >> Started: Thu Sep 20 11:41:42 2007
> >> Stopped: Thu Sep 20 12:13:06 2007
> >> Elapsed: 00:31:24
> >> Total elapsed: 00:31:24
> >>>> Phase: Trim
> >> Started: Thu Sep 20 12:13:06 2007
> >> Stopped: Thu Sep 20 12:13:06 2007
> >> Elapsed: 00:00:00
> >> Total elapsed: 00:31:24
> >> *** Reporter finalizing
> >> *** Reporter finalized
> >> --------
> >>
> >> On Tue, 18 Sep 2007, Jeff Squyres wrote:
> >>
> >>> On Sep 18, 2007, at 6:36 PM, Jeff Squyres wrote:
> >>>
> >>>> Doh -- looks like a corner case that I missed (if you use a hostfile,
> >>>> it should automatically set the resource manager to "none", I would
> >>>> think). Did you set a value for "resource_manager" in the INI file?
> >>>> If so, take it out or just set it to none. I'll file a bug to fix
> >>>> this tomorrow (i.e., if you don't set it at all and are using a
> >>>> hostlist/hostfile, the Right Thing happens).
> >>>
> >>> Oops! I missed your statement:
> >>>
> >>>>> I think I should set resource manager to none - but I am not sure
> >>>>> where.
> >>>
> >>> You can set "resource_manager = none" in the MPI Details section.
> >>> But MTT should be doing this for you automatically; I'll fix it
> >>> tomorrow.
> >>>
> >>> --
> >>> Jeff Squyres
> >>> Cisco Systems
> >>
> >> --
> >> Jelena Pjesivac-Grbovic, Pjesa
> >> Graduate Research Assistant
> >> Innovative Computing Laboratory
> >> Computer Science Department, UTK
> >> Claxton Complex 350
> >> (865) 974 - 6722
> >> (865) 974 - 6321
> >> jpjes...@utk.edu
> >>
> >> "The only difference between a problem and a solution is that
> >> people understand the solution."
> >> -- Charles Kettering
> >
> > --
> > Jeff Squyres
> > Cisco Systems
>
> --
> Jelena Pjesivac-Grbovic, Pjesa
> Graduate Research Assistant
> Innovative Computing Laboratory
> Computer Science Department, UTK
> Claxton Complex 350
> (865) 974 - 6722
> (865) 974 - 6321
> jpjes...@utk.edu
>
> "The only difference between a problem and a solution is that
> people understand the solution."
> -- Charles Kettering
>
> _______________________________________________
> mtt-users mailing list
> mtt-us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-users
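(Relating to the older resource_manager exchange quoted above: a minimal
sketch of where that setting goes. The section name matches the [ompi] MPI
Details shown in the log; everything else in an existing MPI Details section
stays as-is.)

  [MPI Details: ompi]
  # Per Jeff's note above: when running from a hostfile/hostlist with no
  # scheduler, set this explicitly until MTT does it automatically
  resource_manager = none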