Re: [MTT devel] [MTT svn] GIT: MTT branch master updated. 016088f2a0831b32ab5fd6f60f4cabe67e92e594

2014-06-23 Thread Mike Dubman
It seems that mpirun got no signal (there is no evidence of one in the log). mtt was
spinning, and mpirun was the only process left on the node.
It is unclear why mtt did not kill mpirun.
I will try to extract a Perl stack trace from mtt during tomorrow's nightly run.
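One low-impact way to get such a stack trace out of a spinning Perl process is to install a signal handler that dumps Carp::longmess, then poke the hung mtt with `kill -USR1 <pid>` instead of killing it. This is an illustrative sketch, not actual MTT code:

```perl
use strict;
use warnings;
use Carp ();

# Illustrative sketch: let a spinning Perl process report its own stack
# trace on SIGUSR1 instead of having to be killed. In a real run the
# trace would land in the mtt log via STDERR.
my $trace = '';
$SIG{USR1} = sub {
    $trace = Carp::longmess("SIGUSR1 received");
    print STDERR $trace;
};

# Stand-in for the section of code that is stuck; an operator would send
# the signal externally with `kill -USR1 <pid>`.
sub spinning_section {
    kill('USR1', $$);   # deliver the signal to ourselves for the demo
    return 1;           # keep executing so the handler fires in-frame
}
spinning_section();
```

Because Perl's safe signals are dispatched between opcodes, the handler runs without corrupting the interpreter state, so this is safe even in a long-running harness.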


On Mon, Jun 23, 2014 at 2:59 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com
> wrote:

> On Jun 23, 2014, at 7:47 AM, Mike Dubman <mi...@dev.mellanox.co.il> wrote:
>
> > after patch, it killed child processes but kept mpirun ... itself.
>
> What does that mean -- are you saying that mpirun is still running?  Was
> mpirun sent a signal at all?  What kind of messages are being displayed?
>  ...etc.
>
> The commits fix important bugs for me and others.  Clearly, there's still
> something not right.  And of course I'm willing to track it down.  But I
> can't help you if you just say "it doesn't work."
>
> > before that patch - all processes were killed (and you are right,
> "mpirun died right at the end of the timeout" was reported)
>
> ...which led to many months of misleading ORTE debugging, BTW.  :-\
>  That's why this commit was introduced into MTT -- in the quest of finally
> fixing both the mysterious ORTE hangs and the erroneous timeouts/"mpirun
> died right at the end" messages.
>
> > but at least it left the cluster in the clean state w/o leftovers.
> > now many "orphan" launchers  are alive from previous invocations.
>
> Does "launchers" = mpirun?
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
> ___
> mtt-devel mailing list
> mtt-de...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
> Link to this post:
> http://www.open-mpi.org/community/lists/mtt-devel/2014/06/0629.php
>


Re: [MTT devel] [MTT svn] GIT: MTT branch master updated. 016088f2a0831b32ab5fd6f60f4cabe67e92e594

2014-06-23 Thread Mike Dubman
After the patch, it killed the child processes but kept mpirun itself alive.

Before that patch, all processes were killed (and you are right, "mpirun
died right at the end of the timeout" was reported), but at least it left
the cluster in a clean state, without leftovers.
Now many "orphan" launchers are left alive from previous invocations.


On Mon, Jun 23, 2014 at 2:18 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com
> wrote:

> There was actually quite a bit of testing before this was committed.  This
> commit resolved a lot of hangs across multiple organizations.
>
> Can you be more specific as to what is happening?
>
> The prior code was killing child processes before mpirun itself, for
> example, which has led MTT to erroneously report that mpirun died right at
> the end of the timeout without being killed.  This has been ongoing for
> many months, at a minimum.
>
>
>
>
> On Jun 23, 2014, at 4:37 AM, Mike Dubman <mi...@dev.mellanox.co.il> wrote:
>
> > this commit does more harm than good.
> > we experience the following:
> >
> > - some child processes were still running after the timeout, when mtt
> > killed the job.
> >
> > before this commit - it worked fine.
> > please revert and test more.
> >
> >
> >
> > On Sat, Jun 21, 2014 at 3:30 PM, MPI Team <mpit...@crest.iu.edu> wrote:
> > The branch, master has been updated
> >via  016088f2a0831b32ab5fd6f60f4cabe67e92e594 (commit)
> >via  7fb4c6a4c9d71be127ea53bd463178510577f71f (commit)
> >via  381ba177d835a54c3197d846f5a4edfc314efe27 (commit)
> >via  cfdd29de2012eeb7592706f00dd07a52dd48cf6b (commit)
> >via  940030ca20eb1eaf256e898b83866c1cb83aca5c (commit)
> >   from  c99ed7c7b159a2cab58a251bd7c0dad8972ff901 (commit)
> >
> > Those revisions listed above that are new to this repository have
> > not appeared on any other notification email; so we list those
> > revisions in full, below.
> >
> > - Log -
> >
> https://github.com/open-mpi/mtt/commit/016088f2a0831b32ab5fd6f60f4cabe67e92e594
> >
> > commit 016088f2a0831b32ab5fd6f60f4cabe67e92e594
> > Author: Jeff Squyres <jsquy...@cisco.com>
> > Date:   Sat Jun 21 04:58:45 2014 -0700
> >
> > DoCommand: several fixes to kill_proc logic
> >
> > 1. Fix the kill(0, $pid) test to see if the process was still alive.
> >
> > 2. Rename _kill_proc() to _kill_proc_tree() to indicate that it's
> > really killing not only the PID in question, but also all of its
> > descendants.
> >
> > 3. In _kill_proc_tree(), change the order to kill the main PID first,
> > and ''then'' kill all the descendants.
> >
> > The main use case is when killing mpirun: if we kill mpirun's
> > descendants ''first'', mpirun will detect its childrens' deaths and
> > then cleanup and exit.  Later, when MTT finally gets around to
> killing
> > mpirun, MTT will detect that mpirun is already dead and therefore
> emit
> > a confusing "mpirun died right at end of timeout" message.  This is
> > misleading at best; it doesn't indicate what actually happened.
> >
> > However, if we kill mpirun first, it will take care of killing all of
> > its descendants.  MTT will therefore emit the right messages about
> > killing mpirun.  MTT will then redundantly try to kill a bunch of
> > now-nonexistent descendant processes of mpirun, but that's ok/safe.
> > We actually ''want'' this try-to-kill-mpirun's-descendants behavior
> to
> > handle the case when mpirun is misbehaving / not cleaning up its
> > descendants.
> >
> > 4. DoCommand() is used for more than launching mpirun, so pass down
> > $argv0 so that we can print the actual command name that is being
> > killed in various Verbose/Debug messages, not the hard-coded "mpirun"
> > string (which, in practice, was probably almost always correct, but
> > still...).
> > ---
> >  lib/MTT/DoCommand.pm | 78
> 
> >  1 file changed, 55 insertions(+), 23 deletions(-)
> >
> > diff --git a/lib/MTT/DoCommand.pm b/lib/MTT/DoCommand.pm
> > index 02cdb94..646ca31 100644
> > --- a/lib/MTT/DoCommand.pm
> > +++ b/lib/MTT/DoCommand.pm
> > @@ -2,7 +2,7 @@
> >  #
> >  # Copyright (c) 2005-2006 The Trustees of Indiana University.
> >  # All rights reserved.
> > -# Copyright (c) 2006-2013 Cisco Systems, Inc.  All rights reserved
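The ordering that the commit message above describes -- test liveness with kill(0, $pid), kill the main PID first, then redundantly sweep the descendants -- can be sketched roughly like this (an illustrative reconstruction, not the actual DoCommand.pm code):

```perl
use strict;
use warnings;

# Liveness test from the commit message: signal 0 checks deliverability
# without actually sending anything.
sub proc_alive {
    my ($pid) = @_;
    return kill(0, $pid);
}

# Illustrative reconstruction of the _kill_proc_tree() ordering: main
# PID first, descendants second.
sub kill_proc_tree {
    my ($pid, @descendants) = @_;

    # 1. Kill the main PID first so that, e.g., mpirun sees its children
    #    die, cleans up, and exits on its own terms.
    if (proc_alive($pid)) {
        kill('TERM', $pid);
        sleep 1;
        kill('KILL', $pid) if proc_alive($pid);
    }

    # 2. Then redundantly try the descendants; most are already gone by
    #    now, and signaling a nonexistent PID is safe. This sweep guards
    #    against an mpirun that misbehaves and leaves orphans.
    for my $child (@descendants) {
        kill('KILL', $child) if proc_alive($child);
    }
    return;
}
```

Note that kill(0, $pid) also reports true for a zombie that has not yet been reaped, so a caller still needs to waitpid() on its own children.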

Re: [MTT devel] [MTT svn] svn:mtt-svn r1637 - trunk/lib/MTT/Values/Functions/MPI

2014-04-07 Thread Mike Dubman
Somehow we run it with both: --verbose is not enough to understand the problem,
and --debug is too much.

Maybe --trace could come to the rescue?
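An intermediate --trace level between --verbose and --debug could be as simple as a numeric gate; the sketch below only illustrates the idea and is not mtt's actual option handling:

```perl
use strict;
use warnings;

# Hypothetical level numbering: 0 = quiet, 1 = --verbose,
# 2 = the proposed --trace, 3 = --debug.
my $log_level = 2;

# Print the message only when its level is enabled; return whether it
# was emitted so callers can tell.
sub logmsg {
    my ($level, $msg) = @_;
    return 0 if $level > $log_level;
    print $msg;
    return 1;
}

logmsg(1, "Verbose: running test\n");              # emitted
logmsg(2, "Trace: exec line after expansion\n");   # emitted at --trace
logmsg(3, "Debug: checking env key FOO\n");        # suppressed
```

The per-test-launch messages that flood the log today would then be tagged level 3, while the once-per-run expansion results could move to level 2.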



On Tue, Apr 8, 2014 at 1:36 AM, Jeff Squyres (jsquyres)
<jsquy...@cisco.com>wrote:

> Yes.
>
> The intent is that --debug is *very* verbose, and is generally only
> useful when something goes wrong.
>
> I run Cisco's automated MTT with only --verbose.
>
>
>
> On Apr 7, 2014, at 6:35 PM, Mike Dubman <mi...@dev.mellanox.co.il> wrote:
>
> > ohh.. it is just flooding the log with same data for every test launch.
> >
> > maybe we should have verbose level in mtt?
> >
> >
> > On Mon, Apr 7, 2014 at 6:30 PM, Jeff Squyres (jsquyres) <
> jsquy...@cisco.com> wrote:
> > Mike --
> >
> > Why did you comment these out?  By definition, --debug output should be
> a LOT of output.
> >
> >
> > On Apr 5, 2014, at 7:27 PM, <svn-commit-mai...@open-mpi.org> wrote:
> >
> > > Author: miked (Mike Dubman)
> > > Date: 2014-04-05 19:27:28 EDT (Sat, 05 Apr 2014)
> > > New Revision: 1637
> > > URL: https://svn.open-mpi.org/trac/mtt/changeset/1637
> > >
> > > Log:
> > > silence print
> > >
> > > Text files modified:
> > >   trunk/lib/MTT/Values/Functions/MPI/OMPI.pm | 4 ++--
> > >   1 files changed, 2 insertions(+), 2 deletions(-)
> > >
> > > Modified: trunk/lib/MTT/Values/Functions/MPI/OMPI.pm
> > >
> ==
> > > --- trunk/lib/MTT/Values/Functions/MPI/OMPI.pmMon Mar 17
> 14:14:47 2014(r1636)
> > > +++ trunk/lib/MTT/Values/Functions/MPI/OMPI.pm2014-04-05
> 19:27:28 EDT (Sat, 05 Apr 2014)  (r1637)
> > > @@ -331,7 +331,7 @@
> > >
> > > # Check the environment for OMPI_MCA_* values
> > > foreach my $e (keys(%ENV)) {
> > > -Debug("Functions::MPI::OMPI: Checking env key: $e\n");
> > > +#Debug("Functions::MPI::OMPI: Checking env key: $e\n");
> > > if ($e =~ m/^OMPI_MCA_(\S+)/) {
> > > my $v = $ENV{"OMPI_MCA_$1"};
> > > push(@params, "--env-mca $1 $v");
> > > @@ -339,7 +339,7 @@
> > > }
> > >
> > > $str = join(' ', @params);
> > > -Debug("Functions::MPI::OMPI: Returning MCA params $str\n");
> > > +#Debug("Functions::MPI::OMPI: Returning MCA params $str\n");
> > > $str;
> > > }
> > >
> > > ___
> > > mtt-svn mailing list
> > > mtt-...@open-mpi.org
> > > http://www.open-mpi.org/mailman/listinfo.cgi/mtt-svn
> >
> >
> >
>
>
>
>
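For context, the loop that the silenced Debug() calls live in (shown in the quoted diff above) can be reproduced as a small self-contained function:

```perl
use strict;
use warnings;

# Self-contained rendition of the OMPI_MCA_* scan from the quoted diff:
# collect OMPI_MCA_<name> environment variables into mpirun parameters.
# Keys are sorted here only to make the output deterministic.
sub mca_params_from_env {
    my @params;
    foreach my $e (sort keys %ENV) {
        if ($e =~ m/^OMPI_MCA_(\S+)/) {
            my $v = $ENV{"OMPI_MCA_$1"};
            push(@params, "--env-mca $1 $v");
        }
    }
    return join(' ', @params);
}
```

With dozens of OMPI_MCA_* variables set, the commented-out per-key Debug() line fires once per key per test launch, which is exactly the flood being discussed.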


Re: [MTT devel] [MTT svn] svn:mtt-svn r1637 - trunk/lib/MTT/Values/Functions/MPI

2014-04-07 Thread Mike Dubman
Ohh.. it is just flooding the log with the same data for every test launch.

Maybe we should have a verbosity level in mtt?


On Mon, Apr 7, 2014 at 6:30 PM, Jeff Squyres (jsquyres)
<jsquy...@cisco.com>wrote:

> Mike --
>
> Why did you comment these out?  By definition, --debug output should be a
> LOT of output.
>
>
> On Apr 5, 2014, at 7:27 PM, <svn-commit-mai...@open-mpi.org> wrote:
>
> > Author: miked (Mike Dubman)
> > Date: 2014-04-05 19:27:28 EDT (Sat, 05 Apr 2014)
> > New Revision: 1637
> > URL: https://svn.open-mpi.org/trac/mtt/changeset/1637
> >
> > Log:
> > silence print
> >
> > Text files modified:
> >   trunk/lib/MTT/Values/Functions/MPI/OMPI.pm | 4 ++--
> >   1 files changed, 2 insertions(+), 2 deletions(-)
> >
> > Modified: trunk/lib/MTT/Values/Functions/MPI/OMPI.pm
> >
> ==
> > --- trunk/lib/MTT/Values/Functions/MPI/OMPI.pmMon Mar 17
> 14:14:47 2014(r1636)
> > +++ trunk/lib/MTT/Values/Functions/MPI/OMPI.pm2014-04-05
> 19:27:28 EDT (Sat, 05 Apr 2014)  (r1637)
> > @@ -331,7 +331,7 @@
> >
> > # Check the environment for OMPI_MCA_* values
> > foreach my $e (keys(%ENV)) {
> > -Debug("Functions::MPI::OMPI: Checking env key: $e\n");
> > +#Debug("Functions::MPI::OMPI: Checking env key: $e\n");
> > if ($e =~ m/^OMPI_MCA_(\S+)/) {
> > my $v = $ENV{"OMPI_MCA_$1"};
> > push(@params, "--env-mca $1 $v");
> > @@ -339,7 +339,7 @@
> > }
> >
> > $str = join(' ', @params);
> > -Debug("Functions::MPI::OMPI: Returning MCA params $str\n");
> > +#Debug("Functions::MPI::OMPI: Returning MCA params $str\n");
> > $str;
> > }
> >
>
>
>
>


Re: [MTT devel] mtt-relay patch - create pid file when run as daemon

2013-09-30 Thread Mike Dubman
/var/run is only writable by root, but the script uses it explicitly.
Maybe it is worth adding a fallback for when a non-root user starts mtt-relay.
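A possible shape for that fallback (illustrative only; mtt-relay's actual --daemon handling is in the attached patch, not shown here): prefer /var/run when it is writable, i.e. when running as root, and otherwise drop back to a user-writable directory:

```perl
use strict;
use warnings;

# Illustrative fallback for the pid-file location: /var/run when running
# as root (it is writable), otherwise $HOME, otherwise /tmp.
sub pidfile_path {
    my ($name) = @_;
    my $dir = -w '/var/run'                 ? '/var/run'
            : ($ENV{HOME} && -w $ENV{HOME}) ? $ENV{HOME}
            :                                 '/tmp';
    return "$dir/$name.pid";
}
```

Init scripts looking for the pid file would of course need to know about the fallback locations, so printing the chosen path at startup would help.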


On Mon, Sep 30, 2013 at 2:08 PM, Christoph Niethammer wrote:

> Hello,
>
> As on many systems init scripts and the handling of services is based on
> pid files I extended the mtt-relay script as follows:
>
> If run with the --daemon option
> * Create file /var/run/mtt-relay.pid  if it does not exist and write the
> pid of the background process into it.
> * exit with return value 1 if /var/run/mtt-relay.pid file exists.
>
> Patch is attached.
>
> Best regards
> Christoph Niethammer
>
> --
>
> Christoph Niethammer
> High Performance Computing Center Stuttgart (HLRS)
> Nobelstrasse 19
> 70569 Stuttgart
>
> Tel: ++49(0)711-685-87203
> email: nietham...@hlrs.de
> http://www.hlrs.de/people/niethammer
>


Re: [MTT devel] [MTT svn] svn:mtt-svn r1481 - in trunk: client lib/MTT/Reporter

2012-08-04 Thread Mike Dubman
Hi,

We are switching from the datastore (a feature we added a couple of years ago) to
the MongoDB NoSQL DB for keeping mtt results.

We are adding a "regression" capability based on MTT and a MongoDB
reporter:

- run mtt
- when mtt finishes, extract the results of previous runs of the same test
with the same parameters
- compare performance metrics and generate a regression report (Excel)
- attach the regression report to the mtt email report

So, we are adding all the lego-like utils to support this:

- save results to OO storage (for comfortable use from Perl)
- create Analyzers for various well-known tests
- query results, group them, and generate regression statistics, placing the
report into Excel (mongo-query.pl)
- generate a report which can be attached to the mtt report (breport.pl)
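The metric comparison at the heart of that regression step could look like the following; the function names and the lower-is-worse convention are assumptions for illustration, not the actual MongoDB reporter code:

```perl
use strict;
use warnings;

# Percent change of the current metric relative to the previous run;
# undef when there is no usable baseline.
sub regression_delta {
    my ($prev, $curr) = @_;
    return undef unless defined $prev && $prev != 0;
    return ($curr - $prev) / $prev * 100.0;
}

# For bandwidth-style metrics, a drop beyond the threshold counts as a
# regression (assumed convention for this sketch).
sub is_regression {
    my ($prev, $curr, $threshold_pct) = @_;
    my $d = regression_delta($prev, $curr);
    return (defined($d) && $d < -$threshold_pct) ? 1 : 0;
}
```

For latency-style metrics the sign of the comparison would flip, which is one reason per-test Analyzers are useful.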


So, we have a reporter and a query tool for Mongo, which are simple and
customizable.

regards
M


On Wed, Aug 1, 2012 at 2:00 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

> Mike --
>
> MongoDB is a NoSQL thingy, right?
>
> Can you describe this plugin a bit?  Do you guys have some kind of
> reporter for MongoDB?
>
>
> On Aug 1, 2012, at 5:46 AM, <svn-commit-mai...@open-mpi.org> wrote:
>
> > Author: miked (Mike Dubman)
> > Date: 2012-08-01 05:46:03 EDT (Wed, 01 Aug 2012)
> > New Revision: 1481
> > URL: https://svn.open-mpi.org/trac/mtt/changeset/1481
> >
> > Log:
> > add modified version mongobquery and MTTMongodb
> >
> > Added:
> >   trunk/client/mongobquery.pl   (contents, props changed)
> >   trunk/lib/MTT/Reporter/MTTMongodb.pm
> >
> > Added: trunk/client/mongobquery.pl
> >
> ==
> > --- /dev/null 00:00:00 1970   (empty, because file is newly added)
> > +++ trunk/client/mongobquery.pl   2012-08-01 05:46:03 EDT (Wed, 01
> Aug 2012)  (r1481)
> > @@ -0,0 +1,1018 @@
> > +#!/usr/bin/perl
> > +#
> > +# Copyright (c) 2009
> > +# $COPYRIGHT$
> > +#
> > +# Additional copyrights may follow
> > +#
> > +# $HEADER$
> > +#
> > +# Now that @INC is setup, bring in the modules
> > +
> > +#use strict;
> > +#use warnings;
> > +use LWP::UserAgent;
> > +use HTTP::Request::Common;
> > +use Data::Dumper;
> > +use File::Basename;
> > +use File::Temp;
> > +use Config::IniFiles;
> > +use YAML::XS;
> > +use MongoDB;
> > +use MongoDB::OID;
> > +use YAML;
> > +use YAML::Syck;
> > +use DateTime;
> > +
> > +###
> > +# Set variables
> > +###
> > +my $module_name=$0;
> > +my $module_path=$0;
> > +
> > +$module_name=~s/([^\/\\]+)$//;
> > +$module_name=$1;
> > +
> > +$module_path=~s/([^\/\\]+)$//;
> > +
> > +
> > +###
> > +# Main block
> > +###
> > +use Getopt::Long qw(:config no_ignore_case);
> > +
> > +my $opt_help;
> > +my $opt_server;
> > +my $opt_username;
> > +my $opt_password;
> > +
> > +my $opt_ping;
> > +my $opt_upload;
> > +my $opt_query;
> > +my $opt_view;
> > +my $opt_admin;
> > +
> > +my @opt_data;
> > +my @opt_raw;
> > +
> > +my $opt_gqls;
> > +my @opt_gqlf;
> > +my @opt_section;
> > +my $opt_dir;
> > +my $opt_no_raw;
> > +
> > +my $opt_dstore;
> > +my $opt_info;
> > +my $opt_format;
> > +my $opt_mailto;
> > +my $opt_regression_from;
> > +my $opt_regression_to;
> > +my $opt_regression_step;
> > +
> > +my @opt_newuser;
> > +
> > +GetOptions ("help|h" => \$opt_help,
> > +"server|a=s" => \$opt_server,
> > +"username|u=s" => \$opt_username,
> > +"password|p=s" => \$opt_password,
> > +"ping" => \$opt_ping,
> > +"upload" => \$opt_upload,
> > +"query" => \$opt_query,
> > +"view" => \$opt_view,
> > +"admin" => \$opt_admin,
> > +
> > +"data|S=s" => \@opt_data,
> > +"raw|R=s" => \@opt_raw,
> > +
> > +"gqls|L=s" => \$opt_gqls,
> > +"gqlf|F=s" => \@opt_gqlf,
> > +"section|T=s" => \@opt_section,
> > +"dir|O=s" => \$opt_dir,
> > +  

[MTT devel] mtt questions

2011-01-04 Thread Mike Dubman
Hi,

Do you know if there is an mtt option to stop mtt execution if the job's failure
ratio exceeds a specified value, something like:

[mtt]
stop_on_test_failures=1%

Also, are there any example ini files / success stories of how to use mtt with
non-MPI-based applications?

Thanks

Mike


Re: [MTT devel] questions about MTT database from HDF

2010-11-07 Thread Mike Dubman
Hi,
Also, there is an MTT option to select Google Datastore as a storage backend
for mtt results.


Pros:
 - your data is stored in Google's cloud
 - You can access your data from scripts
 - You can create a custom UI for your data visualization
 - You can use Google's default Datastore querying tools
 - seamless integration with mtt
 - No need for DBA services
 - There are some simple report scripts to query data and generate Excel
files
 - You can define custom dynamic DB fields and associate them with your data
 - You can define security policies/permissions for your data

Cons:
 - No UI (the mtt default UI works with the SQL backend only)

regards
Mike

On Thu, Nov 4, 2010 at 11:08 PM, Quincey Koziol  wrote:

> Hi Josh!
>
> On Nov 4, 2010, at 8:30 AM, Joshua Hursey wrote:
>
> >
> > On Nov 3, 2010, at 9:10 PM, Jeff Squyres wrote:
> >
> >> Ethan / Josh --
> >>
> >> The HDF guys are interested in potentially using MTT.
> >
> > I just forwarded a message to the mtt-devel list about some work at IU to
> use MTT to test the CIFTS FTB project. So maybe development between these
> two efforts can be mutually beneficial.
> >
> >> They have some questions about the database.  Can you guys take a whack
> at answering them?  (be sure to keep the CC, as Elena/Quincey aren't on the
> list)
> >>
> >>
> >> On Nov 3, 2010, at 1:29 PM, Quincey Koziol wrote:
> >>
> >>> Lots of interest here about MTT, thanks again for taking time to
> demo it and talk to us!
> >>
> >> Glad to help.
> >>
> >>> One lasting concern was the slowness of the report queries - what's
> the controlling parameter there?  Is it the number of tests, the size of the
> output, the number of configurations of each test, etc?
> >>
> >> All of the above.  On a good night, Cisco dumps in 250k test runs to the
> database.  That's just a boatload of data.  End result: the database is
> *HUGE*.  Running queries just takes time.
> >>
> >> If the database wasn't so huge, the queries wouldn't take nearly as
> long.  The size of the database is basically how much data you put into it
> -- so it's really a function of everything you mentioned.  I.e., increasing
> any one of those items increases the size of the database.  Our database is
> *huge* -- the DB guys tell me that it's lots and lots of little data (with
> blobs of stdout/stderr here an there) that make it "huge", in SQL terms.
> >>
> >> Josh did some great work a few summers back that basically "fixed" the
> speed of the queries to a set speed by effectively dividing up all the data
> into month-long chunks in the database.  The back-end of the web reporter
> only queries the relevant month chunks in the database (I think this is a
> postgres-specific SQL feature).
> >>
> >> Additionally, we have the DB server on a fairly underpowered machine
> that is shared with a whole pile of other server duties (www.open-mpi.org,
> mailman, ...etc.).  This also contributes to the slowness.
> >
> > Yeah this pretty much sums it up. The current Open MPI MTT database is
> 141 GB, and contains data as far back as Nov. 2006. The MTT Reporter takes
> some of this time just to convert the raw database output into pretty HTML
> (it is currently written in PHP). At the bottom of the MTT Reporter you will
> see some stats on where the Reporter took most of its time.
> >
> > How long the Reporter took total to return the result is:
> >  Total script execution time: 24 second(s)
> > How long just the database query took is reported as:
> >  Total SQL execution time: 19 second(s)
> >
> > We also generate an overall contribution graph which is also linked at
> the bottom to give you a feeling of the amount of data coming in every
> day/week/month.
> >
> > Jeff mentioned the partition tables work that I did a couple summers ago.
> The partition tables help quite a lot by partitioning the data into week
> long chunks so shorter date ranges will be faster than longer date ranges
> since they pull a smaller table with respect to all of the data to perform a
> query. The database interface that the MTT Reporter uses is abstracted away
> from the partition tables, it is really just the DBA (I guess that is me
> these days) that has to worry about their setup (which is usually just a 5
> min task once a year). Most of the queries to MTT ask for date ranges like
> 'past 24 hours', 'past 3 days' so breaking up the results by week saves some
> time.
> >
> > One thing to also notice is that usually the first query through the MTT
> Reporter is the slowest. After that first query the MTT database (postgresql
> in this case) it is able to cache some of the query information which should
> make subsequent queries a little faster.
> >
> > But the performance is certainly not where I would like it, and there are
> still a few ways to make it better. I think if we moved to a newer server
> that is not quite as heavily shared we would see a performance boost.
> Certainly if we added more RAM to the system, and potentially a 

[MTT devel] mtt not working on sles 11up2 perl 5.10.0

2010-01-27 Thread Mike Dubman
Hello guys,


mtt fails on sles11up2 with Perl version 5.10.0, but works like a charm on
other distros.

The same minimalistic ini file which works on other distros fails on sles
with the error:

>> Test Run [osu]
>> Running with [open mpi] / [1.3.3] / [openmpi]
   Using MPI Details [open mpi] with MPI Install [openmpi]
>>> Using group_reports
Can't use string ("2") as an ARRAY ref while "strict refs" in use at
/hpc/home/USERS/mttuserqa/work/svn/ompi/mtt/trunk/lib/MTT/Values.pm line
107.

Do you have any idea what it might be?

P.S. mini.ini is attached.


mini.ini
Description: Binary data
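For reference, this class of error is easy to reproduce in isolation: under strict refs, Perl refuses to use a plain string as an array reference, which is what the message above indicates is happening somewhere in Values.pm:

```perl
use strict;
use warnings;

# Minimal reproduction (not the actual MTT code path) of the error
# class: dereferencing a plain scalar as an ARRAY ref while
# "strict refs" is in effect.
my $err = '';
eval {
    my $val  = "2";     # a scalar where an array reference was expected
    my @list = @$val;   # dies: Can't use string ("2") as an ARRAY ref ...
};
$err = $@;
print $err;
```

So the question for Values.pm line 107 is why that value is a plain "2" on this Perl/distro combination when it is an array reference elsewhere.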


Re: [MTT devel] [MTT svn] svn:mtt-svn r1320

2009-09-30 Thread Mike Dubman
It seems it can be retired; executable() covers more cases.
shell() can become an alias of executable() for backwards compatibility.

Also, DoCommand::CmdScript should be changed to DoCommand::Cmd inside
executable() to really cover more cases.
regards

Mike
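Keeping shell() alive as a backwards-compatible alias could be done with a simple glob assignment; executable()'s body below is a stand-in, not the real funclet:

```perl
use strict;
use warnings;

# Stand-in body; the real executable() funclet lives in
# MTT::Values::Functions.
sub executable {
    my (@args) = @_;
    return "ran: @args";
}

# Backwards-compatible alias: both names now dispatch to the same sub,
# so old ini files calling the shell() funclet keep working.
*shell = \&executable;

print shell("hostname"), "\n";
```

The glob assignment shares the code ref itself, so there is no wrapper overhead and no divergence between the two names.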

On Tue, Sep 29, 2009 at 8:35 PM, Ethan Mallove wrote:

> Should () be deprecated? It looks awfully similar to
> ().
>
> -Ethan
>
> On Tue, Sep/29/2009 08:34:44AM, mi...@osl.iu.edu wrote:
> > Author: miked
> > Date: 2009-09-29 08:34:44 EDT (Tue, 29 Sep 2009)
> > New Revision: 1320
> > URL: https://svn.open-mpi.org/trac/mtt/changeset/1320
> >
> > Log:
> > applied Jeff,Ethan comments:
> > 1. rename on_stop,on_start to after_mtt_start_exec, before_mtt_start_exec
> > 2. treat *_mtt_start_exec params in the same way like others
> before/after_* params
> > 3. rename shell_script to executable
> > 4. fix DoCommand:CmdScript() to recognize shebang chars and do not add
> ":\n" if #! is present
> >
> >
> > Text files modified:
> >trunk/client/mtt  |15 +--
> >trunk/lib/MTT/DoCommand.pm| 5 +++--
> >trunk/lib/MTT/Values/Functions.pm | 2 +-
> >3 files changed, 17 insertions(+), 5 deletions(-)
> >
> > Modified: trunk/client/mtt
> >
> ==
> > --- trunk/client/mtt  (original)
> > +++ trunk/client/mtt  2009-09-29 08:34:44 EDT (Tue, 29 Sep 2009)
> > @@ -496,7 +496,8 @@
> >  MTT::Lock::Init($ini);
> >
> >  # execute on_start callback if exists
> > -_process_get_value_option("mtt,on_start", $ini);
> > + _do_step($ini, "mtt", "before_mtt_start_exec");
> > +
> >
> >  # Set the logfile, if specified
> >
> > @@ -565,7 +566,7 @@
> >  }
> >
> >  # execute on_stop callback if exists
> > -_process_get_value_option("mtt,on_stop", $ini);
> > + _do_step($ini, "mtt", "after_mtt_start_exec");
> >
> >  # Shut down locks
> >
> > @@ -737,3 +738,13 @@
> >  print "$value\n";
> >  }
> >  }
> > +
> > +# Run cmd, specified in the non Test* sections
> > +sub _do_step {
> > + my ($ini, $section,$param) = @_;
> > + my $cmd = $ini->val($section, $param);
> > + if ( defined $cmd ) {
> > + my $x = MTT::DoCommand::RunStep(1, $cmd, -1, $ini,
> $section, $param);
> > + Verbose("  Output: $x->{result_stdout}\n")
> > + }
> > +}
> >
> > Modified: trunk/lib/MTT/DoCommand.pm
> >
> ==
> > --- trunk/lib/MTT/DoCommand.pm(original)
> > +++ trunk/lib/MTT/DoCommand.pm2009-09-29 08:34:44 EDT (Tue, 29
> Sep 2009)
> > @@ -794,9 +794,10 @@
> >  # protects against a common funclet syntax error.
> >  # We can safely do this since "foo" (literally, with
> >  # quotes included) would never be a valid shell command.
> > -$cmds =~ s/\"$//
> > -if ($cmds =~ s/^\"//);
> > +$cmds =~ s/\"$// if ($cmds =~ s/^\"//);
> >
> > +
> > + print $fh ":\n" if ($cmds !~ /^\s*\#\!/); # no shell specified -
> use default
> >  print $fh "$cmds\n";
> >  close($fh);
> >  chmod(0700, $filename);
> >
> > Modified: trunk/lib/MTT/Values/Functions.pm
> >
> ==
> > --- trunk/lib/MTT/Values/Functions.pm (original)
> > +++ trunk/lib/MTT/Values/Functions.pm 2009-09-29 08:34:44 EDT (Tue, 29
> Sep 2009)
> > @@ -3038,7 +3038,7 @@
> >  #
> >  #
> >
> > -sub shell_script {
> > +sub executable {
> >   my ($cmd_section, $cmd_param) = @_;
> >   my $cmd = _ini_val($cmd_section, $cmd_param);
> >   my $x = MTT::DoCommand::CmdScript(1, $cmd);
>


Re: [MTT devel] [MTT svn] svn:mtt-svn r1319

2009-09-27 Thread Mike Dubman
On Fri, Sep 25, 2009 at 10:08 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

> On Sep 24, 2009, at 12:46 PM, Mike Dubman wrote:
>
>  I'm not familiar with the :\n semantics -- how does it force the Bourne shell, and
>> what does it actually do :)? (seems like a leftover from 1960)
>>
>
> Yes, it might be left over from 1960.  :-)  But the nice thing is that you
> then don't have to identify /bin/sh or /usr/bin/sh.  It's convenient and it
> works everywhere.


Found some info re ":\n" as a shebang line:
...
':' was actually the first comment character.
All shells I tried still recognize it as such, so it is not obsolete, but
perhaps slightly deprecated.

The first versions of csh used '#' as a comment and used the presence of one
comment character or the other to decide which shell to run (assuming it was
given a text file with the execute bit set). This was before the advent of
the kernel-based #! "magic number"
The early "/bin/sh" versions assumed they were the only shell on the system
and had no need to choose an interpreter.
...
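The practical effect is easy to demonstrate from Perl, mirroring what DoCommand::CmdScript does: a script whose first line is ":" has no #! line, so the Bourne-compatible default shell runs it, and ":" itself is a no-op builtin there. (Sketch; assumes a POSIX system with /bin/sh.)

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Write a script with ":" as its first line, the same thing
# DoCommand::CmdScript prints when no shebang is present.
my ($fh, $filename) = tempfile(SUFFIX => '.sh', UNLINK => 1);
print $fh ":\n";            # no shell specified - use the default
print $fh "echo hello\n";
close($fh);
chmod(0700, $filename);

# With no #! line, the shell's ENOEXEC fallback interprets the script
# itself, so it runs under the Bourne-compatible default shell.
my $out = `$filename`;
```

A csh-style script with "#" as its first comment character would historically have been routed to csh instead, which is the ambiguity the ":" line resolves.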


Re: [MTT devel] [MTT svn] svn:mtt-svn r1314

2009-09-09 Thread Mike Dubman
Hey Ethan,

It seems argv participates in the following scenarios:


1. argv should be defined in mtt.ini for every single [Test Run] section.
2. Currently, _argv() returns argv's unevaluated value.
3. _argv() is usually part of the "exec=" parameter line of [MPI Details],
which is evaluated for every test invocation:

mpiexec @options@ -n _np() _executable() _argv()


According to the analysis above, if argv contains funclets or variables, they
will get expanded during the "exec" line evaluation.

regards

Mike

On Tue, Sep 8, 2009 at 9:10 PM, Ethan Mallove  wrote:

> Mike,
>
> What if argv contains a funclet, e.g.,
>
>  argv = ()
>
> Won't this change prevent it from getting expanded?
>
> -Ethan
>
>
> On Tue, Sep/08/2009 09:43:37AM, mi...@osl.iu.edu wrote:
> > Author: miked
> > Date: 2009-09-08 09:43:37 EDT (Tue, 08 Sep 2009)
> > New Revision: 1314
> > URL: https://svn.open-mpi.org/trac/mtt/changeset/1314
> >
> > Log:
> > fix:
> >
> > _np() can return incorrect value if used inside argv, here is a
> scenario:
> >
> > This behavior can be explained in next words as evaluation _test()
> > returns uninitialized $MTT::Test::Run::test_np that is initialized later
> in _run_one_np function.
> >
> > As a result using
> > $MTT::Test::Run::test_argv = $run->{argv};
> > allows to avoid damaging $MTT::Test::Run::test_argv  variable on current
> step but evaluation of _np() is done with whole command_line.
> >
> >
> > Text files modified:
> >trunk/lib/MTT/Test/RunEngine.pm | 2 +-
> >1 files changed, 1 insertions(+), 1 deletions(-)
> >
> > Modified: trunk/lib/MTT/Test/RunEngine.pm
> >
> ==
> > --- trunk/lib/MTT/Test/RunEngine.pm   (original)
> > +++ trunk/lib/MTT/Test/RunEngine.pm   2009-09-08 09:43:37 EDT (Tue, 08
> Sep 2009)
> > @@ -191,7 +191,7 @@
> >  $MTT::Test::Run::test_executable_abspath = $test_exe_abs;
> >  $MTT::Test::Run::test_executable_basename = $test_exe_basename;
> >
> > -$MTT::Test::Run::test_argv =
> MTT::Values::EvaluateString($run->{argv}, $ini, $test_run_full_name);
> > +$MTT::Test::Run::test_argv = $run->{argv};
> >  my $all_np = MTT::Values::EvaluateString($run->{np}, $ini,
> $test_run_full_name);
> >
> >  my $save_run_mpi_details = $MTT::Test::Run::mpi_details;
>


Re: [MTT devel] [MTT svn] svn:mtt-svn r1306

2009-08-11 Thread Mike Dubman
Hey Jeff,

This code acts as a pre-processor during loading of the ini file into mtt.
It replaces built-in %VAR% variables with their values, for example:

...
[Test run: trivial]
my_sect_name=%INI_SECTION_NAME%
...

%INI_SECTION_NAME% gets replaced with the real value (trivial).



it is useful in the following situation:

...
[test run: trivial]
#param=_run_name()
param=%INI_SECTION_NAME%
...

when "param" is accessed from the Reporter context, test_run_name() will return
undef, but the real value is returned if %INI_SECTION_NAME% is used!



regards

Mike
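A self-contained rendition of that expansion (the real code is MTT::INI::ExpandPredefinedVars in the quoted commit; this version also trims the whitespace left behind after stripping the section prefix):

```perl
use strict;
use warnings;

# Expand %INI_SECTION_NAME% in a parameter value to the bare section
# name. Only a few prefixes are shown; the real code strips all of the
# well-known section prefixes.
sub expand_section_name {
    my ($section, $val) = @_;
    if ($val =~ /%INI_SECTION_NAME%/i) {
        my $sect = $section;
        $sect =~ s/test run://gi;
        $sect =~ s/test build://gi;
        $sect =~ s/mpi install://gi;
        $sect =~ s/^\s+|\s+$//g;    # trim (the original leaves spaces)
        $val  =~ s/%INI_SECTION_NAME%/$sect/g;
    }
    return $val;
}
```

Because the substitution happens at ini-load time, the value is a plain string by the time any funclet context (including the Reporter) sees it.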

On Tue, Aug 11, 2009 at 2:03 PM, Jeff Squyres  wrote:

> Mike --
>
> Can you explain what this does?
>
> On Aug 11, 2009, at 4:28 AM,  wrote:
>
>  Author: miked
>> Date: 2009-08-11 04:28:03 EDT (Tue, 11 Aug 2009)
>> New Revision: 1306
>> URL: https://svn.open-mpi.org/trac/mtt/changeset/1306
>>
>> Log:
>> added poor-man-inifile-preprocessor
>> Text files modified:
>>   trunk/client/mtt | 3 +++
>>   trunk/lib/MTT/INI.pm |24 
>>   2 files changed, 27 insertions(+), 0 deletions(-)
>>
>> Modified: trunk/client/mtt
>> ==============================================================================
>> --- trunk/client/mtt(original)
>> +++ trunk/client/mtt2009-08-11 04:28:03 EDT (Tue, 11 Aug 2009)
>> @@ -652,6 +652,9 @@
>> # Expand all the "include_section" parameters
>> $unfiltered = MTT::INI::ExpandIncludeSections($unfiltered);
>>
>> +# Expand all the "%PREDEFINED_VARS%" parameters
>> +$unfiltered = MTT::INI::ExpandPredefinedVars($unfiltered);
>> +
>> # Keep an unfiltered version of the ini file for error checking
>> my $filtered = dclone($unfiltered);
>>
>>
>> Modified: trunk/lib/MTT/INI.pm
>>
>> ==
>> --- trunk/lib/MTT/INI.pm(original)
>> +++ trunk/lib/MTT/INI.pm2009-08-11 04:28:03 EDT (Tue, 11 Aug 2009)
>> @@ -275,6 +275,30 @@
>> return $ini;
>>  }
>>
>> +sub ExpandPredefinedVars {
>> +my($ini) = @_;
>> +
>> +foreach my $section ($ini->Sections) {
>> +   foreach my $parameter ($ini->Parameters($section)) {
>> +   my $val = $ini->val($section, $parameter);
>> +   if ( $val =~ /%INI_SECTION_NAME%/i ) {
>> +   my $sect = $section;
>> +   $sect =~ s/test run://gi;
>> +   $sect =~ s/test build://gi;
>> +   $sect =~ s/test get://gi;
>> +   $sect =~ s/mpi get://gi;
>> +   $sect =~ s/mpi install://gi;
>> +   $sect =~ s/mpi details://gi;
>> +   $sect =~ s/reporter://gi;
>> +   $val =~ s/%INI_SECTION_NAME%/$sect/g;
>> +   $ini->delval($section, $parameter);
>> +   $ini->newval($section, $parameter, $val);
>> +   }
>> +   }
>> +}
>> +return $ini;
>> +}
>> +
>>  # Worker subroutine for recursive ExpandIncludeSections
>>  sub _expand_include_sections {
>> my($ini, $section) = @_;
>>
>>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
> ___
> mtt-devel mailing list
> mtt-de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>
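For reference, the ExpandPredefinedVars logic added in r1306 above amounts to the following (an illustrative Python translation of the quoted Perl, not MTT code itself):

```python
import re

# Mirror of the Perl above: strip the MTT phase prefix from the section name,
# then substitute the result for %INI_SECTION_NAME% in the parameter value.
PHASE_PREFIXES = ("test run:", "test build:", "test get:",
                  "mpi get:", "mpi install:", "mpi details:", "reporter:")

def expand_section_name(section, value):
    sect = section
    for prefix in PHASE_PREFIXES:
        # case-insensitive removal, as in the Perl s///gi
        sect = re.sub(re.escape(prefix), "", sect, flags=re.IGNORECASE)
    # the Perl substitution of the placeholder itself is case-sensitive
    return value.replace("%INI_SECTION_NAME%", sect)

print(expand_section_name("Test run:netpipe", "scratch/%INI_SECTION_NAME%"))
# → scratch/netpipe
```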


Re: [MTT devel] GSOC application

2009-04-22 Thread Mike Dubman
Hello guys,

Here is a small ppt with an MTToGDS summary for tomorrow's meeting.
regards

Mike

On Thu, Apr 16, 2009 at 5:02 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

> Will there be dancing bears on the slides?  I'll only accept slides with
> dancing bears!
>
> ;-)
>
> (no need to be formal; if slides help, great, otherwise don't make slides
> just because we have webex available)
>
>
>
> On Apr 16, 2009, at 9:50 AM, Mike Dubman wrote:
>
>  I will prepare ppt with summary of what were discussed and agreed,
>> milestones, open questions and other thoughts.
>> regards
>>
>> Mike
>>
>> On Thu, Apr 16, 2009 at 2:07 PM, Jeff Squyres <jsquy...@cisco.com> wrote:
>> Ok, I think we converged on a time: 9am US Eastern / 4pm Israel next
>> Thuesday, April 23.
>>
>> I'll send the webex invites in a separate email.  Mike: if you have slides
>> or other electronic material to show during the call, we can use webex for
>> that.  Otherwise, we can just use the telephone part of webex and ignore the
>> web part.
>>
>>
>>
>> On Apr 15, 2009, at 4:18 PM, Josh Hursey wrote:
>>
>> I have been listening in on the thread, but have not had time to
>> really look at much (which is why I have not been replying). I'm
>> interested in listening in on the teleconf as well, though if I become
>> a blocker for finding a time feel free to cut me out.
>>
>> Best,
>> Josh
>>
>> On Apr 14, 2009, at 8:51 PM, Jeff Squyres wrote:
>>
>> >> BTW -- if it's useful to have a teleconference about this kind of
>> >> stuff, I can host a WebEx meeting.  WebEx has local dialins around
>> >> the world, including Israel...
>> >>
>> >>
>> >> sure, what about next week?
>> >
>> > I have a Doodle account -- let's try that to do the scheduling:
>> >
>> >http://doodle.com/gzpgaun2ef4szt29
>> >
>> > Ethan, Josh, and I are all in US Eastern timezone (I don't know if
>> > Josh will participate), so that might make scheduling *slightly*
>> > easier.  I started timeslots at 8am US Eastern and stopped as 2pm US
>> > Eastern -- that's already pretty late in Israel.  I also didn't list
>> > Friday, since that's the weekend in Israel.
>>
>> ___
>> mtt-devel mailing list
>> mtt-de...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>>
>>
>> --
>> Jeff Squyres
>> Cisco Systems
>>
>> ___
>> mtt-devel mailing list
>> mtt-de...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>>
>> ___
>> mtt-devel mailing list
>> mtt-de...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>>
>
>
> --
> Jeff Squyres
> Cisco Systems
>
> ___
> mtt-devel mailing list
> mtt-de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>


MTToGDS.ppt
Description: MS-Powerpoint presentation


Re: [MTT devel] GSOC application

2009-04-16 Thread Mike Dubman
I will prepare a ppt with a summary of what was discussed and agreed,
milestones, open questions, and other thoughts.
regards

Mike

On Thu, Apr 16, 2009 at 2:07 PM, Jeff Squyres  wrote:

> Ok, I think we converged on a time: 9am US Eastern / 4pm Israel next
> Thuesday, April 23.
>
> I'll send the webex invites in a separate email.  Mike: if you have slides
> or other electronic material to show during the call, we can use webex for
> that.  Otherwise, we can just use the telephone part of webex and ignore the
> web part.
>
>
>
> On Apr 15, 2009, at 4:18 PM, Josh Hursey wrote:
>
>  I have been listening in on the thread, but have not had time to
>> really look at much (which is why I have not been replying). I'm
>> interested in listening in on the teleconf as well, though if I become
>> a blocker for finding a time feel free to cut me out.
>>
>> Best,
>> Josh
>>
>> On Apr 14, 2009, at 8:51 PM, Jeff Squyres wrote:
>>
>> >> BTW -- if it's useful to have a teleconference about this kind of
>> >> stuff, I can host a WebEx meeting.  WebEx has local dialins around
>> >> the world, including Israel...
>> >>
>> >>
>> >> sure, what about next week?
>> >
>> > I have a Doodle account -- let's try that to do the scheduling:
>> >
>> >http://doodle.com/gzpgaun2ef4szt29
>> >
>> > Ethan, Josh, and I are all in US Eastern timezone (I don't know if
>> > Josh will participate), so that might make scheduling *slightly*
>> > easier.  I started timeslots at 8am US Eastern and stopped as 2pm US
>> > Eastern -- that's already pretty late in Israel.  I also didn't list
>> > Friday, since that's the weekend in Israel.
>>
>> ___
>> mtt-devel mailing list
>> mtt-de...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>>
>
>
> --
> Jeff Squyres
> Cisco Systems
>
> ___
> mtt-devel mailing list
> mtt-de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>


Re: [MTT devel] GSOC application

2009-04-15 Thread Mike Dubman
On Wed, Apr 15, 2009 at 8:50 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

> On Apr 15, 2009, at 1:45 PM, Mike Dubman wrote:
>
>  yep. correct. We can define only static attributes (which we know for sure
>> should present in every object of given type and leave phase specific
>> attributes to stay dynamic)
>>
>> Hmm.  I would think that even in each phase, we have a bunch of fields
>> that we *know* we want to have, right?
>>
>> correct, in gds terms they call it static attributes.
>>
>
> I was more nit-picking your statement that we would only have a field
> fields that would be available for every phase, and then use dynamic fields
> for all phase-specific data.  While GDS *can* handle that, wouldn't it be
> better to have a model for each phase (similar to your mockup) that expects
> a specific set of data for each phase?  Extra data on top of that would be a
> bonus, but wouldn't be necessary.  More specifically: we *know* what data
> should be available in each phase, so why not tell GDS about it in the model
> (rather than using dynamic fields that we know will always be there)?
>
> Perhaps we're just getting confused by language and I should wait for your
> next mock-up to see what you guys do... :-)
>

Completely agree: the model for every phase object should contain mostly
static fields, based on the current mtt phase info.
We will also have the flexibility to expand phase objects without changing
the model.


Re: [MTT devel] GSOC application

2009-04-15 Thread Mike Dubman
On Wed, Apr 15, 2009 at 5:23 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

> On Apr 15, 2009, at 9:14 AM, Mike Dubman wrote:
>
>  Hmm.  Ok, so you're saying that we define a "phase object" (for each
>> phase) with all the fields that we expect to have, but if we need to, we can
>> create fields on the fly, and google will just "do the right thing" and
>> associate *all* the data (the "expected" fields and the "dynamic" fields)
>> together?
>>
>> yep. correct. We can define only static attributes (which we know for sure
>> should present in every object of given type and leave phase specific
>> attributes to stay dynamic)
>>
>
> Hmm.  I would think that even in each phase, we have a bunch of fields that
> we *know* we want to have, right?


Correct; in gds terms these are called static attributes.


>
>
>  I have a Doodle account -- let's try that to do the scheduling:
>>
>>   http://doodle.com/gzpgaun2ef4szt29
>>
>> Ethan, Josh, and I are all in US Eastern timezone (I don't know if Josh
>> will participate), so that might make scheduling *slightly* easier.  I
>> started timeslots at 8am US Eastern and stopped as 2pm US Eastern -- that's
>> already pretty late in Israel.  I also didn't list Friday, since that's the
>> weekend in Israel.
>>
>> can we do it on your morining? (our after noon) :)
>>
>
>
> Visit the Doodle URL (above) and you'll see.  :-)


Aha, I tried it and here is what I got:

Wir sind bald zurück ("We'll be back soon")

I'm tempted to agree with it :)





>
>
> --
> Jeff Squyres
> Cisco Systems
>
>


Re: [MTT devel] GSOC application

2009-04-15 Thread Mike Dubman
On Wed, Apr 15, 2009 at 3:51 AM, Jeff Squyres <jsquy...@cisco.com> wrote:

> On Apr 14, 2009, at 2:27 PM, Mike Dubman wrote:
>
> Ah, good point (python/java not perl).  But I think that
>> lib/MTT/Reporter/GoogleDataStore.pm could still be a good thing -- we have
>> invested a lot of time/effort into getting our particular mtt clients setup
>> just the way we want them, setting up INI files, submitting to batch
>> schedulers, etc.
>>
>> A GoogleDataStore.pm reporter could well fork/exec a python/java
>> executable to do the actual communication/storing of the data, right...?
>>  More below.
>>
>> completely agree, once we have external python/java/cobol scripts to
>> manipulate GDS objects, we should wrap it by perl and call from MTT in same
>> way like it works today for submitting to the postgress.
>>
>
> So say we all!  :-)
>
> (did they show Battlestar Gallactica in Israel?  :-) )
>
> sounds good, we should introduce some guid (like pid) for mtt session,
>> where all mtt results generated by this session will be referring to this
>> guid.  Later we use this guid to submit partial results as they become ready
>> and connect it to the appropriate mtt session object (see models.py)
>>
>
> I *believe* have have 2 values like this in the MTT client already:
>
> - an ID that represents a single MTT client run
> - an ID that represents a single MTT mpi install->test build->test run tree
>
>
> I think that Ethan was asking was: can't MTT run Fluent and then use the
>> normal Reporter mechanism to report the results into whatever back-end data
>> store we have?  (postgres or GDS)
>>
>> ahhh, okie, i see.
>>
>> Correct me if Im wrong, the current mtt implementation allows following
>> way of executing mpi test:
>> /path/to/mpirun  
>>
>
> Yes and no; it's controlled by the mpi details section, right?  You can put
> whatever you want in there.
>
> Many mpi based applications have embedded MPI libraries and non-standard
>> way to start it, one should set env variable to point to desired mpi
>> installation or pass it as cmd line argument, for example:
>>
>> for fluent:
>>
>> export OPENMPI_ROOT=/path/to/openmpi
>> fluent 
>>
>>
>> for pamcrash:
>> pamworld -np 2 -mpidir=/path/to/openmpi/dir 
>>
>> Im not sure if it is possible to express that execution semantic in mtt
>> ini file. Please suggest.
>> So far, it seems that such executions can be handled externally from mtt
>> but using same object model.
>>
>
> Understood.  I think you *could* get MTT to run these with specialized mpi
> details sections.  But it may or may not be worth it.
>
> For the attachment...
>>
>> I can "sorta read" python, but I'm not familiar with its intricacies and
>> its internal APIs.
>>
>> - models.py: looks good.  I don't know if *all* the fields we have are
>> listed here; it looks fairly short to me.  Did you attempt to include all of
>> the fields we submit through the various phases in Reporter are there, or
>> did you intentionally leave some out?  (I honestly haven't checked; it just
>> "feels short" to me compared to our SQL schema).
>>
>> I listed only some of the fields in every object representing specific
>> test result source (called phase in mtt language).
>>
>
> Ok.  So that's only a sample -- just showing an example, not necessarily
> trying to be complete.  Per Ethan's comments, there are a bunch of other
> fields that we have and/or we might just be able to "tie them together" in
> GDS.  I.e., our data is hierarchical -- it worked well enough in SQL because
> you could just have one record about a test build refer to another record
> about the corresponding mpi install.  And so on.  Can we do something
> similar in GDS?
>


Yep, actually in GDS it should be much easier to have a hierarchy, because it
is object-oriented storage. We just need to map all the object relations and
put them in models.py; GDS will do the rest :)




>
>
> This is because every test result source object is derived from python
>> provided db.Expando class. This gives us great flexibility, like adding
>> dynamic attributes for every objects, for example:
>>
>> obj = new MttBuildPhaseResult()
>> obj.my_favorite_dynamic_key = "hello"
>> obj.my_another_dynamic_key = 7
>>
>> So, we can have all phase attributes in the phase object without defining
>> it in the *sql schema way*. Also we can query object model by these dynamic
>> keys.
>>
>
> Hmm.  Ok, so you're saying that we define a "phas

Re: [MTT devel] GSOC application

2009-04-15 Thread Mike Dubman
On Tue, Apr 14, 2009 at 11:50 PM, Ethan Mallove <ethan.mall...@sun.com>wrote:

>  On Tue, Apr/14/2009 09:27:14PM, Mike Dubman wrote:
> >On Tue, Apr 14, 2009 at 5:04 PM, Jeff Squyres <jsquy...@cisco.com>
> wrote:
> >
> >      On Apr 13, 2009, at 2:08 PM, Mike Dubman wrote:
> >
> >Hello Ethan,
> >
> >  Sorry for joining the discussion late... I was on travel last week
> and
> >  that always makes me waaay behind on my INBOX. *:-(
> >
> >On Mon, Apr 13, 2009 at 5:44 PM, Ethan Mallove <
> ethan.mall...@sun.com>
> >wrote:
> >
> >Will this translate to something like
> >lib/MTT/Reporter/GoogleDatabase.pm? *If we are to move away from
> the
> >current MTT Postgres database, we want to be able to submit
> results to
> >both the current MTT database and the new Google database during
> the
> >transition period. Having a GoogleDatabase.pm would make this
> easier.
> >
> >I think we should keep both storage options: current postgress and
> >datastore. The mtt changes will be minor to support datastore.
> >Due that fact that google appengine API (as well as datastore API)
> can
> >be python or java only, we will create external scripts to
> manipulate
> >datastore objects:
> >
> >  Ah, good point (python/java not perl). *But I think that
> >  lib/MTT/Reporter/GoogleDataStore.pm could still be a good thing --
> we
> >  have invested a lot of time/effort into getting our particular mtt
> >  clients setup just the way we want them, setting up INI files,
> >  submitting to batch schedulers, etc.
> >
> >  A GoogleDataStore.pm reporter could well fork/exec a python/java
> >  executable to do the actual communication/storing of the data,
> right...?
> >  *More below.
> >
> >completely agree, once we have external python/java/cobol scripts to
> >manipulate GDS objects, we should wrap it by perl and call from MTT in
> >same way like it works today for submitting to the postgress.
> >
> >*
> >
> >The mtt will dump test results in xml format. Then, we provide two
> >python (or java?) scripts:
> >
> >mtt-results-submit-to-datastore.py - script will be called at the
> end
> >of mtt run and will read xml files, create objects and save to
> >datastore
> >
> >  Could be pretty easy to have a Reporter/GDS.pm (I keep making that
> >  filename shorter, don't I? :-) ) that simply invokes the
> >  mtt-result-submit-to-datastore.pt script on the xml that it dumped
> for
> >  that particular test.
> >
> >  Specifically: I do like having partial results submitted while my
> MTT
> >  tests are running. *Cisco's testing cycle is about 24 hours, but
> groups
> >  of tests are finishing all the time, so it's good to see those
> results
> >  without having to wait the full 24 hours before anything shows up.
> *I
> >  guess that's my only comment on the idea of having a script that
> >  traverses the MTT scratch to find / submit everything -- I'd prefer
> if
> >  we kept the same Reporter idea and used an underlying .py script to
> >  submit results as they become ready.
> >
> >  Is this do-able?
> >
> >sounds good, we should introduce some guid (like pid) for mtt session,
> >where all mtt results generated by this session will be referring to
> this
> >guid.* Later we use this guid to submit partial results as they become
> >ready and connect it to the appropriate mtt session object (see
> models.py)
> >
> >mtt-results-query.py - sample script to query datastore and
> generate
> >some simple visual/tabular reports. It will serve as tutorial for
> >howto access mtt data from scripts for reporting.
> >
> >Later, we add another script to replace php web frontend. It will
> be
> >hosted on google appengine machines and will provide web viewer
> for
> >mtt results. (same way like index.php does today)
> >
> >  Sounds good.
> >
> >> * * *b. mtt_save_to_db.py - script which will go over mtt
> scratch
> >dir, find
> >> * * *all xml files generated for every mtt phase, parse it and
> save
> >to
> >> * * *datastore, preserving test results relations,i.e. all test
> >results will
> > 

Re: [MTT devel] GSOC application

2009-04-14 Thread Mike Dubman
On Tue, Apr 14, 2009 at 5:04 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

> On Apr 13, 2009, at 2:08 PM, Mike Dubman wrote:
>
>  Hello Ethan,
>>
>
> Sorry for joining the discussion late... I was on travel last week and that
> always makes me waaay behind on my INBOX.  :-(
>
>  On Mon, Apr 13, 2009 at 5:44 PM, Ethan Mallove <ethan.mall...@sun.com>
>> wrote:
>>
>> Will this translate to something like
>> lib/MTT/Reporter/GoogleDatabase.pm?  If we are to move away from the
>> current MTT Postgres database, we want to be able to submit results to
>> both the current MTT database and the new Google database during the
>> transition period. Having a GoogleDatabase.pm would make this easier.
>>
>> I think we should keep both storage options: current postgress and
>> datastore. The mtt changes will be minor to support datastore.
>> Due that fact that google appengine API (as well as datastore API) can be
>> python or java only, we will create external scripts to manipulate datastore
>> objects:
>>
>
> Ah, good point (python/java not perl).  But I think that
> lib/MTT/Reporter/GoogleDataStore.pm could still be a good thing -- we have
> invested a lot of time/effort into getting our particular mtt clients setup
> just the way we want them, setting up INI files, submitting to batch
> schedulers, etc.
>
> A GoogleDataStore.pm reporter could well fork/exec a python/java executable
> to do the actual communication/storing of the data, right...?  More below.
>

Completely agree. Once we have external python/java/cobol scripts to
manipulate GDS objects, we should wrap them in perl and call them from MTT
the same way it works today for submitting to postgres.



>
>
>  The mtt will dump test results in xml format. Then, we provide two python
>> (or java?) scripts:
>>
>> mtt-results-submit-to-datastore.py - script will be called at the end of
>> mtt run and will read xml files, create objects and save to datastore
>>
>
> Could be pretty easy to have a Reporter/GDS.pm (I keep making that filename
> shorter, don't I? :-) ) that simply invokes the mtt-result-
> submit-to-datastore.pt script on the xml that it dumped for that
> particular test.
>
> Specifically: I do like having partial results submitted while my MTT tests
> are running.  Cisco's testing cycle is about 24 hours, but groups of tests
> are finishing all the time, so it's good to see those results without having
> to wait the full 24 hours before anything shows up.  I guess that's my only
> comment on the idea of having a script that traverses the MTT scratch to
> find / submit everything -- I'd prefer if we kept the same Reporter idea and
> used an underlying .py script to submit results as they become ready.
>
> Is this do-able?


Sounds good. We should introduce a guid (like a pid) for the mtt session, so
that all mtt results generated by that session refer to the guid. Later we
use this guid to submit partial results as they become ready and connect
them to the appropriate mtt session object (see models.py).
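The session-guid idea above can be sketched like this (an illustrative sketch, not MTT code; the record field names are assumptions):

```python
import uuid
from collections import defaultdict

# Every result produced by one mtt run carries the same session guid, so
# partial submissions made while the run is still going can later be joined
# to their mtt session object.
session_guid = str(uuid.uuid4())

def tag_result(phase, data, guid):
    # Hypothetical record shape; field names are made up for illustration.
    return {"session": guid, "phase": phase, **data}

results = [
    tag_result("MPIInstall", {"prefix_dir": "/opt/ompi"}, session_guid),
    tag_result("TestBuild", {"compiler_name": "gcc"}, session_guid),
]

# Reassemble partial submissions by session.
by_session = defaultdict(list)
for r in results:
    by_session[r["session"]].append(r)

print(len(by_session[session_guid]))  # → 2
```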


>
>  mtt-results-query.py - sample script to query datastore and generate some
>> simple visual/tabular reports. It will serve as tutorial for howto access
>> mtt data from scripts for reporting.
>>
>> Later, we add another script to replace php web frontend. It will be
>> hosted on google appengine machines and will provide web viewer for mtt
>> results. (same way like index.php does today)
>>
>
> Sounds good.
>
>  >  b. mtt_save_to_db.py - script which will go over mtt scratch dir,
>> find
>> >  all xml files generated for every mtt phase, parse it and save to
>> >  datastore, preserving test results relations,i.e. all test results
>> will
>> >  be grouped by mtt general info: mpi version, name, date, 
>> >
>> >  c. same script can scan, parse and save from xml files generated by
>> >  wrapper scripts for non mtt based executions (fluent, ..)
>>
>> I'm confused here.  Can't MTT be outfitted to report results of a
>> Fluent run?
>>
>>
>> I think we can enhance mtt to be not only mpi testing platform, but also
>> to serve as mpi benchmarking platform. We can use datastore to keep
>> mpi-based benchmarking results in the same manner like mtt does for testing
>> results. (no changes to mtt required for that, it is just a side effect of
>> using datastore to keep data of any type)
>>
>
> I think that Ethan was asking was: can't MTT run Fluent and then use the
> normal Reporter mechanism to report the results into whatever back-end data
> store we have?  (postgres or GDS)
>


ahh

Re: [MTT devel] GSOC application

2009-04-13 Thread Mike Dubman
Hello Ethan,


On Mon, Apr 13, 2009 at 5:44 PM, Ethan Mallove wrote:

>
> Will this translate to something like
> lib/MTT/Reporter/GoogleDatabase.pm?  If we are to move away from the
> current MTT Postgres database, we want to be able to submit results to
> both the current MTT database and the new Google database during the
> transition period. Having a GoogleDatabase.pm would make this easier.
>

I think we should keep both storage options: the current postgres and the
datastore. The mtt changes will be minor to support the datastore.
Due to the fact that the google appengine API (as well as the datastore API)
can be python or java only, we will create external scripts to manipulate
datastore objects:

The mtt will dump test results in xml format. Then, we provide two python
(or java?) scripts:

mtt-results-submit-to-datastore.py - script will be called at the end of mtt
run and will read xml files, create objects and save to datastore
mtt-results-query.py - a sample script to query the datastore and generate
some simple visual/tabular reports. It will serve as a tutorial for how to
access mtt data from scripts for reporting.

Later, we will add another script to replace the php web frontend. It will be
hosted on google appengine machines and will provide a web viewer for mtt
results (the same way index.php does today).



>
> >
> >  b. mtt_save_to_db.py - script which will go over mtt scratch dir,
> find
> >  all xml files generated for every mtt phase, parse it and save to
> >  datastore, preserving test results relations,i.e. all test results
> will
> >  be grouped by mtt general info: mpi version, name, date, 
> >
> >  c. same script can scan, parse and save from xml files generated by
> >  wrapper scripts for non mtt based executions (fluent, ..)
> >
>
> I'm confused here.  Can't MTT be outfitted to report results of a
> Fluent run?
>


I think we can enhance mtt to be not only an mpi testing platform but also an
mpi benchmarking platform. We can use the datastore to keep mpi-based
benchmarking results in the same manner as mtt does for testing results.
(No changes to mtt are required for that; it is just a side effect of using
the datastore to keep data of any type.)



>
>
> >  d. mtt_query_db.py script will be provided with basic query
> capabilities
> >  over proposed datastore object model. Most users will prefer writing
> >  custom sql-like select queries for fetching results.
> >
> >  3. Important notes:
> >  ==
> >
> >  a. The single mtt client execution generates many result files,
> every
> >  generated file represents test phase. This file contains test
> results
> >  and can be characterized as a set of attributes with its values.
> Every
> >  test phase has its own attributes which are differ for different
> phases.
> >  For example: attributes for TestBuild phase has keys "compiler_name,
> >  compiler_version", the MPIInstall phase has attributes: prefix_dir,
> >  arch, 
> >  Hence, most of the datastore objects representing phases of MTT* are
> >  derived from "db.Expando" model, which allows having dynamic
> attributes
> >  for its derived sub-classes.
> >
> >  The attached is archive with a simple test for using datastore for
> mtt.
> >  Please see models.py file with proposed object model and comment.
> >
>
> I don't see the models.py attachment.
>

I just sent the original email with the attachment; tell me if you want me to
send it again.

>
>
regards

Mike


[MTT devel] Fwd: GSOC application

2009-04-13 Thread Mike Dubman
Resending the original post with the attachment (mtt_datastore.tbz). It is a
sample google appengine API application. It contains models.py with the
proposed object model, and a script with some examples of object-model
use-cases and flows.

-- Forwarded message --
From: Mike Dubman <mike.o...@gmail.com>
List-Post: mtt-devel@lists.open-mpi.org
Date: Mon, Apr 6, 2009 at 4:54 PM
Subject: Re: [MTT devel] GSOC application
To: Development list for the MPI Testing Tool <mtt-de...@open-mpi.org>


Hello Guys,

I have played a bit with google datastore and here is a proposal for mtt DB
infra and some accompanying tools for submission and querying:


1. Scope and requirements


a. provide storage services for storing test results generated by mtt.
Storage services will be implemented over datastore.
b. provide storage services for storing benchmarking results generated by
various mpi based applications  (not mtt based, for example: fluent,
openfoam)
c. test or benchmarking results stored in the datastore can be grouped and
referred as a group (for example: mtt execution can generate many mtt
results consisting of different phases. This mtt execution will be referred
as a session)
d. Benchmarking and test results generated by mtt or any other mpi-based
application can be stored in the datastore and grouped by some logical
criteria.
e. The mtt should not depend on or directly call any datastore-provided APIs.
The mtt client (or the framework/scripts executing mpi-based applications)
should generate test/benchmarking results in some internal format, which
will be processed later by external tools. These external tools will be
responsible for saving test results in the datastore. The same rules should
be applied to non-mtt-based executions of mpi-based applications (like
fluent, openfoam, ...). The scripts wrapping such executions will dump
benchmarking results in some internal form for later processing by external
tools.

f. The internal form for representing test/benchmarking results can be XML.
The external tool will receive the XML files (as cmd line params), process
them, and save them to the datastore.

g. The external tools will be familiar with the datastore object model and
will provide the bridge between test results (XML) and the actual datastore.
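A hedged sketch of the wrapper-script idea above, for a non-mtt mpi application: the function name is made up; the OPENMPI_ROOT environment variable is the convention mentioned for fluent in this thread.

```python
import os
import subprocess

# Point the application at the desired MPI install via an environment
# variable, run it, and return the completed process so the caller can dump
# a result record for the external submit tool to pick up later.
def run_with_openmpi(cmd, openmpi_root):
    env = dict(os.environ)
    env["OPENMPI_ROOT"] = openmpi_root  # fluent-style MPI selection
    return subprocess.run(cmd, env=env, capture_output=True, text=True)
```

For example, `run_with_openmpi(["fluent", ...], "/path/to/openmpi")` would mirror the fluent invocation style; the pamcrash case would instead pass `-mpidir=/path/to/openmpi` on the command line.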



2. Flow and use-cases
=====================

a. The mtt client will dump all test-related information into an XML file. A
file will be created for every phase executed by mtt. (Today many summary
txt and html files are generated for every test phase; it is pretty easy to
add xml generation of the same information.)

b. mtt_save_to_db.py - a script which will go over the mtt scratch dir, find
all xml files generated for every mtt phase, parse them, and save them to
the datastore, preserving test-result relations, i.e. all test results will
be grouped by mtt general info: mpi version, name, date, 

c. the same script can scan, parse, and save xml files generated by wrapper
scripts for non-mtt-based executions (fluent, ..)

d. an mtt_query_db.py script will be provided with basic query capabilities
over the proposed datastore object model. Most users will prefer writing
custom sql-like select queries for fetching results.
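The mtt_save_to_db.py flow in 2b can be sketched as follows (hypothetical file layout and element names; MTT's actual XML format is not defined in this thread, and the datastore save is abstracted into a callback):

```python
import glob
import os
import xml.etree.ElementTree as ET

# Walk the scratch dir, parse every per-phase XML file, and hand each record
# to a save callback (which would write to the datastore in the real tool).
def collect_phase_results(scratch_dir, save):
    pattern = os.path.join(scratch_dir, "**", "*.xml")
    for path in glob.glob(pattern, recursive=True):
        root = ET.parse(path).getroot()
        # Assumed layout: root tag names the phase, children are attributes.
        save({"phase": root.tag,
              **{child.tag: child.text for child in root}})
```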

3. Important notes:
===================

a. A single mtt client execution generates many result files; every
generated file represents a test phase. The file contains test results and
can be characterized as a set of attributes with their values. Every test
phase has its own attributes, which differ between phases. For example:
attributes for the TestBuild phase have the keys "compiler_name,
compiler_version", while the MPIInstall phase has the attributes prefix_dir,
arch, 
Hence, most of the datastore objects representing phases of MTT are derived
from the "db.Expando" model, which allows dynamic attributes on its derived
sub-classes.
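A plain-Python sketch of what this Expando-style flexibility buys (this is the semantics only, not the actual GAE `db.Expando` API; class and field names are made up):

```python
# Declared ("static") fields live on the class; anything else can be attached
# at runtime without a schema change, which is the property the proposal
# relies on for phase-specific attributes.
class PhaseResult:
    static_fields = ("mpi_name", "mpi_version")

    def __init__(self, **static):
        for name in self.static_fields:
            setattr(self, name, static.get(name))

class MttBuildPhaseResult(PhaseResult):
    pass

obj = MttBuildPhaseResult(mpi_name="openmpi", mpi_version="1.3")
obj.my_favorite_dynamic_key = "hello"  # dynamic attribute, no model change
obj.my_another_dynamic_key = 7
print(obj.my_another_dynamic_key)  # → 7
```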


Attached is an archive with a simple test for using the datastore for mtt.
Please see the models.py file with the proposed object model, and comment.

You can run the attached example in the google datastore dev environment. (
http://code.google.com/appengine/downloads.html)

Please comment.


Thanks

Mike



On Tue, Mar 24, 2009 at 12:17 AM, Jeff Squyres <jsquy...@cisco.com> wrote:

> On Mar 23, 2009, at 9:05 AM, Ethan Mallove wrote:
>
>   ---+-+--
>>  Resource   | Unit| Unit cost
>>  ---+-+--
>>  Outgoing Bandwidth | gigabytes   | $0.12
>>  Incoming Bandwidth | gigabytes   | $0.10
>>  CPU Time   | CPU hours   | $0.10
>>  Stored Data| gigabytes per month | $0.15
>>  Recipients Emailed | recipients  | $0.0001
>>  ---+-+--
>>
>> Would we itemize the MTT bill on a per user basis?  E.g., orgs that
>> use MTT more, would h

Re: [MTT devel] GSOC application

2009-04-13 Thread Mike Dubman
Hello Guys,

Please comment on the proposed object model and flows. We will have 1-2
people working on this in 2-3 weeks. Before then, I would like to finalize
the scope and flows.

Thanks

Mike.


On Mon, Apr 6, 2009 at 4:54 PM, Mike Dubman <mike.o...@gmail.com> wrote:

> Hello Guys,
>
> I have played a bit with google datastore and here is a proposal for mtt DB
> infra and some accompanying tools for submission and querying:
>
>
> 1. Scope and requirements
> 
>
> a. provide storage services for storing test results generated by mtt.
> Storage services will be implemented over datastore.
> b. provide storage services for storing benchmarking results generated by

Re: [MTT devel] GSOC application

2009-04-06 Thread Mike Dubman
Hello Guys,

I have played a bit with google datastore and here is a proposal for mtt DB
infra and some accompanying tools for submission and querying:


1. Scope and requirements
=========================


a. Provide storage services for test results generated by mtt. The storage
services will be implemented over the datastore.
b. Provide storage services for benchmarking results generated by various
mpi-based applications (not mtt based; for example: fluent, openfoam).
c. Test or benchmarking results stored in the datastore can be grouped and
referred to as a group (for example: an mtt execution can generate many mtt
results consisting of different phases; this mtt execution will be referred
to as a session).
d. Benchmarking and test results generated by mtt or any other mpi-based
application can be stored in the datastore and grouped by some logical
criteria.
e. mtt should not depend on, or directly call, any APIs provided by the
datastore. The mtt client (or the framework/scripts executing mpi-based
applications) should generate test/benchmarking results in some internal
format, which will be processed later by external tools. These external
tools will be responsible for saving test results in the datastore. The same
rules should apply to non-mtt executions of mpi-based applications (like
fluent, openfoam, ...). The scripts wrapping such executions will dump
benchmarking results in some internal form for later processing by external
tools.

f. The internal form for representing test/benchmarking results can be XML.
The external tool will receive XML files (as command-line parameters),
process them, and save them to the datastore.

g. The external tools will be familiar with the datastore object model and
will provide the bridge between the test results (XML) and the actual
datastore.
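As a sketch of what such an internal XML form might look like (every element
and attribute name here is hypothetical, not a fixed mtt schema):

```xml
<!-- Hypothetical per-phase result file; names are illustrative only -->
<mtt_phase name="TestRun" mpi_name="Open MPI" mpi_version="1.3" date="2009-04-06">
  <attribute key="compiler_name" value="gcc"/>
  <attribute key="compiler_version" value="4.3.2"/>
  <result test_name="hello_world" status="passed" duration="12"/>
</mtt_phase>
```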



2. Flow and use-cases
=====================

a. The mtt client will dump all test-related information into an XML file.
A file will be created for every phase executed by mtt. (Today many summary
txt and html files are generated for every test phase; it is pretty easy to
add xml generation of the same information.)

b. mtt_save_to_db.py - a script which will go over the mtt scratch dir,
find all xml files generated for every mtt phase, parse them, and save them
to the datastore, preserving the relations between test results, i.e. all
test results will be grouped by mtt general info: mpi version, name, date, ...

c. The same script can scan, parse, and save xml files generated by wrapper
scripts for non-mtt executions (fluent, ...)

d. mtt_query_db.py - a script that will provide basic query capabilities
over the proposed datastore object model. Most users will prefer writing
custom sql-like select queries for fetching results.
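A minimal sketch of the parsing half of such a script, assuming a
hypothetical per-phase XML layout (the element and key names below are
illustrative, not an actual mtt format):

```python
# Sketch of the parsing step of a hypothetical mtt_save_to_db.py.
# The XML layout and field names are assumptions, not an actual mtt format.
import xml.etree.ElementTree as ET

def parse_phase(xml_text):
    """Turn one per-phase result file into a dict ready for the datastore."""
    root = ET.fromstring(xml_text)
    record = dict(root.attrib)  # phase-level info: name, mpi version, date, ...
    record["attributes"] = {a.get("key"): a.get("value")
                            for a in root.findall("attribute")}
    record["results"] = [dict(r.attrib) for r in root.findall("result")]
    return record

sample = """<mtt_phase name="TestBuild" mpi_version="1.3">
  <attribute key="compiler_name" value="gcc"/>
  <result test_name="hello" status="passed"/>
</mtt_phase>"""

record = parse_phase(sample)
print(record["name"], record["attributes"]["compiler_name"])
```

A real script would walk the scratch dir, call something like this per file,
and hand the dicts to the datastore-saving layer.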

3. Important notes:
===================

a. A single mtt client execution generates many result files; every
generated file represents a test phase. Such a file contains test results
and can be characterized as a set of attributes with values. Every test
phase has its own attributes, which differ between phases. For example: the
TestBuild phase has the keys "compiler_name, compiler_version", while the
MPIInstall phase has the attributes: prefix_dir, arch,

Hence, most of the datastore objects representing MTT phases are derived
from the "db.Expando" model, which allows dynamic attributes on its
subclasses.
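The appeal of db.Expando is that each phase entity can carry its own
attribute set. The idea can be sketched in plain Python (this only mimics
the behavior; the real models.py would subclass
google.appengine.ext.db.Expando):

```python
# Plain-Python sketch of the Expando idea: entities that accept arbitrary
# attributes at construction time, so each phase can have its own keys.
class PhaseResult:
    def __init__(self, phase, **attrs):
        self.phase = phase
        for key, value in attrs.items():
            setattr(self, key, value)   # dynamic attributes, Expando-style

# Two phases with completely different attribute sets, as in the note above.
build = PhaseResult("TestBuild", compiler_name="gcc", compiler_version="4.3.2")
install = PhaseResult("MPIInstall", prefix_dir="/opt/ompi", arch="x86_64")
```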


Attached is an archive with a simple test of using the datastore for mtt.
Please see the models.py file with the proposed object model, and comment.

You can run the attached example in the google datastore dev environment
(http://code.google.com/appengine/downloads.html).

Please comment.


Thanks

Mike


On Tue, Mar 24, 2009 at 12:17 AM, Jeff Squyres  wrote:

> On Mar 23, 2009, at 9:05 AM, Ethan Mallove wrote:
>
>   ---+-+--
>>  Resource   | Unit| Unit cost
>>  ---+-+--
>>  Outgoing Bandwidth | gigabytes   | $0.12
>>  Incoming Bandwidth | gigabytes   | $0.10
>>  CPU Time   | CPU hours   | $0.10
>>  Stored Data| gigabytes per month | $0.15
>>  Recipients Emailed | recipients  | $0.0001
>>  ---+-+--
>>
>> Would we itemize the MTT bill on a per user basis?  E.g., orgs that
>> use MTT more, would have to pay more?
>>
>>
>
> Let's assume stored data == incoming bandwidth, because we never throw
> anything away.  And let's go with the SWAG of 100GB.  We may or may not be
> able to gzip the data uploaded to the server.  So if anything, we *might*
> be able to decrease the incoming data and have a higher level of stored data.
>
> I anticipate our outgoing data to be significantly less, particularly if we
> can gzip the outgoing data (which I think we can).  You're right, CPU time
> is a mystery -- we won't know what it will be until we start running some
> queries to see what happens.
>
> 100GB * $0.10 = 

Re: [MTT devel] GSOC application

2009-03-23 Thread Mike Dubman
I'm playing with google datastore now and will send some proposal and
thoughts.

On Mon, Mar 23, 2009 at 2:33 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

> Yes, I think you're right -- making a "schema" for the datastore might be
> quite easy.  I'm on travel all this week and likely won't be able to look
> into this stuff -- can you guys post a proposal and we can dive into it from
> that angle?
>
>
> On Mar 22, 2009, at 6:48 AM, Mike Dubman wrote:
>
>  Hello guys,
>>
>> I'm not sure we should preserve the current DB schema, for one simple
>> reason: the datastore is an object-oriented storage and has different
>> rules and techniques than an RDBMS.
>> The basic storage unit in the datastore is an object which can be saved,
>> loaded and queried.
>> (hadoop is based on the same principles, but open source.)
>>
>> It seems that the DB model for mtt over the datastore should not be
>> complex at all. The current mtt db schema is mostly optimized for specific
>> queries dictated by the web UI. The datastore creates indexes
>> automatically, based on the history of submitted queries.
>>
>> I suggest we discuss/exchange db layout proposals by email, and when we
>> get to some general understanding of how it should look, we switch to
>> telepresence.
>>
>> Also, it seems to be no problem at all to get datastore access for an
>> existing gmail account. You get a 500MB storage quota. It takes 5 minutes
>> to start using it.
>>
>> Here is some short info for datastore API:
>> - howto submit data model to datastore
>> - howto save, load, query
>>
>>
>> http://code.google.com/appengine/docs/python/gettingstarted/usingdatastore.html
>>
>> please comment.
>>
>> Thanks
>>
>> Mike
>>
>> On Fri, Mar 20, 2009 at 5:38 PM, Jeff Squyres <jsquy...@cisco.com> wrote:
>> On Mar 20, 2009, at 10:42 AM, Josh Hursey wrote:
>>
>> Yeah I think this sounds like a good way to move forward with this
>> work. The database schema is pretty complex. If you need help on the
>> database side of things let me know.
>>
>> To get started, would it be useful to have a meeting over the phone/
>> telepresence to design the datastore layout? This gives us an
>> opportunity to start from a blank slate with regards to the
>> datastore, so it may be useful to brainstorm a bit beforehand.
>>
>>
>> Yes, it probably would.  My understanding of hadoop (which is very
>> high-level) is that you just dump everything in without too much concern
>> about the structure / "schema".  But I could be wrong on that.
>>
>>
>> The Google Apps account is under my personal Google account, so I'm
>> reluctant to use it. I think the reason it took so long for me, was
>> because when I originally signed up it was in limited beta. I think
>> the approval time is much shorter now (maybe a day?), and we can make
>> an openmpi or mtt account that we can use.
>>
>> With regard to Hadoop, I don't think that IU has a set of machines
>> that would work, but I can ask around. We could always try Hadoop on
>> a single machine if people wanted to play around with data querying/
>> storage.
>>
>> I don't have a strong preference either way, but Google Apps may
>> provide us with a lower overhead solution for the long run even
>> though it costs $$.
>>
>>
>>
>> It looks like there is a set that you can use for free.  When you go over
>> one of several metrics (CPU hours/day, storage, bandwidth in, bandwidth out,
>> etc.), then you have to start paying.  But even with that, the costs look
>> *quite* reasonable and should be easily covered by the combined Open MPI
>> organizations (I'm talking hundreds of dollars here, not tens of thousands).
>>
>>
>> --
>> Jeff Squyres
>> Cisco Systems
>>
>> ___
>> mtt-devel mailing list
>> mtt-de...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>>
>> ___
>> mtt-devel mailing list
>> mtt-de...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>>
>
>
> --
> Jeff Squyres
> Cisco Systems
>
> ___
> mtt-devel mailing list
> mtt-de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>


Re: [MTT devel] GSOC application

2009-03-22 Thread Mike Dubman
Hello guys,

I'm not sure we should preserve the current DB schema, for one simple
reason: the datastore is an object-oriented storage and has different rules
and techniques than an RDBMS.
The basic storage unit in the datastore is an object which can be saved,
loaded and queried.
(hadoop is based on the same principles, but open source.)

It seems that the DB model for mtt over the datastore should not be complex
at all. The current mtt db schema is mostly optimized for specific queries
dictated by the web UI. The datastore creates indexes automatically, based
on the history of submitted queries.

I suggest we discuss/exchange db layout proposals by email, and when we get
to some general understanding of how it should look, we switch to
telepresence.

Also, it seems to be no problem at all to get datastore access for an
existing gmail account. You get a 500MB storage quota. It takes 5 minutes to
start using it.

Here is some short info for datastore API:
- howto submit data model to datastore
- howto save, load, query

http://code.google.com/appengine/docs/python/gettingstarted/usingdatastore.html

please comment.

Thanks

Mike

On Fri, Mar 20, 2009 at 5:38 PM, Jeff Squyres  wrote:

> On Mar 20, 2009, at 10:42 AM, Josh Hursey wrote:
>
>  Yeah I think this sounds like a good way to move forward with this
>> work. The database schema is pretty complex. If you need help on the
>> database side of things let me know.
>>
>> To get started, would it be useful to have a meeting over the phone/
>> telepresence to design the datastore layout? This gives us an
>> opportunity to start from a blank slate with regards to the
>> datastore, so it may be useful to brainstorm a bit beforehand.
>>
>>
> Yes, it probably would.  My understanding of hadoop (which is very
> high-level) is that you just dump everything in without too much concern
> about the structure / "schema".  But I could be wrong on that.
>
>  The Google Apps account is under my personal Google account, so I'm
>> reluctant to use it. I think the reason it took so long for me, was
>> because when I originally signed up it was in limited beta. I think
>> the approval time is much shorter now (maybe a day?), and we can make
>> an openmpi or mtt account that we can use.
>>
>> With regard to Hadoop, I don't think that IU has a set of machines
>> that would work, but I can ask around. We could always try Hadoop on
>> a single machine if people wanted to play around with data querying/
>> storage.
>>
>> I don't have a strong preference either way, but Google Apps may
>> provide us with a lower overhead solution for the long run even
>> though it costs $$.
>>
>>
>
> It looks like there is a set that you can use for free.  When you go over
> one of several metrics (CPU hours/day, storage, bandwidth in, bandwidth out,
> etc.), then you have to start paying.  But even with that, the costs look
> *quite* reasonable and should be easily covered by the combined Open MPI
> organizations (I'm talking hundreds of dollars here, not tens of thousands).
>
>
> --
> Jeff Squyres
> Cisco Systems
>
> ___
> mtt-devel mailing list
> mtt-de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>


Re: [MTT devel] [MTT svn] svn:mtt-svn r1273 (Analyze/Performance plug-ins)

2009-03-19 Thread Mike Dubman
Hello Ethan,
Thanks for the info; I will refactor it.

from http://www.netlib.org/benchmark/hpl/

...
*HPL* is a software package that solves a (random) dense linear system in
double precision (64 bits) arithmetic on distributed-memory computers. It
can thus be regarded as a portable as well as freely available
implementation of the High Performance Computing Linpack Benchmark.

The HPL package provides a testing and timing program to quantify the *
accuracy* of the obtained solution as well as the time it took to compute it
...


Where do you think is a good place to keep parsers for mpi benchmarks other
than latency/bandwidth-based ones?
I think we can have a collection of such parsers in mtt, and at some point
we can enhance the mtt reports with other metrics.

What do you think?
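For reference, HPL.pm pulls the T/V, time, and Gflops fields out of result
rows with a single regex; a rough Python equivalent of that match (the
sample row follows HPL's "T/V  N  NB  P  Q  Time  Gflops" output format)
would be:

```python
# Rough Python equivalent of the line-matching regex in HPL.pm.
import re

# (T/V)  N  NB  P  Q  (Time)  (Gflops): capture the 1st, 6th, and 7th fields.
HPL_ROW = re.compile(r"^(\S+)\s+\d+\s+\d+\s+\d+\s+\d+\s+(\d+[.\d]+)\s+(\S+)")

line = "WR00L2L2   29184   128     2     4   15596.86  1.063e+00"
m = HPL_ROW.match(line)
t_v, time_s, gflops = m.group(1), m.group(2), m.group(3)
```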


On Thu, Mar 19, 2009 at 8:22 PM, Ethan Mallove wrote:

> Hi Mike,
>
> Is HPL a latency and/or bandwidth performance test?  All the Analyze
> plug-ins in lib/MTT/Test/Analyze/Performance are for latency/bandwidth
> tests, which means they can then be rendered as graphs in the MTT
> Reporter.  All of these plug-ins are required to output at least one
> of the following:
>
>  latency_avg
>  latency_min
>  latency_max
>  bandwidth_avg
>  bandwidth_min
>  bandwidth_max
>
> They all contain this:
>
>  $report->{test_type} = 'latency_bandwidth';
>
> HPL.pm should have a line like this somewhere:
>
>  $report->{test_type} = 'tv_gflops';
>
> Maybe HPL.pm could go into a different directory or have a comment
> somewhere to clear up this confusion.
>
> Regards,
> Ethan
>
>
> On Thu, Mar/19/2009 02:11:05AM, mi...@osl.iu.edu wrote:
> > Author: miked
> > Date: 2009-03-19 02:11:04 EDT (Thu, 19 Mar 2009)
> > New Revision: 1273
> > URL: https://svn.open-mpi.org/trac/mtt/changeset/1273
> >
> > Log:
> > HPL analyzer added
> >
> > Added:
> >trunk/lib/MTT/Test/Analyze/Performance/HPL.pm
> >
> > Added: trunk/lib/MTT/Test/Analyze/Performance/HPL.pm
> >
> ==
> > --- (empty file)
> > +++ trunk/lib/MTT/Test/Analyze/Performance/HPL.pm 2009-03-19 02:11:04
> EDT (Thu, 19 Mar 2009)
> > @@ -0,0 +1,63 @@
> > +#!/usr/bin/env perl
> > +#
> > +# Copyright (c) 2006-2007 Sun Microsystems, Inc.  All rights reserved.
> > +# Copyright (c) 2007  Voltaire  All rights reserved.
> > +# $COPYRIGHT$
> > +#
> > +# Additional copyrights may follow
> > +#
> > +# $HEADER$
> > +#
> > +
> > +package MTT::Test::Analyze::Performance::HPL;
> > +use strict;
> > +use Data::Dumper;
> > +#use MTT::Messages;
> > +
> > +# Process the result_stdout emitted from one of hpl tests
> > +sub Analyze {
> > +
> > +my($result_stdout) = @_;
> > +my $report;
> > +my(@t_v,
> > +   @time,
> > +   @gflops);
> > +
> > +$report->{test_name}="HPL";
> > +my @lines = split(/\n|\r/, $result_stdout);
> > +# Sample result_stdout:
> > +#- The matrix A is randomly generated for each test.
> > +#- The following scaled residual check will be computed:
> > +#  ||Ax-b||_oo / ( eps * ( || x ||_oo * || A ||_oo + || b ||_oo ) *
> N )
> > +#- The relative machine precision (eps) is taken to be
> 1.110223e-16
> > +#- Computational tests pass if scaled residuals are less than
>  16.0
> >
> +#
> > +#T/VNNB P Q   Time
>   Gflops
> >
> +#
> > +#WR00L2L2   29184   128 2 4   15596.86
>  1.063e+00
> >
> +#
> > +#||Ax-b||_oo/(eps*(||A||_oo*||x||_oo+||b||_oo)*N)=0.0008986
> .. PASSED
> >
> +#
> > +#T/VNNB P Q   Time
>   Gflops
> >
> +#
> > +#WR00L2L4   29184   128 2 4   15251.81
>  1.087e+00
> > +my $line;
> > +while (defined($line = shift(@lines))) {
> > +#WR00L2L2   29184   128 2 4   15596.86
>1.063e+00
> > +if ($line =~
> m/^(\S+)\s+\d+\s+\d+\s+\d+\s+\d+\s+(\d+[\.\d]+)\s+(\S+)/) {
> > +push(@t_v, $1);
> > +push(@time, $2);
> > +push(@gflops, $3);
> > +}
> > +}
> > +
> > +  # Postgres uses brackets for array insertion
> > +# (see postgresql.org/docs/7.4/interactive/arrays.html)
> > +$report->{tv}   = "{" . join(",", @t_v) . "}";
> > +$report->{time}   = "{" . join(",", @time) . "}";
> > +$report->{gflops}   = "{" . join(",", @gflops) . "}";
> > +return $report;
> > +}
> > +
> > +1;
> > +
> > ___
> > mtt-svn mailing list
> > mtt-...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/mtt-svn
>


Re: [MTT devel] mtt text report oddity

2009-03-19 Thread Mike Dubman
because the results are rendered in chunks during the reporting phase (100
pieces every flush). This caused the same benchmark line to appear more than
once in the final report.

You can configure the reporter to flush results not in fixed-size chunks,
but all results for the same benchmark at once:

put this in the ini file:

[MTT]
submit_group_results=1


Also, the HTML report is nicer and allows easy navigation to the errors.

regards

Mike


2009/3/19 Jeff Squyres 

> I got a fairly odd mtt text report (it's super wide, sorry):
>
> | Test Run| intel | 1.3.1rc5| 00:12| 5|  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 02:59| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 03:08| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 02:51| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 02:59| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 03:48| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 03:10| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 03:05| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 03:09| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 03:25| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 02:46| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 02:59| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 03:23| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 02:50| 100  | 1|
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 02:56| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 02:53| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 03:22| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 04:21| 100  |  |
> 1|  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 04:12| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 03:36| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 02:48| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 02:47| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 03:08| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 02:57| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 02:43| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
> | Test Run| intel | 1.3.1rc5| 02:48| 101  |  |
>  |  | Test_Run-intel-developer-1.3.1rc5.html|
>
> Notice that there are *many* "intel" lines, each with 101 passes.  The only
> difference between them is the times that they ran -- but there are even
> repeats of that.
>
> Do we know why there are so many different lines for the intel test suite?
>
> Did this get changed in the text reporter changes from Voltaire (somewhat)
> recently?
>
> --
> Jeff Squyres
> Cisco Systems
>
> ___
> mtt-devel mailing list
> mtt-de...@open-mpi.org
> 

Re: [MTT devel] GSOC apps now open

2009-03-11 Thread Mike Dubman
I`ll help, lead us master.

On Tue, Mar 10, 2009 at 6:01 PM, Josh Hursey  wrote:

> Yeah I have some time to dedicate do this. We should talk about a couple of
> specific topics to propose from this list we posted on the wiki.
>
> I can start digging in later this evening/tomorrow morning.
>
> -- Josh
>
>
> On Mar 10, 2009, at 10:42 AM, Jeff Squyres wrote:
>
>  Google Summer of Code applications are now open.  We have until Friday to
>> post one.
>>
>> I may have time to put an application together, but I'll need some help
>> proofing/finalizing it before it gets submitted.  If I put it together, will
>> one or more of you help me finish it?
>>
>> --
>> Jeff Squyres
>> Cisco Systems
>>
>> ___
>> mtt-devel mailing list
>> mtt-de...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>>
>
> ___
> mtt-devel mailing list
> mtt-de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>


Re: [MTT devel] mpi_details section with different scenarios for command line params

2008-11-04 Thread Mike Dubman
yep, it works. I thought that "exec" for mpirun would be executed once with
all @mca@ params passed to it.

Thanks again.
Mike

On Tue, Nov 4, 2008 at 2:08 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

> On Nov 4, 2008, at 1:18 AM, Mike Dubman wrote:
>
>  Do you mean that you have a huge "" funclet with different
>> command lines for mpirun inside the mpi_details section, or something else?
>>
>
> I have a few enumerates, yes.  See ompi-tests/trunk/cisco/mtt.  The main
> INI file that I use nightly is cisco-ompi-core-testing.ini.  I just recently
> started using the "include_section" directive to cut down on a lot of
> repetition between many of my sections.
>
>
> --
> Jeff Squyres
> Cisco Systems
>
> ___
> mtt-devel mailing list
> mtt-de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>


Re: [MTT devel] mpi_details section with different scenarios for command line params

2008-11-04 Thread Mike Dubman
Hello Jeff,
Do you mean that you have a huge "" funclet with different command
lines for mpirun inside the mpi_details section, or something else?

Mike.

On Mon, Nov 3, 2008 at 7:55 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

> What exactly do you want to do?
>
> For example, Cisco's MTT files simply list a huge number of different
> mpirun command lines in the MPI Details section (25, in one case, IIRC).  So
> I run lots of different cases for each MPI test (e.g., with leave pinned,
> without leave pinned, ...etc.).
>
>
>
> On Nov 3, 2008, at 10:45 AM, Ethan Mallove wrote:
>
>  On Mon, Nov/03/2008 09:34:07AM, Mike Dubman wrote:
>>
>>>  Hello Guys,
>>>
>>>  Please suggest the proper way to handle the following:
>>>
>>>  Is there any way to run "test run" section with a list
>>>  of "mpi_details" sections?
>>>
>>
>> Mike,
>>
>> There is currently no way to iterate over multiple
>> mpi_details sections, but there might be an acceptable
>> workaround. You can create a simple wrapper script to
>> iterate over variations of your MPI details section using
>> command line INI file overrides (see
>> https://svn.open-mpi.org/trac/mtt/wiki/INIOverrides). E.g.,
>> say you have the following MPI details section:
>>
>>  [MPI details: Open MPI]
>>  foo = some default value
>>  bar = some default value
>>  exec = mpirun @foo@ @bar@ ...
>>
>> Using command-line INI overrides, you can iterate over a
>> series of values for "foo" and/or "bar":
>>
>>  $ client/mtt --scratch /some/dir ...
>>  $ client/mtt --scratch /some/dir --test-run foo=abc ...
>>  $ client/mtt --scratch /some/dir --test-run foo=def ...
>>  $ client/mtt --scratch /some/dir --test-run bar=uvw ...
>>  $ client/mtt --scratch /some/dir --test-run bar=xyz ...
>>  ...
>>
>> Note in the above example, we use the same scratch directory
>> for all the runs, and we run only the test run phase (via
>> the --test-run option) since we do not need to reinstall or
>> rebuild anything as we iterate over different command lines.
>>
>> Could the above be of use for what you're trying to do?
>>
>> -Ethan
>>
>>
>>
>>>  Or how to execute specific "Test run" section against
>>>  specific "mpi_details" section, where "mpi_details" can
>>>  have many different scenarios of command line
>>>  parameters (i.e. single mpi_details should be executed
>>>  a number of times equal to the number of available
>>>  scenarios for this mpi_details)? Is that possible? (it
>>>  is similar to the @np param treatment available inside
>>>  mpi_details section)
>>>
>>>  Thanks
>>>
>>>  Mike.
>>>
>>
>>  ___
>>> mtt-devel mailing list
>>> mtt-de...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>>>
>>
>> ___
>> mtt-devel mailing list
>> mtt-de...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>>
>
>
> --
> Jeff Squyres
> Cisco Systems
>
>
> ___
> mtt-devel mailing list
> mtt-de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>


Re: [MTT devel] mpi_details section with different scenarios for command line params

2008-11-04 Thread Mike Dubman
Ethan,
Thanks for the tip; a nice way to achieve multiple scenarios for a single
mpi_details section.
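A minimal wrapper along the lines Ethan suggested might look like this (the
client/mtt invocations and override values are taken from his example; echo
stands in for the real call so the loop is visible):

```shell
#!/bin/sh
# Sketch of a wrapper that re-runs only the Test Run phase once per
# INI override, reusing one scratch dir as in Ethan's example.
SCRATCH=/some/dir

run_variations() {
    for override in foo=abc foo=def bar=uvw bar=xyz; do
        # Replace "echo" with the real client/mtt once the INI is in place.
        echo client/mtt --scratch "$SCRATCH" --test-run "$override"
    done
}

run_variations
```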


Mike.

On Mon, Nov 3, 2008 at 5:45 PM, Ethan Mallove <ethan.mall...@sun.com> wrote:

> On Mon, Nov/03/2008 09:34:07AM, Mike Dubman wrote:
> >Hello Guys,
> >
> >Please suggest the proper way to handle the following:
> >
> >Is there any way to run "test run" section with a list
> >of "mpi_details" sections?
>
> Mike,
>
> There is currently no way to iterate over multiple
> mpi_details sections, but there might be an acceptable
> workaround. You can create a simple wrapper script to
> iterate over variations of your MPI details section using
> command line INI file overrides (see
> https://svn.open-mpi.org/trac/mtt/wiki/INIOverrides). E.g.,
> say you have the following MPI details section:
>
>  [MPI details: Open MPI]
>  foo = some default value
>  bar = some default value
>  exec = mpirun @foo@ @bar@ ...
>
> Using command-line INI overrides, you can iterate over a
> series of values for "foo" and/or "bar":
>
>  $ client/mtt --scratch /some/dir ...
>  $ client/mtt --scratch /some/dir --test-run foo=abc ...
>  $ client/mtt --scratch /some/dir --test-run foo=def ...
>  $ client/mtt --scratch /some/dir --test-run bar=uvw ...
>  $ client/mtt --scratch /some/dir --test-run bar=xyz ...
>  ...
>
> Note in the above example, we use the same scratch directory
> for all the runs, and we run only the test run phase (via
> the --test-run option) since we do not need to reinstall or
> rebuild anything as we iterate over different command lines.
>
> Could the above be of use for what you're trying to do?
>
> -Ethan
>
>
> >
> >Or how to execute specific "Test run" section against
> >specific "mpi_details" section, where "mpi_details" can
> >have many different scenarios of command line
> >parameters (i.e. single mpi_details should be executed
> >a number of times equal to the number of available
> >scenarios for this mpi_details)? Is that possible? (it
> >is similar to the @np param treatment available inside
> >mpi_details section)
> >
> >Thanks
> >
> >Mike.
>
> > ___
> > mtt-devel mailing list
> > mtt-de...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>
> ___
> mtt-devel mailing list
> mtt-de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>


[MTT devel] mpi_details section with different scenarios for command line params

2008-11-03 Thread Mike Dubman
Hello Guys,
Please suggest the proper way to handle the following:

Is there any way to run a "test run" section with a list of "mpi_details"
sections?

Or, how can a specific "Test run" section be executed against a specific
"mpi_details" section, where "mpi_details" can have many different scenarios
of command line parameters (i.e. a single mpi_details section would be
executed once for each available scenario)? Is that possible? (It is similar
to the @np param treatment available inside the mpi_details section.)



Thanks

Mike.


Re: [MTT devel] mtt patch: summary digest

2008-10-29 Thread Mike Dubman
yep, works like a charm

On Tue, Oct 28, 2008 at 4:57 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

> I sent the whatami patch upstream, and Brian Finley (the whatami author)
> encouraged us to actually use his recently-included Centos-5 support instead
> of our patch.  This is because his support is generic enough that it should
> work for any lsb_release-capable machine (to include Centos 5).
>
> I pulled that down into the MTT trunk; Mike, could you verify that it works
> for you?
>
>
>
> On Oct 28, 2008, at 8:30 AM, Jeff Squyres (jsquyres) wrote:
>
>  Done!
>>
>> On Oct 28, 2008, at 2:06 AM, Mike Dubman wrote:
>>
>> >
>> > Hey Jeff,
>> >
>> > I have no svn permissions to commit. Can you please provide me with
>> > one? (login: miked)
>> > Thanks
>> >
>> > On Mon, Oct 27, 2008 at 4:38 PM, Jeff Squyres <jsquy...@cisco.com>
>> > wrote:
>> > Aside from the 2 space tabs, looks great.  ;-)
>> >
>> > Go ahead and commit; I'll send the whatami patch upstream (whatami
>> > is maintained by Brian Finley at Argonne National Labs).
>> >
>> >
>> >
>> > On Oct 26, 2008, at 10:14 AM, Mike Dubman wrote:
>> >
>> >
>> > Hello guys,
>> >
>> > Please consider applying attached mtt patch to allow following
>> > features:
>> >
>> >• Support for centos5
>> >• Send a single, digested email report for all completed tests
>> > (similar to the text summary file)
>> >• Provide basic statistics in the digested email about
>> > completed tests (similar to junit): duration, mpi version, overall
>> > status.
>> >
>> > Thanks
>> >
>> > Mike
>> > 
>> >
>> > ___
>> > mtt-devel mailing list
>> > mtt-de...@open-mpi.org
>> > http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>> >
>> >
>> > --
>> > Jeff Squyres
>> > Cisco Systems
>> >
>> >
>> >
>> > ___
>> > mtt-devel mailing list
>> > mtt-de...@open-mpi.org
>> > http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>> >
>> > ___
>> > mtt-devel mailing list
>> > mtt-de...@open-mpi.org
>> > http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>>
>>
>> --
>> Jeff Squyres
>> Cisco Systems
>>
>>
>> ___
>> mtt-devel mailing list
>> mtt-de...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>>
>
>
> --
> Jeff Squyres
> Cisco Systems
>
>
> ___
> mtt-devel mailing list
> mtt-de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>