On Feb 24, 2009, at 1:02 AM, Mike Dubman wrote:

> I'm looking for a way to get an automatic regression report at the end of an MTT run that includes a graph + table for bw/lat/2-way-bw for this specific run, as well as for previous runs on the same configuration.

Cool.

> The way we do it is by generating a dynamic query for the MTT test reporter at the end of the MTT run, fetching the HTML, extracting the .png graph files, and attaching them to the final MTT report.

> During this process we observe the following:
>
> The MTT database hosted at http://www.open-mpi.org/mtt/index.php behaves in a very inconsistent way:
>
> It works very slowly; sometimes it takes 5-10 minutes to get query results.

We probably should look at the typical bottlenecks these days. It used to be DB speed (e.g., our schema was not good). The schema's been tuned up to be pretty good these days, but sometimes there's still a mountain of data to plow through to find results. Possibilities for bottlenecks include:

- same old DB issues (e.g., the SQL queries just take a long time)
- PHP adding overhead
- the server itself being slow
- ...?

I'm not a DB expert; Josh spent a summer and came up with the current DB schema that we have now. Perhaps he would have some insight into these kinds of issues...?
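One way to narrow down whether the SQL queries themselves are the bottleneck (vs. PHP overhead or the server) is to run a suspect query under EXPLAIN ANALYZE directly in psql, bypassing PHP entirely. A rough sketch; the database, table, and column names below are hypothetical, not the actual MTT schema:

```shell
# Time a representative reporter query directly against PostgreSQL.
# "mtt", "perf_results", "latency_usec", and "start_timestamp" are
# made-up names for illustration; substitute the real schema.
psql -d mtt -c "
EXPLAIN ANALYZE
SELECT AVG(latency_usec)
FROM   perf_results
WHERE  start_timestamp > now() - interval '6 months';
"
# EXPLAIN ANALYZE prints the actual query plan with per-node timings.
# A sequential scan over months of data would point at missing indexes,
# e.g.:  CREATE INDEX ON perf_results (start_timestamp);
```

If the query is fast at the psql prompt but slow through the web UI, the problem is more likely in the PHP layer than in the schema.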

> We get many SQL errors when querying the performance results.

Ouch. That should not be happening. What kinds of errors? Do they stem from PHP, or directly from SQL?

> Sometimes we get no performance graphs for historic searches (queried by date range, like "past 6 months").

I wonder if PHP is hitting resource limits and therefore killing the job (PHP jobs are only allowed to run for so long and only allowed to use so much memory). I've seen that happen before.
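For reference, those PHP caps live in php.ini (or can be raised per-script). A config fragment with illustrative default values only; these are not necessarily what the MTT server actually uses:

```ini
; php.ini -- resource limits that can silently kill a long report query
max_execution_time = 30     ; seconds a script may run before being killed
memory_limit = 128M         ; per-script memory cap
```

Hitting either limit typically produces a truncated or empty page rather than a clean error, which would match the "no graphs" symptom.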

Should we allow direct postgres connections (across the internet) to specific OMPI organizations who want/need it?

> So I'm wondering: is anyone else using this feature (generating performance results for historic runs) for similar goals, and do they have a better experience with it or any recommendations?

We've toyed with it, but not tried to use it seriously. The data is all there in the DB, but I agree that the current UI/generation aspect of it could definitely use some improvements.

> Will it behave better if we create a local copy of the MTT database?

This could probably be done if you want; I think the entire database is many GB these days. If you want to develop some extension/query tools locally, we could probably ship a copy of some/all of it to you for convenience. Or you could just set up your own local Postgres database and populate it with some local data for development purposes. Either is possible.
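Shipping a copy could be as simple as a pg_dump on the server and a pg_restore on the receiving end. A sketch under assumed names (database "mtt", local copy "mtt_local"):

```shell
# On the server: dump the whole database in compressed custom format.
# "mtt" is an assumed database name.
pg_dump -Fc -f mtt.dump mtt

# Locally: create an empty database and restore the dump into it.
createdb mtt_local
pg_restore -d mtt_local mtt.dump
```

The custom (-Fc) format compresses well and lets pg_restore selectively restore individual tables, which matters if the full database really is many GB.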

> Can we connect to the MTT database hosted at www.open-mpi.org with SQL directly?

Heh; great minds think alike.  :-)
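If direct connections were opened up, the server side would need a pg_hba.conf entry per permitted organization, and clients could then use plain psql over the wire. A sketch; the hostname, role, and address range below are all hypothetical:

```shell
# Server side (pg_hba.conf): allow one org's subnet to reach the "mtt"
# database as a read-only role, password-authenticated.  The subnet and
# role name are made up for illustration.
#   host  mtt  mtt_reader  192.0.2.0/24  md5

# Client side: connect directly over the internet.
psql -h mtt.example.org -U mtt_reader -d mtt
```

A read-only role plus per-organization address filtering would limit the blast radius of exposing Postgres directly to the internet.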

> For how long are historic results kept in the MTT database?

So far, we haven't deleted anything (except possibly when we changed the DB schema in incompatible ways...? I don't remember clearly).

--
Jeff Squyres
Cisco Systems
