On Thu, Jul 13, 2017 at 11:29:10AM -0700, Junio C Hamano wrote:

> > So then I think your config file primarily becomes about defining the
> > properties of each run. I'm not sure if it would look like what you're
> > starting on here or not.
> 
> Yeah, I suspect that the final shape that defines the matrix might
> have to become quite a bit different.

I think it would help if the perf code was split better into three
distinct bits:

  1. A data-store capable of storing the run tuples along with their
     outcomes for each test.

  2. A "run" front-end that runs various profiles (based on config,
     command-line options, etc) and writes the results to the data
     store.

  3. A flexible viewer which can slice and dice the contents of the data
     store according to different parameters.
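The three-part split above could be sketched roughly as follows. This is a hypothetical illustration, not the actual t/perf code; the names (RunStore, run_profile, view) and the (tree, repo, test) tuple shape are assumptions:

```python
# Hypothetical sketch of the store / runner / viewer split.
# All names and the run-tuple shape are illustrative assumptions.

class RunStore:
    """1. Data store keyed on the full run tuple, not a single property."""
    def __init__(self):
        self.results = {}  # (tree, repo, test) -> seconds

    def record(self, tree, repo, test, seconds):
        self.results[(tree, repo, test)] = seconds

    def query(self, **filters):
        # Slice on any subset of the tuple's fields.
        fields = ("tree", "repo", "test")
        out = {}
        for key, seconds in self.results.items():
            row = dict(zip(fields, key))
            if all(row[f] == v for f, v in filters.items()):
                out[key] = seconds
        return out


def run_profile(store, profile, timer):
    """2. "Run" front-end: execute each run tuple and record the outcome."""
    for tree, repo, test in profile:
        store.record(tree, repo, test, timer(tree, repo, test))


def view(store, **filters):
    """3. Viewer: slice and dice stored results without re-running."""
    return sorted(store.query(**filters).items())
```

The point of the sketch is that the viewer only ever touches the store, so re-slicing old results along a different axis (per-repo, per-tree, per-test) is a pure read.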

We're almost there now. The "run" script actually does store results,
and you can view them via "aggregate.perl" without actually re-running
the tests. But the data store indexes on only one property: the tree
that was tested (all of the other properties are ignored entirely; you
can get some quite confusing results if you do a "./run" using, say,
git.git as your test repo, and then a follow-up with "linux.git").
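That confusion can be boiled down to a toy example. The trees, repos, and timings below are made up; the point is just that a store keyed on the tree alone silently overwrites results from a different repo, while a composite key keeps them distinct:

```python
# Made-up numbers; only the indexing behavior is the point.

# Keyed on the tree alone: a second run against a different repo
# clobbers the first, so later comparisons mix apples and oranges.
single_key = {}
single_key["v2.13.0"] = {"repo": "git.git", "seconds": 1.2}
single_key["v2.13.0"] = {"repo": "linux.git", "seconds": 45.0}  # overwrites

# Keyed on the full (tree, repo) tuple: both runs survive.
full_key = {}
full_key[("v2.13.0", "git.git")] = 1.2
full_key[("v2.13.0", "linux.git")] = 45.0
```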

I have to imagine that somebody else has written such a system already
that we could reuse.  I don't know of one off-hand, but this is also not
an area where I've spent a lot of time.

We're sort of drifting off topic from Christian's patches here. But if
we did have a third-party system, I suspect the interesting work would
be setting up profiles for the "run" tool to kick off. And we might be
stuck in such a case using whatever format the tool prefers. So having a
sense of what the final solution looks like might help us know whether
it makes sense to introduce a custom config format here.

-Peff
