On Wed, Jan 23, 2013 at 10:03 PM, Matthew Knepley <knepley at gmail.com> wrote:
> I am torn here. I would bet serious money that I can write a parser for
> our current numerical output in 1/10 of the time it takes to write new
> output/parsers and set up all the associated infrastructure (databases).
> If all we ever do is compare, we should just write the parser. Can you
> think of any other value that would come from JSON test output?

petscplot does this parsing in a relatively extensible way:

https://github.com/jedbrown/petscplot/wiki/PETSc-Plot

The problem is that when users inject their own diagnostic output, we don't
have a reliable way to identify ours. If we had a mechanism by which the
output from each object was reliably distinguished, it would be easier to
build tools that consume the output. In addition to plotting and error
checking, this is relevant for live monitoring and batch analysis.

I currently do batch analysis with multiple passes of grep and awk (the
effort of properly parsing every incidental thing is too high for the first
pass), but it's less precise than I'd like.
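To make the multi-pass approach concrete, here is a minimal Python sketch of
the kind of first-pass extraction meant above. The residual lines it matches
are the real format printed by -ksp_monitor; the file name and the choice to
discard everything else are assumptions for illustration.

    import re

    # Matches lines like "  0 KSP Residual norm 1.175755216213e+00",
    # as printed by -ksp_monitor.
    ksp_re = re.compile(r'^\s*(\d+) KSP Residual norm ([0-9.eE+-]+)')

    def residual_history(path):
        """First pass: extract the KSP residual history, skip everything else."""
        history = []
        with open(path) as f:
            for line in f:
                m = ksp_re.match(line)
                if m:
                    history.append((int(m.group(1)), float(m.group(2))))
        return history

    # Usage, with run.log a hypothetical captured log:
    #   for it, rnorm in residual_history('run.log'):
    #       print(it, rnorm)

The fragility is exactly the problem described above: if a user prints a line
that happens to match, this pass silently picks it up.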
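For contrast, a minimal sketch of what reliably distinguished output could
look like: each object emits one JSON record per line behind a fixed
sentinel, and consumers filter on it. The sentinel string and the record
fields are hypothetical; nothing like this exists in PETSc today.

    import json

    SENTINEL = '#PETSc# '  # hypothetical prefix marking machine-readable records

    def records(path):
        """Yield only the tagged records, ignoring user diagnostic output."""
        with open(path) as f:
            for line in f:
                if line.startswith(SENTINEL):
                    yield json.loads(line[len(SENTINEL):])

    # A log mixing user output with hypothetical tagged records:
    #   starting my app...
    #   #PETSc# {"object": "ksp_0", "event": "monitor", "it": 0, "rnorm": 1.18}

Plotting, error checking, live monitoring, and batch analysis could then all
key on the "object" and "event" fields instead of guessing which lines are
ours.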
