Robert Collins <[email protected]> writes: > FWIW I want to put a testr front-end on all of this, to do fault > correlation across test runs; this could be implemented a number of > different ways, but I think the key thing is that we may want to stage > things in a couple of different formats (raw, preprocessed for > correlation, preprocessed for humans).
Cool, thanks for the input.  Do you have a notion as to whether this
would be better served by an app that can do the processing as it's
being served (from the original data), or do you think this is more
amenable to pre-processing or batch processing?  Also, would logstash
or elasticsearch be useful?

> We should do as little processing of logs on the workers as possible,
> because log processing doesn't add value to the pass/fail nature of
> the test - and the sooner we free up the node, the sooner we can be
> spawning another test.

I agree with that, but there are a few things in the works that may
alter our thinking on that.  We want to start following the subunit
stream so that we can report failures back to Zuul faster (so that it
can reset).  The general idea of decoupling results from processing
may be useful to apply here.

However, since we supply links to logs in the reports back to Gerrit,
it is important that they be there at that time, so some processing
may be appropriate to do on the job nodes themselves.

-Jim

_______________________________________________
OpenStack-Infra mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
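[Editor's note: the "follow the subunit stream and report failures to Zuul early" idea above could be sketched roughly as below. This is a hypothetical illustration only: real subunit is a binary protocol (handled by the python-subunit library), whereas this sketch assumes an invented, simplified line-based form like "test: <id>" / "failure: <id>", and the function name `first_failure` is made up for the example.]

```python
# Hypothetical sketch of fail-fast stream following.  We assume a
# simplified text form of the result stream ("test: <id>", then
# "success: <id>" or "failure: <id>") purely for illustration; the
# real subunit protocol is binary and parsed with python-subunit.

def first_failure(lines):
    """Scan an iterable of stream lines as they arrive; return the id
    of the first failing test, or None if the run is clean."""
    for line in lines:
        tag, _, test_id = line.partition(": ")
        if tag == "failure":
            # Fail fast: a caller could notify Zuul here so it can
            # reset the gate queue without waiting for the full run.
            return test_id.strip()
    return None

stream = [
    "test: tempest.api.compute.test_servers",
    "success: tempest.api.compute.test_servers",
    "test: tempest.api.network.test_ports",
    "failure: tempest.api.network.test_ports",
    "test: tempest.api.volume.test_snapshots",  # would never be waited on
]
print(first_failure(stream))  # -> tempest.api.network.test_ports
```

Because the scan stops at the first failure, the node running the job is freed (and Zuul notified) as early as possible, which is the point of decoupling pass/fail reporting from the heavier log processing discussed above.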
