Here is something I feel is important, and I wanted to see what you think.

Problem definition: Cordova is somewhat unique in that it not only has to run on a large number of permutations of device models and device OSes, but its testing is started manually by a large number of people using devices from their desk drawers; it is very decentralized. My fear is that there are invisible holes and duplications in our testing at the end of a release, so we are releasing code for environments we didn't test but perhaps could have. And are we really running both the automatic and manual tests in mobile-spec, or just the automatic tests? More generally, as we evaluate release readiness we don't have good visibility into which environments the tests ran on, or into which tests failed, beyond a few comments on the dev mailing list.

Proposed solution: mobile-spec is a reasonably good test suite, but the test results never leave the device. The suggestion is to add a "Submit test results" button throughout mobile-spec (at each place where test results are generated) as an opt-in mechanism to get the pass/fail results off the device. The submission would include a bit of metadata along with the test results, such as the device make and model, OS version, Cordova version, etc., and would get posted to a centralized db somewhere. A web-based query could then be run against the db to list the submitted results, with the goal of understanding which environments have been tested for a particular Cordova version, which tests ran, and which tests failed or succeeded.
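
To make that concrete, here is a minimal sketch of what a submission handler might look like. This is just an illustration, not a proposed wire protocol: the payload shape, the `results` object, and the endpoint URL are all hypothetical placeholders. The metadata fields come from the standard Cordova device plugin, which mobile-spec already exercises.

    // Sketch: gather device/Cordova metadata plus pass/fail results
    // and POST them to a (hypothetical) central collection endpoint.
    function submitResults(results) {
        var payload = {
            // Metadata from the Cordova device plugin
            platform: device.platform,        // e.g. "Android"
            osVersion: device.version,        // e.g. "4.2.2"
            model: device.model,
            cordovaVersion: device.cordova,
            // Pass/fail data from the test run; the shape of
            // `results` here is assumed, not mobile-spec's actual API
            passed: results.passed,
            failed: results.failed,
            failures: results.failureMessages
        };
        var xhr = new XMLHttpRequest();
        // Placeholder URL; where the db lives is an open question
        xhr.open("POST", "https://example.org/cordova-test-results");
        xhr.setRequestHeader("Content-Type", "application/json");
        xhr.send(JSON.stringify(payload));
    }

The "Submit test results" button would call something like this only on an explicit tap, which is what makes the mechanism opt-in.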

It should be opt-in instead of opt-out because we generally don't want test results from a developer who is working on a new feature in the middle of a dev cycle. The intent here is to capture results from RCs and similar builds as we approach the end of a release; it's about evaluating release readiness. Maybe in the long term we want to evaluate release readiness all the way through a release, but that's not the short-term intention here.

I love the vision of continuous integration testing, and see this proposal as a first step towards it.

Start shooting holes: reply with what you think. Before we dive into where the db should be hosted and a wire protocol for the submission, at a high level, do you agree with the problem definition, and do you have any other ideas or comments on a solution? And how urgent do you feel this is? Thanks!

-- Marcel Kinard
