On Tue, Aug 9, 2011 at 8:11 PM, Filipe David Manana <[email protected]> wrote:
> On Mon, Aug 8, 2011 at 11:43 PM, Paul Davis <[email protected]> wrote:
>> The entire test suite takes about 4 minutes to run on a semi-recent
>> MBA. Most of the tests are fairly speedy, but four of them stick out
>> quite drastically:
>>
>> delayed_commits: 15s
>> design_docs: 25s
>> replication: 90s
>> replicator_db: 60s
>
> The replication.js test grew a lot after the new replicator was
> introduced. Basically it covers a lot more scenarios than the old
> replication.js test, and tests with larger amounts of documents and
> continuous replications.
> I think this is a good thing and inevitable (due to bug fixes, new
> features, etc).
>
> The replicator_db.js test does several server restart calls, which are
> necessary to test this feature.
>
> After Jan's patch to add a "verify installation" feature to Futon, I
> don't think individual tests taking 1, 2 or 5 minutes are an issue, as
> long as they succeed.
> For a database management system, having much more comprehensive tests
> (which means they take longer to run) is a good thing.
>
> I agree with everything said in this thread.
>
I only mention the replication tests specifically because it seems like
they spend a lot of time polling database info objects, and the logs fly
by without any other log messages. I was mostly wondering if this was
related to a gen_server timeout or commit_after message.

On the other hand, we should also probably start thinking about
hierarchical testing schemes. replication.js is over 1.5K LOC, which
seems awfully heavy for a single test. These sorts of things will help
when we want to run certain parts of the suite continuously while
hacking and then run the full thing before committing.

Also, I think you're spot on about Jan's patch. We should turn Futon's
tests into a "your node is functioning" suite and move the test suite to
the CLI so we can be much more specific in our testing.

And randomly it occurs to me that maybe we should re-evaluate our use of
init:restart during testing. I know it gives us a clean slate, but
perhaps having a "randomize test order" option would be more useful for
detecting failures that are non-obvious. Granted, that introduces
obvious difficulties with incompatible tests (i.e., things that test
behavior for multiple values of a specific config setting).

>>
>> I haven't dug into these too much yet. The only thing I've noticed is
>> that replication and replicator_db seem to spend a lot of their time
>> polling various URLs waiting for task completion. Perhaps I'm just
>> being impatient, but it seems that each poll lasts an unnecessarily
>> long time for a unit test (3-5s), so I was wondering if we were
>> hitting a timeout or something.
>>
>> If anyone wants to dig into the test suite and see if they can't speed
>> these up, or even just split them apart so we know what's taking a
>> while, that'd be super awesome.
>>
>> Also, I've been thinking more and more about beefing up the JavaScript
>> test suite runner and moving more of our browser tests over to
>> dedicated code in those tests.
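On the polling point: even before we find the underlying timeout, we could cap the waits. Here's a rough sketch of a generic poll-with-deadline helper, written in modern async style for illustration; the names here are mine and not from the current suite:

```javascript
// Hypothetical helper, not existing suite code: poll a condition on a short
// fixed interval instead of sleeping for seconds between checks, and bail
// out after a hard deadline so a hung task fails fast and loudly.
function waitFor(check, opts) {
  opts = opts || {};
  var interval = opts.interval || 100;   // ms between polls (assumed default)
  var timeout = opts.timeout || 30000;   // overall deadline in ms
  var start = Date.now();
  return new Promise(function (resolve, reject) {
    (function poll() {
      var result = check();              // truthy result means we're done
      if (result) return resolve(result);
      if (Date.now() - start > timeout) {
        return reject(new Error("waitFor: timed out after " + timeout + "ms"));
      }
      setTimeout(poll, interval);        // re-check on a short interval
    })();
  });
}
```

Polling every 100ms with an explicit deadline would turn "each check stalls 3-5s" into either a quick success or a clear timeout error per task.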
>> If anyone's interested in hacking on some C and JavaScript against an
>> HTTP API, let me know.
>>
>
> --
> Filipe David Manana,
> [email protected], [email protected]
>
> "Reasonable men adapt themselves to the world.
> Unreasonable men adapt the world to themselves.
> That's why all progress depends on unreasonable men."
>
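P.S. To make the "randomize test order" idea concrete, here's a rough sketch of a seeded shuffle, so a failing order can be replayed. This is a hypothetical helper, not existing suite code:

```javascript
// Hypothetical sketch: shuffle the test list with a seeded Fisher-Yates so
// runs are reproducible. The runner would print the seed at startup, and a
// failing ordering could be replayed by passing the same seed back in.
function shuffledTests(tests, seed) {
  // Tiny linear congruential generator; plenty for ordering tests.
  var state = seed >>> 0;
  function rand() {
    state = (state * 1664525 + 1013904223) >>> 0;
    return state / 4294967296;           // uniform-ish float in [0, 1)
  }
  var order = tests.slice();             // don't mutate the caller's list
  for (var i = order.length - 1; i > 0; i--) {
    var j = Math.floor(rand() * (i + 1));
    var tmp = order[i]; order[i] = order[j]; order[j] = tmp;
  }
  return order;
}
```

Incompatible tests (the multiple-config-values case above) would still need some way to opt out or pin their relative order, but reproducibility at least makes the random failures debuggable.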
