On Oct 31, 2011, at 2:11 PM, Robert Collins wrote:

> On Tue, Nov 1, 2011 at 2:27 AM, Gary Poster <gary.pos...@canonical.com> wrote:
>> I have a couple of concerns about that approach (which in fact is what I 
>> tried initially with yuixhr).
>> 
>> The first is that the main testprocess has no visibility of JS tests, or 
>> even of JS test cases.  It only has visibility into JS test suites as long 
>> as we follow the convention of one suite per file.
> 
> I don't quite understand that - the slave process is an appserver, not
> a test runner; it's the mechanized browser that is running js tests and
> knows about everything, isn't it?

Not really, no.  The slave process is the JS testrunner.  The JS tests run in 
the browser, integrated with fixtures that run in the appserver on the slave 
process, so that each JS test does its server-side setup and teardown via the 
(slave) appserver.  The main testrunner's "test" is in fact the entire suite of 
JS tests run through the slave process.  yuixhr cares about the appserver side.

> 
>> somehow.  The approaches that I imagined for this seemed unnecessarily 
>> tricky and hand-wavy, though, and I don't have any new ideas in this regard.
> 
> We can probably work something out; or we can build up the code
> muscles to let us do oops etc introspection in the js test code
> itself. As yet I don't know whether having such duplication is
> desirable (or not). We certainly have a rich testing environment in
> Python and being unable to leverage that would be a bit sad.

FWIW, we do leverage it--in the sub-process.  The yuixhr fixtures use 
factories, and they reset databases, and so on.
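
For reference, a yuixhr fixture looks roughly like this on the Python side 
(written from memory, so treat the decorator and helper names as an 
approximation of the lp.testing.yuixhr API rather than gospel):

# Approximate sketch of a yuixhr fixture module; exact names may differ.
from lp.testing.factory import LaunchpadObjectFactory
from lp.testing.yuixhr import setup  # assumed import path

factory = LaunchpadObjectFactory()  # assumption about how the factory is made

@setup
def example(request, data):
    # Server-side setup, invoked over XHR by the JS test before it runs.
    # The fixture has the full Python testing environment available, so
    # it can use the normal object factories.
    data['person'] = factory.makePerson()

@example.add_cleanup
def example(request, data):
    # Server-side teardown, invoked over XHR when the JS test finishes;
    # database resets happen on this side as well.
    pass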

> That said, using the stock processlayercontroller doesn't imply the
> parent test process knowing about individual js tests as it executes:
> as long as either side is willing to do a reset-in-place it's probably
> all good. That won't be too hard to arrange IMO.

I can guess at what you mean here, and yes, I suspect it would be doable.  More 
work than I wanted to take on just to get yuixhr running, but doable.

> 
>> The second concern is that I would like the interactive approach ("make 
>> run-testapp") and the non-interactive test-suite-integration approach to be 
>> as similar as possible so that it is easier to diagnose problems.  Indeed, 
>> one of the few differences between them (the test setup change caused by the 
>> INTERACTIVE_TESTS flag) added to the confusion in diagnosing this particular 
>> problem, and your fix was to eliminate that difference.
> 
> There is significant manual duplication between run-testapp and the
> main test runner as it stands -

"significant" is in the eye of the beholder, I suppose, but...

> we're going to have other cases of
> duplication as we bring more microservices online - gpgverifyd,
> mailbounced; jsoopsd etc.... I think there is a significant
> maintenance burden in that duplication and we should put significant
> resources into eliminating it.

...that's certainly a reasonable concern.  There are multiple ways of reducing 
the duplication, though, I suspect--reusable setup functions might work, for 
instance.
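
Just to illustrate what I have in mind (the module and function names below 
are invented for the example, not existing code):

# Hypothetical shared helper that both the "make run-testapp" entry point
# and the test-suite layer could call, so that the interactive and
# non-interactive paths stay identical.
def set_up_test_appserver(interactive=False):
    # These calls stand in for the real service-setup code that is
    # currently duplicated between the two entry points.
    start_rabbit()
    start_librarian()
    appserver = start_appserver()
    if interactive:
        print('testapp running at %s' % appserver.base_url)
    return appserver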

> 
>>> For now, I'm landing a workaround: remove the 'INTERACTIVE_TESTS' flag
>>> that was used to prevent rabbit starting - my code makes rabbit
>>> desirable always, and we're going forward on that, not backwards.
>> 
>> This sounds like a great direction for a solution.  I'll verify later that 
>> the parent process is not duplicating setup work.  (I plan to reinstate the 
>> flag itself in the Makefile so that I can use it for an additional 
>> interactive feature in one of my branches, but I won't let it affect the 
>> test setup.)
> 
> It certainly is currently duplicating effort when running a single
> test, because the layer chosen to run the slave appserver is one
> appropriate for running such appservers via the same code path that
> our other appserver-needing tests use.

It's duplicating some work, but not the full appserver setup--and it could do 
much less in the main process (which is what I meant when I said I would verify 
that the parent process is not duplicating setup work).  It might even be able 
to use the unit test layer at the top level; I'm not sure.
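
Concretely, I mean the choice of layer on the top-level suite test--continuing 
the ExampleJSSuiteTest sketch from above, something like this (the layer name 
is a loose placeholder; I'd have to check which one actually suffices):

class ExampleJSSuiteTest(unittest.TestCase):
    # The slave process does the real appserver setup itself, so the
    # parent might only need to spawn the slave and collect results; a
    # much lighter layer--maybe even the unit test one--could be enough.
    layer = BaseLayer  # assumption: whatever the lightest workable layer is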

> 
>> So, I'm happy with where we are now with this, and what you've done.  We can 
>> discuss longer-term plans later, if you'd like to, but I think what you've 
>> done as a workaround is the right way forward.
> 
> Yes, I would like to discuss longer term plans, later :).

Heh, ok, cool.

Gary


_______________________________________________
Mailing list: https://launchpad.net/~launchpad-dev
Post to     : launchpad-dev@lists.launchpad.net
Unsubscribe : https://launchpad.net/~launchpad-dev
More help   : https://help.launchpad.net/ListHelp
