We shouldn't call this 'testing'.

If anything, it is 'exercising' the application.

As far as Gradle is concerned, I don't understand this thread.
If there is any code that should be called without dependencies,
one can simply wrap it in a task that has no dependencies and call that task.
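
Something like this, for example (task name, main class, and source set are
made-up placeholders, and this assumes the java plugin with classes already built):

```groovy
// Sketch only: a standalone task with no dependsOn declarations,
// so invoking it runs nothing but this one action.
task exerciseApp << {
    javaexec {
        main = 'com.example.AppExerciser'              // hypothetical entry point
        classpath = sourceSets.main.runtimeClasspath   // assumes the java plugin
    }
}
```

Invoked as `gradle exerciseApp`, nothing else runs unless explicitly requested.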

Dierk


On 28/12/2010, at 9:29 AM, [email protected] wrote:

> Just another case...
> 
> Functionally testing ajax-heavy sites (with Geb of course), or any 
> concurrent-type code. With the ajax case, I find that I don't have much faith 
> that the *tests* are good until I have run them a few times to get the timeouts right.
> 
> On 27/12/2010, at 6:47 AM, Adam Murdoch <[email protected]> wrote:
> 
>> 
>> On 24/12/2010, at 6:56 AM, Peter Niederwieser wrote:
>> 
>>> 
>>> 
>>> Adam Murdoch-3 wrote:
>>>> 
>>>>> Sometimes there are external factors.
>>>> 
>>>> I guess I was after something a bit more concrete.
>>>> 
>>> 
>>> A few examples that come to my mind:
>>> 
>>> - A test that generates random inputs (property-based testing, testing for
>>> deadlocks, etc.)
>>> - A test that reads inputs/outputs from an external Excel sheet
>>> - An integration test hitting a remote (test) database
>>> - An acceptance test that runs against a live website
>>> 
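>>> A sketch of the first case (the parser under test is made up; the point is
>>> logging the seed so a failing random run can be reproduced):
>>>
>>> ```groovy
>>> import org.junit.Test
>>>
>>> class RandomInputTest {
>>>     @Test
>>>     void handlesArbitraryInput() {
>>>         long seed = System.nanoTime()
>>>         println "seed: $seed"        // log the seed so failures are reproducible
>>>         def rnd = new Random(seed)
>>>         100.times {
>>>             int n = rnd.nextInt()
>>>             // MyParser is a hypothetical system under test
>>>             assert MyParser.parse(n.toString()).value == n
>>>         }
>>>     }
>>> }
>>> ```
>>>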
>> 
>> I like these examples. It feels like there are a few aspects here:
>> 
>> * Tests which use resources other than those on the classpath, such as the 
>> Excel sheet, database, or web site.
>> 
>> At the moment, we model these as inputs and outputs of the test task. And 
>> only for local files. But it might be interesting to model other sorts of 
>> resources - such as remote files, or database instances or web applications. 
>> Then, we can skip a given test if the resources it uses have not changed 
>> since last time the test was executed.
>> 
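>> For local files this already works today; a sketch (the file path is made up):
>>
>> ```groovy
>> test {
>>     // declare the spreadsheet as a task input; if it hasn't changed (and
>>     // nothing else has either), the test task is considered up-to-date
>>     inputs.file file('src/test/data/cases.xls')
>> }
>> ```
>>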
>> Not sure how useful this is for incremental testing. But it is useful for 
>> things such as test setup and tear down, where Gradle can make sure the 
>> resources are deployed before the test and cleaned up after the test. And 
>> for reusing the tests in different contexts. A test can declare a dependency 
>> on a particular web app, and Gradle can inject the right config into the 
>> test: an embedded deployment for a dev build, a test deployment in staging 
>> for the ci build, and the production web app as a smoke test for the 
>> deployment build.
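>>
>> Until such modelling exists, one way to approximate the injection today is a
>> project property passed through to the tests (property and URL are made up):
>>
>> ```groovy
>> test {
>>     // pick the target deployment per build; default to an embedded one
>>     systemProperty 'webapp.url',
>>         project.hasProperty('webappUrl') ? project.webappUrl : 'http://localhost:8080/app'
>> }
>> ```
>>
>> The ci build would then invoke `gradle test -PwebappUrl=...` with the staging URL.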
>> 
>> 
>> * Tests which have some non-deterministic element
>> 
>> Each test execution is a data point that increases confidence in the system 
>> under test. In a sense, all tests are like this to some degree.
>> 
>> To me, running the test task multiple times from the command-line to 
>> increase confidence is in itself a test case, and is probably worth 
>> capturing in the automated test suite somehow.
>> 
>> One option is to define this in the build: we might introduce the concept of 
>> a test suite, and allow you to specify things such as how many times a given 
>> test or suite should be executed for us to have confidence that the test has 
>> 'passed'. We might add other constraints too: this test must run repeatedly 
>> for 8 hours, this test must run on a machine with at least 2 physical cores, 
>> this test must run repeatedly at the rate of 30/minute, this test must run 
>> concurrently on at least 4 different machines, etc.
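>>
>> A purely hypothetical DSL sketch of such constraints (none of these names
>> exist in Gradle; this is only to make the idea concrete):
>>
>> ```groovy
>> testSuite('soak') {
>>     repeatFor hours(8)        // run repeatedly for 8 hours
>>     requires { cores >= 2 }   // only on machines with 2+ physical cores
>>     rate 30, 'minute'         // at a rate of 30 executions per minute
>> }
>> ```
>>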
>> 
>> These constraints would be useful for other use cases too, such as: this test 
>> must run under java 5, this test must run on a linux machine, this test must 
>> run on each of a windows, linux and mac machine, this test must run on a 
>> machine with oracle installed, this test must run against each of oracle, 
>> postgres, mysql, etc.
>> 
>> Regardless, I think it is an important point you make that tests are not 
>> always deterministic and may need to execute multiple times. We should 
>> capture this somehow.
>> 
>> 
>> * Tests which run at multiple points in the application lifecycle.
>> 
>> For example, some acceptance tests which run in the developer pre-commit 
>> build, the ci build, and the deployment build. You might inject a different 
>> web app at each stage, and you might add different constraints at each 
>> stage.
>> 
>> I wonder if you might also want different up-to-date strategies at each 
>> stage. For a dev build, I don't want to run a set of acceptance tests if I 
>> haven't changed the system under test since I last ran them (and they've 
>> passed according to whatever constraints have been specified). But, for a 
>> deployment build, I might want to specify that the acceptance tests must 
>> always be run.
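>>
>> The second behaviour can already be forced today (a sketch; the 'deployment'
>> property is a made-up convention):
>>
>> ```groovy
>> test {
>>     // for a deployment build, never consider the tests up-to-date
>>     outputs.upToDateWhen { !project.hasProperty('deployment') }
>> }
>> ```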
>> 
>> 
>> --
>> Adam Murdoch
>> Gradle Developer
>> http://www.gradle.org
>> CTO, Gradle Inc. - Gradle Training, Support, Consulting
>> http://www.gradle.biz
>> 
