Re: [Ledger-smb-devel] Deciding on a default company setup for BDD tests

2016-01-23 Thread John Locke

Hi,

On 01/23/2016 08:12 AM, Erik Huelsmann wrote:

Hi,


What we're really talking about here is how to set up test data --
whether we ship a test database already containing data our tests rely
upon, or have those dependencies created when running the tests.

I pretty strongly advocate the latter -- create the configurations/data
we are testing for at the start of a test run, if they don't already
exist. And make it safe to re-run a test on the same database.


This might be a bit of extra effort to achieve: Since we can't remove 
some data in the database (e.g. transaction deletion is an absolute 
no-no), it might not always be possible to re-run a test.


What I have in mind is along the lines of "orders that get created get 
closed", "invoices that get created get fully paid", that sort of thing. 
So when your test expects to see one open invoice, it doesn't then see 
two the next time.
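
Something like this hypothetical Gherkin, as a sketch (the step wording
is illustrative only, none of these step definitions exist yet):

Scenario: an invoice created by the test is settled before the test ends
  Given the customer "BDD Test Customer" exists
  When I post a sales invoice of 100.00 for "BDD Test Customer"
  Then "BDD Test Customer" should have 1 open invoice
  When I record a payment of 100.00 against that invoice
  Then "BDD Test Customer" should have 0 open invoices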


I think it's reasonable to say that running tests on a production 
database will change your overall balances (i.e. don't do that!), but I 
find that during testing, especially when trying to resolve a thorny 
issue I don't understand, there are lots of small, incremental changes. 
I don't want to have to wipe and reload the database every time, 
especially when I don't get it right the first time.



I think the main point here is that for a lot of the setup steps, the 
step definitions check whether the data already exists before creating 
it -- particularly things like test accounts, test customers, test 
parts, test warehouses, etc.
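
For instance, a dependent feature's background might read something like
the sketch below (names and step wording are made up for illustration);
each "... exists" step definition would create the record only if it is
missing, and otherwise succeed without changing anything:

Background:
  Given the account "2410 - COGS - parts" exists
  And the customer "BDD Test Customer" exists
  And the part "BDD-PART-1" exists
  And the warehouse "Main warehouse" exists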


And this will need to be split out into features -- e.g.:

Feature: create a customer and vendor

-- this feature should test the interface for creating customers and 
vendors, and should not rely upon steps to set these up in the 
background, because it is testing that interface. At the end, it should 
delete the customers and vendors it created. (hmm, not sure that is 
possible... maybe set the end date for the customer to the past?)
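
Purely as an illustration (the step phrasings are hypothetical, and the
end-dating at the end is the open question just mentioned), the feature
could look something like:

Feature: create a customer and vendor

  Scenario: add a customer through the UI
    When I create a customer named "BDD Test Customer"
    Then I should see "BDD Test Customer" in the customer search results
    # cleanup for re-runs -- assuming end-dating is how we retire it
    When I set the end date of customer "BDD Test Customer" to a past date
    Then "BDD Test Customer" should not appear in the default customer search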


Feature: create parts/services

-- this feature tests the interface for adding/editing parts. In its 
background steps it creates the appropriate income/COGS accounts that 
will be used. The setup steps for the background create the accounts if 
they do not exist, and succeed without changing anything if they do 
exist -- for example:


Background:
  Given accounts:
    | accno | name         | flags           |
    | 2410  | COGS - parts | AR_paid,AP_paid |

(or whatever)...

At the end of the feature, mark all created parts obsolete, so the next 
test run can re-insert parts with the same SKUs, etc.
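
Putting the background and that cleanup together, a rough sketch of the
whole feature (again, step wording is illustrative only) could be:

Feature: create parts/services

  Background:
    Given accounts:
      | accno | name         | flags           |
      | 2410  | COGS - parts | AR_paid,AP_paid |

  Scenario: add a part and retire it afterwards
    When I create a part numbered "BDD-PART-1" described as "BDD test part"
    Then I should see "BDD-PART-1" in the goods and services search
    # cleanup for re-runs: mark it obsolete so the number can be entered again
    When I mark the part "BDD-PART-1" as obsolete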



Feature: Create sales orders

-- this feature would put the parts and customers it uses into the 
background section, using steps that populate parts, accounts, and 
customers as before -- create them if they don't exist, pass without 
changing anything if they do exist.
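
Again as a hypothetical sketch (none of these steps exist yet):

Feature: Create sales orders

  Background:
    Given the customer "BDD Test Customer" exists
    And the part "BDD-PART-1" exists

  Scenario: enter and close a sales order
    When I create a sales order for "BDD Test Customer" with 1 x "BDD-PART-1"
    Then the order should be listed as an open sales order
    # cleanup for re-runs, in the spirit of "orders that get created get closed"
    When I close the sales order
    Then the order should no longer be listed as an open sales order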




In other words, I'm proposing that each feature tests one module (or 
workflow), and uses background steps to provide the necessary supporting 
data. And it should be possible to run each feature multiple times in 
the same database -- whatever a feature actually tests should be cleaned 
up sufficiently that it can run again without throwing errors/failures, 
while the supporting data used in each feature is allowed to persist for 
future runs.


And each of those background data steps needs to have its own feature to 
test that the interface works correctly -- and these features do need to 
clean up for future runs...



I don't mind cleaning up test data if a test fails in development, but
as long as tests are completing, they should be able to be run multiple
times on the same db.


Well, if we clean up behind successfully run tests, that could also 
mean we simply delete the test databases in the cluster. Then, we can 
run the same tests again and again on the given cluster. I'm thinking 
we will eventually need different databases, because we need different 
company set-ups to test all available features. However, to start, we 
need a setup with a CoA, accounts and some data, with which we can get 
an acceptable testing scope in place.


I think this kind of testing reaches the limits of BDD. We're not going 
to be able to verify through BDD that the math is handled correctly at 
every phase, on copies of different databases.


We have unit tests for testing individual module functionality, and BDD 
is good for user interface testing... We might need another layer for 
the business logic testing -- integration testing... For those kinds of 
tests, having a clean/well-known starting point for the database seems 
necessary.


Cheers,
John Locke
http://www.freelock.com




Re: [Ledger-smb-devel] Deciding on a default company setup for BDD tests

2016-01-23 Thread David G

  
  
Hi,

On 24/01/16 05:51, John Locke wrote:


  
And this will need to be split out into features -- e.g.:

Feature: create a customer and vendor

-- this feature should test the interface for creating customers and
vendors, and should not rely upon steps to set these up in the
background, because it is testing that interface. At the end, it should
delete the customers and vendors it created. (hmm, not sure that is
possible... maybe set the end date for the customer to the past?)
  

OK, so you have set the end date to the past; you then re-run the test,
which will either get skipped because the customer already exists, or
fail due to an error creating the customer.
Either way you can't re-run the test (and possibly others that expect
that customer to exist) on the same database.
I think Erik's suggestion that we simply drop the DB and re-clone
before running a set of tests is the most reliable option here.

Feature: create parts/services

-- this feature tests the interface for adding/editing parts. In its
background steps it creates the appropriate income/COGS accounts that
will be used. The setup steps for the background create the accounts if
they do not exist, and succeed without changing anything if they do
exist -- for example:

Background:
  Given accounts:
    | accno | name         | flags           |
    | 2410  | COGS - parts | AR_paid,AP_paid |

(or whatever)...

At the end of the feature, mark all created parts obsolete, so the next
test run can re-insert parts with the same SKUs, etc.

I haven't tried this, but I would expect it to subtly change the
process even if it is just a case of needing a single checkbox in a
different state.
Surely this makes the integrity of the tests more difficult to
manage?
Aside from the fact that I don't see any way of then testing the
"create account" step more than once, unless you are going to use a
random account number/name generator.
 
  
Feature: Create sales orders

-- this feature would put the parts and customers it uses into the
background section, using steps that populate parts, accounts, and
customers as before -- create them if they don't exist, pass without
changing anything if they do exist.
  
  
  
In other words, I'm proposing that each feature tests one module (or
workflow), and uses background steps to provide the necessary supporting
data. And it should be possible to run each feature multiple times in
the same database -- whatever a feature actually tests should be cleaned
up sufficiently that it can run again without throwing errors/failures,
while the supporting data used in each feature is allowed to persist for
future runs.