Re: [Ledger-smb-devel] Deciding on a default company setup for BDD tests

2016-02-07 Thread Chris Travers
On Sun, Jan 24, 2016 at 7:24 PM, Michael Richardson wrote:

> David G  wrote:
> > This was sort of my point too; I don't think it is worth the extra effort
> > to try and clean up the DB so tests can be re-run. Just drop the db and
> > re-clone it before rerunning the test. You don't want to drop it after
> > running the tests in case you need to manually verify something, hence the
> > suggestion to use a known naming scheme so it is obvious what a db is for.
>
> It would be ideal if we could run the tests in a transaction, and then just
> roll it back.  That's what Rails and Django do.
>
> I wonder if we could use some other PostgreSQL magic here... for instance,
> maybe the new feature that makes the database hide anything that isn't
> between the valid time stamps. (I learnt of this at PGCon; I can't find the
> feature at the moment.)
>
> If not, maybe:
>     CREATE DATABASE newdb WITH TEMPLATE originaldb OWNER dbuser;
>
> would make it nice and fast to run between test cases... getting the test
> cases to run really fast is pretty important, and I don't think going behind
> the application's back to clean it is unreasonable.
>

I think this is a great idea.  I also think our lsmb_test_db template
should be set up for all tests, i.e. it should contain sufficiently realistic
data to provide a starting point for all testing scenarios.  Right now we
set up the database for specific test cases, such as reconciliation.  We
should probably also include the test result table.

I don't think that would be hard at present.  I think I could merge all
the special test setup scripts and adjust the test cases pretty quickly.

Any thoughts beyond adding the data we currently have?
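
As a rough illustration of the mechanics (sketched in Python with psycopg2
purely for discussion -- the real harness would presumably live alongside the
Perl test scaffolding, and the database names and credentials below are
placeholders, not anything we have agreed on):

    # Hedged sketch: clone the proposed lsmb_test_db template before a BDD
    # run and drop the clone afterwards (or keep it for manual inspection).
    import psycopg2

    def _admin_conn():
        # CREATE/DROP DATABASE cannot run inside a transaction block, so use
        # an autocommit connection to the maintenance database.
        conn = psycopg2.connect(dbname="postgres", user="postgres")
        conn.autocommit = True
        return conn

    def clone_template(template="lsmb_test_db", clone="lsmb_test_db_run"):
        """Create a throwaway copy of the template database for one run."""
        conn = _admin_conn()
        try:
            cur = conn.cursor()
            cur.execute('DROP DATABASE IF EXISTS "{0}"'.format(clone))
            # Fails if any other session is still connected to the template.
            cur.execute('CREATE DATABASE "{0}" WITH TEMPLATE "{1}"'
                        .format(clone, template))
        finally:
            conn.close()
        return clone

    def drop_clone(clone):
        """Remove the per-run copy once the feature has finished."""
        conn = _admin_conn()
        try:
            conn.cursor().execute('DROP DATABASE IF EXISTS "{0}"'.format(clone))
        finally:
            conn.close()

The point being that the clone is cheap to create, so every feature (or every
run) can start from exactly the same known state.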




-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [Ledger-smb-devel] Deciding on a default company setup for BDD tests

2016-01-24 Thread Michael Richardson
David G  wrote:
> This was sort of my point too; I don't think it is worth the extra effort
> to try and clean up the DB so tests can be re-run. Just drop the db and
> re-clone it before rerunning the test. You don't want to drop it after
> running the tests in case you need to manually verify something, hence the
> suggestion to use a known naming scheme so it is obvious what a db is for.

It would be ideal if we could run the tests in a transaction, and then just
roll it back.  That's what Rails and Django do.
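
For reference, the pattern itself is trivial at the database level -- a rough
sketch in Python/psycopg2, with a made-up scratch table rather than any real
LedgerSMB schema. The catch is that it only works when the test owns the
connection, which is exactly why it doesn't transfer directly to browser-driven
tests that talk to the application over HTTP:

    import psycopg2

    conn = psycopg2.connect(dbname="lsmb_test_db")  # database name is a placeholder
    try:
        with conn.cursor() as cur:
            # Exercise whatever behaviour is under test; this scratch table is
            # illustrative only, not part of the actual LedgerSMB schema.
            cur.execute("CREATE TABLE scratch (id serial PRIMARY KEY, note text)")
            cur.execute("INSERT INTO scratch (note) VALUES (%s)", ("created by test",))
            cur.execute("SELECT count(*) FROM scratch")
            assert cur.fetchone()[0] == 1
    finally:
        conn.rollback()   # throw away everything the test changed
        conn.close()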

I wonder if we could use some other PostgreSQL magic here... for instance,
maybe the new feature that makes the database hide anything that isn't
between the valid time stamps. (I learnt of this at PGCon; I can't find the
feature at the moment.)

If not, maybe:
   CREATE DATABASE newdb WITH TEMPLATE originaldb OWNER dbuser;

would make it nice and fast to run between test cases... getting the test
cases to run really fast is pretty important, and I don't think going behind
the application's back to clean it is unreasonable.

--
]   Never tell me the odds! | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works| network architect  [
] m...@sandelman.ca  http://www.sandelman.ca/|   ruby on rails[




Re: [Ledger-smb-devel] Deciding on a default company setup for BDD tests

2016-01-23 Thread John Locke

Hi,

On 01/23/2016 08:12 AM, Erik Huelsmann wrote:
> Hi,
>
>> What we're really talking about here is how to set up test data --
>> whether we ship a test database already containing data our tests rely
>> upon, or have those dependencies created when running the tests.
>>
>> I pretty strongly advocate the latter -- create the configurations/data
>> we are testing for at the start of a test run, if they don't already
>> exist. And make it safe to re-run a test on the same database.
>
> This might be a bit of extra effort to achieve: since we can't remove
> some data in the database (e.g. transaction deletion is an absolute
> no-no), it might not always be possible to re-run a test.


What I have in mind is along the lines of "orders that get created get 
closed", "invoices that get created get fully paid", that sort of thing. 
So when your test expects to see one open invoice, it doesn't then see 
two the next time.


I think it's reasonable to say that running tests on a production
database will change your overall balances (i.e. don't do that!), but I
find that during testing, especially when trying to resolve a thorny
issue I don't understand, there are lots of small, iterative, incremental
changes. I don't want to have to wipe and reload the database every
time, especially when I don't get it right the first time.



I think the main point here is that for a lot of the setup steps, the
step definitions should check whether the data already exists before
creating it -- particularly things like test accounts, test customers,
test parts, test warehouses, etc.


And this will need to be split out into features -- e.g.:

Feature: create a customer and vendor

-- this feature should test the interface for creating customers and
vendors, and should not rely upon steps to set these up in the
background, because they are testing the interface. At the end, it should
delete the customers and vendors created. (Hmm, not sure this is
possible... maybe set the end date for the customer to the past?)


Feature: create parts/services

-- this feature tests the interface for adding/editing parts. In its
background steps it creates the appropriate income/COGS accounts that
will be used. The setup steps for the background create the accounts if
they do not exist, and succeed without changing anything if they do
exist -- for example:


Background:
  Given accounts:
    | accno | name         | flags           |
    | 2410  | COGS - parts | AR_paid,AP_paid |

(or whatever)...

At the end of the feature, mark all created parts obsolete, so the next
test run can re-insert with the same SKUs, etc.
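
A sketch of what such an idempotent background step could do under the hood
(Python/psycopg2 here just to show the shape -- the real step definitions are
Perl, and the raw INSERT below is a stand-in for however accounts actually get
created in LedgerSMB):

    import psycopg2

    def ensure_account(conn, accno, description):
        """Create a GL account only if no account with this accno exists yet;
        succeed without touching anything if it is already there."""
        with conn.cursor() as cur:
            cur.execute("SELECT 1 FROM account WHERE accno = %s", (accno,))
            if cur.fetchone():
                return False          # already present -- nothing to change
            # Illustrative only: real code would go through LedgerSMB's own
            # account-creation routines rather than a bare INSERT.
            cur.execute("INSERT INTO account (accno, description) VALUES (%s, %s)",
                        (accno, description))
        conn.commit()
        return True

    # Called once per row of the "Given accounts:" table, e.g.:
    # ensure_account(conn, "2410", "COGS - parts")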



Feature: Create sales orders:

-- this feature would put the parts and customers it uses into the 
background section, using steps that populate parts, accounts, and 
customers as before -- create them if they don't exist, pass without 
changing anything if they do exist.




In other words, I'm proposing that each feature tests one module (or 
workflow), and uses background steps to provide the necessary supporting 
data. And that it should be possible to run each feature multiple times 
in the same database -- what we're actually testing should be cleaned up 
sufficiently to actually run again without throwing errors/failures. But 
allow the supporting data used in each feature to persist for future runs.


And each of those background data steps needs to have its own feature to 
test that the interface works correctly -- and these features do need to 
clean up for future runs...



>> I don't mind cleaning up test data if a test fails in development, but
>> as long as tests are completing, they should be able to be run multiple
>> times on the same db.
>
> Well, if we clean up behind successfully run tests, that could also
> mean we simply delete the test databases in the cluster. Then, we can
> run the same tests again and again on the given cluster. I'm thinking
> we will eventually need different databases because we need different
> company set-ups to test all available features. However, to start, we
> need a setup with a CoA, accounts and some data, with which we can get
> an acceptable testing scope in place.


This kind of testing, I think, reaches the limits of BDD. We're not going
to be able to verify that the math is handled correctly through every
phase, on copies of different databases, through BDD.


We have unit tests for testing individual module functionality, and BDD
is good for user interface testing... We might need another layer for the
business logic testing -- integration testing. For those kinds of
tests, having a clean, well-known starting point for the database seems
necessary.


Cheers,
John Locke
http://www.freelock.com




Re: [Ledger-smb-devel] Deciding on a default company setup for BDD tests

2016-01-23 Thread David G

  
  
Hi,

On 24/01/16 05:51, John Locke wrote:
> Hi,
>
> On 01/23/2016 08:12 AM, Erik Huelsmann wrote:
>> Hi,
>>
>>> What we're really talking about here is how to set up test data --
>>> whether we ship a test database already containing data our tests rely
>>> upon, or have those dependencies created when running the tests.
>>>
>>> I pretty strongly advocate the latter -- create the configurations/data
>>> we are testing for at the start of a test run, if they don't already
>>> exist. And make it safe to re-run a test on the same database.
>>
>> This might be a bit of extra effort to achieve: since we can't remove
>> some data in the database (e.g. transaction deletion is an absolute
>> no-no), it might not always be possible to re-run a test.
>
> What I have in mind is along the lines of "orders that get created get
> closed", "invoices that get created get fully paid", that sort of thing.
> So when your test expects to see one open invoice, it doesn't then see
> two the next time.
>
> I think it's reasonable to say that running tests on a production
> database will change your overall balances (i.e. don't do that!), but I
> find that during testing, especially when trying to resolve a thorny
> issue I don't understand, there are lots of small, iterative, incremental
> changes. I don't want to have to wipe and reload the database every
> time, especially when I don't get it right the first time.
>
> I think the main point here is that for a lot of the setup steps, the
> step definitions should check whether the data already exists before
> creating it -- particularly things like test accounts, test customers,
> test parts, test warehouses, etc.
>
> And this will need to be split out into features -- e.g.:
>
> Feature: create a customer and vendor
>
> -- this feature should test the interface for creating customers and
> vendors, and should not rely upon steps to set these up in the
> background, because they are testing the interface. At the end, it should
> delete the customers and vendors created. (Hmm, not sure this is
> possible... maybe set the end date for the customer to the past?)
  

Ok, so you have set the end date to the past, you then re-run the
test, which will either get skipped because the customer already
exists, or fail due to an error creating the customer.
Either way you can't rerun the test (and possibly others that expect
that customer to exist) on the same database.
I think Erik's suggestion that we simply Drop the DB and reclone
before running a set of tests is the most reliable option here.

> Feature: create parts/services
>
> -- this feature tests the interface for adding/editing parts. In its
> background steps it creates the appropriate income/COGS accounts that
> will be used. The setup steps for the background create the accounts if
> they do not exist, and succeed without changing anything if they do
> exist -- for example:
>
> Background:
>   Given accounts:
>     | accno | name         | flags           |
>     | 2410  | COGS - parts | AR_paid,AP_paid |
>
> (or whatever)...
>
> At the end of the feature, mark all created parts obsolete, so the next
> test run can re-insert with the same SKUs, etc.

I haven't tried this, but I would expect it to subtly change the
process, even if it is just a case of needing a single checkbox in a
different state. Surely this makes the integrity of the tests more
difficult to manage? Aside from that, I don't see any way of then
testing the "create account" step more than once, unless you are going
to use a random account number/name generator.
 
  
  Feature: Create sales orders:
  
  -- this feature would put the parts and customers it uses into the
  background section, using steps that populate parts, accounts, and
  customers as before -- create them if they don't exist, pass
  without changing anything if they do exist.
  
  
  
  In other words, I'm proposing that each feature tests one module
  (or workflow), and uses background steps to provide the necessary
  supporting data. And that it should be possible to run each
  feature multiple times in the same database -- what we're actually
  testing should be cleaned up sufficiently to actually run again
  without 

Re: [Ledger-smb-devel] Deciding on a default company setup for BDD tests

2016-01-22 Thread John Locke
Hi,

On 01/21/2016 05:54 PM, David G wrote:
> Hi All,
>
> I agree with Michael's comments, with a couple of extra thoughts inline
> below.
>
> On 19/01/16 09:13, Michael Richardson wrote:
>> Erik Huelsmann  wrote:
>>
>>  > To start with the first and foremost question: do we want our tests to run
>>  > successfully on a copy of *any* company (as John stated he would like, on IRC)
>>  > or do we "design" the company setups we want to run our tests on, from
>>  > scratch, as I was aiming for? (Note that I wasn't aiming for regenerating all
>>  > setup data on each scenario or feature; I'm just talking about making sure we
>>  > *know* what's in the database -- we'd still run on a copy of a database set
>>  > up according to this plan).
>>
>> By *any* company, you mean, I could run it against (a copy of) my database?
>> I think that is not useful to focus on right now.
> I agree that it's probably not a good thing to focus on right now, but I
> think it would be worth keeping in mind so the tests aren't written to
> exclude this as a possibility.
> In the long run, rather than designing the tests to be run on a *live*
> database, I think that when they are run on a "non-test" database they
> should copy the DB to a new DB ${name}-bdd-test and run against the copy.
>
> I think this is a better long-term solution, as for many scenarios it may
> be impossible to properly remove entries from the database due to the
> Audit Features we have.

Drupal has a tremendous amount of variation between sites, and lots of 
configuration that ends up in the database. This certainly colors my 
perspective -- and that's why I think it's important to be able to run 
BDD tests on a copy of any production database.

I'm not sure that's the same for LedgerSMB -- but it would certainly 
help track down issues if people customize their database in ways we 
don't expect.

What we're really talking about here is how to set up test data -- 
whether we ship a test database already containing data our tests rely 
upon, or have those dependencies created when running the tests.

I pretty strongly advocate the latter -- create the configurations/data 
we are testing for at the start of a test run, if they don't already 
exist. And make it safe to re-run a test on the same database.

I don't mind cleaning up test data if a test fails in development, but 
as long as tests are completing, they should be able to be run multiple 
times on the same db.

>>  > Additionally, John and I were talking about supporting test infrastructure
>>  > and we agree that it would be tremendously helpful to be able to see
>>  > screenshots of failing scenarios and maybe to be able to see screenshots of
>>  > various points in non-failing tests too. Since Travis storage isn't
>>  > persistent, we were thinking that we'd need to collect all screenshots as
>>  > "build artifacts" and upload them into an AWS S3 account for inspection.
>>
>> Email to ticket system?
>> Or S3...
> Michael makes a really good point here.
> Perhaps the easiest way of capturing the screenshots is not to use S3,
> but to have a GitHub project (e.g. ledgersmb-bdd-results) that we can raise
> a ticket against for failing builds, with associated screenshots attached.
> At the same time we could use "git annex" to store all screenshots for a
> test in a new git branch (or simply a tag) in the
> ledgersmb-bdd-results project repository.
>
> Storing "good" results should probably only be done if a specific flag
> is passed in the PR commit message, while all screenshots (good and bad)
> should be stored if a single test fails.

However we store them, I suggest we at least store "good" results for
each release, especially the screenshots. This will allow version-on-version
comparisons, as well as giving you a place to go back to see "what
did this look like in version x?"

S3 storage seems to be built into many test runners like Travis; I'm
guessing that's the fastest/easiest to get up and running.

The Matrix project uses Jenkins as a test runner, and the runs are
public, so you can access artifacts just by visiting their Jenkins
instance, no logins necessary. Can Travis do the same?

Cheers,
John Locke
http://www.freelock.com



Re: [Ledger-smb-devel] Deciding on a default company setup for BDD tests

2016-01-21 Thread David G
Hi All,

I agree with Michael's comments, with a couple of extra thoughts inline
below.

On 19/01/16 09:13, Michael Richardson wrote:
> Erik Huelsmann  wrote:
> > Chris, John and I have been slowly working our way to creating infrastructure
> > on which we can base browser-based BDD tests. We had some problems with race
> > conditions between the HTML/JS renderer (PhantomJS) and the expectations
> > being tested in the test-driver (Selenium::Driver). However, these have been
> > fixed as of this morning.
>
> WOOHOO!
> Before PhantomJS became available, with the Firefox plugin, I found it best
> to run it all under Xnest or Xvnc, so that I could control the screen
> resolution. Otherwise, whether or not certain things displayed depended upon
> the size of the display.  With PhantomJS that shouldn't be an issue, I think.
>
> > Earlier today, I merged the first feature file (2 tests) to 'master'. 
> This
> > feature file does nothing more than just navigate to /setup.pl and 
> /login.pl
> > and verify that the credentials text boxes are displayed.
>
> > Now that we're able to create feature files and write step files (and 
> we know
> > what we need to do to prevent these race conditions), I'm thinking that 
> we
> > need to devise a generally applicable structure on how tests are 
> initialized,
> > torn down, cleanup takes place, etc.
>
> Yes.
>
> > John and I were talking how we'd like tests to clean up behind 
> themselves,
> > removing database objects that have been added in the testing process, 
> such
> > databases, (super/login) roles, etc...
>
> yes, also one might sometimes like to write the test to validate that the
> resulting database objects exist.
>
> I suggest a basic set of infrastructure, including logins, a few customers
> and some transactions.   Ideally, one would then start a transaction and open
> the HTTP port within the transaction...
>
> > To start with the first and foremost question: do we want our tests to 
> run
> > succesfully on a copy of *any* company (as John stated he would like, 
> on IRC)
> > or do we "design" the company setups we want to run our tests on, from
> > scratch, as I was aiming for? (Note that I wasn't aiming for 
> regenerating all
> > setup data on each scenario or feature; I'm just talking about making 
> sure we
> > *know* what's in the database -- we'd still run on a copy of a database 
> set
> > up according to this plan).
>
> By *any* company, you mean, I could run it against (a copy of) my database?
> I think that is not useful to focus on right now.
I agree that it's probably not a good thing to focus on right now, but I
think it would be worth keeping in mind so the tests aren't written to
exclude this as a possibility.
In the long run, rather than designing the tests to be run on a *live*
database, I think that when they are run on a "non-test" database they
should copy the DB to a new DB ${name}-bdd-test and run against the copy.

I think this is a better long-term solution, as for many scenarios it may
be impossible to properly remove entries from the database due to the
Audit Features we have.
>
> > Additionally, John and I were talking about supporting test infrastructure
> > and we agree that it would be tremendously helpful to be able to see
> > screenshots of failing scenarios and maybe to be able to see screenshots of
> > various points in non-failing tests too. Since Travis storage isn't
> > persistent, we were thinking that we'd need to collect all screenshots as
> > "build artifacts" and upload them into an AWS S3 account for inspection.
>
> Email to ticket system?
> Or S3...
Michael makes a really good point here.
Perhaps the easiest way of capturing the screenshots is not to use S3,
but to have a GitHub project (e.g. ledgersmb-bdd-results) that we can raise
a ticket against for failing builds, with associated screenshots attached.
At the same time we could use "git annex" to store all screenshots for a
test in a new git branch (or simply a tag) in the
ledgersmb-bdd-results project repository.

Storing "good" results should probably only be done if a specific flag
is passed in the PR commit message, while all screenshots (good and bad)
should be stored if a single test fails.

Re: [Ledger-smb-devel] Deciding on a default company setup for BDD tests

2016-01-18 Thread Michael Richardson
Erik Huelsmann  wrote:
> Chris, John and I have been slowly working our way to creating infrastructure
> on which we can base browser-based BDD tests. We had some problems with race
> conditions between the HTML/JS renderer (PhantomJS) and the expectations
> being tested in the test-driver (Selenium::Driver). However, these have been
> fixed as of this morning.

WOOHOO!
Before PhantomJS became available, with the Firefox plugin, I found it best
to run it all under Xnest or Xvnc, so that I could control the screen
resolution. Otherwise, whether or not certain things displayed depended upon
the size of the display.  With PhantomJS that shouldn't be an issue, I think.

> Earlier today, I merged the first feature file (2 tests) to 'master'. This
> feature file does nothing more than just navigate to /setup.pl and /login.pl
> and verify that the credentials text boxes are displayed.

> Now that we're able to create feature files and write step files (and we know
> what we need to do to prevent these race conditions), I'm thinking that we
> need to devise a generally applicable structure on how tests are initialized,
> torn down, cleanup takes place, etc.

Yes.

> John and I were talking about how we'd like tests to clean up behind themselves,
> removing database objects that have been added in the testing process, such as
> databases, (super/login) roles, etc...

yes, also one might sometimes like to write the test to validate that the
resulting database objects exist.

I suggest a basic set of infrastructure, including logins, a few customers
and some transactions.   Ideally, one would then start a transaction and open
the HTTP port within the transaction...

> To start with the first and foremost question: do we want our tests to run
> successfully on a copy of *any* company (as John stated he would like, on IRC)
> or do we "design" the company setups we want to run our tests on, from
> scratch, as I was aiming for? (Note that I wasn't aiming for regenerating all
> setup data on each scenario or feature; I'm just talking about making sure we
> *know* what's in the database -- we'd still run on a copy of a database set
> up according to this plan).

By *any* company, you mean, I could run it against (a copy of) my database?
I think that is not useful to focus on right now.

> Additionally, John and I were talking about supporting test infrastructure
> and we agree that it would be tremendously helpful to be able to see
> screenshots of failing scenarios and maybe to be able to see screenshots of
> various points in non-failing tests too. Since Travis storage isn't
> persistent, we were thinking that we'd need to collect all screenshots as
> "build artifacts" and upload them into an AWS S3 account for inspection.

Email to ticket system?
Or S3...

--
]   Never tell me the odds! | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works| network architect  [
] m...@sandelman.ca  http://www.sandelman.ca/|   ruby on rails[




[Ledger-smb-devel] Deciding on a default company setup for BDD tests

2016-01-18 Thread Erik Huelsmann
Hi all,


Chris, John and I have been slowly working our way to creating
infrastructure on which we can base browser-based BDD tests. We had some
problems with race conditions between the HTML/JS renderer (PhantomJS) and
the expectations being tested in the test-driver (Selenium::Driver).
However, these have been fixed as of this morning.

Earlier today, I merged the first feature file (2 tests) to 'master'. This
feature file does nothing more than just navigate to /setup.pl and /login.pl
and verify that the credentials text boxes are displayed.
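
As an aside, the shape of such a check -- with the explicit wait that avoids the
renderer/driver races mentioned above -- looks roughly like this. (Sketched with
the Python Selenium bindings and PhantomJS purely for illustration; the suite
itself drives PhantomJS via Selenium::Driver as described above, and the URL and
field name here are assumptions.)

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.PhantomJS()   # current at the time; headless browsers today
    try:
        driver.get("http://localhost:5762/login.pl")   # host/port are placeholders
        # Wait (up to 10s) until the renderer has actually produced the field,
        # instead of asserting on it immediately and racing the page load.
        login_box = WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.NAME, "login"))  # name assumed
        )
        assert login_box.is_displayed()
    finally:
        driver.quit()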


Now that we're able to create feature files and write step files (and we
know what we need to do to prevent these race conditions), I'm thinking
that we need to devise a generally applicable structure on how tests are
initialized, torn down, cleanup takes place, etc.
John and I were talking about how we'd like tests to clean up behind themselves,
removing database objects that have been added in the testing process, such as
databases, (super/login) roles, etc...

To start with the first and foremost question: do we want our tests to run
successfully on a copy of *any* company (as John stated he would like, on
IRC) or do we "design" the company setups we want to run our tests on, from
scratch, as I was aiming for? (Note that I wasn't aiming for regenerating
all setup data on each scenario or feature; I'm just talking about making
sure we *know* what's in the database -- we'd still run on a copy of a
database set up according to this plan).

I'm thinking a lot of the next questions depend on the answer to this one, so
I'll leave it at this for now as far as test definitions are concerned.

Additionally, John and I were talking about supporting test infrastructure
and we agree that it would be tremendously helpful to be able to see
screenshots of failing scenarios and maybe to be able to see screenshots of
various points in non-failing tests too. Since Travis storage isn't
persistent, we were thinking that we'd need to collect all screenshots as
"build artifacts" and upload them into an AWS S3 account for inspection.


Comments?



-- 
Bye,

Erik.

http://efficito.com -- Hosted accounting and ERP.
Robust and Flexible. No vendor lock-in.