Hi,

> > I think this is a better long term solution as for many scenarios it may
> > be impossible to properly remove entries from the database due to the
> > Audit Features we have.
>
> Drupal has a tremendous amount of variation between sites, and lots of
> configuration that ends up in the database. This certainly colors my
> perspective -- and that's why I think it's important to be able to run
> BDD tests on a copy of any production database.
>
> I'm not sure that's the same for LedgerSMB -- but it would certainly
> help track down issues if people customize their database in ways we
> don't expect.
>

LedgerSMB has a lot of room for between-company variations too, although
the number of variations may be lower than with Drupal (e.g. we don't have
things as complex as the "Views" module in Drupal). However, I would very
much be in favor of trying to distill minimal reproduction recipes when
people report errors; mostly because that helps to verify that a bug has
been fixed -- something that might need to happen multiple times over the
course of fixing the bug anyway.


> What we're really talking about here is how to set up test data --
> whether we ship a test database already containing data our tests rely
> upon, or have those dependencies created when running the tests.
>
> I pretty strongly advocate the latter -- create the configurations/data
> we are testing for at the start of a test run, if they don't already
> exist. And make it safe to re-run a test on the same database.
>

This might take a bit of extra effort to achieve: since we can't remove
some data from the database (e.g. transaction deletion is an absolute
no-no), it might not always be possible to re-run a test.
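
To make that concrete, here is a minimal sketch (Perl/DBI) of what "create
it if it doesn't already exist" could look like; the table, column and
fixture names are made up for illustration and are not the actual LedgerSMB
schema:

    # Idempotent fixture sketch: create the test data only when it's absent,
    # so re-running the suite against the same company database doesn't fail
    # on data we're not allowed to delete again.
    use DBI;

    sub ensure_test_customer {
        my ($dbh, $name) = @_;    # $name: made-up fixture identifier
        my ($exists) = $dbh->selectrow_array(
            'SELECT count(*) FROM test_customers WHERE name = ?',
            undef, $name);
        return if $exists;        # already created by an earlier run
        $dbh->do('INSERT INTO test_customers (name) VALUES (?)',
                 undef, $name);
    }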


> I don't mind cleaning up test data if a test fails in development, but
> as long as tests are completing, they should be able to be run multiple
> times on the same db.
>

Well, if we clean up behind successfully run tests, that could also mean we
simply delete the test databases in the cluster. Then, we can run the same
tests again and again on the given cluster. I'm thinking we will eventually
need different databases because we need different company set-ups to test
all available features. However, to start, we need a setup with a CoA,
accounts and some data, with which we can get an acceptable testing scope
in place.
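
Roughly, the cleanup step after a fully successful run could look like the
sketch below (Perl/DBI against PostgreSQL); the database name and connection
details are assumptions, not decisions:

    # Sketch: after a green run, drop the throw-away company database so the
    # next run can recreate it from scratch on the same cluster.
    use DBI;

    sub drop_test_company_db {
        my ($company_db) = @_;    # e.g. 'bdd_test_company' -- assumed name
        my $dbh = DBI->connect('dbi:Pg:dbname=postgres', 'postgres', undef,
                               { RaiseError => 1, AutoCommit => 1 });
        # DROP DATABASE refuses to run inside a transaction block,
        # hence AutoCommit => 1
        $dbh->do(qq{DROP DATABASE IF EXISTS "$company_db"});
        $dbh->disconnect;
    }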


> >>      > Additionally, John and I were talking about supporting test
> >>      > infrastructure and we agree that it would be tremendously
> >>      > helpful to be able to see screenshots of failing scenarios and
> >>      > maybe to be able to see screenshots of various points in
> >>      > non-failing tests too. Since Travis storage isn't persistent, we
> >>      > were thinking that we'd need to collect all screenshots as
> >>      > "build artifacts" and upload them into an AWS S3 account for
> >>      > inspection.
> >>
> >> Email to ticket system?
> >> Or S3...
> > Michael makes a really good point here.
> > Perhaps the easiest way of capturing the screenshots is not to use S3,
>

https://docs.travis-ci.com/user/deployment/s3 seems to indicate we can copy
to S3 at the end of a build. I think a "bucket"-like structure (S3) is
going to be much simpler than something meant for version control (we don't
want to version-control the images, I'd guess). Ideally, we'd be able to
link the images to a build/scenario/step combination and, additionally,
have an indication of whether the test failed or not.
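
As a sketch of what linking an image to a build/scenario/step combination
could look like, the hypothetical helper below encodes those three pieces
plus the outcome into the file name; the hook it would be called from, the
$driver object (e.g. Selenium::Remote::Driver) and the directory are all
assumptions on my part:

    # Hypothetical helper: save a screenshot under a name that encodes build
    # number, scenario, step and outcome, so the S3 key alone tells you where
    # the image came from and whether the step failed.
    sub save_step_screenshot {
        my ($driver, $scenario, $step, $status) = @_;
        my $build = $ENV{TRAVIS_BUILD_NUMBER} // 'local';
        my $name  = join '--', $build, $scenario, $step, $status;
        $name =~ s/[^A-Za-z0-9_.-]+/_/g;    # keep the S3 key/URL friendly
        $driver->capture_screenshot("screenshots/$name.png");
    }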


> > but have a github project (eg: ledgersmb-bdd-results) that we can raise
> > a ticket against for failing builds with associated screenshots attached.
> > At the same time we could use "git annex" to store all screenshots for a
> > test in a new git branch (or just simply a tag) in the
> > ledgersmb-bdd-results project repository.
> >
> > Storing "good" results probably should only be done if a specific flag
> > is passed in the PR commit message.
> > While all screenshots (good and bad) should be stored if a single test
> > fails.
>
> However we store them, I suggest we at least store "good" results for
> each release. Especially of screenshots. This will allow comparing
> version-on-version, as well as give you a place to go back to see "what
> did this look like in version x?"
>

I'm thinking that if the images are small enough, there shouldn't be much
of a problem keeping a *loong* tail of history.


> S3 storage seems to be built in to many test runners like Travis, I'm
> guessing that's the fastest/easiest to get up and running.
>

Based on https://docs.travis-ci.com/user/deployment/s3 I think we can have
the uploaders built into Travis automatically set the buckets in S3 to
public, so anybody can inspect the results of a failing test (and the
previous successful one).
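
For reference, a minimal sketch of the 'deploy' section that page describes;
the bucket name, local directory and upload prefix below are assumptions on
my part:

    # Sketch of a .travis.yml deploy section uploading screenshots to S3;
    # the secret would be encrypted with the 'travis encrypt' CLI.
    deploy:
      provider: s3
      access_key_id: <AWS access key id>
      secret_access_key:
        secure: <encrypted AWS secret access key>
      bucket: ledgersmb-bdd-screenshots   # assumed bucket name
      local_dir: screenshots              # assumed directory with the images
      upload-dir: bdd-screenshots         # assumed prefix inside the bucket
      acl: public_read                    # make uploaded artifacts world-readable
      skip_cleanup: true                  # keep build output around for upload
      on:
        all_branches: true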


> The Matrix project uses Jenkins as a test runner, and the runs are
> public, so you can access artifacts just by visiting their jenkins
> instance, no logins necessary. Can Travis do the same?
>

I have no experience with the Travis uploaders (i.e. I don't know whether
there's a link you can click on once this is set up), but I would say that
at least part of the use case is supported (the uploading + publishing
part).

-- 
Bye,

Erik.

http://efficito.com -- Hosted accounting and ERP.
Robust and Flexible. No vendor lock-in.