Erik Huelsmann <ehu...@gmail.com> wrote:
    > Chris, John and I have been slowly working our way to creating
    > infrastructure on which we can base browser-based BDD tests. We had
    > some problems with race conditions between the HTML/JS renderer
    > (PhantomJS) and the expectations being tested in the test-driver
    > (Selenium::Driver). However, these have been fixed as of this morning.

WOOHOO!
Before PhantomJS became available, when I was using the Firefox plugin, I found
it best to run it all under Xnest or Xvnc, so that I could control the screen
resolution. Otherwise, whether or not certain things were displayed depended
upon the size of the display....  With PhantomJS that shouldn't be an issue,
I think.
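
Also, great news on the race conditions. I assume the fix amounts to polling
for the element rather than asserting right after the page load; for anyone
writing new step files, the pattern would look roughly like this (a sketch
using Selenium::Remote::Driver and Selenium::Waiter; the app port and the
element name are guesses on my part, not what's in master):

    use strict;
    use warnings;
    use Selenium::Remote::Driver;
    use Selenium::Waiter qw(wait_until);

    # Talk to PhantomJS's built-in GhostDriver (8910 is its default port).
    my $driver = Selenium::Remote::Driver->new(
        remote_server_addr => 'localhost',
        port               => 8910,
        browser_name       => 'phantomjs',
    );

    $driver->get('http://localhost:5762/login.pl');   # URL is a guess

    # wait_until retries the block until it returns something true (or a
    # timeout expires), swallowing "element not found" errors along the
    # way, so the test never races the renderer.
    my $box = wait_until { $driver->find_element('username', 'name') }
        or die 'credentials text box never appeared';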

    > Earlier today, I merged the first feature file (2 tests) to 'master'.
    > This feature file does nothing more than just navigate to /setup.pl
    > and /login.pl and verify that the credentials text boxes are displayed.
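
(Aside, for anyone following along who hasn't pulled master yet: I imagine
the feature reads more or less like the below -- the exact wording is my
guess, so check the real file.)

    Feature: Login and setup screens render
      Scenario: login.pl shows the credentials boxes
        Given a running LedgerSMB instance
        When I navigate to "/login.pl"
        Then I should see the user name and password text boxes

      Scenario: setup.pl shows the credentials boxes
        Given a running LedgerSMB instance
        When I navigate to "/setup.pl"
        Then I should see the user name and password text boxes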

    > Now that we're able to create feature files and write step files (and
    > we know what we need to do to prevent these race conditions), I'm
    > thinking that we need to devise a generally applicable structure for
    > how tests are initialized and torn down, how cleanup takes place, etc.

Yes.
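
Something like pherkin's Before/After hooks looks like the natural place to
hang that. A rough skeleton, assuming a Test::BDD::Cucumber recent enough to
ship Before/After hooks (every name below is made up):

    use strict;
    use warnings;
    use Test::BDD::Cucumber::StepFile;

    Before sub {
        # Create (or clone) the scenario's company database here and keep
        # its name in the scenario stash, so the steps and the After hook
        # can find it again.
        S->{company} = 'lsmb_test_' . $$;
    };

    After sub {
        # Drop whatever the scenario created: database, roles, users, ...
        # (exactly what goes here depends on the questions below)
    };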

    > John and I were talking about how we'd like tests to clean up behind
    > themselves, removing database objects that have been added in the
    > testing process, such as databases, (super/login) roles, etc...

Yes. Also, one might sometimes like to write a test that validates that the
resulting database objects exist.

I suggest a basic set of infrastructure, including logins, a few customers
and some transactions.   Ideally, one would then start a transaction and open
the HTTP port within the transaction...
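
For the "clean up behind themselves" part, the brute-force alternative to the
in-transaction trick is to create each scenario's database from a prepared
template and drop it again afterwards. Something like this with DBI/DBD::Pg
(the template name and credentials are placeholders, not anything that exists
today):

    use strict;
    use warnings;
    use DBI;

    sub create_test_company {
        my ($name) = @_;
        my $dbh = DBI->connect('dbi:Pg:dbname=postgres', 'postgres', '',
                               { RaiseError => 1, AutoCommit => 1 });
        # 'lsmb_template' would already contain the logins, customers and
        # transactions mentioned above.
        $dbh->do(qq{CREATE DATABASE "$name" TEMPLATE lsmb_template});
        $dbh->disconnect;
    }

    sub drop_test_company {
        my ($name) = @_;
        my $dbh = DBI->connect('dbi:Pg:dbname=postgres', 'postgres', '',
                               { RaiseError => 1, AutoCommit => 1 });
        $dbh->do(qq{DROP DATABASE IF EXISTS "$name"});
        $dbh->disconnect;
    }

The transaction variant would be nicer (nothing to drop at all), but it means
the web app and the test driver have to share one database connection, so the
template approach may be the pragmatic place to start.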

    > To start with the first and foremost question: do we want our tests to
    > run successfully on a copy of *any* company (as John stated he would
    > like, on IRC), or do we "design" the company setups we want to run our
    > tests on from scratch, as I was aiming for? (Note that I wasn't aiming
    > for regenerating all setup data on each scenario or feature; I'm just
    > talking about making sure we *know* what's in the database -- we'd
    > still run on a copy of a database set up according to this plan.)

By *any* company, you mean I could run it against (a copy of) my database?
I think that is not useful to focus on right now.

    > Additionally, John and I were talking about supporting test
    > infrastructure, and we agree that it would be tremendously helpful to
    > be able to see screenshots of failing scenarios, and maybe screenshots
    > of various points in non-failing tests too. Since Travis storage isn't
    > persistent, we were thinking that we'd need to collect all screenshots
    > as "build artifacts" and upload them to an AWS S3 account for
    > inspection.

Email to ticket system?
Or S3...
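
Either way, the capture side is cheap: Selenium::Remote::Driver can already
write out a PNG, so an After hook could do something along these lines.
Detecting "this scenario failed" is the bit still to be worked out, and I'm
assuming the step files keep the driver in the stash:

    use strict;
    use warnings;
    use Test::BDD::Cucumber::StepFile;

    After sub {
        my $driver = S->{driver} or return;   # assumes steps stash the driver
        (my $name = C->scenario->name) =~ s/\W+/_/g;
        # The screenshots/ directory then becomes the build artifact to
        # upload (S3, ticket attachment, ...).
        $driver->capture_screenshot("screenshots/$name.png");
    };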

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        | network architect  [
]     m...@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [

