On these two points:

> Hmm, that would be cool if we don't need Selenium to report results.
> Keeping Selenium synced up is a PITA.

For tests that are not browser-dependent, we should not need Selenium. Testing 
directly through Node should be much faster, besides requiring less setup.

For tests that rely on browser features, most of the testing frameworks I’ve 
seen use Selenium. Most drive it using Node.js. Some use PhantomJS, and I think 
there’s an option to run Chromium headlessly. One interesting framework which 
does not seem to use Selenium is Cypress.

I’m still looking around. If anyone has experience with JS testing frameworks, 
please let me know...

> 
> I've never looked to see how FlexUnit handles this, but I'm not
> clear how a test can be written in AS as:
> 
> @Test
> Function MyTest() {
>  SetAProperty();
>  AssertSomeOtherProperty();
> }

I’m not sure how FlexUnit does it either.

Here are two interesting options:
http://nightwatchjs.org/
https://www.cypress.io/

The frameworks I’ve seen seem to have a “wait” function.

We could probably also use event listeners.
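For example (just a sketch; I’m assuming UIBase’s "widthChanged" event here, and the 
other names are placeholders), a test could set a property and only assert the other 
property once the component dispatches the relevant event, instead of relying on a 
framework-level wait():

import org.apache.royale.core.UIBase;
import org.apache.royale.events.Event;

// Hypothetical sketch of an event-listener-based async assertion.
// The event name and the assertion are placeholders.
public function testOtherPropertyUpdates(component:UIBase, report:Function):void
{
    component.addEventListener("widthChanged", function(event:Event):void
    {
        // assert the "other property" only once the component has reacted
        report(component.width > 0);
    });

    // the equivalent of SetAProperty() in the quoted example
    component.percentWidth = 100;
}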

Here’s a nice list:
https://www.joecolantonio.com/2016/06/14/top-8-essential-javascript-automation-frameworks/

I just had an interesting idea for solving the component testing problem in a 
Royale-specific way which might be a nice advantage over other frameworks:

Testing Beads.

The problems with component testing seem to be the following:
1. Testing at the correct point in the component lifecycle.
2. Being able to address specific components and their parts.
3. Being able to fail-early on tests that don’t require complete loading.
4. Ensuring that all tests complete — which usually means synchronous execution 
of tests.

Testing beads seem like they should be able to solve these problems in an 
interesting way.

Basically, a testing bead would be a bead exposing an interface (sketched below) which:
a. Reports test passes.
b. Reports test failures.
c. Reports ignored tests.
d. Reports when all tests are done.
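Something like this, perhaps (just a sketch; ITestingBead and its members are made up, 
and I’m assuming it extends the usual IBead contract plus IEventDispatcher so the runner 
can listen for results):

package org.apache.royale.test.beads
{
    import org.apache.royale.core.IBead;
    import org.apache.royale.events.IEventDispatcher;

    // Hypothetical interface for a testing bead.
    public interface ITestingBead extends IBead, IEventDispatcher
    {
        // a. number of tests that have passed so far
        function get passCount():int;

        // b. descriptions of any failures
        function get failures():Array;

        // c. tests that were skipped/ignored
        function get ignoredCount():int;

        // d. true once every test in this bead has reported
        function get done():Boolean;

        // the bead would also dispatch events such as "testPassed",
        // "testFailed", "testIgnored" and "testsComplete" so the
        // runner can listen rather than poll
    }
}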

It would work something like this (a rough sketch of the runner side follows the list):
1. A test runner/test app would create components and add testing beads to the 
components.
2. It would retain references to the testing beads and listen for results from 
the beads.
3. The test runner would run the app.
4. Each test bead would take care of running its own tests and report back when 
done.
5. Once all the test beads report success or a bead reports failure, the test 
runner would exit with the full report.
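Here’s a very rough sketch of what the runner side might look like (all names are 
hypothetical; I’m only assuming UIBase.addBead() and the usual event APIs):

package org.apache.royale.test
{
    import org.apache.royale.core.UIBase;
    import org.apache.royale.events.Event;
    import org.apache.royale.test.beads.ITestingBead;

    public class TestingBeadRunner
    {
        private var pending:Array = [];

        // steps 1 and 2: attach a testing bead to a component and remember it
        public function addTest(component:UIBase, bead:ITestingBead):void
        {
            pending.push(bead);
            bead.addEventListener("testsComplete", onBeadDone);
            bead.addEventListener("testFailed", onBeadFailed);
            component.addBead(bead);
        }

        // steps 4 and 5: each bead reports back on its own; once all are
        // done (or one fails) the runner emits the report and exits
        private function onBeadDone(event:Event):void
        {
            pending.splice(pending.indexOf(event.target), 1);
            if (pending.length == 0)
                reportAndExit(true);
        }

        private function onBeadFailed(event:Event):void
        {
            // optionally fail early on the first failure
            reportAndExit(false);
        }

        private function reportAndExit(passed:Boolean):void
        {
            // collect the results from the beads and hand them to
            // whatever is driving the run (Node, Selenium, etc.)
        }
    }
}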

This would have the following advantages:
1. All tests could run in parallel. This would probably speed up test runs 
tremendously. Async operations would not block other tests from being run.
2. There’s no need for the test runner to worry about lifecycles. The bead 
would be responsible for testing at the correct point in the lifecycle.
3. The run could exit on the first failing test. Failing early could make the test run 
much quicker when tests fail.
4. You could have an option to have the test runner either report all failing 
tests or fail early on the first one.
5. Running tests should be simple with a well-defined interface, and the actual 
tests could be as simple or as complicated as necessary.

This seems like a very good solution for framework development.

I’m not sure how this concept could be used for application development.  I 
guess an application developer could create a parallel testing app which is the 
same as the app plus testing beads, but that seems awkward.

Maybe it’s possible to use a testing CSS file which would add testing beads to 
components for test builds. The problem with that is that code is still required 
to add those beads.

Maybe we can add special tags for adding the beads via MXML and/or ActionScript 
which could be stripped out for non-test builds.
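For instance (a hypothetical sketch; the bead class is made up, though conditional 
compilation itself is a standard compiler feature), a CONFIG::test constant defined via 
the compiler’s -define option (true for test builds, false for release builds) could 
keep the beads out of production code:

import org.apache.royale.html.Button;
import org.apache.royale.test.beads.ButtonTestBead; // hypothetical testing bead

public function createButton():Button
{
    var button:Button = new Button();

    CONFIG::test
    {
        // compiled in only when CONFIG::test is true
        button.addBead(new ButtonTestBead());
    }

    return button;
}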

Food for thought…
Harbs
