On 11/6/17, 2:06 PM, "Harbs" <[email protected]> wrote:
> Lots of points here.
>
> I’m not an expert either, but I’ll try to add my 2 cents…
>
>> My temptation would be to leverage the [Mixin] capability in the compiler instead of additional/different CSS. Then it is just a command-line option to inject a class that gets initialized early and can do other things (including bringing in additional/different CSS). However, I have been considering some sort of compiler option that injects beads on the main application's strand.
>
> This sounds very interesting.
>
> That would sort of require a single bead attached to the application. It’s probably workable, but it makes fine-grained testing a bit harder.
>
> I wonder if we could utilize Mixin tags to add beads to classes and MXML files using the same compiler option. That would allow dividing the app into “units” of testing where the developer thinks it makes sense.

Each Mixin brings in one class, but that class can drag in tons of stuff. The key question for our users is how they want to determine what gets tested. What we heard from prior Adobe Flex users is that they didn't want to add testing overhead to every MXML component, only certain ones, and they often needed to assign an instance a different name than its id in order to get meaningful output, especially because an id can be used in more than one place in MXML. Individual automation beads can be placed on each instance you want tested, but that changes the source code. Having an external map that the Mixin uses to walk the DOM and add beads doesn't affect the source code.

>> I believe the component/framework testing must figure out how to run the next test step "later". And that's hard in AS and JS. Or else we need mocks, or we restrict component tests to units that don't require any runtime support. I'm not sure you can solve the "later" problem with beads, but it would be great if you can.
>
> I think the later problem can be solved very nicely by beads. The bead could run tests at whatever point it wants. It could add an event listener to the strand and/or other beads to run specific tests at specific points.

To me, the "later" problem is about how to keep sequential lines of ActionScript/JavaScript from running back-to-back, so that the runtime gets a chance to do some processing in between. I don't understand how a bead can do that if the tests are written in a non-declarative language.

> It keeps track of all its tests and sends notification to the test runner when it’s done, with the results, and/or sends the results as the individual tests are run. The total number of tests could be set manually, or it could be calculated automatically from [Test] metadata tags.
>
>> It also has to figure out how to handle the script timeout issue as well. Once we decide on that, it just becomes a matter of writing more tests.
>
> I’m not sure what you mean by this. What timeouts are you concerned by?

Flash for sure won't let you run code for more than 60 seconds without letting the player do its thing. I thought there were timeouts for JavaScript in browsers as well, and operating systems can decide a process is "not responding". The runtime probably needs to be given a chance to do something between tests.
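For illustration, here is a rough sketch of what "give the runtime a chance between steps" could look like. None of these names are existing Royale APIs; AsyncTestRunner is made up, and setTimeout stands in for whatever deferral mechanism the target has (flash.utils.setTimeout on the player, the global setTimeout on JS):

package testing
{
    import flash.utils.setTimeout; // SWF; the JS target would use the global setTimeout

    // Hypothetical runner that executes queued test steps one at a time,
    // yielding to the runtime between steps so rendering and event
    // processing can happen and long-running-script timeouts are avoided.
    public class AsyncTestRunner
    {
        private var steps:Array = [];      // Array of Function
        private var onComplete:Function;

        public function AsyncTestRunner(onComplete:Function = null)
        {
            this.onComplete = onComplete;
        }

        public function addStep(step:Function):void
        {
            steps.push(step);
        }

        public function run():void
        {
            runNext();
        }

        private function runNext():void
        {
            if (steps.length == 0)
            {
                if (onComplete != null) onComplete();
                return;
            }
            var step:Function = steps.shift() as Function;
            step();
            // defer the next step so the runtime gets control back
            setTimeout(runNext, 0);
        }
    }
}

A testing bead could push its assertions into something like this as individual steps instead of running them all in one long synchronous pass.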
>> Since we are brainstorming, I want to mention that I have dreams of automatically generating tests from metadata.
>
> Sounds like an interesting idea, but to be honest you lost me from the start… ;-)
>
> I think these kinds of things are fundamentally incompatible with my brain, and I’ll probably have a hard time wrapping my head around this… ;-)

One theory of testing says that you should test the boundary conditions of every code path, as well as some intermediate values. Royale should have fewer "if" statements and other code-path forks in the beads, because we are trying to write PAYG code and every "if" theoretically introduces "just-in-case" code. So, in theory, if you could describe the boundary conditions in metadata, you could write a test-case generator. I do not enjoy writing and debugging test cases, so having something generate the tests would make life much easier for me.
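For a simple numeric property, the generated code (or a shared helper the generated code calls) might boil down to something like the sketch below. Nothing here is an existing API; assertEquals is just a stand-in for whatever assertion mechanism the harness provides:

package testing
{
    // Hypothetical helper a metadata-driven generator could emit for a property
    // described by [Test(type="getter", initialValue="...", minValue="...", maxValue="...")].
    public class BoundaryTestHelper
    {
        public static function testIntProperty(comp:Object, propName:String,
                                               initialValue:int, minValue:int,
                                               maxValue:int):void
        {
            // boundary condition: default value
            assertEquals(initialValue, comp[propName], propName + " initial value");

            // boundary condition: minimum
            comp[propName] = minValue;
            assertEquals(minValue, comp[propName], propName + " at minValue");

            // boundary condition: maximum
            comp[propName] = maxValue;
            assertEquals(maxValue, comp[propName], propName + " at maxValue");

            // one intermediate value between the boundaries
            var mid:int = int(minValue / 2 + maxValue / 2);
            comp[propName] = mid;
            assertEquals(mid, comp[propName], propName + " at intermediate value");
        }

        // stand-in assertion; a real harness would report results instead of throwing
        private static function assertEquals(expected:*, actual:*, msg:String):void
        {
            if (expected !== actual)
                throw new Error("FAIL: " + msg + " (expected " + expected + ", got " + actual + ")");
        }
    }
}

The generator's job would then be reduced to reading the metadata values and emitting one call like this per property.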
My 2 cents,
-Alex

>> On Nov 6, 2017, at 8:35 PM, Alex Harui <[email protected]> wrote:
>>
>> Disclaimer: I am not an expert on automated testing, but I was involved in many discussions around the time Flex was donated to Apache. So I have some knowledge, but it might be stale. Here are some thoughts on this topic.
>>
>> To respond to the subject: as in the skinning/theming thread, I wouldn't worry about beads right now. Beads are just encapsulations of code snippets. In complex situations like these, it is often better just to "get the code to work", then get someone else to "get the code to work" in a different scenario, and then see what needs to be parameterized and re-used.
>>
>> I'm unclear as to how much we need to do along the lines of automated testing for Applications. There are existing tools tuned for automating Application testing. It would be great to hear from users as to whether they have already chosen an automated testing tool for other Applications. Flex, for example, provided integration with the QTP testing system. Maybe people want us to leverage QTP or RIATest, or something else. Also, Microsoft was trying to formalize automated testing for Windows apps. I don't know if our users are using that or not.
>>
>> Microsoft was introducing the notion of "roles" as part of the WAI-ARIA standard [1] and building a test harness around that. We've spent a little bit of time thinking about that in Royale. The NumericStepper is no longer a single component like it was in Flex, but rather two components (Input and up/down control), in order to conform to WAI-ARIA, not just for testing but someday for accessibility.
>>
>> Because of beads, there should be relatively few "private" parts to a component, so I don't know how much code will be needed to access things, especially in JS where nothing is truly private anyway.
>>
>> Because of PAYG, we do want to have some other code set the additional information the automated testing tools need. IIRC, not every tag in MXML needs to be tested, so adding a bead to specific MXML tags to mark them for the testing tools makes sense, but then you can't make it completely go away at runtime.
>>
>> I often thought a key feature of PAYG and automated testing would be that, without touching the code, you could add some compiler option and inject all of the extra data. I think this is technically possible, and I think this is what you are discussing in this thread, but I'm not sure if folks want that or not. If you don't want to touch the code, managing an external map instead might be too painful. Don't know, we should just try it.
>>
>> My temptation would be to leverage the [Mixin] capability in the compiler instead of additional/different CSS. Then it is just a command-line option to inject a class that gets initialized early and can do other things (including bringing in additional/different CSS). However, I have been considering some sort of compiler option that injects beads on the main application's strand.
>>
>> But the above is all about automated Application testing. IMO, component/framework testing is different.
>>
>> I believe the component/framework testing must figure out how to run the next test step "later". And that's hard in AS and JS. Or else we need mocks, or we restrict component tests to units that don't require any runtime support. I'm not sure you can solve the "later" problem with beads, but it would be great if you can. It also has to figure out how to handle the script timeout issue as well. Once we decide on that, it just becomes a matter of writing more tests.
>>
>> Since we are brainstorming, I want to mention that I have dreams of automatically generating tests from metadata. Our framework code has very few functions/methods that are called by the Application developer. Instead, most of the code we write is in setters, getters, and event handlers. Adding metadata to each of our functions seems way more efficient than writing tests for each one, and might help solve the "later" problem as the test harness could have control over when to make the function call and when to test for the results.
>>
>> So, some getter could have metadata that is something like:
>>
>> [Test(type="getter", initialValue="0", minValue="int.MIN_VALUE", maxValue="int.MAX_VALUE")]
>> function get value():int;
>>
>> And that would generate several tests:
>>
>> var comp:Foo = new Foo();
>> Assert(comp.value, is(0));
>>
>> comp.value = int.MIN_VALUE;
>> Assert(comp.value, is(int.MIN_VALUE));
>>
>> comp.value = int.MAX_VALUE;
>> Assert(comp.value, is(int.MAX_VALUE));
>>
>> And, if we add more metadata about out-of-range values:
>>
>> [Test(initialValue="0", minValue="0", maxValue="int.MAX_VALUE", outOfRangeMin="exception")]
>> function get value():int;
>>
>> try {
>>     comp.value = -1; // (minValue - 1)
>>     Failure();       // setting below minValue should have thrown
>> } catch (e:Error) {
>>     Success();
>> }
>>
>> [Test(initialValue="0", minValue="0", maxValue="int.MAX_VALUE", outOfRangeMin="0")]
>> function get value():int;
>>
>> comp.value = -1; // (minValue - 1)
>> Assert(comp.value, is(0));
>>
>> An Event handler might look like:
>>
>> [Test(eventType="org.apache.flex.events.MouseEvent", type="click", data="localx:0;localy:0", resultEvent="stateChange")]
>> function clickHandler(e:MouseEvent):void
>> {
>> }
>>
>> And result in:
>>
>> var comp:Foo = new Foo();
>> var e:Event = new org.apache.flex.events.MouseEvent('click');
>> e["localx"] = 0;
>> e["localy"] = 0;
>> comp.addEventListener("stateChange", genericEventListener);
>> comp.clickHandler(e);
>> AssertEvent(was(0)); // i.e., assert that genericEventListener saw the "stateChange" event
>>
>> If we want to do integration testing that requires the runtime, we could add a "wait" tag to the metadata and the test engine would do what it needs to in order for the runtime to do some processing.
>> My 2 cents,
>> -Alex
>>
>> [1] https://www.w3.org/WAI/intro/aria
>>
>> On 11/5/17, 1:14 AM, "Harbs" <[email protected]> wrote:
>>
>>> I wanted to branch this into a separate discussion because I want to discuss whether this is a good idea or a bad idea on its own.
>>>
>>> Harbs
>>>> On Nov 5, 2017, at 11:55 AM, Harbs <[email protected]> wrote:
>>>>
>>>> I just had an interesting idea for solving the component testing problem in a Royale-specific way which might be a nice advantage over other frameworks:
>>>>
>>>> Testing Beads.
>>>>
>>>> The problems with component testing seem to be the following:
>>>> 1. Testing at the correct point in the component lifecycle.
>>>> 2. Being able to address specific components and their parts.
>>>> 3. Being able to fail early on tests that don’t require complete loading.
>>>> 4. Ensuring that all tests complete — which usually means synchronous execution of tests.
>>>>
>>>> Testing beads seem like they should be able to solve these problems in an interesting way.
>>>>
>>>> Basically, a testing bead would be a bead which has an interface which:
>>>> a. Reports test passes.
>>>> b. Reports test failures.
>>>> c. Reports ignored tests.
>>>> d. Reports when all tests are done.
>>>>
>>>> It would work something like this:
>>>> 1. A test runner/test app would create components and add testing beads to the components.
>>>> 2. It would retain references to the testing beads and listen for results from the beads.
>>>> 3. The test runner would run the app.
>>>> 4. Each test bead would take care of running its own tests and report back when done.
>>>> 5. Once all the test beads report success or a bead reports failure, the test runner would exit with the full report.
>>>>
>>>> This would have the following advantages:
>>>> 1. All tests could run in parallel. This would probably speed up test runs tremendously. Async operations would not block other tests from being run.
>>>> 2. There’s no need for the test runner to worry about life-cycles. The bead would be responsible for testing at the correct point in the lifecycle.
>>>> 3. The first test to fail could exit. Failing early could make the test run much quicker when tests fail.
>>>> 4. You could have an option to have the test runner either report all failing tests or fail early on the first one.
>>>> 5. Running tests should be simple with a well-defined interface, and the actual tests could be as simple or as complicated as necessary.
>>>>
>>>> This seems like a very good solution for framework development.
>>>>
>>>> I’m not sure how this concept could be used for application development. I guess an application developer could create a parallel testing app which is the same as the app plus testing beads, but that seems awkward.
>>>>
>>>> Maybe it’s possible to use a testing CSS file which would add testing beads to components for testing builds; the problem with that is that there’s a requirement for code to add those beads.
>>>>
>>>> Maybe we can add special tags for adding the beads via MXML and/or ActionScript which could be stripped out for non-test builds.
>>>>
>>>> Food for thought…
>>>> Harbs
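To picture the testing-bead idea Harbs describes above, here is a very rough sketch. All of the names are invented for illustration; IBead, IStrand, Event, EventDispatcher, and IEventDispatcher stand for the usual Royale bead and event contracts (the packages were still org.apache.flex.* when this was written), and the lifecycle event name is only a placeholder:

package testing
{
    import org.apache.flex.core.IBead;
    import org.apache.flex.core.IStrand;
    import org.apache.flex.events.Event;
    import org.apache.flex.events.EventDispatcher;
    import org.apache.flex.events.IEventDispatcher;

    // Hypothetical testing bead: it attaches to a strand, runs its own tests
    // at the point in the lifecycle it cares about, and reports results back
    // to a test runner by dispatching events.
    public class ButtonTestingBead extends EventDispatcher implements IBead
    {
        public static const TEST_PASSED:String = "testPassed";
        public static const TEST_FAILED:String = "testFailed";
        public static const TESTS_COMPLETE:String = "testsComplete";

        private var _strand:IStrand;

        public function set strand(value:IStrand):void
        {
            _strand = value;
            // "initComplete" is a placeholder for whichever lifecycle event
            // this bead should wait for before running its tests.
            IEventDispatcher(_strand).addEventListener("initComplete", runTests);
        }

        private function runTests(event:Event):void
        {
            // ...assert things about the strand and its other beads here,
            // dispatching TEST_PASSED/TEST_FAILED per individual test...
            dispatchEvent(new Event(TESTS_COMPLETE));
        }
    }
}

A test runner would add beads like this to the components it creates, keep references to them, and listen for those events to collect the overall report, as described in the steps above.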
