Re: [Architecture] Another Glorious Merge

2018-03-20 Thread Tony Atkins
Hi, Steve:

We should talk about this further.  I designed gpii-webdriver more for
testing web interfaces from the outside.  I included a means of pulling out
QUnit test results from the browser, but I did so mainly as a convenience
for people who had a small number of unit tests and a large number of
black-box tests to operate the UI and observe the results.  My suggestion
in this case would be to run the QUnit tests in a gpii-testem instance, and
the gpii-webdriver UI tests separately.  For starters, gpii-webdriver only
supports Chrome at the moment, and we should be testing with a range of
browsers where possible.

Just to summarise the range of approaches, at the moment you can collect
(see the sketch after this list):

   1. Coverage data from code reached in node tests using nyc.
   2. Coverage data from code reached in browser tests using gpii-testem.
   3. Coverage data from server-side code fixtures (REST endpoints, etc.)
   hit while running browser tests (although I have not had to do this yet, it
   should simply be a matter of running Testem using nyc).
   4. Coverage data regarding server-side fixtures hit by gpii-webdriver
   tests (again, using nyc to run node tests).
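
To make that concrete, here is a rough, untested sketch of what approaches
1 and 2 might look like as npm scripts (the file names here are invented,
so adjust to taste):

    "scripts": {
        "test:node": "nyc node tests/all-node-tests.js",
        "test:browser": "testem ci --file tests/testem.js"
    }

For approach 3, the idea would be to wrap the Testem run itself in nyc
(e.g. "nyc testem ci --file tests/testem.js") so that any node-side
fixtures hit during the browser run are instrumented as well.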

What we can't currently collect is coverage data regarding client-side
fixtures that are hit while exercising the UI using gpii-webdriver.  If
this is an important enough use case, I have ideas about how this might be
accomplished, and can write those up as a feature request against
gpii-webdriver.

Cheers,


Tony



Re: [Architecture] Another Glorious Merge

2018-03-19 Thread Steven Githens
Hi Tony,

Thanks for this write-up!  I was able to get the coverage running fine for my
node tests in the PTT, and it looks fairly straightforward for a page using
QUnit or something.

I'm wondering about the use case where you are using gpii-webdriver to run
tests.  In my case I'm using these to add tests to click around and test the
UI, and the webdriver tests are also run as part of the node tests.  If I
want to profile the client-side JavaScript that is being executed during the
webdriver run, I'm guessing I'd need to do something like:

1. Add the step 5 JS includes to my main application... maybe in a partial
that is only included during testing or something.
2. Create some configuration of gpii.testem.coverage that still starts up the
Testem server, but skips running any actual tests (sketched below).
3. The gpii-webdriver tests would then start this gpii.testem.coverage
configuration, and make sure it keeps running until the webdriver tests are
done.

And then the usual instrumentation and reporting tasks before and after those 
steps.
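
For what it's worth, here is a purely hypothetical sketch of what I imagine
for step 2, written as a testem.js-style file.  The grade and option names
are guesses (gpii.testem.coverage may not exist in this exact form), so
treat it as a sketch of the idea rather than working code:

    // coverage-server.js -- hypothetical, untested sketch
    var fluid = require("infusion");
    var my = fluid.registerNamespace("my");
    fluid.require("%gpii-testem");

    fluid.defaults("my.tests.coverageServer", {
        gradeNames: ["gpii.testem.coverage"], // assumed grade name
        testPages:  [],                       // no QUnit pages; just serve instrumented code
        sourceDirs: { src: "%my-package/src" }
    });

    module.exports = my.tests.coverageServer().getTestemOptions();

Step 3 would then hold this server open for the duration of the webdriver
run and harvest the coverage data once it finishes.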

Thanks!
Steve

> On Mar 14, 2018, at 10:55 AM, Tony Atkins wrote:
> 
> Sorry!  Forgot to include the list!
> 
> T
> 
> -- Forwarded message --
> From: Tony Atkins <t...@raisingthefloor.org>
> Date: 14 March 2018 at 11:35
> Subject: Re: [Architecture] Another Glorious Merge
> To: Steven Githens <swgit...@mtu.edu>
> 
> 
> Hi, Steve.
> 
> For the benefit of others, I'm going to interpret your question somewhat 
> broadly and give some examples for a range of scenarios.
> 
> If you only have node components, you just use nyc (possibly with an .nycrc 
> configuration file), and let it handle instrumentation and reporting.  For 
> anything that isn't a monorepo, it usually does "just work".
> 
> If you only have browser components, you use the gpii.testem grade provided 
> by the gpii-testem package, which handles both instrumentation and reporting. 
>  If you haven't used gpii-testem but are familiar with Testem in general, 
> you'd need to create a testem.js file to replace your previous testem.json 
> <https://github.com/GPII/gpii-testem#usage-instructions>, and include the 
> coverage client in your client-side includes 
> <https://github.com/GPII/gpii-testem/blob/master/docs/coverage.md>.  If 
> you've used gpii-testem before, recent versions have small breaking changes, 
> mostly around using a different syntax to represent the instrumentation 
> config, and using different grades when you just want to collect coverage 
> data but don't want to generate a report after the Testem run, as is the case 
> when you want a "combined" coverage report across browsers and node.
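> 
> To give a flavour, a minimal testem.js might look something like the
> following.  This is an untested sketch based on the usage instructions
> linked above, with invented package and path names, so check the docs for
> the exact option names:
> 
>     var fluid = require("infusion");
>     var my = fluid.registerNamespace("my");
>     fluid.require("%gpii-testem");
> 
>     fluid.defaults("my.testem.grade", {
>         gradeNames:  ["gpii.testem"],
>         testPages:   ["tests/all-tests.html"],
>         sourceDirs:  { src: "%my-package/src" },
>         contentDirs: { tests: "%my-package/tests" }
>     });
> 
>     module.exports = my.testem.grade().getTestemOptions();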
> 
> If you have both node and browser tests, it's a bit more involved.  In short, 
> you clean up before running all tests, run each set of tests without 
> generating individual reports, and then generate a report after.  My approach 
> to date uses only npm scripts, and is covered in more detail in this guide:
> 
> https://github.com/GPII/gpii-testem/blob/master/docs/advanced.md
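> 
> For a flavour of the sequencing, here is a condensed, hypothetical scripts
> stanza (the names are invented; the guide above has the real details):
> 
>     "scripts": {
>         "pretest": "rimraf coverage reports",
>         "test": "npm run test:node && npm run test:browser && npm run report",
>         "test:node": "nyc --silent node tests/all-node-tests.js",
>         "test:browser": "testem ci --file tests/testem.js",
>         "report": "nyc report --reporter=html --reporter=text-summary --report-dir reports"
>     }
> 
> The key points are that pretest cleans up old output, the individual runs
> collect coverage without generating their own reports, and a single
> combined report is generated at the end.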
> 
> The recent work on universal gives one example of preparing a combined 
> coverage report.  I would urge everyone not to start with that .nycrc config 
> file, which bends over backwards to work with a monorepo that doesn't match 
> nyc's lerna-ish assumptions 
> <https://github.com/istanbuljs/istanbuljs/issues/146>.  Most of our work in 
> non-monorepos should be able to get away without an .nycrc config file, or 
> with a much simpler one like we use in infusion 
> <https://github.com/fluid-project/infusion/blob/master/.nycrc>.
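> 
> As a point of reference, a simple .nycrc needs little more than something
> like this (a sketch rather than infusion's actual file; see the link above
> for the real thing):
> 
>     {
>         "reporter": ["html", "text-summary"],
>         "report-dir": "reports",
>         "temp-directory": ".nyc_output",
>         "exclude": ["tests/**/*.js"]
>     }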
> 
> Anyway, do please check out the gpii-testem docs, et cetera, and hit me up 
> if there are any problems or questions.  I am always happy to help.
> 
> Cheers,
> 
> 
> Tony

Re: [Architecture] Another Glorious Merge

2018-03-13 Thread Steven Githens
That's awesome, thanks Tony and Antranig.

So, if I want to add test coverage to my XYZed GPII module, I would just 
include gpii-testem and nyc in my dev dependencies, add that .nycrc file, and 
mimic these lines in my package.json, and then my project will get full test 
coverage?

https://github.com/GPII/universal/blob/master/package.json#L72-L80 


Cheers to the Max,
Steve

> On Mar 12, 2018, at 12:16 PM, Antranig Basman wrote:
> 
> Dear All -
>  This is to report another important milestone in improving our quality 
> infrastructure. This evening another significant branch has been merged, 
> itself representing more than 6 months' work by Tony Atkins but building on 
> yet more work implementing similar capabilities for Infusion and captured in 
> other projects in our ecosystem, including gpii-testem. As a result of this, 
> we will now have complete code coverage information pooled between our 
> web-based and node-based unit tests and integration tests, which will greatly 
> improve the efficiency of future code review.
> 
> The code coverage reports can be browsed within the "reports" directory 
> created after a standard (successful) run of the "npm test" task. We should 
> be aiming to reach a baseline quality target of around 90% branch coverage, 
> which across most of our implementation we do already. Once we reach this 
> uniformly, we should consider how further engineering cycles should best be 
> spent.
> 
> Similar capabilities will now be rolled out across our other GPII projects, 
> and we will probably also enroll in some form of coverage dashboard so that 
> this can be checked in an automated way as part of our CI.
> 
> Could everyone who has outstanding pull requests against universal please 
> merge them up against current master.  If your work has contributed any 
> further implementation files, you will need to list them explicitly against 
> the peculiar patterns found in the .nycrc file listed here (a bug has been 
> filed in the upstream "istanbul" project which will hopefully make this 
> unnecessary one day):
> 
> https://github.com/GPII/universal/blob/master/.nycrc
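> 
> Concretely, if your branch adds a new implementation file, you would add a 
> matching entry alongside the existing "include" patterns, along these lines 
> (the module and path here are invented for illustration; follow the shape 
> of the entries already in the file):
> 
>     "include": [
>         "gpii/node_modules/myNewModule/src/**/*.js"
>     ]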
> 
> This has been a long road with many awkward turnings due to mismatched 
> assumptions and inflexible implementations in the tower of projects that we 
> depend on.  Let us all join in congratulating Tony on the endurance and 
> carefulness needed to land this highly valuable work improving our project 
> infrastructure.
> 
> Cheers,
> 
> Antranig.

___
Architecture mailing list
Architecture@lists.gpii.net
https://lists.gpii.net/mailman/listinfo/architecture