On 17/05/2012, at 1:15 AM, Adam Murdoch wrote:

> On 17/05/2012, at 6:12 AM, Luke Daley wrote:
> 
>> Howdy,
>> 
>> Here are some disjointed thoughts about JavaScript support, as provided by 
>> the different 3rd party plugins.
>> 
>> The biggest problem I can see is that none of these plugins play well 
>> together, which to some extent is a symptom of there being many different 
>> ways to “do JavaScript”. Also, I think it's a symptom of there being no core 
>> modelling in this area.
>> 
>> I've been contemplating what a “javascript-base” plugin might provide if we 
>> were to do such a thing.
>> 
>> Providing something like sourceSets would probably be a good start. At the 
>> moment most tasks are using references to directories and finding all js 
>> files manually, which is unnecessary and defeats task chaining through 
>> buildable inputs. I'm not sure what else a JavaScript source set would do 
>> besides be a glorified FileTree. I'm not sure at this point what else there 
>> is to model, or is worth modelling.
> 
> A couple more things a SourceSet would provide:
> * A way to declare the purpose of some javascript source files: is it part of 
> the web app, or is it a test script, or is it a test fixture, or is it a 
> tool, or …

Which would just be the name of the sourceSet, right?
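
That is, something like this (purely a sketch; the “javascript-base” plugin
and these source set names are assumptions):

    apply plugin: 'javascript-base'

    javascript {
        sourceSets {
            main { js { srcDir 'src/main/js' } }   // part of the web app
            test { js { srcDir 'src/test/js' } }   // test scripts
            testFixtures {                         // test fixtures
                js { srcDir 'src/testFixtures/js' }
            }
        }
    }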

> * A way to declare the dependencies of some javascript source. In general, a 
> set of javascript source has dependencies on other javascript libraries, and 
> on various other kinds of artefacts, such as css or images.

I'm not sure JavaScript ever really depends on CSS or images. What did you 
have in mind here? I think the combination of JS + CSS + images is probably 
another concept.

> A few other things we should model:
> * The concept of a javascript library.

This is a very loose concept. I'm not sure what we could model beyond naming 
attributes and the source itself.

We could look at modelling enough that we can generate module definitions; 
more on this in the next point.

> * Dependency management for javascript libraries. This, most importantly, 
> would involve defining a convention for publishing a javascript component, 
> and automating the work that has to happen to follow this convention.

AFAICT, we'd be breaking new ground here. 

There is no established metadata format for declaring JavaScript dependencies. 
At least in the short term, we'd be expressing dependencies in terms of 
downloading raw JS files, or zips that we then extract to somewhere in a JS 
source tree. The only benefit I can see to doing this (as opposed to checking 
this stuff into the source tree) would be that we could build up a model of 
the dependency graph locally in the build and potentially use this information 
to drive bundlers/compressors.
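
In Gradle terms, that might look something like this (a sketch only; the “js”
configuration, the coordinates and the extraction task are all assumptions):

    configurations {
        js
    }

    dependencies {
        // Raw .js files and zips resolved as plain file artefacts; these
        // coordinates are made up for illustration.
        js 'org.jquery:jquery:1.7.2@js'
        js 'com.example:widgets:1.0@zip'
    }

    // Pull the resolved files into the build's JS tree.
    task extractJsDependencies(type: Copy) {
        from configurations.js.filter { it.name.endsWith('.js') }
        from configurations.js.filter { it.name.endsWith('.zip') }.collect { zipTree(it) }
        into "$buildDir/js/deps"
    }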

There are runtime-based resolution mechanisms though (e.g. 
http://requirejs.org/, https://github.com/unscriptable/curl) that use either 
AMD or UMD (http://addyosmani.com/writing-modular-js/ was the best explanation 
I found). This is really for managing scoping between “modules” in a large 
JavaScript codebase. It's unclear what the build time implications of this 
kind of thing are, as far as dependency management is concerned.

> * The connection between javascript and web applications. Javascript almost 
> always ends up in a web application. Our model should reflect this fact

I'm unsure about this; it depends on what you mean. Something like a 
JavaScript source set should have no knowledge of web applications.

> our infrastructure should automate the bundling, there should be conventions 
> for how this happens, and our IDE integration should expose this concept.

I'm unsure what kind of conventions we'll be able to impose, given the sheer 
number of different tools that do the same thing. I think the best we could do 
is model the kind of processing pipeline that's typically involved here, in a 
way that lets people use whatever tools they want at each processing step. In 
the Java world we can get away with standardising on a compiler and a bundler 
for the most part, whereas that level of standardisation will not be possible 
in the JavaScript space.
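
For example (just a sketch; neither of these task types exists, and each step
could be backed by whatever tool the user chooses):

    task combineJs(type: JavaScriptCombine) {
        source javascript.sourceSets.main.js
        destination file("$buildDir/js/combined.js")
    }

    task minifyJs(type: JavaScriptMinify) {
        // Chains off combineJs automatically via its declared outputs.
        source combineJs.outputs.files
        destination file("$buildDir/js/combined.min.js")
        tool 'closure-compiler'   // or 'uglifyjs', 'yuicompressor', ...
    }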

I think the difference between C++ and JavaScript here is that common 
abstractions will be harder to find in the JS space, because of its immaturity 
and the different approaches that different tools take. I think we'll have to 
pitch lower, and focus on providing general processing/pipelining abstractions 
and runtime/execution abstractions. I don't see this as the same as 
abstracting over C++ compilers; there are more established patterns in that 
space, allowing us to abstract a little higher.

> There is also some goodness we can reuse from the c++ dependency management 
> stuff we want to do:
> * The idea of variants: a minified variant of a script vs a non-minified 
> variant of a script.
> * The idea of packaging: a library distributed as a script vs a library 
> distributed in a zip with supporting resources, dependencies and so on.
> * The idea of bundling: a published library may include some or all of the 
> dependencies of that library.

These mostly address the case of building a JS library to be consumed by 
others, which will be far from the common case. The common case will be 
pulling in dependencies and combining them with local code to form part of the 
web application being built. Publishing JS would only be done by very large 
orgs, or in multi-project builds where you need to share JS across apps in the 
build.

It's unclear to me at the moment how appealing the idea of building your JS as 
an independent project in a multi-project build would be. I imagine most 
developers would prefer not to do this unless they explicitly need to share 
the JS.

>> Not sure what we would do code wise to achieve this, but promotion of using 
>> SourceTask as a super class would probably do wonders in this space. 
>> 
>> Providing some JavaScript runtime support would be useful too. At the very 
>> least, this would be support for RhinoJS. It could eventually expand to 
>> include PhantomJS and possibly real browsers automated with WebDriver too. A 
>> lot of the javascript tools (compressors, static analysis etc.) are written 
>> in JavaScript, hence the need for a runtime.
>> 
>> As for testing, I'm still not sure what to do. The different test frameworks 
>> have different ways of working that would be hard to generalise like we did 
>> with JUnit and TestNG I think.
> 
> What are some of the candidates we might consider?

http://pivotal.github.com/jasmine/

Requires an HTML driver page that includes Jasmine, the code under test (CUT) 
and the test cases. It can be used in a pure JS environment, but you'd need to 
bundle the CUT and test cases in some fashion and throw them at a runtime. 
Some projects create test suites from different subsets of the tests, which 
means more than one HTML driver page.

For automation, people typically do one of the following:

1. Start an HTTP server that serves the HTML driver page, then automate a real 
browser to hit it, then capture the results
2. Use Rhino with http://www.envjs.com/ (a JavaScript DOM impl)
3. Use PhantomJS (i.e. headless WebKit)
4. Use HtmlUnit

http://visionmedia.github.com/mocha/ (same kind of story as above)

http://docs.jquery.com/QUnit (same kind of story as above)

Looking at it again, there is clearly a pattern here. We could provide a “DOM 
runtime” abstraction/infrastructure with potential impls for WebDriver (i.e. 
real browsers), Rhino + env.js, PhantomJS and HtmlUnit. We would then just 
point them at an HTML page (probably served through a server that we start) 
and capture the DOM after the page loads. More focussed testing support could 
build on this.
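
As a rough sketch of the abstraction (hypothetical names; the real API would
need more thought):

    import org.w3c.dom.Document

    // A “DOM runtime”: something that can load a page and hand back the
    // resulting DOM once the page has finished loading.
    interface DomRuntime {
        Document render(URL page)
    }

    // Potential impls: WebDriverDomRuntime (real browsers),
    // RhinoEnvJsDomRuntime, PhantomJsDomRuntime, HtmlUnitDomRuntime.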

There's another interesting option in http://code.google.com/p/js-test-driver. 
The stated goal of this tool is to bring a JUnit type workflow to JS testing. 
Here's how it works:

You start a js-test-driver process that serves up an HTML page. You then point 
one or more browsers (real or otherwise) at this page, which “connects” the 
browser to the server in a bi-directional way. You can then tell the server to 
“run the tests”, which in turn triggers the execution of the tests in all of 
the connected browsers, then collects and aggregates the results.

What's nice about this tool is that it has IDEA and Eclipse plugins for 
starting the server and running tests via the IDE, and it also spits out JUnit 
XML. There are also adapters available for QUnit and Jasmine that allow them 
to run in this environment. This might be a compelling place to start.

There's some overlap between these two approaches that we could exploit. At 
build time, we could use js-test-driver to manage the generation of the driver 
HTML page and results XML, and use our “DOM runtime” machinery to point 
browsers at the js-test-driver server.
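
Wiring that up might look something like this (very much a sketch; the task
types, properties and runtime factory methods are all invented):

    task startJsTestDriver(type: JsTestDriverServer) {
        port = 9876
    }

    task testJs(type: JsTestDriverTests) {
        dependsOn startJsTestDriver
        // Each runtime is a “DOM runtime” impl pointed at the server's page.
        runtimes = [webDriver('firefox'), phantomJs(), rhinoWithEnvJs()]
        testResultsDir = file("$buildDir/test-results/js")  // JUnit XML lands here
    }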
 

>> I think the best we could do is provide some abstractions around JavaScript 
>> runtimes and JavaScript + HTML runtimes. This would allow different focussed 
>> JavaScript testing plugins to code against these abstractions. At the 
>> moment, most of the existing javascript testing plugins couple a testing 
>> framework with a runtime. This is something users will want to control. They 
>> may even want to use multiple, e.g. run their JS unit tests in IE, then 
>> FireFox etc.
>> 
>> No actions to take at this stage, just sharing my thoughts so far.
> 
> I think there's a huge overlap with javascript and c++. For everything you've 
> written above, you could replace 'javascript' with 'c++' and still make 
> sense. And I think whatever we do in the javascript space will benefit our 
> c++ support, and vice versa.

You're dead right here. I didn't see it initially.

> It all boils down to 1) no obvious conventions, so we either need to choose 
> one, or abstract over several, and 2) much of the Gradle model is 
> jvm-centric, even where it's abstract, so we need to detangle the model from 
> the jvm, and model more stuff. In particular, the 'build my jar but pretend 
> we're general-purpose' view of the world that our dependency model has, has 
> to go.

-- 
Luke Daley
Principal Engineer, Gradleware 
http://gradleware.com
