Re: [websockets] Test results available

2015-03-26 Thread James Graham
On 26/03/15 15:37, Olli Pettay wrote:

> websockets/interfaces.html  the test itself has bugs (uses old
> idlharness.js?).
> 
> Also websockets/interfaces/WebSocket/events/013.html is buggy. Seems to
> rely on blink/presto's EventHandler behavior, which is not
> what the spec says should happen.

If you are inclined to fix these you can either do it in GitHub or in
mozilla-inbound, from where the changes will get upstreamed.




Re: Test runner now available in web-platform-tests

2014-01-13 Thread James Graham

Apologies: I meant to set a reply-to header to prevent fragmentation.



Test runner now available in web-platform-tests

2014-01-13 Thread James Graham
A simple in-browser test runner is now available in the 
web-platform-tests repository. This will automatically run 
testharness.js tests and provides UI for manually marking the results of 
reftests and manual tests. This runner is designed to be helpful when 
developing tests and implementations, allowing all tests, or a subset of 
tests, to be run with minimal labour, and to be helpful in comparing 
various implementations.


To facilitate the latter use case, the runner can produce JSON output 
from the test run, and there is a simple command line script for taking 
this output from one or more browsers and producing an implementation 
report (this script could be ported to run entirely in-browser).


I suggest that henceforth we stop compiling implementation reports by 
hand on wiki pages and instead ask implementors to provide the JSON 
output from their implementation. This could be from the runner in the 
repository or from implementor-specific test harnesses used in 
automation. In the longer term we should look to get a vendor-controlled 
URL at which the latest test result data for each implementation is 
published. Given this we will be able to automatically collate a test 
report from the latest available data.


In order to use the in-repository runner it is necessary to:

a) Set up the latest web-platform-tests checkout, following the 
instructions in README.md to install all the relevant submodules and set 
up the hosts.


b) Generate a test manifest using

python tools/scripts/manifest.py MANIFEST.json

from the web-platform-tests root (this step must be repeated whenever 
new tests are added).


c) Start the local server using

python serve.py

d) Navigate to

http://web-platform.test:8000/tools/runner/index.html

This is all documented in the README.md file in the web-platform-tests 
repository.




Re: [testing] Common way to "manage" test bugs?

2013-12-19 Thread James Graham

On 19/12/13 16:09, Domenic Denicola wrote:

I would encourage use of GitHub for greater developer involvement. I
think elsewhere in the thread it's been covered how to adapt GitHub
issues to solve the potential problems you mention, so hopefully it's
sufficient technically for your needs. Just wanted to give a voice to
that perspective, if it comes down to a matter of preference.


This discussion has forked. Can we have it on public-test-infra, since 
it's a pretty clear cross-group concern related to testing?


(BTW I have also advocated GH there, even though the issue tracker sucks).



Re: Refactoring SharedWorkers out of Web Workers W3C spec

2013-12-16 Thread James Graham

On 16/12/13 16:43, Arthur Barstow wrote:

On 12/16/13 11:20 AM, ext James Graham wrote:

On 12/12/13 16:20, James Graham wrote:

On 12/12/13 15:13, Boris Zbarsky wrote:

On 12/11/13 8:42 AM, Arthur Barstow wrote:

[IR] <http://www.w3.org/wiki/Webapps/Interop/WebWorkers>


Looking at this link, there are passes marked for obviously incorrect
tests (e.g. see https://www.w3.org/Bugs/Public/show_bug.cgi?id=24077
which says that
http://w3c-test.org/web-platform-tests/master/workers/interfaces/DedicatedWorkerGlobalScope/postMessage/second-argument-null.html


should fail in any conformant UA, but it's marked as passing in Opera
and Chrome.

So presumably we will need to rerun the tests in all UAs again once all
the bugs have been fixed, yes?


Yes. I have found another couple of trivial bugs in the tests which I
will fix up. I will also have a go at fixing Ms2ger's test runner to
work in a better way, sort out some way to automate the visual output,
and hopefully we can generate a new implementation report with minimal
effort.


So, I made a sample implementation report [1] using an in-browser test
runner based on Ms2ger's earlier work (see public-test-infra for more
details). The browsers are those that happened to be on my computer. I
don't intend for anyone to take these results as authoritative, and
more work is needed, but it is much better than editing a wiki, and it
has revealed yet more bugs in the tests.

In time we can use this approach in collaboration with vendors to
fully automate generating implementation reports.

[1] http://hoppipolla.co.uk/410/workers.html


James - this is excellent!

Did you run the tests via <http://www.w3c-test.org/testrunner/workers/>?
What would it take to include Travis's IE results?


No, this is based on a new-ish tool that itself depends on the 
self-hosted-tests changes [1].


If Travis can make the results available in the same JSON format the 
tool uses then we can incorporate them directly; having a common, 
machine-writable format is the essential point of this work. However I 
would suggest that he waits until we fix the broken tests and land the 
self-hosted-tests changes and test runner / report generator. If people 
are interested in speeding this process up, the most valuable thing 
they can do is help finish the review at [1].


[1] https://critic.hoppipolla.co.uk/r/368




Re: Refactoring SharedWorkers out of Web Workers W3C spec

2013-12-16 Thread James Graham

On 12/12/13 16:20, James Graham wrote:

On 12/12/13 15:13, Boris Zbarsky wrote:

On 12/11/13 8:42 AM, Arthur Barstow wrote:

[IR] <http://www.w3.org/wiki/Webapps/Interop/WebWorkers>


Looking at this link, there are passes marked for obviously incorrect
tests (e.g. see https://www.w3.org/Bugs/Public/show_bug.cgi?id=24077
which says that
http://w3c-test.org/web-platform-tests/master/workers/interfaces/DedicatedWorkerGlobalScope/postMessage/second-argument-null.html

should fail in any conformant UA, but it's marked as passing in Opera
and Chrome.

So presumably we will need to rerun the tests in all UAs again once all
the bugs have been fixed, yes?


Yes. I have found another couple of trivial bugs in the tests which I
will fix up. I will also have a go at fixing Ms2ger's test runner to
work in a better way, sort out some way to automate the visual output,
and hopefully we can generate a new implementation report with minimal
effort.


So, I made a sample implementation report [1] using an in-browser test 
runner based on Ms2ger's earlier work (see public-test-infra for more 
details). The browsers are those that happened to be on my computer. I 
don't intend for anyone to take these results as authoritative, and more 
work is needed, but it is much better than editing a wiki, and it has 
revealed yet more bugs in the tests.


In time we can use this approach in collaboration with vendors to fully 
automate generating implementation reports.


[1] http://hoppipolla.co.uk/410/workers.html




Re: Refactoring SharedWorkers out of Web Workers W3C spec

2013-12-12 Thread James Graham

On 12/12/13 15:13, Boris Zbarsky wrote:

On 12/11/13 8:42 AM, Arthur Barstow wrote:

[IR] <http://www.w3.org/wiki/Webapps/Interop/WebWorkers>


Looking at this link, there are passes marked for obviously incorrect
tests (e.g. see https://www.w3.org/Bugs/Public/show_bug.cgi?id=24077
which says that
http://w3c-test.org/web-platform-tests/master/workers/interfaces/DedicatedWorkerGlobalScope/postMessage/second-argument-null.html
should fail in any conformant UA, but it's marked as passing in Opera
and Chrome.

So presumably we will need to rerun the tests in all UAs again once all
the bugs have been fixed, yes?


Yes. I have found another couple of trivial bugs in the tests which I 
will fix up. I will also have a go at fixing Ms2ger's test runner to 
work in a better way, sort out some way to automate the visual output, 
and hopefully we can generate a new implementation report with minimal 
effort.





Re: Refactoring SharedWorkers out of Web Workers W3C spec

2013-12-12 Thread James Graham

Redirecting this conversation to public-test-infra.

On 12/12/13 13:01, Arthur Barstow wrote:

On 12/12/13 7:31 AM, ext Simon Pieters wrote:



First I ran the tests using
https://bitbucket.org/ms2ger/test-runner/src on a local server, but
then I couldn't think of a straight-forward way to put the results in
the wiki so I just ran the tests manually, too. :-( Since most tests
are automated it's silly to run them manually and edit a wiki page. Is
there a better way?


Re automated running, there is 
but I think it is considered obsolete (and isn't maintained). Test
automation is/was on Tobie's ToDo list. I'll followup separately about
the status on public-test-infra.


Ms2ger has a simple in-browser runner which we could adapt to use a 
top-level browsing context rather than an iframe, and to use the 
manifest file generated by the script in review at [1].



(Re using a wiki for the implementation report, to produce the Web
Messaging and Web Sockets implementation reports, I created a script
that merges tests results from individual runs and outputs the wiki
table syntax.)


Yeah, so I foresee this taking longer to output than to actually do the 
run (which I can fully automate for gecko). We should agree on a simple 
format that can be produced by any kind of automated runner and make a 
tool that can turn that format into an implementation report. Something like


[{test_id:string|list, status:string, subtests:[{name:string, 
status:string}]}]


Seems like it would work fine. The test id would either be the url to 
the top-level test file or the list [test_url, cmp, ref_url] for 
reftests. The harness status would be something like OK|TIMEOUT|ERROR 
and the subtest statuses would be something like PASS|FAIL|TIMEOUT|NOTRUN.
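
For concreteness, a run's output under this scheme might look something 
like the following (test names and statuses invented for illustration):

[{"test_id": "/workers/interfaces/WorkerGlobalScope.html",
  "status": "OK",
  "subtests": [{"name": "WorkerGlobalScope exists", "status": "PASS"},
               {"name": "self is a WorkerGlobalScope", "status": "FAIL"}]},
 {"test_id": ["/css/box-001.html", "==", "/css/box-001-ref.html"],
  "status": "OK",
  "subtests": []}]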


If we do something like this I can likely organise for such output to be 
automatically generated for every testrun on gecko, so producing an 
implementation report for any feature would just be a matter of 
importing the data from the latest nightly build.


[1] https://critic.hoppipolla.co.uk/r/440



Re: Refactoring SharedWorkers out of Web Workers W3C spec

2013-12-10 Thread James Graham

On 10/12/13 21:09, Jonas Sicking wrote:

We at Mozilla just finished our implementation of Shared Workers. It
will be turned on in the nightly releases starting tomorrow (or maybe
thursday) and will hit release on April 29th.

So if the only reason we're doing anything here is lack of a 2nd
implementation, then we might already be good.

That said, I don't know what the test suite status is etc, so I'm
totally fine with punting Shared Workers for now.


There are certainly some tests for Shared Workers (see [1] and 
subdirectories), although of course it is always possible to have more 
tests.


[1] https://github.com/w3c/web-platform-tests/tree/master/workers




Need review for changes to some html/webapps tests

2013-11-04 Thread James Graham
As part of the work to make the testsuite self-hosting, I have made 
changes to a number of testsuites to remove the dependence on PHP and 
eliminate hardcoded server names. These changes touch a variety of 
testsuites that I don't own and need careful review. If possible I would 
like to complete this work before TestTWF/Shenzhen, which is this 
Saturday. Therefore, if you are able to review any of the remaining 
unreviewed changes in the following directories, I would sincerely 
appreciate your efforts:


XMLHttpRequest/
shared-worker/
old-tests/submission/Infraware/Offline_Application_Cache/
old-tests/submission/Microsoft/async/
old-tests/submission/Opera/dnd/
old-tests/submission/Opera/preload/
old-tests/submission/Opera/script_scheduling/

(note in case it comes up: "old-tests" here is a signifier that these 
testsuites have not been fully reviewed and moved to their permanent 
home; not that they are outdated or low value. The slow progress on this 
is another problem we have to solve.)


The review is at [1]. I am around to provide any help you need with the 
mechanics of the review process, or to answer technical questions about 
the changes, so please don't hesitate to get in touch.


[1] https://critic.hoppipolla.co.uk/r/368



Re: [webcomponents] HTML Imports

2013-10-07 Thread James Graham

On 06/10/13 17:25, Dimitri Glazkov wrote:


And, if the script is executed against the global/window object of
the main document, can and should you be able to access the imported
document?


You can and you should. HTML Imports are effectively #include for the Web.


Yes, that sounds like a good description of the problem :) It is rather 
noticeable that no one making programming languages today replicates the 
#include mechanism, and I think html-imports has some of the same design 
flaws that make #include unpopular.


I think authors will find it very hard to write code in an environment 
where simple functions like document.getElementById don't actually work 
on the document containing the script, but on some other document that 
they can't see. It also seems that the design requires you to be super 
careful about having side effects; if the author happens to have a 
non-idempotent action in a document that is imported, then things will 
break in the relatively uncommon case where a single document is 
imported more than once.


Overall it feels like html imports has been designed as an over-general 
mechanism to address certain narrow use cases and, in so doing, has 
handed authors a footgun. Whilst I don't doubt it is usable by the 
highly competent people who are working at the bleeding edge on 
polyfilling components, the rest of the population can't be expected to 
understand the implementation details that seem to have led the design in 
this direction. I think it would be useful to go right back to use cases 
here and work out if we can't design something better.




Re: Reminder: TTWF-Shenzhen Nov 9, F2F Meeting Nov 11-12, TP Meeting Nov 13

2013-09-17 Thread James Graham

On 13/09/13 14:29, Arthur Barstow wrote:

Hi All - three threads about TPAC 2013 and WebApps' November 11-12 in
Shenzhen.

1. WebApps meeting November 11-12:

* You Must register for the meeting


* WebApps meeting page
. Input on the
agenda is of course welcome (but feel free to directly edit this page).
I have  been thinking about setting aside some portion of Tuesday Nov 12
for test case writing so any comments on that idea are welcome (perhaps
all of Tuesday afternoon).


FWIW I think the most productive use of this "testing" time would be to 
spend some time getting the group up to speed on how testcase review 
works and then trying to clear some of the review backlog. The members 
of the WG are exactly the people who are best placed to review test 
submissions, but at the moment this is not happening and it is causing 
problems. Of course I will be happy if people want to spend the time 
writing tests rather than reviewing them.




Re: Making selectors first-class citizens

2013-09-11 Thread James Graham

On 11/09/13 15:50, Brian Kardell wrote:


Yes, to be clear, that is what i meant. If it is in a draft and
widely/compatibly implemented and deployed in released browsers not
behind a flag - people are using it.


If people are using a prefixed — i.e. proprietary — API there is no 
requirement that a standard is developed and shipped for that API. It's 
then up to the individual vendor to decide whether to drop their 
proprietary feature or not.





Critic [was: Re: [admin] Testing and GitHub login names]

2013-04-23 Thread James Graham

On 04/23/2013 08:43 AM, Robin Berjon wrote:

On 22/04/2013 13:12 , James Graham wrote:

(as an aside, I note that critic does a much better job here. It allows
reviewers to mark when they have completed reviewing each file in each
commit. It also records exactly how each issue raised was resolved,
either by the commit that fixed it or by the person that decided to mark
the issue as resolved)


You may wish to introduce Critic a bit more than that; I'm pretty sure
that many of the bystanders in this conversation aren't conversant with it.


Right, I lost track of which list this conversation was on :)


"Opera Critic", commonly known as "Critic" is a code review system 
written by Jens Lindström to solve the major pain points that we had 
experienced with all other code review systems we had tried at Opera. 
Critic is written to work with git, and has the following workflow:


* Each set of changes to be reviewed is a single branch containing one 
or more commits
* A set of possible reviewers for each file changed is assigned based on 
preconfigured, path-based, filters
* The reviewer(s) review the commits, raising issues in general, or 
against specific lines, marking each file as "reviewed" once they have 
finished commenting on it (irrespective of whether it has issues).
* The submitter (or anyone else) pushes additional commits to the same 
branch in order to address the issues flagged by the reviewer.
* Critic automatically resolves issues where the new commits have 
altered code that previously had an issue
* The reviewer reviews the new commit, adding new issues or reopening 
old issues that were incorrectly marked as resolved.
* In case clarification is needed, the reviewer and author add 
additional comments to the various issues. If it turns out that the 
issue is not a problem, or that it was fixed in some way undetected by 
critic, it is manually marked as resolved.
* Finally all changes are marked as "reviewed" and there are 0 issues 
remaining so the code is automatically marked as "Accepted".
* Someone (typically the author, but it could also be the reviewer) 
proceeds to integrate the review branch with master in the normal way.


As you see this has a great deal in common with the github pull request 
process. Pull requests are branches, to which the author will push more 
commits in order to respond to issues. Compared to the github experience 
critic offers a number of significant advantages:


* Automatic assignment of reviewers to changes based on 
reviewer-supplied filters (so e.g. an XHR test facilitator could set 
themselves up to review only XHR-related tests without also getting 
email related to 2dcontext tests).
* The ability to review multiple commits squashed together into a single 
diff.
* Tracking of which files in which commits have already received review 
and which still require review.
* A record of which issues are still to be addressed and which have been 
resolved, and how.
* A significantly better UI for doing the actual review, notably 
including a side-by-side view of the before/after code (with changes 
highlighted of course) so that one can read the final code without 
having to mentally apply a diff.
* Less email (for some reason github think it's OK to send one email per 
comment on the review, whereas critic allows comments to be sent in 
batches).


Noticing the similarity with the github workflow, I have extended critic 
to allow it to integrate with github. In particular I added two features:


* Critic authentication against github using OAuth.
* Integration with pull requests so that new pull requests create new 
review requests, pushes to pull requests are mirrored in the 
corresponding review and closing pull requests closes the review.


I have set up a critic instance for the web-platform-tests repository at 
[1] (you have to sign in using your github credentials). So far my 
experience is that it makes it possible to review changes that are 
essentially unreviewable on github with a minimum of accidental 
complexity (e.g. [2]; yes that review still has a lot of work to be done).


There is a certain amount of controversy around using critic for 
web-platform-tests, with some people taking the position that we should 
exclusively use the undoubtedly weaker, but more familiar, github tools 
to make things easier for new contributors. I consider that review has 
so much intrinsic difficulty that we should explore all the options we 
have for making it easier, especially given the particular dynamics of 
test reviews. Testsuites are often developed internally and then 
submitted to a standards body at the last moment, so they are commonly 
large code dumps rather than small changes that can be reviewed in a 
single sitting. This is the situation in which the github tools become 
almost useless. Also, even in an ideal situation with many contributors, 
I would expect test submissions t

Re: [admin] Testing and GitHub login names

2013-04-23 Thread James Graham

On 04/23/2013 08:43 AM, Robin Berjon wrote:

On 22/04/2013 13:12 , James Graham wrote:

On Mon, 22 Apr 2013, Arthur Barstow wrote:

The only thing that we ask is that pull requests not be merged by
whoever made the request.


Is this to prevent the `fox guarding the chicken coop`, so to speak?

If a test facilitator submits tests (i.e. makes a PR) and everyone
that reviews them says they are OK, it seems like the facilitator
should be able to do the merge.


Yes, my view is that Robin is trying to enforce the wrong condition
here.


No, I'm just operating under different assumptions. As I said before, if
someone wants to review without having push/merge powers, it's perfectly
okay. I don't even think we need a convention for it (at this point). I
do however consider that this is an open project, so that whoever
reviews tests can be granted push/merge power.

Why? Because the alternative is this: you get an "accepted" comment from
someone on a PR. Either you trust that person, in which case she could
have merge powers; or you don't, in which case you have to review the
review to check that it's okay. Either way, we're better off making that
decision at the capability assignment level since it only happens once
per person.


FWIW I'm used to a situation in which the opposite approach is generally 
taken; that is, a reviewer is responsible for reviewing, but the 
submitter is responsible for doing the final integration of their 
changes. This has several advantages; it is not uncommon for code to go 
through review only for the submitter themselves to realise that there 
was an uncaught mistake or a piece missing. This prevents the overhead 
of a second review/test cycle just to fix up such an error. It also means 
that the submitter is very clear about what has been integrated and what 
has not.



Indeed, there are currently 41 open pull requests and that number is not
decreasing. Getting more help with the reviewing is essential. But
that's a Hard Problem because reviewing is both difficult and boring.


I would qualify that statement. If you're already pretty good with web
standards and you wish to improve your understanding to top levels (and
gain respect from your peers), this is actually a really good thing to
work on. Or if you're implementing, it's likely a little bit less work
to review than to write from scratch (and it can make you aware of
corner cases or problems you hadn't thought of). Put differently, I
think it can be a lot less boring if you're getting something out of it.


Oh yeah, there are theoretically good reasons that people might want to 
spend time on doing test review. But as yet that doesn't seem to be 
enough to get people actually doing it; so far there have been a handful 
of people reviewing tests (I count 6 in total). Clearly we need to do 
something better here.




Re: [admin] Testing and GitHub login names

2013-04-22 Thread James Graham


On Mon, 22 Apr 2013, Arthur Barstow wrote:

The only thing that we ask is that pull requests not be merged by whoever 
made the request. 


Is this to prevent the `fox guarding the chicken coop`, so to speak?

If a test facilitator submits tests (i.e. makes a PR) and everyone that 
reviews them says they are OK, it seems like the facilitator should be able 
to do the merge.


Yes, my view is that Robin is trying to enforce the wrong condition here. 
The problem isn't with people merging their own changes; it's with 
unreviewed changes being merged. Unfortunately github doesn't naturally 
provide any way to track progress of a review and therefore there isn't 
any way to tell that review is complete.


Just to signal the end of the review we could adopt some convention like 
leaving a comment "Accepted" to indicate that the reviewer believes that 
all commits have been fully reviewed and there are no further issues to be 
resolved.


(as an aside, I note that critic does a much better job here. It allows 
reviewers to mark when they have completed reviewing each file in each 
commit. It also records exactly how each issue raised was resolved, either 
by the commit that fixed it or by the person that decided to mark the 
issue as resolved)



So anyone with a GitHub account is already 100% set up to contribute.

If you *do* wish to help with the reviewing and organisation effort, you're 
more than welcome to and I'll be happy to add you. I just wanted to make 
sure that people realise there's zero overhead for regular contributions.


Indeed, there are currently 41 open pull requests and that number is not 
decreasing. Getting more help with the reviewing is essential. But that's 
a Hard Problem because reviewing is both difficult and boring.




Re: Clipboard API: Stripping script element

2013-03-28 Thread James Graham

On 03/28/2013 12:34 PM, Hallvord Reiar Michaelsen Steen wrote:

On 03/28/2013 10:36 AM, Hallvord Reiar Michaelsen Steen wrote:

In particular, WebKit has been stripping script element from
the pasted content but this may have some side effects on CSS
rules.]



AFAIK (without re-testing right now), WebKit's implementation
is: * rich text content that is pasted into a page without JS
handling it is sanitized (SCRIPT, javascript: links etc removed)
* a paste event listener that calls getData('text/html') will get
the full, pre-sanitized source


If that's correct I can add a short description of this to the
spec, in the informative section.





Why would this be informative?



Mainly because it seems like spec'ing it is a bit out of scope for
this spec - I'm trying to spec how clipboard events should work as
seen from the JS side. Implementation details like how data is pasted
when there is no JS or event handling involved don't seem to belong
here, and IMO the interop issues are far-fetched (though the XSS
risks aren't).


I don't see why the interop issues are particularly far-fetched. The 
approach of not fixing problems in spec A because they "ought" to be 
addressed in some other hypothetical spec B is something we have tried 
before, and it hasn't worked well yet, so I don't think we should do it 
again here. As the Python doctrine goes, "practicality beats purity".





Re: Fixing appcache: a proposal to get us started

2013-03-26 Thread James Graham

On 03/26/2013 08:02 AM, Jonas Sicking wrote:


Another "feature" that we are proposing is to drop the current
manifest format and instead use a JSON based one. The most simple
reason for this is that we noticed that the information we need to
express quickly became complex enough that using a format with simple
parsing rules was beneficial.

A format based on extending the current appcache format would be no
problem for a UA to parse. However the complexity that we need to
express resulted in something that's too hard for a human to manually
write, or for a human to understand when looking at somebody else's
manifest in order to learn.

The simple parsing rules for JSON seemed like a better fit. It also
provides more of an opportunity to extend the format in the future.
JSON also has advantages when it comes to creating APIs exposed to
webpages for interacting with appcaches. More about this below.


Some slightly trivial feedback: I am worried about using a format with 
no support for comments. I agree that some hypothetical JSON+comments 
format would be a good fit, but without the ability to document complex 
rulesets, it seems like we are going to create a maintenance nightmare.




Re: Beacon API

2013-02-20 Thread James Graham

On 02/20/2013 08:24 AM, Reitbauer, Alois wrote:

My personal experience is different. We found that using img tags is not
that reliable. Especially in Firefox we recently saw some problems. Img
tags in general have the disadvantage that the amount of data that can
be sent is rather limited. While this obviously should be kept as small
as possible, the information available via resource timing will increase
the amount of data that gets sent.

Is there a way we can integrate this into a W3C test suite to check how
different browsers behave in this case?


Yes, tests for behaviour around navigation and unload should go in the 
HTML testsuite[1]. There are some guidelines for writing tests at [2], 
but they appear in need of an update to reflect the fact that we now use 
git[hub] rather than mercurial.


If you have any questions, please ask, either on the 
public-html-testsuite list, or in the #whatwg channel on Freenode or on 
the #testing channel on the W3C IRC server.


[1] https://github.com/w3c/html-testsuite/
[2] http://www.w3.org/html/wg/wiki/Testing/Authoring/



Re: Proposal: moving tests to GitHub

2013-02-04 Thread James Graham

On 02/02/2013 12:50 AM, Tobie Langel wrote:

On 2/1/13 4:23 AM, "Arthur Barstow"  wrote:

One of things I wondering about is - after you leave your Fellow
position [BTW, that's totally wicked so congrats on that!], and Robin
has moved on to `greener pastures` and Odin has moved on to be CEO of
Opera - if/when there are problems with GH, who are we gonna' call? Hg,
despite its shortcomings, is backed by the W3C's crack SysTeam. Do we
get an equivalent service from GH?


In the broader context, git[hub] is very much the low-risk alternative. 
Whilst there are certainly people using mercurial, git is far more 
popular. Git skills are far more common in the marketplace, and far more 
common amongst people who are likely to join working groups. In addition 
the tooling support for git is much better; even Microsoft are adding 
git support to their tools these days ;) These things have a very real 
impact on us. For example I spent some time searching for a code review 
tool we could use with hg and came up blank. With git[hub] we get a 
mediocre one for free and the possibility of using a very good one once 
I add github API support.



If crowd-sourcing is part of our strategy to get more tests, (and the
testing meeting we had this week seems to imply it is), then moving to
GitHub is a requirement.


Yes, those are good points and I'm wondering if there really needs to be
a binary choice here or if there could be advantages to using both. For
example, set up a skeleton structure on GH and if/when tests are
submitted, the Test Facilitator could review them and copy the `good
ones` to Hg.


I have very large doubts about strategies that involve human intervention
to sync resources across different versioning systems.


Yes. Having only a skeleton setup on github would mean that people using 
that would be unable to fix existing tests. If a file changed on github 
that had already been copied across to hg, it would be an enormous pain 
to work out whether it is safe to port the changes. I think you would 
have to use a write/copy/delete workflow, which would be confusing as 
people wouldn't be able to find their contributions.


Having full repositories in both places is also fairly painful, but 
tolerable if the commits are in one place only (if you start allowing 
commits in both places, you can end up with a very difficult 
synchronisation problem).




Re: Proposal: moving tests to GitHub

2013-01-25 Thread James Graham

On 01/24/2013 07:22 PM, Odin Hørthe Omdal wrote:

Arthur Barstow wrote:

Before we start a CfC to change WebApps' agreed testing process
[Testing], please make a clear proposal regarding the submission
process, approval process, roles, etc. as is defined in [Testing] and
its references. (My preference is for you to document the new process,
expectations, etc. in WebApps' Public wiki, rooted at
).


I've written (well, copied and changed) a document at:

 http://www.w3.org/wiki/Webapps/Submitting_tests

It might not have everything required right now, but I think it's a good
start. :-)


FWIW that looks good to me. At risk of bikeshedding, I think that 
calling a repo with tests for non-HTML specs "html-testsuite" is 
confusing and will make the repository harder to find, especially since 
the people who are aware that html is not a catch-all term are also the 
people most likely to be writing tests. Some more generic name like 
"web-platform-testsuite" seems better.





Re: Proposal: moving tests to GitHub

2013-01-22 Thread James Graham

On 01/22/2013 12:37 PM, Anne van Kesteren wrote:

On Tue, Jan 22, 2013 at 12:30 PM, Tobie Langel  wrote:

That's definitely something to keep in mind. How frequent is it that a
feature moves from one spec to another (that, is outside of the continuous
flow of features that migrate from HTML5 to WebApps)?

Is your concern about history loss?


My concern is 1) make work and 2) overhead of deciding what has to be
tested where. E.g. tests for the ProgressEvent interface test quite a
bit of IDL related requirements, where do they go? From a distance it
might all appear modular, but it really is all quite interconnected
and by creating artificial boundaries that interconnectedness might
not get testing.


I don't have a strong opinion either way here, but I note that it is 
generally possible to move things between repos relatively easily 
without losing history e.g. using git subtree[1], so I don't think that 
it is necessary to make a decision on this before moving to git.


With that in mind, I suggest doing the move with the repository 
structure more or less as it is today and then later merging the repos 
if we decide that is desirable.


[1] https://github.com/apenwarr/git-subtree/blob/master/git-subtree.txt
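
As a sketch of how that might work (directory, path, and branch names 
purely illustrative), history for one directory can be split onto its 
own branch and pulled into a fresh repository:

   git subtree split --prefix=workers -b workers-only
   cd ../new-repo
   git pull ../old-repo workers-only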




Re: Moving File API: Directories and System API to Note track?

2012-09-25 Thread James Graham


On Tue, 25 Sep 2012, Brendan Eich wrote:


Maciej Stachowiak wrote:

On Sep 22, 2012, at 9:35 PM, Maciej Stachowiak  wrote:


On Sep 22, 2012, at 8:18 PM, Brendan Eich  wrote:


And two of the interfaces are generic and reusable in other contexts.
Nice, and DOMRequest predates yours -- should it be done separately since 
(I believe) it is being used by other proposals unrelated to 
FileSystem-like ones?


Sorry if I missed it and it's already being split out.
Yes, I borrowed DOMRequest. I think DOMRequest and DOMMultiRequest could 
be a separate spec, if that sort of asynchronous response pattern is 
generally useful. And it seems like it might be. That would leave only two 
interfaces specific to the Minimal File System proposal, Directory and 
FileHandle.


Here's an alternate version where I renamed some things to match Filesystem 
API and FileWriter, and added the missing key feature of getting a 
persistent URL for a file in a local filesystem (extending the URL 
interface). It's still a much simpler API that provides most of the same 
functionality.


https://trac.webkit.org/wiki/MinimalFileStorageAlternate


Even better.

What's the next step? I would hope we can make device storage be this, but 
that's just old Unix-hacker me (yes, no pathnames, you have to namei by 
calling get for each component -- that's a feature).


*Personally* I like this proposal more than the original Google one, but 
I still have concerns. Every operation requiring a callback seems likely 
to lead to a "pyramid of doom" and unmaintainable code. Since this seems 
to be an increasing problem with certain classes of JS APIs (the SQL API 
had similar issues and I wouldn't be surprised if IndexedDB has too), it 
seems like we should work out some solution, either at the language level 
or at the DOM level, for managing this complexity rather than adding new 
features that require it. Obviously workers and a sync API are one option, 
but having to keep a worker around just to do storage operations also 
seems rather burdensome.
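
To illustrate the "pyramid of doom" concern, a hypothetical 
callback-per-operation storage API (all names invented) ends up nested 
like this:

   // Hypothetical API: every method takes success and error callbacks.
   storage.getDirectory("photos", function (dir) {
       dir.getFile("cat.jpg", function (file) {
           file.read(function (data) {
               show(data);  // each step buries the next a level deeper
           }, onError);
       }, onError);
   }, onError);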


In addition, this would be the fourth storage API that we have tried to 
introduce to the platform in 5 years (localStorage, WebSQL, IndexedDB 
being the other three), and the fifth in total. Of the four APIs excluding 
this one, one has failed over interoperability concerns (WebSQL), one has 
significant performance issues and is discouraged from production use 
(localStorage) and one suffers from significant problems due to its 
legacy design (cookies). The remaining API (IndexedDB) has not yet 
achieved widespread use. It seems to me that we don't have a great track 
record in this area, and rushing to add yet another API probably isn't 
wise. I would rather see JS-level implementations of a filesystem-like API 
on top of IndexedDB in order to work out the kinks without creating a 
legacy that has to be maintained for back-compat than native 
implementations at this time.




Paths exposed in input type=file (was: Re: Moving File API: Directories and System API to Note track?)

2012-09-21 Thread James Graham

On 09/20/2012 11:45 PM, Darin Fisher wrote:


File path information is already exposed via <input type=file>.

"File names may contain partial paths."
http://www.whatwg.org/specs/web-apps/current-work/multipage/states-of-the-type-attribute.html#concept-input-type-file-selected


I couldn't get any actual browser to expose paths in this way. Did I 
miss something? It also seems to be encouraging wilful violation of the 
FileAPI spec which clearly states that File objects don't return path 
info [1].


Having a may-level conformance criterion that purports to override a 
must-level criterion in another spec, but no strong compat. requirement, 
doesn't seem like the optimum path to interoperability or sanity of 
people trying to understand and implement the platform. Is there any 
reason this paragraph shouldn't be considered a bug in HTML?


[1] http://dev.w3.org/2006/webapi/FileAPI/#dfn-file




Re: Moving File API: Directories and System API to Note track?

2012-09-19 Thread James Graham



On Wed, 19 Sep 2012, Adam Barth wrote:


On Wed, Sep 19, 2012 at 1:46 PM, James Graham  wrote:

On Wed, 19 Sep 2012, Edward O'Connor wrote:

Olli wrote:

I think we should discuss about moving File API: Directories and
System API from Recommendation track to Note.


Sounds good to me.


Indeed. We are not enthusiastic about implementing an API that has to
traverse directory trees as this has significant technical challenges, or
may expose users' path names, as this has security implications. Also AIUI
this API is not a good fit for all platforms.


There's nothing in the spec that exposes user paths.  That's just FUD.


I was thinking specifically of the combination of this API and Drag and 
Drop. I assumed that at some level one would end up with a bunch of Entry 
objects which seem to expose a path. It then seems that a user who is 
tricked into dragging their root drive onto a webapp would expose all 
their paths.


It is quite possible that this is a horrible misunderstanding of the spec, 
and if so I apologise. Nevertheless I think it's poor form to immediately 
characterise an error as a deliberate attempt to spread lies.


In any case my central point remains, which is that I would support this 
spec moving off the Rec. track at this time.




Re: Moving File API: Directories and System API to Note track?

2012-09-19 Thread James Graham

On Wed, 19 Sep 2012, Edward O'Connor wrote:


Hi,

Olli wrote:


I think we should discuss about moving File API: Directories and
System API from Recommendation track to Note.


Sounds good to me.



Indeed. We are not enthusiastic about implementing an API that has to 
traverse directory trees as this has significant technical challenges, or 
may expose users' path names, as this has security implications. Also AIUI 
this API is not a good fit for all platforms.




Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-06 Thread James Graham

On 07/06/2012 02:01 AM, Ryosuke Niwa wrote:

On Thu, Jul 5, 2012 at 4:27 PM, Ojan Vafai  wrote:



In your version, you need to remember the order of the arguments, which
requires you looking it up each time. If we do decide to add the
DOMTransaction constructor back, we should keep passing it a dictionary as
its argument. Or maybe take the label and a dictionary as arguments.



That's true of almost all other Web APIs when used in ECMAScript 5. I'm not
sympathetic to the argument that you have to remember orders in which
arguments appear because that's true of all functions in ES5, C++, and
various other programming languages.


That just isn't true. Many web APIs solve the lack of named arguments in 
javascript by accepting an object with arguments. For example both 
jQuery and prototype use this pattern in their XHR APIs. Insofar as it 
doesn't follow this pattern, DOM is very much the odd one out. But in 
fact we already realise that having many positional arguments is a bad 
pattern, hence dictionary types.




Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-05 Thread James Graham

On Thu, 5 Jul 2012, Ryosuke Niwa wrote:

On Thu, Jul 5, 2012 at 8:08 AM, James Graham  wrote:

  On 07/05/2012 12:38 AM, Ryosuke Niwa wrote:
  After this change, authors can write:
  scope.undoManager.transact(new AutomaticDOMTransaction(function () {
       scope.appendChild("foo");
  }, 'append "foo"'));


[...]


  document.undoManager.transact(new DOMTransaction(function () {
           // Draw a line on canvas
       }, function () {
           // Undraw a line
       }, function () { this.execute(); },
       'Draw a line'
  ));


I think this proposed API looks particularly verbose and ugly. I thought we 
wanted to make new APIs more author-friendly and less like refugees from 
Java-land.



What makes you say so? If anything, you don't have to have labels like execute, 
undo, and redo. So it's less verbose. If you
don't like the long name like AutomaticDOMTransaction, we can come up with a 
shorter name.


I think having to call a particular constructor for an object that is just 
passed straight into a DOM function is verbose, and difficult to 
internalise (because you have to remember the constructor name and case 
and so on). I think the design with three positional arguments is much 
harder to read than the design with three explicitly "named arguments" 
implemented as object properties.



Also, I think consistency matters a lot here. I'm not aware of any other 
Web-facing API that takes a pure object with
callback functions.


I think Olli mentioned some already, but the obvious example is event 
handlers which take a pure function or an object with a handleEvent 
property.


Passing in objects containing one or more non-callback properties is also 
an increasingly common pattern, and we are trying to replace legacy APIs 
that took lots of positional arguments with options-object based 
replacements (e.g. init*Event). From the point of view of a javascript 
author there is no difference between something like {foo:true} and 
{foo:function(){}}. Insisting that there should be a difference in DOM 
APIs because of low-level implementation concerns is doing a disservice to 
web authors by increasing the impedance mismatch between the DOM and 
javascript.
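
Both patterns are already live on the platform; roughly (listener body 
invented for the example):

   // An object with a handleEvent property is a valid event listener.
   node.addEventListener("click", {
       handleEvent: function (e) {
           // respond to the click
       }
   }, false);

   // Dictionary-style construction replacing positional init*Event calls.
   var ev = new CustomEvent("change", {bubbles: true, cancelable: false});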



On Thu, Jul 5, 2012 at 11:07 AM, Olli Pettay  wrote:

  We shouldn't change the UndoManager API because of implementation issues, 
but if event based API ends up being
  better.



I don't think it's reasonable to agree on an unimplementable design. In theory, 
mutation events can be implemented correctly
but we couldn't, so we're moving on and getting rid of it.


That's true, but based on the content of this thread, not relevant to the 
issue at hand.


The current design is not "unimplementable", it's just slightly more work 
in WebKit than you would like. I don't think it's reasonable to reject 
good designs in favour of worse designs simply because the better design 
isn't a perfect fit for a single implementation; from time to time we all 
have to make larger changes to accommodate use cases that weren't 
considered when architecting our code (beforeunload in Opera is a case in 
point).

Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-05 Thread James Graham

On 07/05/2012 12:38 AM, Ryosuke Niwa wrote:

Hi all,

Sukolsak has been implementing the Undo Manager API in WebKit but the fact
undoManager.transact() takes a pure JS object with callback functions is
making it very challenging.  The problem is that this object needs to be
kept alive by either JS reference or DOM but doesn't have a backing C++
object.  Also, as far as we've looked, there is no other specification
that uses the same mechanism.


I am given to understand that this kind of construct is not a big 
problem for Opera.



Since I want to make the API consistent with the rest of the platform and
the implementation maintainable in WebKit, I propose the following changes:

- Re-introduce DOMTransaction interface so that scripts can instantiate
new DOMTransaction().
- Introduce AutomaticDOMTransaction that inherits from DOMTransaction
and has a constructor that takes two arguments: a function and an optional
label

After this change, authors can write:
scope.undoManager.transact(new AutomaticDOMTransaction(function () {
 scope.appendChild("foo");
}, 'append "foo"'));

instead of:

scope.undoManager.transact({executeAutomatic: function () {
 scope.appendChild("foo");
}, label: 'append "foo"'});

And

document.undoManager.transact(new DOMTransaction(function () {
 // Draw a line on canvas
 }, function () {
 // Undraw a line
 }, function () { this.execute(); },
 'Draw a line'
));

instead of:

document.undoManager.transact({ execute: function () {
 // Draw a line on canvas
 }, undo: function () {
 // Undraw a line
 }, redo: function () { this.execute(); },
 label: 'Draw a line'
});



I think this proposed API looks particularly verbose and ugly. I thought 
we wanted to make new APIs more author-friendly and less like refugees 
from Java-land.


Changing the design here seems fine as long as it is not to something 
that is worse for authors; priority of constituencies suggests that we 
favour authors over implementers. I think this proposed design is worse. 
Perhaps an event based approach is not, but I would need to see the 
detailed proposal.





Re: [webcomponents] Template element parser changes => Proposal for adding DocumentFragment.innerHTML

2012-05-09 Thread James Graham

On 05/09/2012 10:16 AM, James Graham wrote:

On 05/09/2012 09:52 AM, Henri Sivonen wrote:

On Tue, Apr 24, 2012 at 6:39 AM, Rafael Weinstein
wrote:

What doesn't appear to be controversial is the parser changes which
would allow the template element to have arbitrary top-level content
elements.


It's not controversial as long as an HTML context is assumed. I think
it is still controversial for SVG and MathML elements that aren't
wrapped in an <svg> or <math> element.


I'd like to propose that we add DocumentFragment.innerHTML which
parses markup into elements without a context element.


Why should the programmer first create a document fragment and then
set a property on it? Why not introduce four methods on Document that
return a DocumentFragment: document.parseFragmentHTML (parses like
<body>.innerHTML), document.parseFragmentSVG (parses like
<svg>.innerHTML), document.parseFragmentMathML (parses like
<math>.innerHTML) and document.parseFragmentXML (parses like innerHTML
in the XML mode without namespace context)? This would avoid magic for
distinguishing HTML and SVG.


I think introducing four separate methodsWithLongNames on document is
not creating an API that authors will actually use. Instead it would
likely be wrapped in some less-verbose API with a single entry point and
library-specific magic and regexp to determine which entry point to use.
So I fear this solution may just be punting the problem to a higher
layer, where it will be more inconsistently solved.


By way of a concrete-strawman (or whatever it is one is supposed to say) 
proposal:


document.parse(string, ["auto"|"html"|"svg"|"mathml"|"xml"])

With "auto" being the default and doing magic, and the other options 
allowing one to disable the magic.
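
Usage of this strawman (to be clear, document.parse is a proposal here, 
not an existing API) would be along the lines of:

   var rows = document.parse("<tr><td>1</td></tr>", "html");
   var shape = document.parse("<circle r='10'/>", "svg");
   var anything = document.parse(markup);  // "auto": sniff the language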




Re: [webcomponents] Template element parser changes => Proposal for adding DocumentFragment.innerHTML

2012-05-09 Thread James Graham

On 05/09/2012 09:52 AM, Henri Sivonen wrote:

On Tue, Apr 24, 2012 at 6:39 AM, Rafael Weinstein  wrote:

What doesn't appear to be controversial is the parser changes which
would allow the template element to have arbitrary top-level content
elements.


It's not controversial as long as an HTML context is assumed.  I think
it is still controversial for SVG and MathML elements that aren't
wrapped in an <svg> or <math> element.


I'd like to propose that we add DocumentFragment.innerHTML which
parses markup into elements without a context element.


Why should the programmer first create a document fragment and then
set a property on it? Why not introduce four methods on Document that
return a DocumentFragment: document.parseFragmentHTML (parses like
<body>.innerHTML), document.parseFragmentSVG (parses like
<svg>.innerHTML), document.parseFragmentMathML (parses like
<math>.innerHTML) and document.parseFragmentXML (parses like innerHTML
in the XML mode without namespace context)? This would avoid magic for
distinguishing HTML and SVG.


I think introducing four separate methodsWithLongNames on document is 
not creating an API that authors will actually use. Instead it would 
likely be wrapped in some less-verbose API with a single entry point and 
library-specific magic and regexp to determine which entry point to use. 
So I fear this solution may just be punting the problem to a higher 
layer, where it will be more inconsistently solved.




Re: [webcomponents] HTML Parsing and the element

2012-04-24 Thread James Graham

On 04/24/2012 05:57 PM, Yuval Sadan wrote:


Placing contents as CDATA is an option. I personally think the 
<template> tag as proposed is ad hoc to somebody's notion of how 
templates should work. To avoid this I think they should be simpler. I 
am not seeing the added advantage of having the client parse the 
contents upon encountering it: there is no use for the contents before 
it is used programmatically, and as such it could be prepared on first 
use, via the DocumentFragment suggestion mentioned earlier. 
Specifically, it's never considered to be part of the document's 
semantic content. Perhaps I'm overlooking something here.


That is actually quite a useful axis of distinction. If we want normal 
methods on the document like getElementsByClassName or getElementById to 
return elements in templates they obviously need to be parsed as actual 
elements in the DOM. If we don't it seems very unnatural to have them 
parsed as elements; making DOM Core methods, CSS selectors, etc. have 
some dependence on whether there is an element called <template> in the 
tree just seems like a recipe for pain.


My feeling is that the elements in templates aren't like the other 
elements in the document and so we don't want the normal 
lookup/traversal methods on document to work on them.






Re: [webcomponents] HTML Parsing and the element

2012-04-18 Thread James Graham

On Wed, 18 Apr 2012, Dimitri Glazkov wrote:


Wouldn't it make more sense to host the template contents as normal
descendants of the template element and to make templating APIs accept
either template elements or document fragments as template input?  Or
to make the template elements have a cloneAsFragment() method if the
template fragment is designed to be cloned as the first step anyway?

When implementing this, making embedded content inert is probably the
most time-consuming part and just using a document fragment as a
wrapper isn't good enough anyway, since for example img elements load
their src even when not inserted into the DOM tree. Currently, Gecko
can make embedded content inert on a per-document basis.  This
capability is used for documents returned by XHR, createDocument and
createHTMLDocument. It looks like the template proposal will involve
computing inertness from the ancestor chain (<template> ancestor or
DocumentFragment marked as inert as an ancestor).  It's unclear to me
what the performance impact will be.


Right, ancestor-based inertness is exactly what we avoid with sticking
the parsed contents into a document fragment from an "inert" document.
Otherwise, things get hairy quick.



I am also pretty scared of tokenising stuff like it is markup but then 
sticking it into a different document. It seems like very surprising 
behaviour. Have you considered (and this may be a very bad idea) exposing 
the markup inside the template as a text node, but exposing the 
corresponding DOM as an IDL attribute on the HTMLTemplateElement (or 
whatever it's called) interface?

RE: [FileAPI] createObjectURL isReusable proposal

2011-12-14 Thread James Graham


On Wed, 14 Dec 2011, Adrian Bateman wrote:


On Wednesday, December 14, 2011 10:46 AM, Glenn Maynard wrote:

We can certainly talk through some of these issues, though the amount of
work we'd need to do doesn't go down. Our proposal is a small change to
the lifetime management of the Blob URL and was relatively simple (for
us) to implement. In our experience, createObjectURL is a good broker
in web developers minds for switching from object to URL space.


I'd expect making this fully interoperable to be a complex problem.  It makes
fetch order significant, where it currently isn't.

For example, if two images have their @src attribute set to a URL one after the
other, what guarantees which one succeeds (presumably the first) and which fails
(due to the first releasing the URL)?  The order in which synchronous sections
after "await a stable state" are run isn't specified.  Combining different APIs
which do similar things (eg. asynchronous XHR and HTMLMediaElement's resource
selection algorithm) would compound the problem.

Another possible problem, depending on where the blob release takes place: if
the UA doesn't support images, "update the image data" for HTMLImageElement
terminates at step 4; it would need to be careful to still release the blob
URL when terminating before the fetch.

This would probably have effects across a lot of specs, and couldn't be neatly
tucked away in one place (such as inside the resource fetch algorithm); and it
might force every API that performs fetches to worry about creating race
conditions with other APIs.  Assigning the blob directly would still affect
various specs, but it would be less likely to lead to blob leakage and subtle,
possibly racy interop failures.


I don't think we need interop for race conditions. Trying to use a one-time URL
twice is supposed to go wrong and I don't think it necessarily has to go wrong
in exactly the same way in all browsers. You might have the same problem based
on when you call revokeObjectURL in applications today.


Historically, failure to interop on things that were "supposed to go wrong" 
hasn't resulted in people avoiding those things but instead has resulted 
in them depending on the specific behaviour of one implementation.
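
As a sketch of the kind of dependence I mean (assuming a one-time URL as 
per the proposal under discussion):

   var url = URL.createObjectURL(blob);  // assume the URL is single-use
   img1.src = url;
   img2.src = url;
   // Whichever fetch happens to start first succeeds; authors will come
   // to rely on whatever order their browser happens to pick.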

Re: XPath and Selectors are identical, and shouldn't be co-developed

2011-11-30 Thread James Graham



On Wed, 30 Nov 2011, Yehuda Katz wrote:





On Wed, Nov 30, 2011 at 12:57 PM, Bjoern Hoehrmann  wrote:
  * Yehuda Katz wrote:
  >Most people would accomplish that using jQuery. Something like:
  >
  >var previous = $(current).closest("tr").prev()
  >
  >I'm not exactly sure what `current` is in this case. What edge-cases are
  >you worried about when you say that the JavaScript is "quite involved"?

It is unlikely that your code is equivalent to the code I provided, and
sure enough, you can point out in all discussions about convenience APIs
that people could use a library. I don't see how that is relevant here.


It's relevant because Tab's argument is that a mix of selectors and JS APIs 
will work, and I'm demonstrating that by showing that that's what *people 
actually do* today.


Of course that can be taken in one of two ways; it either shows that it's 
fine to have a limited selection DSL because people can fall back on using 
a full programming language, or shows that today's selection DSLs have 
failed because people are being forced to fall back on a full programming 
language and the whole of jQuery to satisfy their needs.




Re: XPath and find/findAll methods

2011-11-23 Thread James Graham

On Tue, 22 Nov 2011, Jonas Sicking wrote:


I'm not convinced that it's worth investing in XPath. At least not
beyond the low-hanging fruit of making most of the arguments to
.evaluate optional. But I think trying to make selectors compete in
expressiveness with XPath is a losing battle.


Right, I think people agree on the general facts that XPath is more 
expressive than selectors at the cost of being more complex for simple 
cases and less familiar to "typical" authors. I presume everyone also 
agrees that DOM3 XPath is so awful as to be virtually unusable. The only 
remaining question is whether the cost of making the XPath API more 
palatable is worth it given the strength of the use cases and the fact 
that one can always solve any use case by resorting to manually walking 
the DOM.


My feeling is that the specific approach we should consider is "adopt the 
selectNodes and selectSingleNode APIs that Opera implements". Of course 
that is rather easy for me to suggest because it doesn't require me to do 
any work :) On the other hand, other vendors get to suggest standardising 
their already implemented APIs way more often, so I don't feel that bad :) 
Since these APIs are just wrappers around existing functionality it seems 
like they should be quite trivial to implement (much easier than adding 
new features to selectors, for example).
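
Since they are just wrappers, a rough sketch in terms of DOM3 XPath shows 
how little is involved (illustrative only; the real IE/Opera methods differ 
in details such as namespace resolution and return types):

  Node.prototype.selectNodes = function(expr) {
      var doc = this.ownerDocument || this;
      var result = doc.evaluate(expr, this, null,
                                XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
      var nodes = [];
      for (var i = 0; i < result.snapshotLength; i++) {
          nodes.push(result.snapshotItem(i));
      }
      return nodes;
  };

  Node.prototype.selectSingleNode = function(expr) {
      var doc = this.ownerDocument || this;
      return doc.evaluate(expr, this, null,
                          XPathResult.FIRST_ORDERED_NODE_TYPE, null)
                .singleNodeValue;
  };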




Re: XPath and find/findAll methods

2011-11-22 Thread James Graham

On Tue 22 Nov 2011 01:05:18 PM CET, Simon Pieters wrote:
On Mon, 21 Nov 2011 20:34:14 +0100, Martin Kadlec wrote:



Hello everyone,
I've noticed that the find/findAll methods are currently being 
discussed and there is one thing that might be a good idea to consider.


Currently, it's quite uncomfortable to use XPath in javascript. The 
document.evaluate method has lots of arguments and we have to remember 
plenty of constants to make it work. IE and Opera support a selectNodes 
method on the Node prototype, which is really useful, but what's the point 
in using it when it doesn't work in FF/Chrome/Safari.


Maybe FF/Chrome/Safari should add support for selectNodes?



Right, one of the issues with XPath is that the DOM3 XPath API is 
without doubt the worst API on the web platform. With a sane API there 
might be more demand for XPath since it can be used for things that the 
CSSWG are unlikely to ever allow in selectors for performance reasons. 
As Simon points out, there is even a preexisting API that does more or 
less the right thing and is implemented in IE and Opera.
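
To illustrate the complaint, even "find all rows" needs five arguments, a 
result-type constant and an explicit iteration step in DOM3 XPath (a sketch):

  var result = document.evaluate("//tr", document, null,
                                 XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
  for (var i = 0; i < result.snapshotLength; i++) {
      var row = result.snapshotItem(i);
      // ... use row ...
  }

  // versus the IE/Opera style:
  var rows = document.selectNodes("//tr");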




Re: innerHTML in DocumentFragment

2011-11-03 Thread James Graham

On Thu, 3 Nov 2011, Tim Down wrote:


Have you looked at the createContextualFragment() method of Range?

http://html5.org/specs/dom-parsing.html#dom-range-createcontextualfragment


That doesn't meet the use case where you don't know the contextual 
element upfront. As I understand it that is important for some of the use 
cases.


I think this is possible to solve, but needs an extra mode in the parser. 
Also, createContextualFragment could be modified to take null as the 
context to work in this mode.
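
To make the gap concrete (a sketch; the fragment.innerHTML line is the 
hypothetical feature under discussion, not an existing API):

  // Works today, but a context element must be chosen up front:
  var range = document.createRange();
  range.selectNodeContents(document.body);
  var frag = range.createContextualFragment("<p>hi</p>");

  // With a <body> context, context-sensitive markup is mangled by the
  // fragment parser: the <td> start tag is simply dropped here.
  range.createContextualFragment("<td>cell</td>");

  // Hypothetical: fragment.innerHTML = "<td>cell</td>" would need the
  // extra context-free parser mode described above.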




Re: how to organize the DOM specs [Was: CfC: publish new WD of DOM Core]

2011-08-22 Thread James Graham

On 08/22/2011 11:22 AM, Jonas Sicking wrote:

http://www.whatwg.org/specs/web-apps/current-work/complete/

I *always* used the much smaller document that used to be available here:

www.whatwg.org/specs/web-workers/current-work/


I don't really understand your point here. If you used the smaller 
document presumably you could just have easily have read the relevant 
chapter from the larger document.



When implementing a spec, the first thing I'd like to do is to read
the whole spec front to back. This in order to get a sense for the
various concepts involved which affects the implementation strategy.
It is also important as it's required to review the specification.
With a spec the size of, for example, the HTML5 spec, this is
substantially more difficult. Not only does it take significantly
longer to read the full HTML5 spec if all I want to implement is the
pushState feature. It's also impossible to hold the full spec in
memory, even at a high level.


Why would you read the whole spec to implement features contained in a 
single subsection? Alternatively, why wouldn't you read the whole HTML5 
spec to implement web workers, since there are normative dependencies? It 
seems very arbitrary to base your choice of what counts as enough 
background information on someone else's choice of multiple files vs 
multiple sections in a single file.



Small specs are absolutely more easily implemented and reviewed.


I think this is an illusion.

Self-contained features are more easily implemented and reviewed. There 
is no reason that a relatively self-contained feature can't be a section 
of a larger document.


Small specs encourage people - including the spec editors - to perceive 
that features are more self-contained than they really are, leading to 
the problem of important detail dropping into the gaps between the specs.



Additionally, having releases of a spec makes it possible to know what
browser vendors and other significant players agree on. A ever
changing slowly evolving spec doesn't say what browser vendors agree
is good as opposed to what the editor happened to have put in since
the various stake holders took a look at it.


What browser vendors agree on is entirely unimportant to authors. What 
they care about is what actually ships. Once things are shipping browser 
vendors typically don't have much leeway to change things even when they 
all agree that it would be a nice idea.


We should fix the "authors need to know what is stable" problem by 
understanding it is actually an "authors need to know what is shipping" 
problem and implementing something like caniuse.com directly in the 
spec, with links to the relevant parts of the testsuite where appropriate.




Re: CfC: WebApps testing process; deadline April 20

2011-04-21 Thread James Graham

On 04/21/2011 01:10 AM, Adrian Bateman wrote:

First, thanks to Art for pulling all this content together. We're looking
forward to a more structured process for testing as various specifications
in the WebApps increase in maturity.

I have a couple of small comments related to the issues Aryeh raised.
Apologies for the lateness of these comments; I spent time sharing this
process with a number of teams at Microsoft before responding.

1. Early approval of tests

We think that waiting for Last Call or Candidate Recommendation before
approving tests loses some of the value of tests in driving ambiguity out
of the specification. The CSS WG found many issues in CSS 2.1 as a
consequence of tests, and some of these issues were substantive enough that
the spec went back to Working Draft status. Avoiding this by reviewing test
cases earlier in the process will help to improve spec quality. I think
of this in the following way: a bug filed against the spec requesting a
change represents someone's view that the spec is wrong. On the other hand,
an approved test with consensus of reviewers in the working group helps to
identify more stable sections of the spec. It doesn't mean it can't change
but it does mean the spec has had additional review on the assertions made
in the test and that's useful.


I agree that late review is not helpful to anyone.

The way I think test review should work is similar to any other form of 
code review:


1. A user pushes a series of commits to the repository.
2. The user requests review of those commits.
3. Different reviewers comment on the patch.
4. If no problems are found, the tests are considered approved.

This encourages early review, makes it obvious what people consider 
stable enough to be reviewed, and allows for sharing of not-yet-ready 
works in progress without having to have multiple public repositories. 
It also eliminates the need for separate submitted and approved folders, 
since approved tests are ones where all the related commits have 
positive review and there are no open bugs.


As far as I can tell the main problem with adopting this workflow is 
that some tooling support is required.




Re: [FileAPI] File.slice spec bug

2011-04-14 Thread James Graham

On 04/14/2011 03:04 AM, Jonas Sicking wrote:


It would be nice to hear from someone at Opera about their willingness to
commit to this change
as well.


As a general point, we think that making breaking changes to APIs with 
multiple compatible implementations that are already shipping is a 
really bad idea. So let's try and make this a one-off.


That said, we will go along with the plan, but we can't commit to a 
timeline for releasing the changed version.
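
For reference, the change in question altered what the second argument to 
slice means (a sketch of the before/after semantics as discussed in this 
thread; file is assumed to be a File):

  var part = file.slice(10, 100);
  // Shipping semantics: 100 is a length, so this is bytes 10..109.
  // Changed semantics: 100 is an end offset, matching
  // Array.prototype.slice, so the same call is bytes 10..99.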




Re: RfC: WebApps Testing Process

2011-04-04 Thread James Graham

(setting followup to public-testinfra)

On 04/04/2011 01:45 AM, Garrett Smith wrote:


I'd rather see the `format_value` function broken up. It makes
non-standard expectations of host objects (`val`) and that there is a
global `Node` object. Which standard requires that?


Well Web DOM Core does. I assume DOM Core 2 also does but I'm not sure 
where the IDL variant it uses is specified.



Instead of making decisions based on what is on the object's prototype
chain, it is safer to make a more direct inference.

However, taking a step back, I want to know why the function is so generalized.

I see that the function `format_value` is called by `assert_equals`
and by itself, recursively. It is expected to take all of number,
string, and Node because assert_equals pushes down the requirement to
be generalized. I would rather see this functionality broken up so
that assertions about Node objects are passed Nodes, and then the
formatting can be in format_node, or stringify_node, etc. And it can
get worse when you have more object types or subtypes, such as any of
the various DOM collections.


That doesn't obviously sound like a win. Why would we want to implement 
one function per type when there could be huge numbers of types, weak 
typing is idiomatic javascript, and the language doesn't make 
implementing type-specific functions easy?
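
For concreteness, the single-function style under discussion looks roughly 
like the following (a cut-down sketch, not the actual testharness.js code):

  function format_value(val) {
      if (Array.isArray(val)) {
          return "[" + val.map(format_value).join(", ") + "]";
      }
      switch (typeof val) {
      case "string":
          return '"' + val.replace(/"/g, '\\"') + '"';
      case "number":
          // Distinguish -0 from 0; String() renders both as "0".
          return (val === 0 && 1 / val < 0) ? "-0" : String(val);
      default:
          // Guard the Node check so the function also works where no
          // global Node object exists (e.g. in workers).
          if (typeof Node !== "undefined" && val instanceof Node) {
              return "Node <" + val.nodeName + ">";
          }
          return String(val);
      }
  }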


Maybe I am not understanding your proposal correctly. Could you flesh it 
out a bit more?



I've attacked this `assert_*` multiplicity variance by using what is
called "constraints" in NUnit. Essentially, "encapsulate the parts
that vary." In javascript, a constraint can be written very easily as a
function. That will also allow for cleanup of the messiness of `-0`
and NaNs and their accompanying obsolete comments.


So as far as I can tell, the NUnit approach would give a syntax like

assert(4, equals(function() {return 2+2}))

Or, perhaps, more like

test(function() {assert(4, equals(2+2))})

So far I have not really understood why this is an improvement.
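
As far as I can reconstruct the idea, a constraint is just a function plus 
a self-description, along these lines (names are illustrative, not taken 
from any actual library):

  function equals(expected) {
      var constraint = function(actual) { return actual === expected; };
      constraint.describe = function() { return "equal to " + expected; };
      return constraint;
  }

  function assert(actual, constraint) {
      if (!constraint(actual)) {
          throw new Error("expected " + actual + " to be " +
                          constraint.describe());
      }
  }

  assert(4, equals(2 + 2));  // passes
  assert(5, equals(2 + 2));  // throws "expected 5 to be equal to 4"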


Comments on any of the above are welcome, especially regarding the
various "@@@ TBD: ..." tags that are sprinkled throughout the above
documents.

A couple of questions too ...

1. What is the level of uptake of testharness.js within the HTML WG?


What is the HTML WG using a javascript test harness for?


Running in-browser automated tests.
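
That is, tests along the following lines (a minimal sketch; the page is 
assumed to load testharness.js and testharnessreport.js):

  test(function() {
      assert_equals(document.title, "Example", "document title");
  }, "document.title reflects the <title> element");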



Re: Testing Requirements

2011-02-17 Thread James Graham

On 02/17/2011 01:03 PM, Arthur Barstow wrote:

On Feb/17/2011 5:04 AM, ext James Graham wrote:

On 02/17/2011 09:55 AM, Dominique Hazael-Massieux wrote:


(I see that Art documented most of this in
http://www.w3.org/2008/webapps/wiki/Testing_Requirements but thought
this ought to be confirmed on the list)


Is there some way to put this documentation in some common
location rather than having essentially the same facts documented once
for HTML, once for WebApps, etc.?


Where are the HTML WG's testing requirements and "etc.'s" requirements?


There is some test-related text at [1] (and links therein).

[1] http://www.w3.org/html/wg/wiki/Testing





Re: Testing Requirements

2011-02-17 Thread James Graham

On 02/17/2011 09:55 AM, Dominique Hazael-Massieux wrote:


(I see that Art documented most of this in
http://www.w3.org/2008/webapps/wiki/Testing_Requirements but thought
this ought to be confirmed on the list)


Is there some way to put this documentation in some common location 
rather than having essentially the same facts documented once for HTML, 
once for WebApps, etc.?




Re: ECMA TC 39 / W3C HTML and WebApps WG coordination

2009-09-24 Thread James Graham

Sam Ruby wrote:

A concern specific to HTML5 is that it uses WebIDL in a way that precludes 
implementation of these objects in ECMAScript (i.e., they can only be 
implemented as host objects), and an explicit goal of ECMA TC39 has been 
to reduce such cases.  Ideally ECMA TC39 and the W3C HTML WG would jointly 
develop guidance on developing web APIs, and the W3C HTML WG would apply 
that guidance in HTML5.


Meanwhile, I would encourage members of ECMA TC 39 who are aware of 
specific issues to open bug reports:


  http://www.w3.org/Bugs/Public/

And I would encourage members of the HTML WG who are interested in this 
topic to read up on the following emails (suggested by Brendan Eich):


https://mail.mozilla.org/pipermail/es5-discuss/2009-September/003312.html
  and the rest of that thread

https://mail.mozilla.org/pipermail/es5-discuss/2009-September/003343.html
  (not the transactional behavior, which is out -- just the
  interaction with Array's custom [[Put]]).

https://mail.mozilla.org/pipermail/es-discuss/2009-May/009300.html
   on an "ArrayLike interface" with references to DOM docs at the bottom

https://mail.mozilla.org/pipermail/es5-discuss/2009-June/002865.html
   about a WebIDL float terminal value issue.



Would it be possible to summarise the known issues in an email (or on a 
wiki page or something)? I read those threads and it was unclear to me 
which specific points are considered outstanding problems with the 
HTML5/WebIDL specs.