Service Worker issues

2016-07-27 Thread Sam Ruby
The following is a mix of spec and implementation issues that I 
encountered in my as-yet-unsuccessful attempt to make use of service 
workers in the ASF Board Agenda tool.


1) the "offline fallback" use case for Service Workers involves 
intercepting fetch requests, issuing the request, and then recovering 
from any errors that may occur.  Unfortunately, issuing the 
event.request will not faithfully reproduce the original navigate 
request, in particular, credentials are not included.
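
For reference, a minimal sketch of the pattern in question (the cache 
name and fallback path are illustrative, not the ones from the agenda tool):

  // offline fallback: try the network, fall back to a cached page
  self.addEventListener('fetch', function(event) {
    event.respondWith(
      fetch(event.request)             // re-issues the request, minus credentials
        .catch(function() {
          return caches.match('/offline.html');
        })
    );
  });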


2) passing credentials: 'include' on fetch requests within a service 
worker will succeed if the browser has access to the credentials (e.g. 
because it prompted the user earlier in the browser session).  If the 
credentials are not present, Firefox will prompt for them; Chrome will 
not.  This is a showstopper for offline access to an authenticated web 
page in Chrome.
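
A sketch of that approach (the fallback logic is illustrative):

  self.addEventListener('fetch', function(event) {
    event.respondWith(
      fetch(event.request.url, {credentials: 'include'})  // rebuild with credentials
        .catch(function() {
          return caches.match(event.request);
        })
    );
  });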


3) event.request.headers.get("Authorization") is available on Firefox 
but not on Chrome.
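
A minimal illustration of the difference, from within a fetch handler:

  self.addEventListener('fetch', function(event) {
    // non-null on Firefox; null on Chrome
    console.log(event.request.headers.get('Authorization'));
  });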


4) cache.keys() is supposed to return a promise that resolves to the 
cached requests.  On both Firefox and Chrome, cache.keys() yields an 
empty array instead.  cache.matchAll() can be used to get all of the 
responses, but not the keys.
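
A sketch of what I mean (the cache name is illustrative):

  caches.open('agenda').then(function(cache) {
    cache.keys().then(function(requests) {
      console.log(requests.length);    // observed: 0, even with entries in the cache
    });
    cache.matchAll().then(function(responses) {
      console.log(responses.length);   // the responses are there, but not their keys
    });
  });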


5) calling registration.unregister() and then 
navigator.serviceWorker.getRegistrations() still returns a list 
containing the now-unregistered service worker in both Firefox and Chrome.
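
In other words, roughly:

  navigator.serviceWorker.getRegistration().then(function(registration) {
    return registration.unregister();
  }).then(function() {
    return navigator.serviceWorker.getRegistrations();
  }).then(function(registrations) {
    console.log(registrations.length); // still lists the unregistered worker
  });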


6) EventSource is available to service workers in Chrome but not in 
Firefox.  Using importScripts in an attempt to load the Yaffle polyfill 
doesn't work.
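
The attempt looked roughly like this (the polyfill file name is illustrative):

  if (typeof EventSource === 'undefined') {
    importScripts('eventsource.js');   // Yaffle polyfill; doesn't take effect
  }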


Advice on how to proceed with these issues (filing bugs on the spec 
and/or one or more implementations, perhaps?) would be greatly appreciated.


Thanks!

- Sam Ruby



Re: [url] Feedback from TPAC

2014-12-04 Thread Sam Ruby

On 11/25/2014 03:52 PM, David Walp wrote:

Apologies for being a latecomer to the discussion, but here is some feedback 
on our implementation.  We're looking forward to engaging on these interactions 
more proactively in the future.

On Wednesday, October 29, 2014 6:55 PM, Sam Ruby ru...@intertwingly.net wrote:


Now to get to what I personally am most interested in: identifying
changes to the expected test results, and therefore to the URL
specification -- independent of the approach that specification takes
to describing parsing. To kick off the discussion, here are three examples:

1) http://intertwingly.net/projects/pegurl/urltest-results/7357a04b5b

A number of browsers, namely Internet Explorer, Opera(Presto), and
Safari seem to be of the opinion that exposing passwords is a bad
idea. I suggest that this is a defensible position, and that the
specification should either standardize on this approach or at a minimum permit 
this.


Yes, we, Microsoft, are of the opinion that exposing passwords is a bad idea.  
Based on received feedback, customers agree, and I suspect our customers are not 
unique in this opinion.


I've filed a bug on your behalf:

https://www.w3.org/Bugs/Public/show_bug.cgi?id=27516

There already is a discussion as a result.  I encourage you to register 
with bugzilla and add yourself to the cc-list for this bug.



2) http://intertwingly.net/projects/pegurl/urltest-results/4b60e32190

This is not a valid URL syntax, nor does any browser vendor implement
it.  I think it is fairly safe to say that, given this state, there
isn't a wide corpus of existing web content that depends on it.  I'd
suggest that the specification be modified to adopt the behavior that
Chrome, Internet Explorer, and
Opera(Presto) implement.


Agreed.  Standardizing something not used is not in anyone's interest.  What you 
have posted on Github 
(https://github.com/rubys/url/tree/peg.js/reference-implementation#readme), "I found 
I had a hard time determining what should be the parsing output for a number of 
cases," rings true here.  There is no advantage to adding complexity when it is not 
required.


I've filed a bug on your behalf:

https://www.w3.org/Bugs/Public/show_bug.cgi?id=27517

Hopefully you find the following work-in-progress easier to follow:

https://specs.webplatform.org/url/webspecs/develop/

If not, please let me know how it could be improved.


3) http://intertwingly.net/projects/pegurl/urltest-results/61a4a14209

This is an example of a problem that Anne is currently wrestling with.
Note in particular the result produced by Chrome, which identifies the
host as an IPv4 address and canonicalizes it.


This is the type of interop issue we think should be a focus of the URL 
specification and the W3C efforts.


This is the subject of an existing bug: 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=26431


The webspecs link above contains a concrete proposal for resolving this.
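
For illustration only (this is not the actual test case behind the link 
above), the kind of canonicalization involved looks like:

  new URL('http://0x7f.0.0.1/').host   // '127.0.0.1' in Chrome: the hex part is
                                       // read as an IPv4 octet and canonicalized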


Finally we are focused on identifying and fixing real-world interop bugs that we see in live sites 
in support of our goal of "The web should just work" 
(http://blogs.msdn.com/b/ie/archive/2014/05/27/launching-status-modern-ie-amp-internet-explorer-platform-priorities.aspx).
 For example, I think you had at one time listed an IE issue in the discussion section of the URL 
spec - http://intertwingly.net/projects/pegurl/url.html#discuss.  This bug was related to a missing 
/ at the front of URLs under certain conditions.  Since this issue has been removed 
from the discussion section, I am hoping you have seen that we have fixed the issue.  We are 
actively pursuing and fixing similar interop bugs.  We want the URL spec to be the source of interop 
behavior and believe that our goal is in line with your direction.


To the best of my knowledge, the fix has not been released, but a 
workaround has been published.  See:


https://connect.microsoft.com/IE/feedbackdetail/view/1002846/pathname-incorrect-for-out-of-document-elements


Cheers,
_dave_


- Sam Ruby



Re: Interoperability vs file: URLs

2014-12-04 Thread Sam Ruby

On 12/02/2014 02:22 AM, Jonas Sicking wrote:

On Mon, Dec 1, 2014 at 7:58 PM, Sam Ruby ru...@intertwingly.net wrote:

On 12/01/2014 10:22 PM, Jonas Sicking wrote:


On Mon, Dec 1, 2014 at 7:11 PM, Domenic Denicola d...@domenic.me wrote:


What we really need to do is get some popular library or website to take a
dependency on mobile Chrome or mobile Safari's file URL parsing. *Then* we'd
get interoperability, and quite quickly I'd imagine.



To my knowledge, all browsers explicitly block websites from having
any interactions with file:// URLs. I.e. they don't allow loading an
<img> from file:// or even linking to a file:// HTML page using <a
href="file://...">, even though both of those are generally allowed cross
origin.

So it's very difficult for webpages to depend on the behavior of
file:// parsing, even if they were to intentionally try.


Relevant related reading: look at the description that the current URL
Living Standard provides for the origin of file: URLs:

https://url.spec.whatwg.org/#origin

I tend to agree with Jonas.  Ideally the spec would match existing browser
behavior.  When that's not possible, getting agreements from browser vendors
on the direction would suffice.

When neither exists, a more accurate description (such as the one cited above
in the Origin part of the URL Standard) is appropriate.


To be clear, I'm proposing to remove any and all normative definition
of file:// handling from the spec. Because I don't think there is
interoperability, nor do I think that it's particularly high priority
to achieve it.


A bug has been filed on your behalf:

https://www.w3.org/Bugs/Public/show_bug.cgi?id=27518

In response, I suggest that your proposal is a bit too extreme, and 
that it be dialed back a bit.



/ Jonas


- Sam Ruby



Re: PSA: Publishing working draft of URL spec

2014-12-03 Thread Sam Ruby

On 12/02/2014 06:54 PM, cha...@yandex-team.ru wrote:


If the document doesn't meet pubrules, that will cause a delay as
Sam and I deal with it.


I'm new to being a W3C Editor, but I did manage to find: 
http://www.w3.org/2005/07/pubrules


I made a number of fixes:

https://github.com/whatwg/url/commit/0b3840580f92a7d15a76235d8ee67254ca0824da

https://github.com/w3ctag/url/commit/1ea9cc3ac4594daa670864d1568832251199dfa7

Updated draft:

https://rawgit.com/w3ctag/url/develop/url.html

Pubrules results:

http://tinyurl.com/lphqp9b

---

Open issues:

1) The title page date and the date at the end of the This Version URI 
MUST match.


Issue: Bikeshed adds a level identifier to the URI[sic].

Options: request a variance; patch bikeshed; have the webmaster fix this 
downstream.


2) The editors'/authors' names MUST be listed.

Issue: one of the editors is not a WG member, and both editors prefer 
that editors NOT be listed.  This is not unique:


http://lists.w3.org/Archives/Public/public-w3process/2014Sep/0105.html

Recommendation: request a variance.

3) The copyright MUST use the following markup

Issue: the pubrules-mandated markup doesn't match the charter-specified 
license for this specification.


Recommendation: request a variance.

4) All proposed XML namespaces created by the publication of the 
document MUST follow URIs for W3C Namespaces.


Issue: the pubrules checker seems to have found an XML namespace where 
none is present or intended.


Recommendation: request a variance.


There is an open CfC to move the document to the 2014 Process, but it
doesn't really matter whether this or the next Public Working Draft
is published under that process so it won't hold up a Public Working
Draft if we can get the pubrules etc sorted in time.


I've left the 2005 process link for now; will update once the CfC completes.


cheers

Chaals

-- Charles McCathie Nevile - web standards - CTO Office, Yandex
cha...@yandex-team.ru - - - Find more at http://yandex.com


- Sam Ruby




Re: PSA: Publishing working draft of URL spec

2014-12-03 Thread Sam Ruby

On 12/03/2014 10:57 AM, Arthur Barstow wrote:

On 12/3/14 10:42 AM, Sam Ruby wrote:

On 12/02/2014 06:54 PM, cha...@yandex-team.ru wrote:

I'm new to being a W3C Editor, but I did manage to find:
http://www.w3.org/2005/07/pubrules


Besides the above, see the following, which includes links to the various
validators:

  
https://www.w3.org/wiki/Webapps/SpecEditing#TR_Publication_Process_and_WebApps.


Thanks!


Updated draft:

https://rawgit.com/w3ctag/url/develop/url.html


Please run validator.w3.org/checklink and it appears you want to delete
the ...-1-... in the This version link.


This is a consequence of the first issue I mentioned, namely that 
Bikeshed adds a level identifier to the URL:


http://lists.w3.org/Archives/Public/public-webapps/2014OctDec/0547.html

---

Running the WebIDL checker results in three errors being reported.

http://tinyurl.com/kcwx2hj

Can somebody confirm that these are real errors?


-Thanks, AB


- Sam Ruby



Re: Help with WebIDL v1?

2014-12-03 Thread Sam Ruby

On 12/03/2014 11:10 AM, Boris Zbarsky wrote:

On 12/3/14, 6:02 AM, Yves Lafon wrote:

Pretty much like refactoring XHR using Fetch or not. Most
implementations will most probably move to the latest version, but the
external interface will be the same.


External interface being the IDL syntax in this case, not the
resulting web-exposed interface, right?


In the case of sequence, the ES
binding says in both versions that "IDL sequence<T> values are represented
by ECMAScript Array values."


That's only really true for sequence return values in v2.  sequence
arguments are represented by ES iterables.


The option 2 you outline seems best here: the syntax is considered
stable (well, it can be expanded and things can be declared obsolete, but
there won't be breaking changes), but implementations (as in the ES
binding algorithms) may change to match the evolution of the underlying
language.


OK.  I can live with this as long as the people referencing v1 can live
with it.


Or another example: in v1 having an indexed getter implies nothing
about being iterable, but in v2 it implies ES6 iterability.


This is an example of v1 plus one feature.


Not plus an IDL feature.  As in, this is not a case of "v2 adds some IDL
syntax compared to v1, but if you never use it in your spec you never
have to worry about it."  This is a case of "the same syntax has
different resulting behavior in implementations depending on which
version of IDL they implement," leading to possible lack of interop for
different implementations of your spec depending on which IDL version
they choose to use.
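
As a hedged illustration of that hazard, consider a purely hypothetical
collection whose interface declares only an indexed getter and a length:

  var widgets = getHypotheticalWidgets();  // hypothetical; not a real API

  widgets[0];               // the indexed getter works under either IDL version

  for (var w of widgets) {  // works under v2 semantics, where an indexed getter
    console.log(w);         // implies ES6 iterability; throws under a strict v1
  }                         // implementation, which defines no @@iterator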

This is why in option 2 it's important to spell out what the actual
requirements on implementations are that arise from the IDL reference.


Another option would be to define only the syntax and leave the bindings
to v2 only, but it wouldn't help much for testing.


Indeed.  Or, again, actual interop.


Another way to phrase this question: what would the CR exit criteria be 
for such a WebIDL v1?  The reason I bring this up is that if they are 
too low to be meaningful, that calls into question whether or not this 
exercise is worthwhile.  Similarly if they are too high to be likely to 
be met.


If, on the other hand, there is a sweet spot some place in the middle, 
then perhaps this effort should proceed.


By analogy, the parsing of http: absolute URLs is solid and unlikely to 
change, but determining the origin of file: URLs isn't.  Clearly 
identifying what parts are widely deployed and unlikely to change vs 
those that aren't may be a path forward.



-Boris


- Sam Ruby



Re: URL Spec WorkMode

2014-12-02 Thread Sam Ruby



On 12/02/2014 06:55 AM, cha...@yandex-team.ru wrote:

TL;DR: Administrative details from the W3C Webapps cochair
responsible for URL in that group. Relevant in practice is a request
to minimise channels of communication to simplify spec archaeology,
and especially to prefer public-webapps over www-archive, but I don't
see there is any reason this WorkMode cannot be used.


TL;DR: the Invited Expert Agreement and public statements regarding the 
contents of private Member Agreements are obstacles, as is the lack of 
substantive technical feedback on the public-webapps list.



02.12.2014, 04:19, Sam Ruby ru...@intertwingly.net:

On 11/18/2014 03:18 PM, Sam Ruby wrote:

Meanwhile, I'm working to integrate the following first into the
WHATWG version of the spec, and then through the WebApps
process:

http://intertwingly.net/projects/pegurl/url.html


Integration is proceeding, current results can be seen here:

https://specs.webplatform.org/url/webspecs/develop/

It is no longer clear to me what "through the WebApps process"
means. In an attempt to help define such, I'm making a proposal:

https://github.com/webspecs/url/blob/develop/docs/workmode.md#preface

At this point, I'm looking for general feedback.  I'm particularly
interested in things I may have missed.


A bunch of comments about how to work with a W3C group:

Participation and Communication… In W3C there is a general desire to
track contributions, and ensure that contributors have made patent
commitments. When discussion is managed through the W3C working
group, the chairs and staff contact take responsibility for this, in
conjunction with editors. If the editor wants to use other sources,
then we ask the editor to take responsibility for tracking those
sources. The normal approach is to request that contributors join the
Working Group, either as invited experts or because they represent a
member organisation. In many cases, contributors are already
represented in webapps - for instance while Anne van Kesteren isn't
personally a member, his employer is, and there is therefore a
commitment from them.


Examples of obstacles:

1) The no «Branching» language in the Invited Expert agreement:

http://www.w3.org/Consortium/Legal/2007/06-invited-expert

2) Public assertions that the Member agreements limit the ways that specs 
can be used to those permitted by the W3C Document license.  Example:


http://lists.w3.org/Archives/Public/public-w3process/2014Nov/0166.html

I plan to work closely with W3C Legal to address both of these issues.


While webapps generally prefers conversations to be on the webapps
list (because it makes it easier to do the archaeology in a decade or
so if someone needs to), there is no formal ban on using other
sources. However, I would ask that you request comments on publicly
archived lists, and specifically that you strongly prefer
public-webapps@w3.org (which is a list designated for technical
discussion whose subscribers include W3C members who expect to
discuss work items in the scope of the webapps group, such as the URL
spec) to www-archive (which is just a place to give a public anchor
to random email - the subscription list is completely random and
likely not to include many interested W3C members).


Recent posts by David Walp, Jonas Sicking and Domenic Denicola give me 
some hope that there can be meaningful technical discussion on this 
list.  That being said, this is an ongoing concern that needs to be 
addressed.



The TR Process… The WHATWG document is not a Public Working Draft
in the sense of the W3C Process (which has implications for e.g.
patent policy). Regularly publishing a Public Working Draft to
w3.org/TR is part of what makes the patent policy work, since
commitments are bound to various stages including the latest Public
Working Draft (i.e. TR version, not editors' draft) before someone
left the group [wds]. Those snapshots are required to be hosted by
W3C and to meet the team's requirements, as determined by the Team
from time to time. If there is an issue there, let's deal with it
when we see it.


What is the hold-up for publishing a Public Working Draft?  Can I ask 
that you respond to the following email?


http://lists.w3.org/Archives/Public/public-webapps/2014OctDec/0315.html

Let me know what I need to do.


Webapps still generally works under the 2005 version of the Process -
but we could change this document to the 2014 process. The only
really noticeable difference will be that there is formally no Last
Call, and the final Patent Exclusion opportunity is instead for the
draft published as Candidate Recommendation. (In other words, you
need to be pretty bureaucratically-minded to notice a difference).


Let's update to 2014 now.  It's only a matter of time before not updating 
won't be an option any more.



Documents published by W3C are published under whatever license W3C
decides. The Webapps charter explicitly calls out the URL spec for
publishing under the CC-BY license [chart], so

Re: URL Spec WorkMode

2014-12-02 Thread Sam Ruby

On 12/02/2014 09:23 AM, cha...@yandex-team.ru wrote:

TL;DR: The administrative hold-ups are now all my fault, so I'm sorry if they 
persist, and I will start working to remove them...


Thanks in advance for helping clear administrative obstacles!


(other stuff later)

02.12.2014, 15:57, Sam Ruby ru...@intertwingly.net:


What is the hold-up for publishing a Public Working Draft?


It has been that the chairs have been pretty busy, and we dropped the ball 
between us. More recently, we sorted that out so the hold up is now me.


For discussion purposes:

https://rawgit.com/w3ctag/url/develop/url.html


  Can I ask that you respond to the following email?


Yes. That's a very fair request. I may be able to do so tonight…


http://lists.w3.org/Archives/Public/public-webapps/2014OctDec/0315.html

Let me know what I need to do.

  Webapps still generally works under the 2005 version of the Process -
  but we could change this document to the 2014 process. The only
  really noticeable difference will be that there is formally no Last
  Call, and the final Patent Exclusion opportunity is instead for the
  draft published as Candidate Recommendation. (In other words, you
  need to be pretty bureaucratically-minded to notice a difference).


Let's update to 2014 now.  It's only a matter of time before not updating
won't be an option any more.


Works for me. I'll start a Call for Consensus - but I imagine it will be a 
formality.

Cheers

Chaals

--
Charles McCathie Nevile - web standards - CTO Office, Yandex
cha...@yandex-team.ru - - - Find more at http://yandex.com


- Sam Ruby



Re: URL Spec WorkMode

2014-12-02 Thread Sam Ruby

[offlist]

I notice that you are on GitHub.  One thing I encourage you to do is to 
directly edit:


https://github.com/webspecs/url/blob/develop/docs/workmode.md

Expand the proposal, fix mistakes, correct typos -- everything is fair game.

This will result in pull requests, but pretty much anything that doesn't 
unnecessarily cause somebody to come unglued I'll take.


The same thing is true for the working draft:

https://github.com/w3ctag/url/tree/develop

Change the copyright, status, metadata at the top of url.bs.  Heck, if 
you feel so inclined, change the spec itself.


I'm making the same suggestion to Wendy.  I'd love for the end result to 
be a truly joint proposal.


- Sam Ruby

On 12/02/2014 10:28 AM, Sam Ruby wrote:

On 12/02/2014 09:23 AM, cha...@yandex-team.ru wrote:

TL;DR: The administrative hold-ups are now all my fault, so I'm sorry
if they persist, and I will start working to remove them...


Thanks in advance for helping clear administrative obstacles!


(other stuff later)

02.12.2014, 15:57, Sam Ruby ru...@intertwingly.net:


What is the hold-up for publishing a Public Working Draft?


It has been that the chairs have been pretty busy, and we dropped the
ball between us. More recently, we sorted that out so the hold up is
now me.


For discussion purposes:

https://rawgit.com/w3ctag/url/develop/url.html


  Can I ask that you respond to the following email?


Yes. That's a very fair request. I may be able to do so tonight…


http://lists.w3.org/Archives/Public/public-webapps/2014OctDec/0315.html

Let me know what I need to do.

  Webapps still generally works under the 2005 version of the Process -
  but we could change this document to the 2014 process. The only
  really noticeable difference will be that there is formally no Last
  Call, and the final Patent Exclusion opportunity is instead for the
  draft published as Candidate Recommendation. (In other words, you
  need to be pretty bureaucratically-minded to notice a difference).


Let's update to 2014 now.  It's only a matter of time before not updating
won't be an option any more.


Works for me. I'll start a Call for Consensus - but I imagine it will
be a formality.

Cheers

Chaals

--
Charles McCathie Nevile - web standards - CTO Office, Yandex
cha...@yandex-team.ru - - - Find more at http://yandex.com


- Sam Ruby





Re: URL Spec WorkMode

2014-12-02 Thread Sam Ruby

On 12/02/2014 04:37 PM, Sam Ruby wrote:

[offlist]


Oopsie.  Note to self: double check cc list before sending emails.

In any case, the suggestion for pull requests applies to everyone.

- Sam Ruby


I notice that you are on GitHub.  One thing I encourage you to do is to
directly edit:

https://github.com/webspecs/url/blob/develop/docs/workmode.md

Expand the proposal, fix mistakes, correct typos -- everything is fair
game.

This will result in pull requests, but pretty much anything that doesn't
unnecessarily cause somebody to come unglued I'll take.

The same thing is true for the working draft:

https://github.com/w3ctag/url/tree/develop

Change the copyright, status, metadata at the top of url.bs.  Heck, if
you feel so inclined, change the spec itself.

I'm making the same suggestion to Wendy.  I'd love for the end result to
be a truly joint proposal.

- Sam Ruby

On 12/02/2014 10:28 AM, Sam Ruby wrote:

On 12/02/2014 09:23 AM, cha...@yandex-team.ru wrote:

TL;DR: The administrative hold-ups are now all my fault, so I'm sorry
if they persist, and I will start working to remove them...


Thanks in advance for helping clear administrative obstacles!


(other stuff later)

02.12.2014, 15:57, Sam Ruby ru...@intertwingly.net:


What is the hold-up for publishing a Public Working Draft?


It has been that the chairs have been pretty busy, and we dropped the
ball between us. More recently, we sorted that out so the hold up is
now me.


For discussion purposes:

https://rawgit.com/w3ctag/url/develop/url.html


  Can I ask that you respond to the following email?


Yes. That's a very fair request. I may be able to do so tonight…


http://lists.w3.org/Archives/Public/public-webapps/2014OctDec/0315.html

Let me know what I need to do.

  Webapps still generally works under the 2005 version of the
Process -
  but we could change this document to the 2014 process. The only
  really noticeable difference will be that there is formally no Last
  Call, and the final Patent Exclusion opportunity is instead for the
  draft published as Candidate Recommendation. (In other words, you
  need to be pretty bureaucratically-minded to notice a difference).


Let's update to 2014 now.  It's only a matter of time before not updating
won't be an option any more.


Works for me. I'll start a Call for Consensus - but I imagine it will
be a formality.

Cheers

Chaals

--
Charles McCathie Nevile - web standards - CTO Office, Yandex
cha...@yandex-team.ru - - - Find more at http://yandex.com


- Sam Ruby







Re: Publishing working draft of URL spec

2014-12-02 Thread Sam Ruby

On 12/2/14 7:01 PM, Domenic Denicola wrote:

From: cha...@yandex-team.ru [mailto:cha...@yandex-team.ru]


There is no need for a CfC, per our Working Mode documents, so this is an 
announcement that we intend to publish a new Public Working Draft of the URL 
spec, whose technical content will be based on what is found at 
https://specs.webplatform.org/url/webspecs/develop/ and 
https://url.spec.whatwg.org/


Which of these two? They are quite different.


https://url.spec.whatwg.org/

The only content differences are a matter of propagation delay.  The 
content at https://specs.webplatform.org/url/webspecs/develop/ isn't 
ready yet.


Once it is ready, I plan to sync all documents.

- Sam Ruby



URL Spec WorkMode (was: PSA: Sam Ruby is co-Editor of URL spec)

2014-12-01 Thread Sam Ruby

On 11/18/2014 03:18 PM, Sam Ruby wrote:


Meanwhile, I'm working to integrate the following first into the WHATWG
version of the spec, and then through the WebApps process:

http://intertwingly.net/projects/pegurl/url.html


Integration is proceeding, current results can be seen here:

https://specs.webplatform.org/url/webspecs/develop/

It is no longer clear to me what "through the WebApps process" means. 
In an attempt to help define such, I'm making a proposal:


https://github.com/webspecs/url/blob/develop/docs/workmode.md#preface

At this point, I'm looking for general feedback.  I'm particularly 
interested in things I may have missed.  Pull requests welcome!


Once discussion dies down, I'll try to get agreement between the URL 
editors, the WebApps co-chairs and W3C Legal.  If/when that is complete, 
this will go to W3C Management and whatever the WHATWG equivalent would be.


- Sam Ruby



Re: [url] Feedback from TPAC

2014-11-25 Thread Sam Ruby

On 11/25/2014 03:52 PM, David Walp wrote:

Apologies for being a latecomer to the discussion, but here is some
feedback on our implementation.  We're looking forward to engaging on
these interactions more proactively in the future.


Thanks!  Looking forward to it!

Can I ask that you either open an issue or a bug (it matters not which 
to me) on each of these items?


https://github.com/webspecs/url/issues
https://www.w3.org/Bugs/Public/enter_bug.cgi?product=WHATWG&component=URL

Feel free to link back to your original post on this topic in the 
issue/bug reports:


http://lists.w3.org/Archives/Public/public-webapps/2014OctDec/0505.html

I also actively encourage pull requests, so if you care to propose a 
change, please do so.


Finally, I've expanded that list since October.  Here are a few more 
topics that you might want to weigh in on:


http://intertwingly.net/projects/pegurl/url.html#discuss

And by all means, don't stop there!

- Sam Ruby


On Wednesday, October 29, 2014 6:55 PM, Sam Ruby
ru...@intertwingly.net wrote:


Now to get to what I personally am most interested in: identifying
changes to the expected test results, and therefore to the URL
specification -- independent of the approach that specification
takes to describing parsing. To kick off the discussion, here are
three examples:

1)
http://intertwingly.net/projects/pegurl/urltest-results/7357a04b5b

A number of browsers, namely Internet Explorer, Opera(Presto), and
Safari seem to be of the opinion that exposing passwords is a bad
idea. I suggest that this is a defensible position, and that the
specification should either standardize on this approach or at a
minimum permit this.


Yes, we, Microsoft, are of the opinion that exposing passwords is a
bad idea.  Based on received feedback, customers agree, and I suspect
our customers are not unique in this opinion.


2)
http://intertwingly.net/projects/pegurl/urltest-results/4b60e32190

This is not a valid URL syntax, nor does any browser vendor
implement it.  I think it is fairly safe to say that, given this
state, there isn't a wide corpus of existing web content that
depends on it.  I'd suggest that the specification be modified to
adopt the behavior that Chrome, Internet Explorer, and
Opera(Presto) implement.


Agreed.  Standardizing something not used is not in anyone's
interest.  What you have posted on Github
(https://github.com/rubys/url/tree/peg.js/reference-implementation#readme),
"I found I had a hard time determining what should be the parsing
output for a number of cases," rings true here.  There is no advantage
to adding complexity when it is not required.


3)
http://intertwingly.net/projects/pegurl/urltest-results/61a4a14209

This is an example of a problem that Anne is currently wrestling
with. Note in particular the result produced by Chrome, which
identifies the host as an IPv4 address and canonicalizes it.


This is the type of interop issue we think should be a focus of the
URL specification and the W3C efforts.

Finally we are focused on identifying and fixing real-world interop
bugs that we see in live sites in support of our goal of "The web should
just work"
(http://blogs.msdn.com/b/ie/archive/2014/05/27/launching-status-modern-ie-amp-internet-explorer-platform-priorities.aspx).
For example, I think you had at one time listed an IE issue in the
discussion section of the URL spec -
http://intertwingly.net/projects/pegurl/url.html#discuss.  This bug
was related to a missing / at the front of URLs under certain
conditions.  Since this issue has been removed from the discussion
section, I am hoping you have seen that we have fixed the issue.  We
are actively pursuing and fixing similar interop bugs.  We want the
URL spec to be the source of interop behavior and believe that our goal
is in line with your direction.

Cheers, _dave_





Re: [url] follow-ups from the TPAC F2F Meeting

2014-11-18 Thread Sam Ruby

On 11/18/2014 09:51 AM, Arthur Barstow wrote:

On 10/29/14 9:54 PM, Sam Ruby wrote:

I am willing to help with this effort.


Thanks for this information [1] and sorry for the delayed reply.

Given URL is a joint deliverable between WebApps and TAG, perhaps it
would be helpful if you were a co-Editor. Are you interested in that role?


Yes.

- Sam Ruby



Re: PSA: Sam Ruby is co-Editor of URL spec

2014-11-18 Thread Sam Ruby

On 11/18/2014 03:08 PM, Arthur Barstow wrote:

On 11/18/14 3:02 PM, Sam Ruby wrote:

On 11/18/2014 09:51 AM, Arthur Barstow wrote:

Given URL is a joint deliverable between WebApps and TAG, perhaps it
would be helpful if you were a co-Editor. Are you interested in that
role?


Yes.


OK, PubStatus updated accordingly.


Thanks!

Would it be possible to fork https://github.com/whatwg/url into 
https://github.com/w3c/, and to give me the necessary access to update this?


I've recently converted the spec to bikeshed, and bikeshed has the 
ability to produce W3C style specifications.  I also plan to add a 
status section as described here:


http://lists.w3.org/Archives/Public/public-webapps/2014OctDec/0315.html

Once done, I'll post a message to this list (public-webapps) for a 
review, followed by a PSA when it is ready to be pushed out as an 
editors' draft.


I plan to work with all the people who have formally objected to see if 
their concerns can be resolved.


Meanwhile, I'm working to integrate the following first into the WHATWG 
version of the spec, and then through the WebApps process:


http://intertwingly.net/projects/pegurl/url.html

Longer term (more specifically, in 1Q15), I plan to schedule a meeting 
with the Director to resolve whether or not there is a need for a 
WebApps version:


http://lists.w3.org/Archives/Public/public-html-admin/2014Nov/0036.html

- Sam Ruby



[url] Feedback from TPAC

2014-10-31 Thread Sam Ruby

bcc: WebApps, IETF, TAG in the hopes that replies go to a single place.

- - -

I took the opportunity this week to meet with a number of parties 
interested in the topic of URLs including not only a number of Working 
Groups, AC and AB members, but also members of the TAG and members of 
the IETF.


Some of the feedback related to the proposal I am working on[1].  Some 
of the feedback related to mechanics (example: employing Travis to do 
build checks, something that makes more sense on the master copy of a 
given specification than on a hopefully temporary branch).  These are not 
the topics of this email.


The remaining items are more general, and are the subject of this note. 
 As is often the case, they are intertwined.  I'll simply jump into the 
middle and work outwards from there.


---

The nature of the world is that there will continue to be people who 
define more schemes.  A current example is 
http://openjdk.java.net/jeps/220 (search for "New URI scheme for naming 
stored modules, classes, and resources").  And people who are doing so 
will have a tendency to look to the IETF.


Meanwhile, the IETF is actively working on an update:

https://tools.ietf.org/html/draft-ietf-appsawg-uri-scheme-reg-04

They are meeting F2F in a little over a week[2].  URIs in general, and 
this proposal in particular, will be discussed, and for that reason now 
would be a good time to provide feedback.  I've only quickly scanned it, 
but it appears sane to me in that it basically says that new schemes 
will not be viewed as relative schemes[3].


The obvious disconnect is that this is a registry for URI schemes, not 
URLs.  It looks to me like making a few, small, surgical updates to the 
URL Standard would stitch all this together.


1) Change the URL Goals to only obsolete RFC 3987, not RFC 3986 too.

2) Reference draft-ietf-appsawg-uri-scheme-reg in 
https://url.spec.whatwg.org/#url-writing as the way to register schemes, 
stating that the set of valid URI schemes is the set of valid URL schemes.


3) Explicitly state that canonical URLs (i.e., the output of the URL 
parse step) not only round trip but also are valid URIs.  If there are 
any RFC 3986 errata and/or willful violations necessary to make that a 
true statement, so be it.


That's it.  The rest of the URL specification can stand as is.

What this means operationally is that there are two terms, URIs and 
URLs.  URIs would be a legacy, academic topic that may be of 
relevance to some (primarily back-end server) applications.  URLs are 
what most people, and most applications, will be concerned with.  This 
includes all the specifications which today reference IRIs (as an 
example, RFC 4287, namely, Atom).


My sense was that all of the people I talked to were generally OK with 
this, and that we would be likely to see statements from both the IETF 
and the W3C TAG along these lines mid November-ish, most likely just 
after IETF meeting 91.


More specifically, if something along the lines I describe above were 
done, the IETF would be open to the idea of errata to RFC 3987 and 
updating specs to reference URLs.


- Sam Ruby

[1] http://intertwingly.net/projects/pegurl/url.html
[2] https://www.ietf.org/meeting/91/index.html
[3] https://url.spec.whatwg.org/#relative-scheme



[url] follow-ups from the TPAC F2F Meeting

2014-10-29 Thread Sam Ruby

Minuted here:

http://www.w3.org/2014/10/28-webapps-minutes.html#item07

Note that this is a lengthy and comprehensive email covering a number of 
topics.  I encourage replies to have new subject lines and to limit 
themselves to only one part and to aggressively excerpt out the parts of 
this email that are not relevant to the reply.


---

Short term, there should be a heart-beat of the W3C URL document 
published ASAP.  The substantive content should be identical to the 
current WHATWG URL Standard.  The spec should say this, likely doing so 
with a huge red tab at the bottom like the one that can be found in the 
following document:


http://www.w3.org/TR/2014/WD-encoding-20140603/

The Status section should also reference the current Formal Objections 
so that any readers of this document may be aware that the final 
disposition of this draft may be in the form of a tombstone note.  The 
current Formal Objections I am aware of are listed here:


https://www.w3.org/wiki/HTML/wg/UrlStatus#Formal_Objections

Finally, I would encourage the status section to mention bug 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=25946 so that readers may 
be aware that the URL parsing section may be rewritten.  This indirectly 
references the work I am about to describe, and it does so in a 
non-exclusive manner meaning that others are welcome to propose 
alternate resolutions.


I am willing to help with this effort.

---

Separately, at this time I would like to solicit feedback on some work I 
have been doing which includes a JavaScript reference implementation, a 
concrete albeit incomplete proposal for resolution to bug 25946, and 
some comparative test results with a number of browser and non-browser 
implementations.  For the impatient, here are some links:


http://intertwingly.net/projects/pegurl/liveview.html
http://intertwingly.net/projects/pegurl/url.html
http://intertwingly.net/projects/pegurl/urltest-results/

For those that want to roll up their proverbial sleeves and dive in, 
check out the code here:


https://github.com/rubys/url

You will find a list of prerequisites that you need to install first at 
the top of the Makefile.  Possible ways to contribute (in order of 
preference): pull requests, github issues, and emails to this 
(public-webapps@w3.org) mailing list.  I've already gotten and closed 
one, you can be next :-).


https://github.com/rubys/url/pulls?q=is%3Apr

My plans include addressing the Todos listed in the document, and beginning 
work on the merge.  That work is complicated by a need to migrate the 
URL Standard from anolis to bikeshed.  You can see progress on that 
effort in a separate branch, as well as the discussion that has happened 
to date:


https://github.com/rubys/url/tree/anolis2bikeshed
https://github.com/rubys/url/commit/e617fd66135bd75b1052700081de5319914168a5#commitcomment-8259740

To be clear, my proposed resolution for bug 25946 requires this 
conversion, but this conversion doesn't require my proposed resolution 
to bug 25946.  I mention this as Anne seems to want this document to be 
converted, and that effort can be pulled separately.


---

Now to get to what I personally am most interested in: identifying 
changes to the expected test results, and therefore to the URL 
specification -- independent of the approach that specification takes to 
describing parsing.  To kick off the discussion, here are three examples:


1) http://intertwingly.net/projects/pegurl/urltest-results/7357a04b5b

A number of browsers, namely Internet Explorer, Opera(Presto), and 
Safari seem to be of the opinion that exposing passwords is a bad idea. 
 I suggest that this is a defensible position, and that the 
specification should either standardize on this approach or at a minimum 
permit this.


2) http://intertwingly.net/projects/pegurl/urltest-results/4b60e32190

This is not a valid URL syntax, nor does any browser vendor implement 
it.  I think it is fairly safe to say that, given this state, there 
isn't a wide corpus of existing web content that depends on it.  I'd 
suggest that the specification be modified to adopt the behavior that 
Chrome, Internet Explorer, and Opera(Presto) implement.


3) http://intertwingly.net/projects/pegurl/urltest-results/61a4a14209

This is an example of a problem that Anne is currently wrestling with. 
Note in particular the result produced by Chrome, which identifies the 
host as an IPv4 address and canonicalizes it.


These are a few that caught my eye.  Feel free to comment on these, or 
any others, or even to propose new tests.


- Sam Ruby



Re: ECMA TC 39 / W3C HTML and WebApps WG coordination

2009-09-25 Thread Sam Ruby
On Fri, Sep 25, 2009 at 5:57 AM, Anne van Kesteren ann...@opera.com wrote:
 On Fri, 25 Sep 2009 11:38:08 +0200, Sam Ruby ru...@intertwingly.net wrote:

 Meanwhile, what we need is concrete bug reports of specific instances
 where the existing WebIDL description of key interfaces is done in a way
 that precludes a pure ECMAScript implementation of the function.

 Is there even agreement that is a goal?

This was expressed by ECMA TC39 as a goal.  There is no agreement as
of yet to this goal by the HTML WG.

I'm simply suggesting that the way forward at this time is via
specifics, ideally in the form of bug reports.

 I personally think the catch-all pattern which Brendan mentioned is quite
 convenient and I do not think it would make sense to suddenly stop using it.
 Also, the idea of removing the feature from Web IDL so that future
 specifications cannot use it is something I disagree with since having it in
 Web IDL simplifies writing specifications for the (legacy) platform and
 removes room for error.

 Having Web IDL is a huge help since it clarifies how a bunch of things map
 to ECMAScript. E.g. how the XMLHttpRequest constructor object is exposed,
 how you can prototype XMLHttpRequest, that objects implementing
 XMLHttpRequest also have all the members from EventTarget, etc. I'm fine
 with fiddling with the details, but rewriting everything from scratch seems
 like a non-starter. Especially when there is not even a proposal on the
 table.

I agree that either getting a proposal on the table or filing bug reports is
the right next step.  I further agree that removal of function and/or
a wholesale switch away from Web IDL is likely to be a non-starter.

 Anne van Kesteren
 http://annevankesteren.nl/

- Sam Ruby



Re: ECMA TC 39 / W3C HTML and WebApps WG coordination

2009-09-24 Thread Sam Ruby
On Sep 24, 2009, at 11:53 AM, Maciej Stachowiak wrote:

 Any TC39 members whose employers can't join could perhaps become Invited
 Experts to the W3C Web Applications Working Group, if that facilitates
 review.

Unfortunately, no.  See #2 and #3 below:

  http://www.w3.org/2004/08/invexp.html

On Thu, Sep 24, 2009 at 5:02 PM, Brendan Eich bren...@mozilla.com wrote:

 Are invited experts time-bound in some way? We learned in Ecma that experts
 were to be invited to one meeting only.

In general, no.  There is a time limit mentioned in #4 above, but that
is just for exceptional circumstances, ones that are not likely to
apply in this situation.

- Sam Ruby



Re: ECMA TC 39 / W3C HTML and WebApps WG coordination

2009-09-24 Thread Sam Ruby

Maciej Stachowiak wrote:


On Sep 24, 2009, at 2:16 PM, Sam Ruby wrote:


On Sep 24, 2009, at 11:53 AM, Maciej Stachowiak wrote:


Any TC39 members whose employers can't join could perhaps become Invited
Experts to the W3C Web Applications Working Group, if that facilitates
review.


Unfortunately, no.  See #2 and #3 below:

 http://www.w3.org/2004/08/invexp.html


It depends on the nature of the employer, and the reason they are unable 
to join. Historically there have been Invited Experts in W3C Working 
Groups who are employed by such organizations as universities or small 
start-ups. We even have some in the HTML Working Group. So it would 
probably be more accurate to say "it depends" and that it may be subject 
to the judgment of the W3C Team.


I've discussed the specific case with the W3C, and in the judgment of 
the W3C Team, the answer in this specific case is no.


You, of course, are welcome to try again in the hopes of getting a 
different answer.



Regards,
Maciej


- Sam Ruby