Re: [Server-Sent Events] Network connection clarification

2012-07-19 Thread Neil Jenkins
On Tue, 17 Jul 2012, at 09:06 AM, Arthur Barstow wrote:
 Neil, Hixie, Chaals, All, - I don't see a clear conclusion/consensus and 
 no bug was filed. As such, please indicate - via a reply to the 
 _original_ thread - your take on this comment and please file a bug if 
 applicable.

I think all replies to this comment concurred that UAs are better placed
to handle reconnection after network loss, as they have access to
OS-level network connectivity information and so can trigger a reconnect
as soon as the device regains network access. I also believe that most web
developers will find the currently specified behaviour unintuitive
(indeed, looking at Odin's analysis of existing UA behaviour, it's clear
that UA implementors also either misunderstood or deliberately ignored
this part of the spec). I can't see any use case where you *wouldn't*
want to reconnect again after network loss, so by prohibiting this in
the spec, you're forcing developers to continually re-implement the same
behaviour, but with less information available to them than the UA has.
And of course, most developers won't even realise they need to, or won't
bother, leading to apps breaking whenever there's a network hiccup.
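
To illustrate the duplication: a minimal sketch (TypeScript; the /events
endpoint and the reliance on the page-level 'online' event are assumptions,
not from any spec text) of the reconnection logic each app currently has to
hand-roll, with less information than the UA has:

  let source: EventSource | null = null;

  function connect(): void {
    source = new EventSource("/events");           // assumed endpoint
    source.onmessage = (e) => console.log(e.data);
    source.onerror = () => {
      // Under the currently specified behaviour a network failure can leave
      // the connection permanently CLOSED, so the app must notice it itself.
      if (source && source.readyState === EventSource.CLOSED) {
        source = null;
      }
    };
  }

  // Coarse workaround: retry when the browser believes it is back online,
  // a much weaker signal than the OS-level information the UA already has.
  window.addEventListener("online", () => {
    if (source === null) {
      connect();
    }
  });

  connect();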

This is a serious flaw in the spec and should be fixed for version one;
it is however the same issue as Odin raised, so we only need a single
bug to cover the two comments. Odin has already created
https://www.w3.org/Bugs/Public/show_bug.cgi?id=18320; I will simply
add my comments there.

Neil.



[Bug 18328] New: [IndexedDB] Editorial: Clarify that record value === referenced primary key for indices

2012-07-19 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=18328

   Summary: [IndexedDB] Editorial: Clarify that record value ===
referenced primary key for indices
   Product: WebAppsWG
   Version: unspecified
  Platform: All
OS/Version: All
Status: NEW
  Severity: minor
  Priority: P2
 Component: Indexed Database API
AssignedTo: dave.n...@w3.org
ReportedBy: odi...@opera.com
 QAContact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org


In 3.1.6 Index you find this:

# The records in an index are always sorted according to the records key.
# However unlike object stores, a given index can contain multiple records
# with the same key. Such records are additionally sorted according to the
# records value.


And in a note in the steps for iterating a cursor you find:

# records is always sorted in ascending key order. In the case of 
# source being an index, records is secondarily sorted in ascending
# value order.

That is all fine and dandy, because for an index the key is e.g. "toyota" and
the value is in fact a *pointer to the referenced record*, so the value will be
the *key* of the object store record.


Technically there is no problem; however, those sentences above might easily
confuse humans (I always have to read them twice). So I hope we can tweak the
wording. In 3.1.6 Index:

# The records in an index are always sorted according to the records key.
# However unlike object stores, a given index can contain multiple records
# with the same key. Such records are additionally sorted according to the
# records value (meaning the key of a record in a referenced object store)

And in the steps for iterating a cursor:

# records is always sorted in ascending key order. In the case of 
# source being an index, records is secondarily sorted in ascending
# value order (value in an index is a key from the referenced object
# store).
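
To make the relationship concrete, a small sketch (the store, index and data
are made up for illustration) showing that an index record's value is the
primary key of the referenced object store record:

  // Assumed schema, created in an onupgradeneeded handler:
  //   const store = db.createObjectStore("cars", { keyPath: "id" });
  //   store.createIndex("by_make", "make");
  //
  // Object store "cars":            Index "by_make":
  //   { id: 1, make: "toyota" }       key: "honda",  value: 3
  //   { id: 2, make: "toyota" }       key: "toyota", value: 1
  //   { id: 3, make: "honda"  }       key: "toyota", value: 2
  //
  // The two "toyota" index records share a key, so they are secondarily
  // sorted by value, i.e. by the primary key of the referenced record.

  declare const db: IDBDatabase;   // assumed to be an already-open connection

  const request = db.transaction("cars")
                    .objectStore("cars")
                    .index("by_make")
                    .openCursor();
  request.onsuccess = () => {
    const cursor = request.result;
    if (cursor) {
      // cursor.key        -> the index key, e.g. "toyota"
      // cursor.primaryKey -> the index record's value, i.e. the key of the
      //                      referenced record in the object store
      console.log(cursor.key, cursor.primaryKey);
      cursor.continue();
    }
  };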


Please feel free to make it even better or clearer; I'm not super happy with
these fixes.

Yeah, and sorry for filing bugs at this time, but I hope we want to fix stuff
even after CR. ;-)

-- 
Configure bugmail: https://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are on the CC list for the bug.



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Cameron Jones
On Wed, Jul 18, 2012 at 4:41 AM, Henry Story henry.st...@bblfish.net wrote:
 And it is the experience of this being required that led me to build a CORS 
 proxy [1] - (I am not the first to write one, I add quickly)

Yes, the Origin and unauthenticated CORS restrictions are trivially
circumvented by a simple proxy.


 So my argument is that this restriction could be lifted since

 1. GET is idempotent - and should not affect the resource fetched

HTTP method semantics are a conformance obligation, not a technical
guarantee. From a security point of view, any method can be misused for
any purpose.

The people at risk from the different method semantics are those who
use them incorrectly, for example a bank which issues transactions
using GET over a URI:
http://dontbankonus.com/transfer?to=xyz&amount=100

  2. If there is no authentication, then the JS Agent could make the request 
 via a CORS proxy of its choosing, and so get the content of the resource 
 anyhow.

Yes, the restriction on performing an unauthenticated GET only serves
to promote the implementation of 3rd party proxy intermediaries and,
if they become established, will introduce new security issues by way
of indirection.

The pertinent question for cross-origin requests here is: who is
authoring the link, and therefore in control of the request? The reason
that cross-origin JS, which executes 3rd-party non-origin code within a
page, is not a problem for web security is that the author of the page
must explicitly include such a link. The control is within the author's
domain: they can apply prudence to what they link to and include from.
Honorable sites seek to protect their integrity by maintaining bona fide
links to trusted and reputable 3rd parties.

  3. One could still pass the Origin: header as a warning to sites who may be 
 tracking people in unusual ways.

This is what concerns people about implementing a proxy: essentially
you are circumventing a recommended security practice whereby sites
use this header as a means of attempting to protect themselves from
CSRF attacks. This is futile, and these sites would do better to
implement CSRF tokens, which is the method used by organizations that
must protect against online fraud with direct financial implications,
i.e. your bank.
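
For what it's worth, a minimal sketch of the kind of per-session CSRF token
check being referred to (Node/TypeScript; the function names are illustrative
and not from any particular framework):

  import { randomBytes, timingSafeEqual } from "crypto";

  // Issued once per session and embedded in forms / sent as a request header.
  function issueCsrfToken(): string {
    return randomBytes(32).toString("hex");
  }

  // A state-changing request is rejected unless the token it submits matches
  // the one stored server-side for that session.
  function isValidCsrfToken(sessionToken: string, submittedToken: string): boolean {
    const expected = Buffer.from(sessionToken, "hex");
    const actual = Buffer.from(submittedToken, "hex");
    return expected.length === actual.length && timingSafeEqual(expected, actual);
  }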

There are too many recommendations for protecting against CSRF and the
message is being lost. Conversely, the poor uptake of CORS is because
people do not understand it and are wary of implementing anything which
they regard as a potential risk if they get it wrong.

   Lifting this restriction would make a lot of public data available on the 
 web for use by JS agents cleanly. Where requests require authentication or 
 are non-idempotent CORS makes a lot of sense, and those are areas where data 
 publishers would need to be aware of CORS anyway, and should implement it as 
 part of a security review. But for people publishing open data, CORS should 
 not be something they need to consider.


The restriction is in place because the default methods of cross-origin
requests that predate XHR applied HTTP auth and cookies without
restriction. If this were extended in the same manner to XHR, it would
allow any page to issue scripted, authenticated requests to any site
you have visited within the lifetime of your browsing session. This
would allow seemingly innocuous sites to run complex multi-request CSRF
attacks as background processes, against as many targets as they can
find, while you're on the page.

The more sensible option is to make all XHR requests unauthenticated
unless explicitly scripted for such operation. A request to a public
IP address which carries no user-identifiable information is
completely harmless by definition.
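
In script terms the argument is roughly the following (a sketch; the URL is
made up, and withCredentials is already false by default). Today such a GET
still requires the server to opt in with CORS response headers; the proposal
is that a credential-free request like this to a public address could be
allowed without them:

  const xhr = new XMLHttpRequest();
  xhr.open("GET", "https://data.example.org/dataset.json");  // assumed public URL
  xhr.withCredentials = false;   // no cookies or HTTP auth sent cross-origin
  xhr.onload = () => console.log(xhr.responseText);
  xhr.send();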

On Wed, Jul 18, 2012 at 4:47 AM, Ian Hickson i...@hixie.ch wrote:
 No, such a proxy can't get to intranet pages.

 Authentication on the Internet can include many things, e.g. IP
 addresses or mere connectivity, that are not actually included in the body
 of an HTTP GET request. It's more than just cookies and HTTP auth headers.

The vulnerability of unsecured intranets can be eliminated by applying
the restriction to private IP ranges, which are the source of this
attack vector. It is unsound (and potentially legally disputable) for
public access resources to be restricted, and for public access
providers to pay the costs of protecting private resources. It is the
responsibility of the resource's owner to pay the costs of enforcing
their chosen security policies.
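
To be concrete about what restricting "private IP ranges" would mean, the
classification is roughly the following (illustration only; where exactly a
UA would apply such a check is the open question):

  // Rough RFC 1918 / loopback / link-local test for IPv4 literals;
  // IPv6 and name resolution are deliberately ignored in this sketch.
  function isPrivateIPv4(ip: string): boolean {
    const parts = ip.split(".").map(Number);
    if (parts.length !== 4 ||
        parts.some(n => !Number.isInteger(n) || n < 0 || n > 255)) {
      return false;
    }
    const [a, b] = parts;
    return a === 10 ||                           // 10.0.0.0/8
           (a === 172 && b >= 16 && b <= 31) ||  // 172.16.0.0/12
           (a === 192 && b === 168) ||           // 192.168.0.0/16
           a === 127 ||                          // 127.0.0.0/8 (loopback)
           (a === 169 && b === 254);             // 169.254.0.0/16 (link-local)
  }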

Thanks,
Cameron Jones



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Henry Story

On 19 Jul 2012, at 14:07, Cameron Jones wrote:

 On Wed, Jul 18, 2012 at 4:41 AM, Henry Story henry.st...@bblfish.net wrote:
 And it is the experience of this being required that led me to build a CORS 
 proxy [1] - (I am not the first to write one, I add quickly)
 
 Yes, the Origin and unauthenticated CORS restrictions are trivially
 circumvented by a simple proxy.
 
 
 So my argument is that this restriction could be lifted since
 
 1. GET is idempotent - and should not affect the resource fetched

I have to correct myself here: GET and HEAD are nullipotent (they have no
side effects, and the result is the same whether they are executed 0 or more
times), whereas PUT and DELETE (along with GET and HEAD) are idempotent (they
have the same result when executed 1 or more times).
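
For reference, the distinction in HTTP's own terms (the HTTP specs call
nullipotent methods "safe"), limited to the methods discussed here:

  safe / nullipotent : GET, HEAD              (no side effects; 0..n calls equivalent)
  idempotent         : GET, HEAD, PUT, DELETE (1..n calls equivalent to a single call)
  neither            : POST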

 
 HTTP method semantics are a conformance obligation, not a technical
 guarantee. From a security point of view, any method can be misused for
 any purpose.

 The people at risk from the different method semantics are those who
 use them incorrectly, for example a bank which issues transactions
 using GET over a URI:
 http://dontbankonus.com/transfer?to=xyz&amount=100

Yes, that is of course their problem, and one should not design to help people 
who do silly
things like that. 

 
 2. If there is no authentication, then the JS Agent could make the request 
 via a CORS proxy of its choosing, and so get the content of the resource 
 anyhow.
 
 Yes, the restriction on performing an unauthenticated GET only serves
 to promote the implementation of 3rd party proxy intermediaries and,
 if they become established, will introduce new security issues by way
 of indirection.
 
 The pertinent question for cross-origin requests here is: who is
 authoring the link, and therefore in control of the request? The reason
 that cross-origin JS, which executes 3rd-party non-origin code within a
 page, is not a problem for web security is that the author of the page
 must explicitly include such a link. The control is within the author's
 domain: they can apply prudence to what they link to and include from.
 Honorable sites seek to protect their integrity by maintaining bona fide
 links to trusted and reputable 3rd parties.

Yes, though in the case of a JS-based linked data application, like the
semi-functioning one I wrote and described earlier
  http://bblfish.github.com/rdflib.js/example/people/social_book.html
(not all links work; you can click on Tim Berners-Lee and a few others)
the original JavaScript is not fetching more JavaScript, but fetching more
data from the web. Still, your point remains valid. That address book needs
to find ways to help show who says what, and of course not just load any JS
it finds on the web, or else its reputation will suffer. My CORS proxy only
loads RDFizable data.

 
 3. One could still pass the Origin: header as a warning to sites who may be 
 tracking people in unusual ways.
 
 This is what concerns people about implementing a proxy: essentially
 you are circumventing a recommended security practice whereby sites
 use this header as a means of attempting to protect themselves from
 CSRF attacks. This is futile, and these sites would do better to
 implement CSRF tokens, which is the method used by organizations that
 must protect against online fraud with direct financial implications,
 i.e. your bank.

I was suggesting that the browser still pass the Origin: header even on a
request to a non-authenticated page, for informational reasons.
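
Concretely, such a request would still look something like this on the wire
(hostnames made up for illustration):

  GET /public/dataset.json HTTP/1.1
  Host: data.example.org
  Origin: https://app.example.com
  (no Cookie or Authorization headers)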

 
 There are too many recommendations for protecting against CSRF and the
 message is being lost. Conversely, the poor uptake of CORS is because
 people do not understand it and are wary of implementing anything which
 they regard as a potential risk if they get it wrong.
 
  Lifting this restriction would make a lot of public data available on the 
 web for use by JS agents cleanly. Where requests require authentication or 
 are non-nullipotent CORS makes a lot of sense, and those are areas where 
 data publishers would need to be aware of CORS anyway, and should implement 
 it as part of a security review. But for people publishing open data, CORS 
 should not be something they need to consider.
 
 
 The restriction is in place because the default methods of cross-origin
 requests that predate XHR applied HTTP auth and cookies without
 restriction. If this were extended in the same manner to XHR, it would
 allow any page to issue scripted, authenticated requests to any site
 you have visited within the lifetime of your browsing session. This
 would allow seemingly innocuous sites to run complex multi-request CSRF
 attacks as background processes, against as many targets as they can
 find, while you're on the page.

Indeed. Hence my suggestion that this restriction only be lifted for
nullipotent, non-authenticated requests.

 The more sensible option is to make all XHR requests unauthenticated
 unless explicitly scripted for such operation. A 

Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Cameron Jones
 On Wed, Jul 18, 2012 at 4:41 AM, Henry Story henry.st...@bblfish.net wrote:

 2. If there is no authentication, then the JS Agent could make the request 
 via a CORS proxy of its choosing, and so get the content of the resource 
 anyhow.

 Yes, the restriction on performing an unauthenticated GET only serves
 to promote the implementation of 3rd party proxy intermediaries and,
 if they become established, will introduce new security issues by way
 of indirection.

 The pertinent question for cross-origin requests here is: who is
 authoring the link, and therefore in control of the request? The reason
 that cross-origin JS, which executes 3rd-party non-origin code within a
 page, is not a problem for web security is that the author of the page
 must explicitly include such a link. The control is within the author's
 domain: they can apply prudence to what they link to and include from.
 Honorable sites seek to protect their integrity by maintaining bona fide
 links to trusted and reputable 3rd parties.

 Yes, though in the case of a JS-based linked data application, like the
 semi-functioning one I wrote and described earlier
   http://bblfish.github.com/rdflib.js/example/people/social_book.html
 (not all links work; you can click on Tim Berners-Lee and a few others)
 the original JavaScript is not fetching more JavaScript, but fetching more
 data from the web. Still, your point remains valid. That address book needs
 to find ways to help show who says what, and of course not just load any JS
 it finds on the web, or else its reputation will suffer. My CORS proxy only
 loads RDFizable data.


Yes, I think you have run into a fundamental problem which must be
addressed in order for linked data to exist. Dismissal of early
implementation experience is unhelpful at best.

I find myself in a similar situation, whereby I have to write, maintain
and pay for the bandwidth of an intermediary proxy just to service
public requests. This has real financial consequences and is
unacceptable when there is no technical grounding for the restrictions.
As stated before, it could even be regarded as a form of censorship of
freedom of expression, for both the author publishing their work freely
and the consumer expressing new ideas.


 On Wed, Jul 18, 2012 at 4:47 AM, Ian Hickson i...@hixie.ch wrote:
 No, such a proxy can't get to intranet pages.

 Authentication on the Internet can include many things, e.g. IP
 addresses or mere connectivity, that are not actually included in the body
 of an HTTP GET request. It's more than just cookies and HTTP auth headers.

 The vulnerability of unsecured intranets can be eliminated by applying
 the restriction to private IP ranges, which are the source of this
 attack vector. It is unsound (and potentially legally disputable) for
 public access resources to be restricted, and for public access
 providers to pay the costs of protecting private resources. It is the
 responsibility of the resource's owner to pay the costs of enforcing
 their chosen security policies.

 Thanks a lot for this suggestion. Ian Hickson's argument had convinced me, 
 but you have just provided a clean answer to it.

 If a mechanism can be found to apply restrictions for private IP ranges then 
 that should be used in preference to forcing the rest of the web to implement 
 CORS restrictions on public data. And indeed the firewall servers use private 
 ip ranges, which do in fact make a good distinguisher for public and non 
 public space.

 So the proposal is still alive it seems :-)


+1

I have complete support for the proposal.



 Social Web Architect
 http://bblfish.net/


Thanks,
Cameron Jones



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Anne van Kesteren
On Thu, Jul 19, 2012 at 2:43 PM, Henry Story henry.st...@bblfish.net wrote:
 If a mechanism can be found to apply restrictions for private IP ranges then 
 that
 should be used in preference to forcing the rest of the web to implement CORS
 restrictions on public data. And indeed the firewall servers use private ip 
 ranges,
 which do in fact make a good distinguisher for public and non public space.

It's not just private servers (there's no guarantee those only use
private IP ranges either). It's also IP-based authentication to
private resources as e.g. W3C has used for some time.


-- 
http://annevankesteren.nl/



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Cameron Jones
On Thu, Jul 19, 2012 at 2:54 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Jul 19, 2012 at 2:43 PM, Henry Story henry.st...@bblfish.net wrote:
 If a mechanism can be found to apply restrictions for private IP ranges then 
 that
 should be used in preference to forcing the rest of the web to implement CORS
 restrictions on public data. And indeed the firewall servers use private ip 
 ranges,
 which do in fact make a good distinguisher for public and non public space.

 It's not just private servers (there's no guarantee those only use
 private IP ranges either). It's also IP-based authentication to
 private resources as e.g. W3C has used for some time.



Isn't this mitigated by the Origin header?

Also, what about the point that this is unethically pushing the costs
of securing private resources onto public access providers?

Thanks,
Cameron Jones



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Anne van Kesteren
On Thu, Jul 19, 2012 at 4:10 PM, Cameron Jones cmhjo...@gmail.com wrote:
 Isn't this mitigated by the Origin header?

No.


 Also, what about the point that this is unethically pushing the costs
 of securing private resources onto public access providers?

It is far more unethical to expose a user's private data.


-- 
http://annevankesteren.nl/



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Cameron Jones
On Thu, Jul 19, 2012 at 3:06 PM, Eric Rescorla e...@rtfm.com wrote:
 On Thu, Jul 19, 2012 at 6:54 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Jul 19, 2012 at 2:43 PM, Henry Story henry.st...@bblfish.net wrote:
 If a mechanism can be found to apply restrictions for private IP ranges 
 then that
 should be used in preference to forcing the rest of the web to implement 
 CORS
 restrictions on public data. And indeed the firewall servers use private ip 
 ranges,
 which do in fact make a good distinguisher for public and non public space.

 It's not just private servers (there's no guarantee those only use
 private IP ranges either). It's also IP-based authentication to
 private resources as e.g. W3C has used for some time.

 Moreover, some companies have public IP ranges that are
 firewall blocked. It's not in general possible for the browser
 to distinguish publicly accessible IP addresses from non-publicly
 accessible IP addresses.

Yes, it is impossible for a browser to detect intranet configurations.

The problem I have is that public providers are being forced to change
their configurations rather than internal company networks changing
theirs.

Company IT departments have far more technical skills, and the ability
to perform the changes, than public publishers who may not even be
able to add CORS headers if they wanted to.


 More generally, CORS is designed to replicate the restrictions that non-CORS
 already imposes on browsers. Currently, browsers prevent JS from obtaining
 the result of this kind of cross-origin GET, thus CORS retains this 
 restriction.
 This is consistent with the general policy of not adding new features to
 browsers that would break people's existing security models, no matter
 how broken one might regard those models as being.


Aside from the intranet public-IP concern, isn't this due to the
ambient authority applied to cross-origin GET requests? This turns
otherwise public information into a potentially private resource.

Removing all user-identifiable information from a request would remove
the need for this restriction and would not break anyone's security
policy (leaving aside the public-IP-behind-a-firewall scenario).


 I believe the WG already has consensus on this point.

 -Ekr

Thank you for the response in light of the existing consensus. Having
potentially new information addressed, even if it turns out not to be
new, aids understanding and can assist in further adoption and advocacy.

Thanks,
Cameron Jones



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-19 Thread Cameron Jones
On Thu, Jul 19, 2012 at 3:19 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Jul 19, 2012 at 4:10 PM, Cameron Jones cmhjo...@gmail.com wrote:
 Isn't this mitigated by the Origin header?

 No.



Could you expand on this response, please?

My understanding is that requests generated from XHR will have the
Origin header applied. This can be used to reject requests from 3rd
party websites within browsers. Therefore, intranets have the potential
to restrict access arising from internal users' browsing habits.
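
That is, an intranet service could apply something like the following check
(a sketch; the function name and allowlisted origin are made up):

  const allowedOrigins = new Set(["https://intranet.example.corp"]);

  function isAllowedOrigin(originHeader: string | undefined): boolean {
    // Same-origin and many non-XHR requests carry no Origin header at all;
    // anything arriving with a foreign Origin is rejected.
    if (originHeader === undefined) {
      return true;
    }
    return allowedOrigins.has(originHeader);
  }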


 Also, what about the point that this is unethically pushing the costs
 of securing private resources onto public access providers?

 It is far more unethical to expose a user's private data.



Yes, but if no private user data is being exposed then there is a cost
being paid for no benefit.

 --
 http://annevankesteren.nl/

Thanks,
Cameron Jones



Re: CfC: publish Candidate Recommendation of Web Sockets API; deadline July 18

2012-07-19 Thread Arthur Barstow

On 7/12/12 8:06 AM, ext Julian Reschke wrote:

On 2012-07-12 13:47, Arthur Barstow wrote:

I agree with Hixie that ideally the fix would apply to the original
source rather than 1-off versions in dev.w3. However, if that isn't
worked out, I will apply Julian's patch to the CR version.


Sounds good.


FYI, your patch was applied to the document that will be used as the
basis for the CR:
http://dev.w3.org/html5/websockets/publish/CR-websockets-Aug-2012.html



What do we do about the normative reference to a non-W3C part of the 
WHATWG spec? (decoded as UTF-8, with error handling)


What about using http://dev.w3.org/html5/spec/single-page.html#utf-8?

-Thanks, AB




Re: CfC: publish Candidate Recommendation of Web Sockets API; deadline July 18

2012-07-19 Thread Julian Reschke

On 2012-07-19 17:30, Arthur Barstow wrote:

On 7/12/12 8:06 AM, ext Julian Reschke wrote:

On 2012-07-12 13:47, Arthur Barstow wrote:

I agree with Hixie that ideally the fix would apply to the original
source rather than 1-off versions in dev.w3. However, if that isn't
worked out, I will apply Julian's patch to the CR version.


Sounds good.


FYI, your patch was applied to the document that will be used as the
basis for the CR:
http://dev.w3.org/html5/websockets/publish/CR-websockets-Aug-2012.html



What do we do about the normative reference to a non-W3C part of the
WHATWG spec? (decoded as UTF-8, with error handling)


What about using http://dev.w3.org/html5/spec/single-page.html#utf-8?


Actually, 
http://dev.w3.org/html5/spec/single-page.html#decoded-as-utf-8-with-error-handling; 
dunno why I missed that.
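
For context, "decoded as UTF-8, with error handling" is the replacement-character
decoding defined there; in today's script terms it corresponds roughly to this
sketch (TextDecoder's default, non-fatal mode):

  // Invalid byte sequences become U+FFFD instead of aborting the decode.
  const bytes = new Uint8Array([0x68, 0x69, 0xff]);     // "hi" plus an invalid byte
  const text = new TextDecoder("utf-8").decode(bytes);  // "hi\uFFFD"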


Best regards, Julian






[Bug 16927] HTML reference was changed from W3C version to WHATWG version

2012-07-19 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=16927

Ian 'Hixie' Hickson i...@hixie.ch changed:

   What|Removed |Added

 Status|REOPENED|RESOLVED
 Resolution||INVALID

--- Comment #3 from Ian 'Hixie' Hickson i...@hixie.ch 2012-07-20 02:27:07 UTC 
---
That file has never said W3C. That file was forked to make the TR/CR, and in
that fork, it was changed to W3C.

Please stop filing bugs about nonsense like this. It is highly wasteful of my
time.

-- 
Configure bugmail: https://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are on the CC list for the bug.



[selectors-api] RfC: LCWD of Selectors API Level 1; deadline July 19

2012-07-19 Thread Kang-Hao (Kenny) Lu
Sorry for my late comment.

While I think it's fine to publish the LCWD of Selectors API as it is, it
would be nice if it could address my comment in [1]. By "address", I mean
either define the desired behavior or explicitly mark it as undefined (which
I think is better than not saying anything, because an explicit "undefined"
tells a spec reader that relevant discussions can be found on the list).

[1]
http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/thread#msg518


Cheers,
Kenny