Re: [widgets] white space handling

2009-12-21 Thread Cyril Concolato

Hi Robin,

On 18/12/2009 18:01, Robin Berjon wrote:

On Dec 18, 2009, at 16:36 , Cyril Concolato wrote:

On 18/12/2009 15:58, Robin Berjon wrote:

P+C doesn't tie processors to a particular version of XML, and lists its white 
space characters accordingly (and defensively). If you're certain that you will 
only ever get content that comes from a conforming XML 1.0 implementation, then 
you probably don't need to check for this.

I don't read it like that. P+C explicitly references XML 1.0 and never
mentions 1.1, so I thought the behavior was meant to conform to 1.0. It's fine
if the spec also handles 1.1, but that should be mentioned. The rationale for
the choice of space characters should also be indicated, and the differences
between XML 1.0 and XML 1.1 should be noted.


I beg to differ. I think that we should build specifications that can handle 
future changes to the stack

I'm fine with that.


without listing all the versions that are supported.

Citing what you support doesn't restrict you to only that. I think it helps in
understanding a spec.


P+C is built for XML 1.0, and it's great that it has the resilience to handle 
changes to 1.1 without a hitch — but who knows what XML 4.2 might add? We can't 
guarantee that it'll work, but we can try (and if it does work, I don't think 
that we should list it either). I certainly don't think that it's the right 
place to document potential differences between versions of XML — as your XHTML 
example shows, that kind of information goes stale.

If you're explicitly citing dated versions of the spec, which are cast in
stone, I don't see how they can go stale.



Furthermore, I didn't say that the differences between XML 1.0 and 1.1 are the 
rationale for this choice — I was merely indicating that using 1.1 you could 
get such characters and that P+C's robustness against that was a plus. I wasn't 
in Marcos's brain when that part was written but my specification exegesis 
antennae suspect that the listed class of characters corresponds to the Unicode 
white space character class (and therefore to what Unicode-aware processors 
would consider white space, notably \s in regular expressions).

Well, you know my concern. I want to understand the spec in order to implement
it properly. I'm not asking for any new normative statement, nor any change to
the existing ones. I would be fine with informative notes explaining the intent
of some choices. For example, as you know, I'm implementing an SVG UA and a
P+C UA, and I want to know what's reusable and what's common without doing XML
archaeology. Such notes would help me, and I suspect they would help others.
Nothing more.
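
Concretely, the kind of note I have in mind would let an implementer write
something like the sketch below (my own sketch, and only valid if your guess
that the listed characters are the Unicode white space class, i.e. what \s
matches, is right):

// Sketch only: assumes the P+C space characters are the same set that
// ECMAScript's \s (roughly the Unicode white space characters) matches.
function trimWidgetText(value: string): string {
  return value.replace(/^\s+|\s+$/g, "");
}

That is the kind of shortcut I would like to be able to take with confidence.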

Regards,

Cyril
--
Cyril Concolato
Maître de Conférences/Associate Professor
Groupe Multimedia/Multimedia Group
Telecom ParisTech
46 rue Barrault
75 013 Paris, France
http://concolato.blog.telecom-paristech.fr/widgets/



Re: XMLHttpRequest Comments from W3C Forms WG

2009-12-21 Thread Marcos Caceres
On Mon, Dec 21, 2009 at 1:12 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Sun, Dec 20, 2009 at 2:39 PM, Marcos Caceres marc...@opera.com wrote:
 On Sun, Dec 20, 2009 at 10:43 PM, Julian Reschke julian.resc...@gmx.de 
 wrote:
 Marcos Caceres wrote:

 ...
 Yeah, you are right. I guess we get so used to having these crappy
 retrospective APIs around that one forgets that things could be done
 in better ways - thankfully decent frameworks have been built around
 them to make these things usable.
 ...

 Maybe that could be a lesson for XHR2?

 Perhaps, but I haven't been following the XHR2 work - it could already
 address all this, for all I know :) Nevertheless, if this hasn't
 already happened, it would be good if people who have worked on making
 XHR actually usable would contribute to making XHR Level 2 more
 aligned with how XHR is used on the ground - I'm thinking of Prototype,
 Dojo, jQuery, etc.

 It seems a bit ridiculous that everyone is building effectively the same
 wrappers around XHR to make it usable when all this could be done much
 faster if it were implemented natively in the browser. Apart from
 having a whinge, I don't have a better proposal for how this could be
 done - I haven't thought about it, and there are people much more
 qualified than me to do that. I can only hope that those working on
 the spec have looked at how the frameworks do Ajax and whether lessons
 can be taken and specified out of that... or that framework creators
 contribute back to the standardization process from the wild.

 Note that just because something is implemented natively in the
 browser doesn't mean it's faster. For example, what a lot of libraries
 that wrap XHR do is cover up browser differences, as well as
 present a friendlier syntax. The overhead of doing this in JS is on
 the order of fractions of a millisecond, whereas the full request
 usually takes several tenths of a second.

 Performance optimizing the JS overhead here is clearly not worth it.

I agree; there are instances where it doesn't really make a
difference. However, there is evidence that for some APIs it did make
sense to implement native support (Selectors API):

http://ejohn.org/blog/queryselectorall-in-firefox-31/

I guess by "faster" I really meant more logical and usable, and so
hopefully faster for developers to work with. For example, I don't seem
to get as nauseous when I use jQuery as when I use native XHR.
Anyway, that's just me, and I don't want to take this discussion down
some rathole.
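
Just to be concrete about the kind of layer I mean (a rough sketch of my own,
not any particular framework's API), most of them end up shipping something
like:

// Hypothetical convenience wrapper of the sort every framework re-invents:
// hides readyState bookkeeping and gives success/error callbacks.
function get(url: string,
             onSuccess: (text: string) => void,
             onError: (status: number) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onreadystatechange = () => {
    if (xhr.readyState !== 4) return;              // not DONE yet
    if (xhr.status >= 200 && xhr.status < 300) {
      onSuccess(xhr.responseText);
    } else {
      onError(xhr.status);
    }
  };
  xhr.send();
}

If something that readable were native, the wrappers would have much less to do.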

 I do however definitely agree that we should be talking to web
 developers in any spec we develop, XHR included.

If you mean the average web developer (say, someone who uses jQuery to
build some website), I think we can all agree that this would be nice
but probably impractical. I think it would be better to engage those
who built the great JS frameworks, which already engage with
developers on the ground and have made the innovations that have made
the unruly mess that is Web development a bit more tolerable.
These are usually the people who, in my experience, know a lot about
incompatibilities across UAs and, more importantly, can give insight
into how developers actually wish JavaScript and related APIs would
look and behave... though it could lead to creepy method names,
like $$() :D

Kind regards,
Marcos

-- 
Marcos Caceres
http://datadriven.com.au



Re: [widgets] Request for Comments: LCWD of Widget Access Request Policy spec; deadline 13-Jan-2010

2009-12-21 Thread Marcos Caceres
On Tue, Dec 8, 2009 at 8:06 PM, Arthur Barstow art.bars...@nokia.com wrote:
 On December 8, Last Call Working Draft (#2) of the Widget Access Request
 Policy spec was published:

  http://www.w3.org/TR/2009/WD-widgets-access-20091208/

 Widget Access Request Policy
 2. Definitions

 An access request is a request made by an author to the user agent for
 the ability to retrieve one or more network resources. The network
 resources and author requests to access are identified using access
 elements in the widget's configuration document.


I got a bit confused by the second sentence; maybe change it to:

Requests by an author to access network resources can be identified by
the presence of access elements in the widget's configuration document.



 3. Conformance
 This specification defines conformance criteria that apply to a single
 product: user agents that implement the interfaces that it contains.

It's confusing to talk about interfaces here, though I understand
you are talking about interfaces in general terms. I would prefer if
the spec just said:

This specification defines conformance criteria that apply to a single
product: user agents.

And a user agent would be defined as a software application that
implements this specification and the [WIDGETS] specification and its
dependencies.

 4. Policy

 A user agent enforces an access request policy. In the default policy,
 a user agent must deny access to network resources external to the
 widget by default, whether this access is requested through APIs (e.g.
 XMLHttpRequest) or through markup (e.g. iframe, script, img).

I think you need to make it really clear that you've just defined the
default policy for a WUA. Please make it a sub-section or something.
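
To illustrate how I read it (my own sketch, not spec text): with nothing
granted, a WUA denies every external request, no matter where it comes from.

// Sketch of my reading of the default policy: no granted origins means no
// network access at all, whether the request comes from XMLHttpRequest or
// from markup such as iframe/script/img.
function allowNetworkAccess(requestOrigin: string,
                            grantedOrigins: Set<string>): boolean {
  return grantedOrigins.has(requestOrigin);  // empty set => always deny
}

If the spec structured the default policy as its own sub-section, that
reading would be obvious.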


 The exact rules defining which execution scope applies to network
 resources loaded into a document running in the widget execution scope
 depend on the language that is being used inside the the widget.

Typo: the the


 5. The access Element
 Context in which this element may be used:

P+C uses "Context in which this element is used:". It would be nice if
this one said the same thing :)

 5.1 Attributes

 origin
 An IRI attribute that defines the specifics of the access request that
 is requested.

"that is requested" seems tautological... and makes the sentence read
funny (and not ha-ha funny).

 Additionally, the special value of U+002A ASTERISK (*)
 may be used.

"may" -> "can".

I would rewrite:

Additionally, an author can use the special value of U+002A ASTERISK (*):

 This special value provides a means for an author to
 request from the user agent unrestricted access to network resources.

Break here.

 Only the scheme and authority components can be present in the IRI
 that this attribute contains ([URI], [RFC3987]).

I'm really sorry, but I'm having a hard time parsing the above sentence.
At first, I thought it was related to the sentence about *. Can you
change the order of these sentences? Also, the * value is pretty
important; maybe it deserves its own sub-section, even if it just
contains one short paragraph. I'm sure people will come back asking for
clarification on how it's supposed to work once we go to CR.
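
In case it helps, this is how I currently read the attribute (my own sketch,
so please correct me if the intent is different):

// My reading (not spec text): the attribute is either the special "*" or an
// IRI consisting of just a scheme and an authority, with no path, query or
// fragment component.
function isValidAccessOrigin(value: string): boolean {
  if (value === "*") return true;  // author requests unrestricted access
  return /^[a-z][a-z0-9+.\-]*:\/\/[^\/?#]+$/i.test(value);
}

If that is right, saying so in roughly those terms would save readers a trip
to the RFCs.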

 subdomains
 A boolean attribute that indicates whether or not the host component
 part of the access request applies to subdomains of domain in the
 origin attribute.

It should be clear that subdomains and domain here refer to
components of RFC-such-and-such, right?

 The default value when this attribute is absent is
 false, meaning that access to subdomains is not requested.

What does it mean when I have:

<access domain="http://foo.bar.woo.com" subdomains="true"/>

Everything before woo.com is allowed, right? Maybe this could be made
clear in the spec for people like me :)
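
Or, as code (just my guess at the intended semantics, using the host from the
origin attribute):

// My guess at the check (please confirm): with subdomains="true", a request
// host matches if it is the host from the origin attribute or a subdomain
// of it.
function hostMatches(requestHost: string, originHost: string,
                     subdomains: boolean): boolean {
  if (requestHost === originHost) return true;
  return subdomains && requestHost.endsWith("." + originHost);
}

// e.g. hostMatches("api.foo.bar.woo.com", "foo.bar.woo.com", true) === true,
// but is hostMatches("other.woo.com", "foo.bar.woo.com", true) meant to be
// true or false?

A sentence in the spec answering that would do it.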

 5.2 Usage example

 This example contains multiple uses of the access element (not
 contained in the same configuration as the last one would make the
 others useless).

The above sentence doesn't tell me anything (that I can understand)
about the example. It would be nice if it told me what I was looking at
a bit more. Maybe you need to break this up into multiple examples,
showing when it would be appropriate to use *.

 They presume that http://www.w3.org/ns/widgets is the
 default namespace defined in their context:

Instead of the fancy sentence above, why don't you just add a widget
xmlns=... around the access elements?

 <access origin="https://example.net"/>
 <access origin="http://example.org" subdomains="true"/>
 <access origin="http://dahut.example.com:4242"/>
 <access origin="*"/>

 6. Processing access elements in the Configuration Document

 A user agent must add the following to the Table of Configuration
 Defaults [WIDGETS].

Say that this needs to happen in Step 3, please.

 This processing takes place as part of Step 7 - Process the
 Configuration Document in [WIDGETS].

Can you please move the above sentence above the table? The first thing
I thought when I read the first para of this section was: when?! When do
I need to do 

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Tyler Close
On Thu, Dec 17, 2009 at 5:49 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 17 Dec 2009, Tyler Close wrote:

 Starting from the X-FRAME-OPTIONS proposal, say the response header
 also applies to all embedding that the page renderer does. So it also
 covers img, video, etc. In addition to the current values, the
 header can also list hostname patterns that may embed the content. So,
 in your case:

 X-FRAME-OPTIONS: *.example.com
 Access-Control-Allow-Origin: *

 Which means anyone can access this content, but sites outside
 *.example.com should host their own copy, rather than framing or
 otherwise directly embedding my copy.

 Why is this better than:

   Access-Control-Allow-Origin: *.example.com

X-FRAME-OPTIONS is a rendering instruction and
Access-Control-Allow-Origin is part of an access-control mechanism.
Combining the two in the way you propose creates an access-control
mechanism that is inherently vulnerable to CSRF-like attacks, because
it determines read access to bits based on the identity of the
requestor.

Using your example, assume an XML resource sitting on an intranet
server at resources.example.com. The author of this resource is trying
to restrict access to the XML data to only other intranet resources
hosted at *.example.com. The author believes this can be accomplished
by simply setting the Access-Control-Allow-Origin header as you've
shown above, but that's not strictly true. Every page hosted on
*.example.com is now a potential target for a CSRF-like attack that
reveals the secret data. For example, consider a page at
victim.example.com that uses a third party storage service. To copy
data, the page does a GET on the location of the existing data,
followed by a POST to another location with the data to be copied. If
the storage service says the location of the existing data is the URL
for the secret XML data (http://resources.example.com/...), then the
victim page suffers a CSRF-like attack that exposes the secret data.
The victim page may know nothing of the existence or purpose of the
secret XML resource.
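
To spell that out as code (a hypothetical page at victim.example.com; the
URLs and function are made up for illustration):

// Hypothetical copy routine on a page at victim.example.com. The "from" URL
// comes from the third-party storage service; if it is pointed at the secret
// intranet resource, Access-Control-Allow-Origin: *.example.com lets this
// page read the bits, and the POST then sends them to the attacker.
function copyData(from: string, to: string): void {
  const get = new XMLHttpRequest();
  get.open("GET", from, true);
  get.onreadystatechange = () => {
    if (get.readyState !== 4 || get.status !== 200) return;
    const post = new XMLHttpRequest();
    post.open("POST", to, true);
    post.send(get.responseText);  // the secret data leaves the intranet
  };
  get.send();
}

The page authors did nothing obviously wrong; the policy on the XML resource
is what set the trap.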

To avoid this pitfall, we instead design the access-control mechanism
to not create these traps. With the bogus technique removed, the
author of a protected resource can now choose amongst techniques that
actually work.

To address your bandwidth stealing concerns, and other similar issues,
we define X-FRAME-OPTIONS so that a resource author can inform the
browser's renderer of these preferences. So your XBL resource can
declare that it was only expecting to be applied to another resource
from *.example.com. The browser can detect this misconfiguration and
raise an error notification.

By separating the two mechanisms, we make the access-control model
clear and correct, while still providing the rendering control you
desired.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Ian Hickson
On Mon, 21 Dec 2009, Tyler Close wrote:
 On Thu, Dec 17, 2009 at 5:49 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 17 Dec 2009, Tyler Close wrote:
 
  Starting from the X-FRAME-OPTIONS proposal, say the response header
  also applies to all embedding that the page renderer does. So it also
  covers img, video, etc. In addition to the current values, the
  header can also list hostname patterns that may embed the content. So,
  in your case:
 
  X-FRAME-OPTIONS: *.example.com
  Access-Control-Allow-Origin: *
 
  Which means anyone can access this content, but sites outside
  *.example.com should host their own copy, rather than framing or
  otherwise directly embedding my copy.
 
  Why is this better than:
 
    Access-Control-Allow-Origin: *.example.com
 
 X-FRAME-OPTIONS is a rendering instruction and
 Access-Control-Allow-Origin is part of an access-control mechanism.
 Combining the two in the way you propose creates an access-control
 mechanism that is inherently vulnerable to CSRF-like attacks, because
 it determines read access to bits based on the identity of the
 requestor.
 
 Using your example, assume an XML resource sitting on an intranet
 server at resources.example.com. The author of this resource is trying
 to restrict access to the XML data to only other intranet resources
 hosted at *.example.com. The author believes this can be accomplished
 by simply setting the Access-Control-Allow-Origin header as you've
 shown above, but that's not strictly true. Every page hosted on
 *.example.com is now a potential target for a CSRF-like attack that
 reveals the secret data. For example, consider a page at
 victim.example.com that uses a third party storage service. To copy
 data, the page does a GET on the location of the existing data,
 followed by a POST to another location with the data to be copied. If
 the storage service says the location of the existing data is the URL
 for the secret XML data (http://resources.example.com/...), then the
 victim page suffers a CSRF-like attack that exposes the secret data.
 The victim page may know nothing of the existence or purpose of the
 secret XML resource.
 
 To avoid this pitfall, we instead design the access-control mechanism
 to not create these traps. With the bogus technique removed, the
 author of a protected resource can now choose amongst techniques that
 actually work.
 
 To address your bandwidth stealing concerns, and other similar issues,
 we define X-FRAME-OPTIONS so that a resource author can inform the
 browser's renderer of these preferences. So your XBL resource can
 declare that it was only expecting to be applied to another resource
 from *.example.com. The browser can detect this misconfiguration and
 raise an error notification.
 
 By separating the two mechanisms, we make the access-control model
 clear and correct, while still providing the rendering control you
 desired.

I don't understand the difference between "opaque string origin
opaque string" and "opaque string origin".

With XBL in particular, what we need is something that decides whether a 
page can access the DOM of the XBL file or not, on a per-origin basis. 
Whether the magic string is:

   X-FRAME-OPTIONS: *.example.com
   Access-Control-Allow-Origin: *

...or:

   X-FRAME-OPTIONS: *.example.com

...or:

   Access-Control-Allow-Origin: *.example.com

...or:

   X: *.example.com

...or some other sequence of bytes doesn't seem to make any difference to 
any actual concrete security. There's only one mechanism here. Either 
access is granted to that origin, or it isn't.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Tyler Close
On Mon, Dec 21, 2009 at 2:16 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 21 Dec 2009, Tyler Close wrote:
 On Thu, Dec 17, 2009 at 5:49 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 17 Dec 2009, Tyler Close wrote:
 
  Starting from the X-FRAME-OPTIONS proposal, say the response header
  also applies to all embedding that the page renderer does. So it also
  covers img, video, etc. In addition to the current values, the
  header can also list hostname patterns that may embed the content. So,
  in your case:
 
  X-FRAME-OPTIONS: *.example.com
  Access-Control-Allow-Origin: *
 
  Which means anyone can access this content, but sites outside
  *.example.com should host their own copy, rather than framing or
  otherwise directly embedding my copy.
 
  Why is this better than:
 
    Access-Control-Allow-Origin: *.example.com

 X-FRAME-OPTIONS is a rendering instruction and
 Access-Control-Allow-Origin is part of an access-control mechanism.
 Combining the two in the way you propose creates an access-control
 mechanism that is inherently vulnerable to CSRF-like attacks, because
 it determines read access to bits based on the identity of the
 requestor.

 Using your example, assume an XML resource sitting on an intranet
 server at resources.example.com. The author of this resource is trying
 to restrict access to the XML data to only other intranet resources
 hosted at *.example.com. The author believes this can be accomplished
 by simply setting the Access-Control-Allow-Origin header as you've
 shown above, but that's not strictly true. Every page hosted on
 *.example.com is now a potential target for a CSRF-like attack that
 reveals the secret data. For example, consider a page at
 victim.example.com that uses a third party storage service. To copy
 data, the page does a GET on the location of the existing data,
 followed by a POST to another location with the data to be copied. If
 the storage service says the location of the existing data is the URL
 for the secret XML data (http://resources.example.com/...), then the
 victim page suffers a CSRF-like attack that exposes the secret data.
 The victim page may know nothing of the existence or purpose of the
 secret XML resource.

 To avoid this pitfall, we instead design the access-control mechanism
 to not create these traps. With the bogus technique removed, the
 author of a protected resource can now choose amongst techniques that
 actually work.

 To address your bandwidth stealing concerns, and other similar issues,
 we define X-FRAME-OPTIONS so that a resource author can inform the
 browser's renderer of these preferences. So your XBL resource can
 declare that it was only expecting to be applied to another resource
 from *.example.com. The browser can detect this misconfiguration and
 raise an error notification.

 By separating the two mechanisms, we make the access-control model
 clear and correct, while still providing the rendering control you
 desired.

 I don't understand the difference between "opaque string origin
 opaque string" and "opaque string origin".

 With XBL in particular, what we need is something that decides whether a
 page can access the DOM of the XBL file or not, on a per-origin basis.
 Whether the magic string is:

   X-FRAME-OPTIONS: *.example.com
   Access-Control-Allow-Origin: *

 ...or:

   X-FRAME-OPTIONS: *.example.com

 ...or:

   Access-Control-Allow-Origin: *.example.com

 ...or:

   X: *.example.com

 ...or some other sequence of bytes doesn't seem to make any difference to
 any actual concrete security. There's only one mechanism here. Either
 access is granted to that origin, or it isn't.

No, there is a difference in access-control between the two designs.

In the two header design:
1) An XHR GET of the XBL file data by example.org *is* allowed.
2) An xbl import of the XBL data by example.org triggers a rendering error.

In the one header design:
1) An XHR GET of the XBL file data by example.org is *not* allowed.
2) An xbl import of the XBL data by example.org triggers a rendering error.

Under the two header design, everyone has read access to the raw bits
of the XBL file. The one header design makes an empty promise to
protect read access to the XBL file.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Ian Hickson
On Mon, 21 Dec 2009, Tyler Close wrote:
 
 No, there is a difference in access-control between the two designs.
 
 In the two header design:
 1) An XHR GET of the XBL file data by example.org *is* allowed.
 2) An xbl import of the XBL data by example.org triggers a rendering error.

That's a bad design. It would make people think they had secured the file 
when they had not.

Security should be consistent across everything.


 In the one header design:
 1) An XHR GET of the XBL file data by example.org is *not* allowed.
 2) An xbl import of the XBL data by example.org triggers a rendering error.

That's what I want.


 Under the two header design, everyone has read access to the raw bits
 of the XBL file.

That's a bad thing.


 The one header design makes an empty promise to protect read access to 
 the XBL file.

How is it an empty promise?

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Tyler Close
On Mon, Dec 21, 2009 at 2:39 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 21 Dec 2009, Tyler Close wrote:

 No, there is a difference in access-control between the two designs.

 In the two header design:
 1) An XHR GET of the XBL file data by example.org *is* allowed.
 2) An xbl import of the XBL data by example.org triggers a rendering error.

 That's a bad design. It would make people think they had secured the file
 when they had not.

The headers explicitly say that a read request from any Origin is allowed:

Access-Control-Allow-Origin: *

The above syntax is the one CORS came up with. How could it be made clearer?

 Security should be consistent across everything.

It is. All Origins have read access. The data just renders in a
different way depending on if/how it is embedded.

 In the one header design:
 1) An XHR GET of the XBL file data by example.org is *not* allowed.
 2) An xbl import of the XBL data by example.org triggers a rendering error.

 That's what I want.

What you want, and the mechanism you propose to get it, are at odds.
I've described the CSRF-like attack multiple times. The access control
model you propose doesn't actually work.

To actually control access to the XBL file data you need to use
something like the secret token designs we've discussed.

 Under the two header design, everyone has read access to the raw bits
 of the XBL file.

 That's a bad thing.

In the scenario you described, everyone *does* have read access to
the raw bits. Anyone can just direct their browser to example.org and
save the data. In your scenario, we were just trying to discourage
bandwidth stealing.

 The one header design makes an empty promise to protect read access to
 the XBL file.

 How is it an empty promise?

See above.

We don't seem to be making any progress at understanding each other,
so I'm going to give up on this thread until I see some signs of
progress. Thanks for your time.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Ian Hickson
On Mon, 21 Dec 2009, Tyler Close wrote:
 On Mon, Dec 21, 2009 at 2:39 PM, Ian Hickson i...@hixie.ch wrote:
  On Mon, 21 Dec 2009, Tyler Close wrote:
 
  No, there is a difference in access-control between the two designs.
 
  In the two header design:
  1) An XHR GET of the XBL file data by example.org *is* allowed.
  2) An xbl import of the XBL data by example.org triggers a rendering 
  error.
 
  That's a bad design. It would make people think they had secured the file
  when they had not.
 
 The headers explicitly say that a read request from any Origin is allowed:
 
 Access-Control-Allow-Origin: *
 
 The above syntax is the one CORS came up with. How could it be made clearer?

By not having two headers, but just having one.


  Security should be consistent across everything.
 
 It is. All Origins have read access. The data just renders in a
 different way depending on if/how it is embedded.

I am not interested in this kind of distinction. I think we should only 
have one distinction -- either an origin can use the data, or it can't.


  In the one header design:
  1) An XHR GET of the XBL file data by example.org is *not* allowed.
  2) An xbl import of the XBL data by example.org triggers a rendering 
  error.
 
  That's what I want.
 
 What you want, and the mechanism you propose to get it, are at odds.
 I've described the CSRF-like attack multiple times.

Sure, you can misuse Origin in complicated scenarios to introduce CSRF 
attacks. But XBL2 doesn't have those scenarios, and nor do video, 
img+canvas, and any number of other options. Most XHR2 uses don't 
involve multiple sites either. We shouldn't make _everything_ far more 
complicated just because there is a way to misuse the feature in a case 
that is itself already complicated.


 The access control model you propose doesn't actually work.

It works fine for XBL2, Web Sockets, video, img+canvas, sharing 
data across multiple servers in one environment, etc.


 To actually control access to the XBL file data you need to use 
 something like the secret token designs we've discussed.

I'm sorry but it's simply a non-starter to have to use secret tokens for 
embedding XBL resources. That's orders of magnitude more complexity than 
most authors will be able to deal with.

There are no scripts involved in these scenarios. It would simply lead to 
the secret tokens being baked into public resources, which would make it 
trivial for them to be forged, which defeats the entire purpose.


  Under the two header design, everyone has read access to the raw bits 
  of the XBL file.
 
  That's a bad thing.
 
 In the scenario you described, everyone *does* have read access to the 
 raw bits.

Only people behind the intranet, or with the right cookies, or with the 
right HTTP authentication, or with the right IP addresses. That's not 
everyone.


 In your scenario, we were just trying to discourage bandwidth stealing.

I am trying to do many things. Bandwidth stealing is one. Securing 
semi-public resources is another. Securing resources behind intranets is 
yet another. These are all use cases that CORS makes trivial and which UM 
makes incredibly complicated.


Personally the more I discuss this the more convinced I am becoming that 
CORS is the way to go.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Kenton Varda
On Mon, Dec 21, 2009 at 5:35 PM, Adam Barth w...@adambarth.com wrote:

 On Mon, Dec 21, 2009 at 5:17 PM, Kenton Varda ken...@google.com wrote:
  The problem we're getting at is that CORS is being presented as a
  security mechanism, when in fact it does not provide security.  Yes,
  CORS is absolutely easier to use than UM in some cases -- I don't think
  anyone is going to dispute that.  The problem is that the security it
  provides in those cases simply doesn't exist unless you can ensure that
  no resource on *any* of your allowed origins can be tricked into
  fetching your protected resource for a third party.  In practice this
  will be nearly impossible to ensure except in the most simple cases.

 Why isn't this a big problem today for normal XMLHttpRequest?  Normal
 XMLHttpRequest is just like a CORS deployment in which every server
 has a policy of allowing its own origin.


It *is* a problem today with XMLHttpRequest.  This is, for example, one
reason why we cannot host arbitrary HTML documents uploaded by users on
google.com -- a rather large inconvenience!  If it were feasible, we'd be
arguing for removing this ability from XMLHttpRequest.  However, removing a
feature that exists is generally not possible; better to avoid adding it in
the first place.

With CORS, the problems would be worse, because now you not only have to
ensure that your own server is trustworthy and free of CSRF, but also that
the servers of everyone you allow to access your resource are as well.
Problems are likely to multiply exponentially.


RE: to publish new Working Draft of Indexed Database API; deadline December 21

2009-12-21 Thread Adrian Bateman
Microsoft supports publishing a new Working Draft.

However, there appears to be a problem with the Respec.js script at
http://dev.w3.org/2006/webapi/WebSimpleDB/.


On Monday, December 14, 2009 12:54 PM, Arthur Barstow wrote:
 This is a Call for Consensus (CfC) to publish a new Working Draft of
 the Indexed Database API spec with a new short-name of indexeddb:
 
   http://dev.w3.org/2006/webapi/WebSimpleDB/
 
 As with all of our CfCs, positive response is preferred and
 encouraged and silence will be assumed to be assent. The deadline for
 comments is 21 December.
 
 Since the comment period ends after the last day to request a 2009
 publication, assuming this CfC is agreed, the new WD will be
 published 6 January 2010.
 
 -Regards, Art Barstow
 
 
 Begin forwarded message:
 
  From: ext Nikunj R. Mehta nikunj.me...@oracle.com
  Date: December 14, 2009 2:26:22 PM EST
  To: public-webapps WG public-webapps@w3.org
  Subject: Indexed Database API (previously WebSimpleDB) ready for a
  new WD
  Archived-At: http://www.w3.org/mid/981D8DE7-B1F3-4075-ACC6-
  a895c6665...@oracle.com
 
  Dear Chairs,
 
  Indexed Database API [1] is ready for a new WD. I have addressed
  various issues reported to the WebApps WG so far. I propose the short
  name indexeddb to replace websimpledb at this time.
 
  I know of one issue reported by Pablo Castro that is not resolved [2]:
  Usability of asynchronous APIs. This discussion needs its own time and
  more study to improve upon the approach currently in the ED.
 
  Thanks,
  Nikunj
  http://o-micron.blogspot.com
 
  [1] http://dev.w3.org/2006/webapi/WebSimpleDB/
  [2] http://www.w3.org/mid/f753b2c401114141b426db383c8885e01b9cd...@tk5ex14mbxc126.redmond.corp.microsoft.com
 
 
 
 




RE: to publish new Working Draft of Indexed Database API; deadline December 21

2009-12-21 Thread Adrian Bateman
On Monday, December 21, 2009 6:43 PM, I wrote:
 Microsoft supports publishing a new Working Draft.
 
 However, there appears to be a problem with the Respec.js script at
 http://dev.w3.org/2006/webapi/WebSimpleDB/.

Apparently, the script takes some time to run (at least when I tried it in 
Firefox) and is incompatible with Internet Explorer. Is it possible to capture 
the result of the script running and publish that?

Thanks,

Adrian.