Re: =[xhr]

2014-11-18 Thread Anne van Kesteren
On Tue, Nov 18, 2014 at 5:45 AM, Domenic Denicola d...@domenic.me wrote:
 That would be very sad. There are many servers that will not accept chunked 
 upload (for example Amazon S3).

The only way I could imagine us doing this is by setting the
Content-Length header value through an option (not through Headers)
and by having the browser enforce the specified length somehow. It's
not entirely clear how a browser would go about that. Too many bytes
could be addressed through a transform stream I suppose, too few
bytes... I guess that would just leave the connection hanging. Not
sure if that is particularly problematic.
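[Editorial sketch] The transform-stream idea for the too-many-bytes case can be sketched as follows. This is speculative illustration only: `lengthLimiter` is a made-up name, and chunks are assumed to be Uint8Arrays.

```javascript
// Speculative sketch of the "too many bytes" mitigation: a
// TransformStream that truncates a request body at the declared
// Content-Length. Assumes Uint8Array chunks; lengthLimiter is a
// made-up name, not from any spec.
function lengthLimiter(contentLength) {
  let sent = 0;
  return new TransformStream({
    transform(chunk, controller) {
      const remaining = contentLength - sent;
      if (remaining <= 0) return; // drop any excess bytes
      const slice = chunk.byteLength > remaining
        ? chunk.subarray(0, remaining)
        : chunk;
      sent += slice.byteLength;
      controller.enqueue(slice);
    },
  });
}
```

The too-few-bytes case is untouched by this: the stream simply ends early, which is the "connection hanging" situation described above.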


-- 
https://annevankesteren.nl/



RE: =[xhr]

2014-11-18 Thread Domenic Denicola
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of 
Anne van Kesteren

 The only way I could imagine us doing this is by setting the Content-Length 
 header value through an option (not through Headers) and by having the 
 browser enforce the specified length somehow. It's not entirely clear how a 
 browser would go about that. Too many bytes could be addressed through a 
 transform stream I suppose, too few bytes... I guess that would just leave 
 the connection hanging. Not sure if that is particularly problematic.

I don't understand why the browser couldn't special-case the handling of
`this.headers.get("Content-Length")`? I.e. why would a separate option be
required? So for example the browser could stop sending any bytes past the 
number specified by reading the Content-Length header value. And if you 
prematurely close the request body stream before sending the specified number 
of bytes then the server just has to deal with it, as they normally do...

I still think we should just allow the developer full control over the 
Content-Length header if they've taken full control over the contents of the 
request body (by writing to its stream asynchronously and piecemeal). It gives 
no more power than using CURL. (Except the usual issues of ambient/cookie 
authority, but those seem orthogonal to Content-Length mismatch.)



Re: =[xhr]

2014-11-18 Thread Anne van Kesteren
On Tue, Nov 18, 2014 at 10:34 AM, Domenic Denicola d...@domenic.me wrote:
 I still think we should just allow the developer full control over the 
 Content-Length header if they've taken full control over the contents of the 
 request body (by writing to its stream asynchronously and piecemeal). It 
 gives no more power than using CURL. (Except the usual issues of 
 ambient/cookie authority, but those seem orthogonal to Content-Length 
 mismatch.)

Why? If a service behind a firewall is vulnerable to Content-Length
mismatches, you can now attack such a service by tricking a user
behind that firewall into visiting evil.com.


-- 
https://annevankesteren.nl/



Re: =[xhr]

2014-11-18 Thread Takeshi Yoshino
How about padding the remaining bytes forcefully with e.g. 0x20 if the
WritableStream doesn't provide enough bytes to us?

Takeshi

On Tue, Nov 18, 2014 at 7:01 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Tue, Nov 18, 2014 at 10:34 AM, Domenic Denicola d...@domenic.me wrote:
  I still think we should just allow the developer full control over the
 Content-Length header if they've taken full control over the contents of
 the request body (by writing to its stream asynchronously and piecemeal).
 It gives no more power than using CURL. (Except the usual issues of
 ambient/cookie authority, but those seem orthogonal to Content-Length
 mismatch.)

 Why? If a service behind a firewall is vulnerable to Content-Length
 mismatches, you can now attack such a service by tricking a user
 behind that firewall into visiting evil.com.


 --
 https://annevankesteren.nl/



Re: =[xhr]

2014-11-18 Thread Anne van Kesteren
On Tue, Nov 18, 2014 at 12:50 PM, Takeshi Yoshino tyosh...@google.com wrote:
 How about padding the remaining bytes forcefully with e.g. 0x20 if the
 WritableStream doesn't provide enough bytes to us?

How would that work? At some point when the browser decides it wants
to terminate the fetch (e.g. due to timeout, tab being closed) it
attempts to transmit a bunch of useless bytes? What if the value is
really large?


-- 
https://annevankesteren.nl/



Re: [url] follow-ups from the TPAC F2F Meeting

2014-11-18 Thread Arthur Barstow

On 10/29/14 9:54 PM, Sam Ruby wrote:

I am willing to help with this effort.


Thanks for this information [1] and sorry for the delayed reply.

Given URL is a joint deliverable between WebApps and TAG, perhaps it 
would be helpful if you were a co-Editor. Are you interested in that role?


Regardless, I think it would be useful if you and/or Anne would briefly 
describe your efforts, in particular: is your problem space the same or 
different; is there mutual interest to work on a single standard (or 
layered approach) that addresses the union of UCs, requirements, etc.


-Thanks, AB

[1] http://lists.w3.org/Archives/Public/public-webapps/2014OctDec/0315.html




RE: =[xhr]

2014-11-18 Thread Domenic Denicola
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of 
Anne van Kesteren

 On Tue, Nov 18, 2014 at 12:50 PM, Takeshi Yoshino tyosh...@google.com wrote:
 How about padding the remaining bytes forcefully with e.g. 0x20 if the 
 WritableStream doesn't provide enough bytes to us?

 How would that work? At some point when the browser decides it wants to 
 terminate the fetch (e.g. due to timeout, tab being closed) it attempts to 
 transmit a bunch of useless bytes? What if the value is really large?

I think there are several different scenarios under consideration.

1. The author says Content-Length 100, writes 50 bytes, then closes the stream.
2. The author says Content-Length 100, writes 50 bytes, and never closes the 
stream.
3. The author says Content-Length 100, writes 150 bytes, then closes the stream.
4. The author says Content-Length 100, writes 150 bytes, and never closes the
stream.

It would be helpful to know how most servers handle these. (Perhaps HTTP 
specifies a mandatory behavior.) My guess is that they are very capable of 
handling such situations. 2 in particular resembles a long-polling setup.

As for whether we consider this kind of thing an attack, instead of just a 
new capability, I'd love to get some security folks to weigh in. If they think 
it's indeed a bad idea, then we can discuss mitigation strategies; 3 and 4 are
easy to mitigate, whereas 1 could be addressed by an idea like Takeshi's. I
don't think mitigating 2 makes much sense as we can't know when the author 
intends to send more data.



Re: =[xhr]

2014-11-18 Thread Rui Prior
 I think there are several different scenarios under consideration.
 
 1. The author says Content-Length 100, writes 50 bytes, then closes the 
 stream.

Depends on what exactly closing the stream does:

(1) Closing the stream includes closing the TCP connection => the
body of the HTTP message is incomplete, so the server should avoid
processing it;  no response is returned.

(2) Closing the stream includes half-closing the TCP connection =>
the body of the HTTP message is incomplete, so the server should avoid
processing it;  a 400 Bad Request response would be adequate.  (In
particular cases where partial bodies are acceptable, it might be
different.)

(3) Closing the stream does nothing to the underlying TCP connection
=> the server will wait for the remaining bytes (perhaps until a timeout).


 2. The author says Content-Length 100, writes 50 bytes, and never closes the 
 stream.

The server will wait for the remaining bytes (perhaps until a timeout).


 3. The author says Content-Length 100, writes 150 bytes, then closes the 
 stream.

The server thinks that the message is finished after the first 100 bytes
and tries to process them normally.  The last 50 bytes are interpreted
as the beginning of a new (pipelined) request, and the server will
generate a 400 Bad Request response.


 4. The author says Content-Length 100, writes 150 bytes, and never closes
 the stream.

This case should be similar to the previous one.
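[Editorial sketch] The framing behaviour described for scenarios 3 and 4 can be illustrated with a toy helper. `frameBody` is purely illustrative, not real server code: a server framing by Content-Length takes exactly that many bytes as the body and treats any surplus as the start of a pipelined request.

```javascript
// Toy illustration of Content-Length framing (frameBody is a
// hypothetical helper): the server takes exactly contentLength bytes
// as the body and treats whatever follows as the beginning of a new
// (pipelined) request -- which here would yield a 400 Bad Request.
function frameBody(received, contentLength) {
  return {
    body: received.subarray(0, contentLength),
    nextRequest: received.subarray(contentLength),
  };
}
```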


IMO, exposing such a degree of (low-level) control should be avoided.  In
cases where the size of the body is known beforehand, Content-Length
should be generated automatically;  in cases where it is not, chunked
encoding should be used.

Rui Prior




Re: CfC: publish WG Note of XHR Level 2; deadline November 14

2014-11-18 Thread Arthur Barstow

On 11/7/14 11:46 AM, Arthur Barstow wrote:

this is a Call for Consensus to:

a) Publish a gutted WG Note of the spec (see [Draft-Note])


FYI, this WG Note has been published 
http://www.w3.org/TR/2014/NOTE-XMLHttpRequest2-20141118/.




Re: CfC: publish a WG Note of Fullscreen; deadline November 14

2014-11-18 Thread Arthur Barstow

On 11/7/14 8:39 AM, Arthur Barstow wrote:

this is a formal Call for Consensus to:

a) Stop work on the spec (and remove it as a deliverable if/when 
WebApps' charter is updated)


b) Publish a WG Note of this spec; (see [Draft-Note] for the proposed 
document)


c) gut the WG Note of all technical content (as WebApps did recently 
with [e.g.])


d) gut the ED [ED] of all technical content (note: this hasn't been 
done yet but I will do so if/when this CfC passes)


FYI, the WG Note was published 
http://www.w3.org/TR/2014/NOTE-fullscreen-20141118/.





Re: [url] follow-ups from the TPAC F2F Meeting

2014-11-18 Thread Sam Ruby

On 11/18/2014 09:51 AM, Arthur Barstow wrote:

On 10/29/14 9:54 PM, Sam Ruby wrote:

I am willing to help with this effort.


Thanks for this information [1] and sorry for the delayed reply.

Given URL is a joint deliverable between WebApps and TAG, perhaps it
would be helpful if you were a co-Editor. Are you interested in that role?


Yes.

- Sam Ruby



PSA: Sam Ruby is co-Editor of URL spec

2014-11-18 Thread Arthur Barstow

On 11/18/14 3:02 PM, Sam Ruby wrote:

On 11/18/2014 09:51 AM, Arthur Barstow wrote:

Given URL is a joint deliverable between WebApps and TAG, perhaps it
would be helpful if you were a co-Editor. Are you interested in that 
role?


Yes.


OK, PubStatus updated accordingly.

-Thanks, AB



Re: PSA: Sam Ruby is co-Editor of URL spec

2014-11-18 Thread Sam Ruby

On 11/18/2014 03:08 PM, Arthur Barstow wrote:

On 11/18/14 3:02 PM, Sam Ruby wrote:

On 11/18/2014 09:51 AM, Arthur Barstow wrote:

Given URL is a joint deliverable between WebApps and TAG, perhaps it
would be helpful if you were a co-Editor. Are you interested in that
role?


Yes.


OK, PubStatus updated accordingly.


Thanks!

Would it be possible to fork https://github.com/whatwg/url into 
https://github.com/w3c/, and to give me the necessary access to update this?


I've recently converted the spec to bikeshed, and bikeshed has the 
ability to produce W3C style specifications.  I also plan to add a 
status section as described here:


http://lists.w3.org/Archives/Public/public-webapps/2014OctDec/0315.html

Once done, I'll post a message to this list (public-webapps) for a 
review, followed by a PSA when it is ready to be pushed out as an 
editor's draft.


I plan to work with all the people who have formally objected to see if 
their concerns can be resolved.


Meanwhile, I'm working to integrate the following first into the WHATWG 
version of the spec, and then through the WebApps process:


http://intertwingly.net/projects/pegurl/url.html

Longer term (more specifically, in 1Q15), I plan to schedule a meeting 
with the Director to resolve whether or not there is a need for a 
WebApps version:


http://lists.w3.org/Archives/Public/public-html-admin/2014Nov/0036.html

- Sam Ruby



[Bug 24338] Spec should have Fetch for Blob URLs

2014-11-18 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=24338

Arun a...@mozilla.com changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--- Comment #12 from Arun a...@mozilla.com ---
Resolving per Comment 11

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: Bringing APIs for experimental hardware/software to the Web

2014-11-18 Thread Dimitri Glazkov
On Sun, Nov 16, 2014 at 8:30 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

 On Sun, Nov 16, 2014 at 5:35 PM, Dimitri Glazkov dglaz...@google.com
 wrote:

 On Wed, Nov 12, 2014 at 8:44 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:

 On Wed, Nov 12, 2014 at 12:36 PM, Dimitri Glazkov dglaz...@google.com
 wrote:

 Nevertheless, I am optimistic. I would like to have this discussion and
 hear your ideas.


 OK. The following ideas are somewhat half-baked so don't judge me too
 harshly :-).

 Rapid deployment of experimental APIs in the browser clashes with our
 existing goals. It takes time to negotiate APIs, to create multiple
 interoperable implementations, to validate safety, etc. So I conclude we
 shouldn't bake them into the browser. Instead we could achieve most of our
 goals by enabling third parties to deploy experimental functionality in
 browser-independent, device-independent packages that can be used by ---
 and if necessary shipped alongside --- Web applications. To the extent we
 can do that, we make an end-run around the standardization problem.


 This implies that we need to first design/specify an environment
 (execution/packaging/etc.) for this. Right? This seems like a massive
 investment of time. Not saying it's a bad idea. Unless you have a piano
 neatly stashed away in the bushes waiting for me to ask this question :) [
 http://en.wikipedia.org/wiki/Grigori_Gorin]


 This need not be a hard problem, depending on other decisions we make. It
 might be as simple as load an IFRAME with a well-known URL at some
 vendor's Web site and exchange postMessages with it, or load a script
 directly from the vendor's Web site (which might do the IFRAME/postmessage
 thing under the hood and vend a friendlier API). We may not need to
 standardize anything specific here, although it would be good to have some
 best practices to recommend.
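[Editorial sketch] The IFRAME/postMessage pattern described above can be sketched like this. All names, the message shape, and the vendor URL handling are hypothetical; this is one possible shape of the "vend a friendlier API" idea, not a standard.

```javascript
// Hypothetical sketch of "load an IFRAME at a well-known vendor URL
// and exchange postMessages with it". The message format ({id, method,
// params} / {id, result}) is invented for illustration.
function loadVendorAPI(vendorUrl) {
  const frame = document.createElement('iframe');
  frame.style.display = 'none';
  frame.src = vendorUrl;
  document.body.appendChild(frame);

  let nextId = 0;
  const pending = new Map();
  window.addEventListener('message', (e) => {
    if (e.source !== frame.contentWindow) return; // only trust our frame
    const { id, result } = e.data;
    if (pending.has(id)) {
      pending.get(id)(result);
      pending.delete(id);
    }
  });

  // Vend a friendlier API: each call becomes a postMessage round trip.
  return function call(method, params) {
    return new Promise((resolve) => {
      const id = nextId++;
      pending.set(id, resolve);
      frame.contentWindow.postMessage({ id, method, params }, vendorUrl);
    });
  };
}
```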


Okay. That lines up well with my thinking on the Tubes proposal.




 For software, this means we need a browser-based execution environment
 that can run the code that implements these APIs. I think for C/C++ code,
 we're nearly there already. For GPU code the situation is murkier and we
 need to solve it (but we already need to).


 Agreed. Somewhat tangentially, would like to know your opinion on the
 bedrock and extensible manifesto thing. It seems that primitive archeology
 presupposes that C/C++ is ultimately brushed down to nothing.


 I don't understand what you're saying here.


If you look at https://extensiblewebmanifesto.org/ and project the path
forward with those guidelines in mind, at some point, you will hit the
bedrock -- a set of platform APIs that can no longer be explained in terms
of other, more low-level APIs.

Given that, what is beneath this bedrock? To what degree will we succeed
with JS as a platform host language? Will there remain things like C++
bindings and the exotic objects that they bring along?



 Okay. I think I understand this part. Since USB is basically the common
 bus for new hardware innovation anyway, we could just go down to USB and
 build up from there.

 I have a couple of concerns:
 1) This means you'll effectively have to implement device drivers twice
 -- once for host OS, once for Web platform. That seems like doubling the
 work and lessening the chance of the developer actually going through it.


 We ought to be able to provide glue code to ease porting, e.g. by
 emulating libusb or the Android USB interface on top of whatever Web USB
 API we provide.


Sounds like there's some prior art in this space:
https://developer.chrome.com/apps/app_usb and
https://wiki.mozilla.org/WebAPI/WebUSB. Interesting bit about caveats:
https://developer.chrome.com/apps/app_usb#caveats

I immediately feel ill-informed to determine whether exposing a USB API
satisfies most developer/vendor needs. Sounds like a bug for me to fix ;)




 2) Instilling a device-independent device driver culture seems like a
 tough challenge. Two device vendors need to have an incentive to
 collaborate on a driver that works for both of their nearly identical crazy
 camera tricks. The slope here is not going to be leaning in our favor.


 There seems to be a misunderstanding. In this context, I explicitly
 abandon the goal of having a single driver or API for diverse hardware.


Okay. Makes sense.



 It might be good to start enumerating the types of APIs we're interested
 in.

 For example, we could take Apple's HealthKit (
 https://developer.apple.com/healthkit/) and Google's Fit (
 https://developers.google.com/fit/overview) as some of the things
 developers might need, and see how we could deliver them.

 One key result I am looking for here is adapting to fit into existing
 frameworks like that, rather than building our own. Though rebuilding both
 of the things from scratch for the Web platform -- however NIH -- should
 still be on the table.


 These frameworks do not fit very well into the scope of my problem
 statement; I don't see them as 

[Bug 25038] [Shadow]: Non-normative text about selection should be removed

2014-11-18 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=25038

Hayato Ito hay...@chromium.org changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 CC||hay...@chromium.org
 Resolution|--- |DUPLICATE

--- Comment #2 from Hayato Ito hay...@chromium.org ---


*** This bug has been marked as a duplicate of bug 15444 ***

-- 
You are receiving this mail because:
You are on the CC list for the bug.



[Bug 25562] [Shadow]: Inert HTML elements normative text is not focused enough

2014-11-18 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=25562

Hayato Ito hay...@chromium.org changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 CC||hay...@chromium.org
 Resolution|--- |DUPLICATE

--- Comment #12 from Hayato Ito hay...@chromium.org ---


*** This bug has been marked as a duplicate of bug 26365 ***

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: What I am missing

2014-11-18 Thread Marc Fawzi
Allowing this script to run may open you to all kinds of malicious attacks
by 3rd parties not associated with the party whom you're trusting.

If I give App XYZ super power to do anything, and XYZ gets
compromised/hacked then I'll be open to all sorts of attacks.

It's not an issue of party A trusting party B. It's an issue of trusting
that party B has no security holes in their app whatsoever, and that is one
of the hardest things to guarantee.


On Tue, Nov 18, 2014 at 8:00 PM, Michaela Merz michaela.m...@hermetos.com
wrote:


 Yes Boris - I know. As long as it doesn't have advantages for the user
 or the developer - why bother with it? If signed code would allow
 special features - like true fullscreen or direct file access  - it
 would make sense. Signed code would make script much more resistant to
 manipulation and therefore would help in environments where trust and/or
 security is important.

 We use script for much, much more than we did just a year or so ago.

 Michaela



 On 11/19/2014 04:40 AM, Boris Zbarsky wrote:
  On 11/18/14, 10:26 PM, Michaela Merz wrote:
  First: We need signed script code.
 
  For what it's worth, Gecko supported this for a while.  See
  
 http://www-archive.mozilla.org/projects/security/components/signed-scripts.html
 .
   In practice, people didn't really use it, and it made the security
  model a _lot_ more complicated and hard to reason about, so the
  feature was dropped.
 
  It would be good to understand how proposals along these lines differ
  from what's already been tried and failed.
 
  -Boris
 






Re: What I am missing

2014-11-18 Thread Florian Bösch
On Wed, Nov 19, 2014 at 4:26 AM, Michaela Merz michaela.m...@hermetos.com
wrote:

 First: We need signed script code. We are doing a lot of stuff with
 script - we could safely do even more, if we would be able to safely
 deliver script that has some kind of a trust model.

TLS exists.


 I am thinking about
 signed JAR files - just like we did with Java applets not too long ago.
 Maybe as an extension to the CSP environment .. and a nice frame around
 the browser telling the user that the site is providing trusted / signed
 code.

Which is different than TLS how?


 Signed code could allow more openness, like true full screen,

Fullscreen is possible today,
https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Using_full_screen_mode


 or simpler ajax downloads.

Simpler how?


 Second: It would be great to finally be able to accept incoming
 connections.

WebRTC allows the browser to accept incoming connections. The WebRTC data
channel covers both TCP and UDP connectivity.


 There's access to cameras and microphones - why not allow
 us the ability to code servers in the browser?

You can. There's even P2P overlay networks being done with WebRTC. Although
they're mostly hampered by the existing support for WebRTC data channels,
which isn't great yet.


Re: What I am missing

2014-11-18 Thread Michaela Merz
Well .. it would be an "all scripts signed or no scripts signed" kind of
a deal. You can download malicious code everywhere - not only as
scripts. Signed code doesn't protect against malicious or bad code. It
only guarantees that the code is actually from the certificate owner
.. and has not been altered without the signer's consent.

Michaela
 


On 11/19/2014 06:14 AM, Marc Fawzi wrote:
 Allowing this script to run may open you to all kinds of malicious
 attacks by 3rd parties not associated with the party whom you're
 trusting. 

 If I give App XYZ super power to do anything, and XYZ gets
 compromised/hacked then I'll be open to all sorts of attacks.

 It's not an issue of party A trusting party B. It's an issue of
 trusting that party B has no security holes in their app whatsoever,
 and that is one of the hardest things to guarantee.


 On Tue, Nov 18, 2014 at 8:00 PM, Michaela Merz
 michaela.m...@hermetos.com mailto:michaela.m...@hermetos.com wrote:


 Yes Boris - I know. As long as it doesn't have advantages for the user
 or the developer - why bother with it? If signed code would allow
 special features - like true fullscreen or direct file access  - it
 would make sense. Signed code would make script much more resistant to
 manipulation and therefore would help in environments where trust
 and/or
 security is important.

 We use script for much, much more than we did just a year or so ago.

 Michaela



 On 11/19/2014 04:40 AM, Boris Zbarsky wrote:
  On 11/18/14, 10:26 PM, Michaela Merz wrote:
  First: We need signed script code.
 
  For what it's worth, Gecko supported this for a while.  See
 
 
 http://www-archive.mozilla.org/projects/security/components/signed-scripts.html.
   In practice, people didn't really use it, and it made the security
  model a _lot_ more complicated and hard to reason about, so the
  feature was dropped.
 
  It would be good to understand how proposals along these lines
 differ
  from what's already been tried and failed.
 
  -Boris
 







Re: What I am missing

2014-11-18 Thread Florian Bösch
On Wed, Nov 19, 2014 at 5:00 AM, Michaela Merz michaela.m...@hermetos.com
wrote:

 If signed code would allow
 special features - like true fullscreen

https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Using_full_screen_mode



 or direct file access

http://www.html5rocks.com/en/tutorials/file/filesystem/


Re: What I am missing

2014-11-18 Thread Michaela Merz

TLS doesn't protect you against code that has been altered server side -
without the signer's consent. It would alert the user if unsigned
updates were made available.

Ajax downloads still require a download link (with the bloburl) to be
displayed requiring an additional click. User clicks download .. ajax
downloads the data, creates blob url as src which the user has to click
to 'copy' the blob onto the userspace drive. Would be better to skip the
final part.
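[Editorial sketch] The download flow described above can be sketched as follows. This is browser-only code and `saveViaBlob` is an illustrative name; in practice a programmatic click on an anchor carrying the `download` attribute typically saves the blob without the second user click.

```javascript
// Browser-only sketch of the XHR/blob download flow: fetch the data,
// wrap it in a Blob, and trigger a save via a temporary anchor.
// saveViaBlob is an illustrative name, not a platform API.
async function saveViaBlob(url, filename) {
  const response = await fetch(url);
  const blob = await response.blob();
  const a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = filename;
  a.click(); // programmatic click; avoids a second user click
  URL.revokeObjectURL(a.href);
}
```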

In regard to accept: I wasn't aware of the fact that I can accept a
socket on port 80 to serve an HTTP session. You're saying I could with
what's available today?

Michaela



On 11/19/2014 06:34 AM, Florian Bösch wrote:
 On Wed, Nov 19, 2014 at 4:26 AM, Michaela Merz
 michaela.m...@hermetos.com mailto:michaela.m...@hermetos.com wrote:

 First: We need signed script code. We are doing a lot of stuff with
 script - we could safely do even more, if we would be able to safely
 deliver script that has some kind of a trust model.

 TLS exists.
  

 I am thinking about
 signed JAR files - just like we did with Java applets not too long
 ago.
 Maybe as an extension to the CSP environment .. and a nice frame around
 the browser telling the user that the site is providing trusted /
 signed
 code.

 Which is different than TLS how?
  

 Signed code could allow more openness, like true full screen, 

 Fullscreen is possible
 today, 
 https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Using_full_screen_mode
  

 or simpler ajax downloads.

 Simpler how?
  

 Second: It would be great to finally be able to accept incoming
 connections.

 WebRTC allows the browser to accept incoming connections. The WebRTC
 data channel covers both TCP and UDP connectivity.
  

 There's access to cameras and microphones - why not allow
 us the ability to code servers in the browser?

 You can. There's even P2P overlay networks being done with WebRTC.
 Although they're mostly hampered by the existing support for WebRTC
 data channels, which isn't great yet.



[Bug 26815] [Shadow]:

2014-11-18 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=26815

Hayato Ito hay...@chromium.org changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--- Comment #2 from Hayato Ito hay...@chromium.org ---
Fixed
https://github.com/w3c/webcomponents/commit/8ed68226dcd1df36513e9b00d2ea1be99d83d1ee

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: What I am missing

2014-11-18 Thread Jeffrey Walton
On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
michaela.m...@hermetos.com wrote:
 Well .. it would be an "all scripts signed or no scripts signed" kind of a
 deal. You can download malicious code everywhere - not only as scripts.
 Signed code doesn't protect against malicious or bad code. It only
 guarantees that the code is actually from the certificate owner .. and
 has not been altered without the signer's consent.

Seems relevant: "Java's Losing Security Legacy",
http://threatpost.com/javas-losing-security-legacy and "Don't Sign
that Applet!", https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

Dormann advises not signing, so that the code can't escape its sandbox
and stays restricted (malware regularly signs in order to do so).



Re: What I am missing

2014-11-18 Thread Marc Fawzi

Signed code doesn't protect against malicious or bad code. It only
guarantees that the code is actually from the the certificate owner


if I trust you and allow your signed script the permissions it asks for, and
you can't guarantee that it won't be used by some malicious 3rd party site
to hack me (i.e. the security holes in your script get turned against me),
then there is just too much risk in allowing the permissions

the concern is that the average user will not readily grasp the risk
involved in granting certain powerful permissions to some insecure script
from a trusted source

On Tue, Nov 18, 2014 at 9:35 PM, Michaela Merz michaela.m...@hermetos.com
wrote:

  Well .. it would be an "all scripts signed or no scripts signed" kind of
 a deal. You can download malicious code everywhere - not only as scripts.
 Signed code doesn't protect against malicious or bad code. It only
 guarantees that the code is actually from the certificate owner .. and
 has not been altered without the signer's consent.

 Michaela




 On 11/19/2014 06:14 AM, Marc Fawzi wrote:

 Allowing this script to run may open you to all kinds of malicious
 attacks by 3rd parties not associated with the party whom you're
 trusting.

  If I give App XYZ super power to do anything, and XYZ gets
 compromised/hacked then I'll be open to all sorts of attacks.

  It's not an issue of party A trusting party B. It's an issue of trusting
 that party B has no security holes in their app whatsoever, and that is one
 of the hardest things to guarantee.


 On Tue, Nov 18, 2014 at 8:00 PM, Michaela Merz michaela.m...@hermetos.com
  wrote:


 Yes Boris - I know. As long as it doesn't have advantages for the user
 or the developer - why bother with it? If signed code would allow
 special features - like true fullscreen or direct file access  - it
 would make sense. Signed code would make script much more resistant to
 manipulation and therefore would help in environments where trust and/or
 security is important.

 We use script for much, much more than we did just a year or so ago.

 Michaela



 On 11/19/2014 04:40 AM, Boris Zbarsky wrote:
  On 11/18/14, 10:26 PM, Michaela Merz wrote:
  First: We need signed script code.
 
  For what it's worth, Gecko supported this for a while.  See
  
 http://www-archive.mozilla.org/projects/security/components/signed-scripts.html
 .
   In practice, people didn't really use it, and it made the security
  model a _lot_ more complicated and hard to reason about, so the
  feature was dropped.
 
  It would be good to understand how proposals along these lines differ
  from what's already been tried and failed.
 
  -Boris
 








Re: What I am missing

2014-11-18 Thread Florian Bösch
On Wed, Nov 19, 2014 at 6:35 AM, Michaela Merz michaela.m...@hermetos.com
wrote:

  Well .. it would be an "all scripts signed or no scripts signed" kind of
 a deal. You can download malicious code everywhere - not only as scripts.
 Signed code doesn't protect against malicious or bad code. It only
 guarantees that the code is actually from the certificate owner .. and
 has not been altered without the signer's consent.


On Wed, Nov 19, 2014 at 5:00 AM, Michaela Merz michaela.m...@hermetos.com
 wrote:

 it would make sense. Signed code would make script much more resistant to
 manipulation and therefore would help in environments where trust and/or
 security is important.

 We use script for much, much more than we did just a year or so ago.


On Wed, Nov 19, 2014 at 6:41 AM, Michaela Merz michaela.m...@hermetos.com
 wrote:

 TLS doesn't protect you against code that has been altered server side -
 without the signer's consent. It would alert the user if unsigned updates
 were made available.


Signing allows you to verify that an entity did produce a run of bytes, and
not another entity. Entity here meaning the holder of the private key who
put his signature onto that run of bytes. How do you know this entity did
that? Said entity also broadcast their public key, so that the recipient
can compare.

TLS solves this problem somewhat by securing the delivery channel. It
doesn't sign content, but via TLS it is (at least proverbially) impossible
for somebody else to deliver content over a channel you control.

Ajax downloads still require a download link (with the bloburl) to be
 displayed requiring an additional click. User clicks download .. ajax
 downloads the data, creates blob url as src which the user has to click to
 'copy' the blob onto the userspace drive. Would be better to skip the final
 part.

Signing, technically would have an advantage where you wish to deliver
content over a channel that you cannot control. Such as over WebRTC, from
files, and so forth.

In regard to accept: I wasn't aware of the fact that I can accept a socket
 on port 80 to serve a HTTP session. You're saying I could with what's
 available today?

You cannot. You can, however, let the browser accept an incoming connection
under the condition that both peers are browsing the same origin. The port
doesn't matter much, as WebRTC largely relegates it to an implementation
detail of the channel negotiator so that two peers on the same origin can
communicate.



Suppose you get a piece of signed content, over whatever way it was
delivered. Suppose also that this content you got has the ability to read
all your private data, or reformat your machine. So it's basically about
trust. You need to establish a secure channel of communication to obtain a
public key that matches a signature, in such a way that an attacker's
attempt to self-sign malicious content is foiled. And you need to have a
way to discover (after having established that the entity is the one who
was intended and that the signature is correct), that you indeed trust that
entity.

These are two very old problems in cryptography, and they cannot be solved
by cryptography. There are various approaches to this problem in use today:

   - TLS and its chain of trust: The basic idea is that there is a
   hierarchy of signatories. It works like this: an entity provides a
   certificate for the connection, signed with its private key. Since you
   cannot establish a connection without a public key that matches the
   private key, verifying the certificate is easy. This entity, in turn,
   refers to another entity which provided the signature for that private
   key. That one refers to another, and so forth, until you arrive at the
   root. You implicitly trust the root. This works, but it has some flaws.
   At the edge of the hierarchy, people are not allowed to self-sign, so
   they obtain their (pricey) key from the next tier up. But the next tier
   up can't go and bother its own next tier up every time it needs to
   provide a new set of keys to the edge. So it gets blanket permission to
   self-sign, implying that the tier above can establish and maintain a
   trust relationship with it. As is easily demonstrable, this can, and
   often does, go wrong, as when some CA gets compromised. This is always
   bad news for whoever obtained a certificate from them, because now a
   malicious party can pass themselves off as them.
   - App-stores and trust by royalty: This is really easy to describe: the
   app store you obtain something from signs the content, and you trust the
   app store, and therefore you trust the content. This can, and often
   does, go wrong, as Android/iOS malware amply demonstrates.

TLS cannot work perfectly, because it is built on implied trust along the
chain, and this can get compromised. App-stores cannot work perfectly
because the ability to review content is quickly exceeded by the flood of
content. Even if app-stores were provided with the full source,

Re: What I am missing

2014-11-18 Thread Florian Bösch
There are some models that are a bit better than trust by royalty
(app-stores) and trust by hierarchy (TLS). One of them is trust flowing
along flow-limited edges in a graph (as in Advogato). This model, however,
isn't free from fault: when a highly trusted entity gets compromised,
there's no quick or easy way to revoke that trust. Also, a trust graph
such as this doesn't solve the problem of stake. We trust, say, the
Twitter API, because we know that Twitter has staked a lot into it. If
they violate that trust, they suffer proportionally more. A graph doesn't
solve that problem, because it cannot offer a proof of stake.

Interestingly, there are ways to provide a proof of stake (see the various
cryptocurrencies that attempt to do that). Of course, proof-of-stake
cryptocurrencies have their own problems, but that doesn't entirely
invalidate the idea. If you can prove you have a stake of a given size,
then you can enhance a flow-limited trust graph so as to make it less
likely that an entity gets compromised. The difficulty with that approach,
of course, is that it would make acquiring high levels of trust
prohibitively expensive (as in: getting the privilege to access the
filesystem could run you into millions of dollars of stake shares).
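
For illustration, here is a toy version of flow-limited trust propagation.
This is a deliberate simplification, not Advogato's actual network-flow
computation, and the graph and capacities are made up: a node can pass on
at most the trust it has itself received, further capped by each outgoing
edge's capacity:

```javascript
// Toy flow-limited trust propagation over a directed graph.
// edges: Map<node, Array<{to, capacity}>>; trust starts at the seed.
function propagateTrust(edges, seed, seedTrust) {
  const trust = new Map([[seed, seedTrust]]);
  const queue = [seed];
  while (queue.length > 0) {
    const node = queue.shift();
    for (const { to, capacity } of edges.get(node) || []) {
      // A node can forward no more trust than it holds, and no more
      // than the edge's capacity allows.
      const flow = Math.min(trust.get(node), capacity);
      if (flow > (trust.get(to) || 0)) {
        trust.set(to, flow); // best single-path flow wins in this toy model
        queue.push(to);
      }
    }
  }
  return trust;
}

// Hypothetical graph: the seed trusts A heavily, A trusts B only weakly.
const edges = new Map([
  ['seed', [{ to: 'A', capacity: 10 }]],
  ['A', [{ to: 'B', capacity: 2 }]],
]);
const t = propagateTrust(edges, 'seed', 5);
// t.get('A') === 5, t.get('B') === 2
```

The flaw noted above is visible here too: if A is compromised, everything
downstream of A keeps the trust that already flowed through it until the
graph is recomputed with A's edges cut.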


Re: What I am missing

2014-11-18 Thread Marc Fawzi
So there is no way for an unsigned script to exploit security holes in a
signed script?

Funny you mention cryptocurrencies as an idea to get inspiration from...
"Trust but verify" is detached from that... a browser can monitor what the
signed scripts are doing, and if it detects a potentially malicious
pattern it can halt the execution of the script and let the user decide if
they want to continue...



Re: What I am missing

2014-11-18 Thread Florian Bösch
On Wed, Nov 19, 2014 at 7:54 AM, Marc Fawzi marc.fa...@gmail.com wrote:

 So there is no way for an unsigned script to exploit security holes in a
 signed script?

Of course there's a way. But by the same token, there's a way a signed
script can exploit security holes in another signed script. Signing by
itself doesn't establish any trust or security.


 Funny you mention cryptocurrencies as an idea to get inspiration from...
 "Trust but verify" is detached from that... a browser can monitor what the
 signed scripts are doing, and if it detects a potentially malicious
 pattern it can halt the execution of the script and let the user decide if
 they want to continue...

That's not workable, for a variety of reasons. The first is that
intelligently identifying what a piece of software does is one of those
really hard problems. As in Strong-AI hard. Failing that, you can monitor
which APIs a piece of software makes use of, and restrict access to those.
However, that's already achieved without signing by sandboxing.
Furthermore, it doesn't entirely solve the problem, as any Android user
will know. You get a ginormous list of permissions a given piece of
software would like to use, and the user just clicks yes. Alternatively,
you get malware that's not trustworthy, that nobody managed to properly
review, because the untrustworthy part was buried/hidden by the author
somewhere deep down, to activate only long after trust extension by fiat
has happened.
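
API-level restriction of the kind sandboxing already provides can be
sketched without any signing at all. The permission names and the API
surface below are entirely hypothetical:

```javascript
// Sketch: gate an API object behind a per-context permission set, the
// kind of restriction a sandbox enforces without any code signing.
function gateApi(api, grantedPermissions) {
  return new Proxy(api, {
    get(target, prop) {
      if (!grantedPermissions.has(prop)) {
        throw new Error(`Permission denied: ${String(prop)}`);
      }
      return target[prop];
    },
  });
}

// Hypothetical capabilities a script might ask for.
const api = {
  readClipboard: () => 'clipboard text',
  formatDisk: () => 'boom',
};

// The script was only granted clipboard access.
const gated = gateApi(api, new Set(['readClipboard']));
```

As the paragraph above notes, the hard part isn't the mechanism; it's
deciding which of these capabilities counts as "malicious", which is a
social question, not a technical one.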

But even if you'd assume that this somehow would be an acceptable model,
what do you define as malicious? Reformatting your machine would be
malicious, but so would posting on your Facebook wall. What constitutes a
malicious pattern is actually more of a social than a technical problem.


Re: What I am missing

2014-11-18 Thread Jonas Sicking
On Tue, Nov 18, 2014 at 7:40 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 11/18/14, 10:26 PM, Michaela Merz wrote:

 First: We need signed script code.

 For what it's worth, Gecko supported this for a while.  See
 http://www-archive.mozilla.org/projects/security/components/signed-scripts.html.
 In practice, people didn't really use it, and it made the security model a
 _lot_ more complicated and hard to reason about, so the feature was dropped.

 It would be good to understand how proposals along these lines differ from
 what's already been tried and failed.

The way we did script signing back then was nutty in several ways. The
signing we do in FirefoxOS is *much* simpler. Simple enough that no
one has complained about the complexity that it has added to Gecko.

Sadly, enhanced security models that use signing by a trusted party
inherently lose a lot of the advantages of the web. It means that you
can't publish a new version of your website by simply uploading files to
your webserver whenever you want. And it means that you can't generate the
script and markup that make up your website dynamically on your webserver.

So I'm by no means arguing that FirefoxOS has the problem of signing solved.

Unfortunately no one has been able to solve the problem of how to grant
web content access to capabilities like raw TCP or UDP sockets, in order
to access legacy hardware and protocols, or how to get read/write access
to your photo library, in order to build a photo manager, without relying
on signing.

Which has meant that the web so far is unable to compete with native
in those areas.

/ Jonas



Re: What I am missing

2014-11-18 Thread Jonas Sicking
On Tue, Nov 18, 2014 at 9:38 PM, Florian Bösch pya...@gmail.com wrote:
 or direct file access

 http://www.html5rocks.com/en/tutorials/file/filesystem/

This is no more direct file access than IndexedDB is. IndexedDB also
allows you to store File objects, but likewise doesn't allow you to
access things like your photo or music library.

/ Jonas