Re: What I am missing

2014-11-20 Thread rektide
On Wed, Nov 19, 2014 at 04:26:48AM +0100, Michaela Merz wrote:
> Second: It would be great to finally be able to accept incoming
> connections. There's access to cameras and microphones - why not allow
> us the ability to code servers in the browser? Maybe in combination with
> my suggestion above? Websites would be able to offer webdav simply by
> 'mounting' the browser (no pun intended) and the browser would do
> caching/forwarding/encrypting ..  Imaging being able to directly access
> files on a web site without web download.

It's not connection-oriented, but there is the ability to push opaque
junk at the browser via:

https://github.com/w3c/push-api

You seem to want a more connection-oriented stream capability. To satisfy
that, a push-api message could carry a payload that is a wss:// url for the
page to connect to.
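To illustrate, here is a sketch of that idea. Note this is purely hypothetical wiring: the Push API does not define any payload convention, and `extractStreamUrl` is an invented helper.

```javascript
// Hypothetical convention: the push payload is a wss:// URL; the page
// then opens a WebSocket to it to get a real connection-oriented stream.
// Validate the payload before connecting.
function extractStreamUrl(payloadText) {
  let url;
  try {
    url = new URL(payloadText.trim());
  } catch (e) {
    return null; // not a URL at all
  }
  // Only accept secure WebSocket endpoints.
  return url.protocol === "wss:" ? url.href : null;
}

// In a service worker this would sit in a 'push' handler, e.g.:
// self.addEventListener('push', (event) => {
//   const target = extractStreamUrl(event.data.text());
//   if (target) { /* hand target to a client, which does: new WebSocket(target) */ }
// });
```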

Alas push-api is pretty crap - it's just an opaque transfer of whatever.
There's no content-type for the incoming push data, no resource identifier;
it's just whatever the pusher felt like tossing on the line, with no way to
do any normal HTTP enveloping. It's utterly mad that Push is so opaque. No love
on the issue, #81, to spec out some web-like characteristics for it. I have
no idea why Push is so ridiculously un-web. :/

https://github.com/w3c/push-api/issues/81

-r



Re: What I am missing

2014-11-19 Thread Michaela Merz
Yes - it establishes provenance and protects against unauthorized 
manipulation. CSP is only as good as the content it protects. If the 
content has been manipulated server side - e.g. by unauthorized access - 
CSP is worthless.


Michaela



On 11/19/2014 10:03 AM, ☻Mike Samuel wrote:

Browser signature checking gives you nothing that CSP doesn't, as far
as the security of pages composed from a mixture of content from
different providers is concerned.

As Florian points out, signing only establishes provenance, not any
interesting security properties.

I can always write a page that runs an interpreter on data loaded from
a third-party source even if that data is not loaded as script, so a
signed application is always open to confused deputy problems.

Signing is a red herring which distracts from the real problem: the
envelope model, in which secrets are scoped to the whole content of the
envelope, makes it hard to decompose a document into multiple trust
domains that reflect the fact that the document is really composed of
content with multiple different provenances.

By the envelope model, I mean the assumption that the user-agent
receives a document and a wrapper, and makes trust decisions for the
whole of the document based on the content of the wrapper.

The envelope does all of the following:
1. establishes a trust domain, the origin;
2. bundles secrets, usually in cookies and headers;
3. bundles content, possibly from multiple sources.

Ideally our protocols would be written in such a way that secrets can
be scoped to content with a way to allow nesting without inheritance.

This can be kludged on top of iframes, but only at the cost of a lot
of engineering effort.

On Wed, Nov 19, 2014 at 10:27 AM, Michaela Merz
 wrote:

I don't disagree. But what is wrong with the notion of introducing an
_additional_ layer of certification? Signed script and/or HTML would most
certainly make it way harder to deface a website or sneak malicious code
into an environment. I strongly believe that, just for this reason alone, we
should think about signed content - even without additional potentially
unsafe functionality.

Michaela



On 11/19/2014 09:21 AM, Pradeep Kumar wrote:

Michaela,

As Josh said earlier, signing the code (somehow) will not enhance security.
It will open doors for more threats. It's better, more open and transparent,
and in sync with the spirit of the open web to give control to the end user
rather than making them rely on third-party signing authorities.

On 19-Nov-2014 8:44 pm, "Michaela Merz"  wrote:

You are correct. But all those services are (thankfully) sandboxed or
read-only. In order to make the browser into something even more useful, you
have to relax these security rules a bit. And IMHO that *should* require
signed code - in addition to the user's consent.

Michaela



On 11/19/2014 09:09 AM, Pradeep Kumar wrote:

Even today, browsers ask for permission for geolocation, local storage,
camera etc... How it is different from current scenario?

On 19-Nov-2014 8:35 pm, "Michaela Merz" wrote:


That is relevant and also not so, because Java applets silently grant
access to out-of-sandbox functionality if signed. This is not what I am
proposing. I am suggesting a model in which the sandbox model remains intact
and users need to explicitly agree to access that would otherwise be
prohibited.

Michaela





On 11/19/2014 12:01 AM, Jeffrey Walton wrote:

On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
 wrote:

Well .. it would be an "all scripts signed" or "no scripts signed" kind
of a deal. You can download malicious code everywhere - not only as
scripts. Signed code doesn't protect against malicious or bad code. It
only guarantees that the code is actually from the certificate owner ..
and has not been altered without the signer's consent.

Seems relevant: "Java’s Losing Security Legacy",
http://threatpost.com/javas-losing-security-legacy and "Don't Sign
that Applet!", https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

Dormann advises "don't sign" so that the code can't escape its sandbox
and it stays restricted (malware regularly signs to do so).








Re: What I am missing

2014-11-19 Thread Michaela Merz
How would an unsigned script be able to exploit functionality from a
signed script if it's an either/or case - you have either all scripts
signed or no extended features? And think about this: a website can be
totally safe today and deliver exploits tomorrow without the user even
noticing. It has happened before and it will happen again. Signed content
would prevent this by warning the user about missing or invalid signatures
- even if signed script did not add a single extended function. I
understand that signing code is not the solution to all evils.
But it would add another layer that needs to be broken if somebody gains
access to a website and starts to modify code.


Michaela

On 11/19/2014 11:14 AM, Marc Fawzi wrote:

<<

So there is no way for an unsigned script to exploit security
holes in a signed script?

Of course there's a way. But by the same token, there's a way a signed 
script can exploit security holes in another signed script. Signing 
itself doesn't establish any trust, or security.

>>

Yup, that's also what I meant. Signing does not imply secure, but to 
the average non-technical user a "signed app from a trusted party" may 
convey both trust and security, so they wouldn't think twice about 
installing such a script even if it asked for some powerful 
permissions that can be exploited by another script.


<<

Funny you mention crypto currencies as an idea to get inspiration
from..."Trust but verify" is detached from that... a browser can
monitor what the signed scripts are doing and if it detects a
potentially malicious pattern it can halt the execution of the
script and let the user decide if they want to continue...

That's not working for a variety of reasons. The first reason is that 
identifying what a piece of software does intelligently is one of 
those really hard problems. As in Strong-AI hard.

>>

Well, the user can set up the rules of what is considered a malicious
action, and there would be ready-made configurations (best
practices codified in config) that would be the default in the
browser. And then they can exempt certain scripts.


I realize this is an open-ended problem and no solution is going to
address it 100% ... It's the nature of open systems to be open to
attacks, but it's how the system deals with the attack that
differentiates it. It's a wide-open area of research, I think, or
should be.


But do we want a security model that's not extensible and not 
flexible? The answer is most likely NO.






On Tue, Nov 18, 2014 at 11:03 PM, Florian Bösch wrote:


On Wed, Nov 19, 2014 at 7:54 AM, Marc Fawzi <marc.fa...@gmail.com> wrote:

So there is no way for an unsigned script to exploit security
holes in a signed script?

Of course there's a way. But by the same token, there's a way a
signed script can exploit security holes in another signed script.
Signing itself doesn't establish any trust, or security.

Funny you mention crypto currencies as an idea to get
inspiration from..."Trust but verify" is detached from that...
a browser can monitor what the signed scripts are doing and if
it detects a potentially malicious pattern it can halt the
execution of the script and let the user decide if they want
to continue...

That's not working for a variety of reasons. The first reason is
that identifying what a piece of software does intelligently is
one of those really hard problems. As in Strong-AI hard. Failing
that, you can monitor what APIs a piece of software makes use of,
and restrict access to those. However, that's already satisfied
without signing by sandboxing. Furthermore, it doesn't entirely
solve the problem, as any Android user will know. You get a
ginormous list of permissions a given piece of software would
like to use and the user just clicks "yes". Alternatively, you get
malware that's not trustworthy, that nobody managed to properly
review, because the untrustworthy part was buried/hidden by the
author somewhere deep down, to activate only long after trust
extension by fiat has happened.

But even if you'd assume that this somehow would be an acceptable
model, what do you define as "malicious"? Reformatting your
machine would be malicious, but so would be posting on your
Facebook wall. What constitutes a malicious pattern is actually
more of a social than a technical problem.






Re: What I am missing

2014-11-19 Thread Marc Fawzi
<<

> So there is no way for an unsigned script to exploit security holes in a
> signed script?
>
Of course there's a way. But by the same token, there's a way a signed
script can exploit security holes in another signed script. Signing itself
doesn't establish any trust, or security.
>>

Yup, that's also what I meant. Signing does not imply secure, but to the
average non-technical user a "signed app from a trusted party" may convey
both trust and security, so they wouldn't think twice about installing such
a script even if it asked for some powerful permissions that can be
exploited by another script.

<<

> Funny you mention crypto currencies as an idea to get inspiration
> from..."Trust but verify" is detached from that... a browser can monitor
> what the signed scripts are doing and if it detects a potentially malicious
> pattern it can halt the execution of the script and let the user decide if
> they want to continue...
>
That's not working for a variety of reasons. The first reason is that
identifying what a piece of software does intelligently is one of those
really hard problems. As in Strong-AI hard.
>>

Well, the user can set up the rules of what is considered a malicious action,
and there would be ready-made configurations (best practices codified
in config) that would be the default in the browser. And then they can
exempt certain scripts.

I realize this is an open-ended problem and no solution is going to address
it 100% ... It's the nature of open systems to be open to attacks, but it's
how the system deals with the attack that differentiates it. It's a wide-open
area of research, I think, or should be.

But do we want a security model that's not extensible and not flexible? The
answer is most likely NO.





On Tue, Nov 18, 2014 at 11:03 PM, Florian Bösch  wrote:

> On Wed, Nov 19, 2014 at 7:54 AM, Marc Fawzi  wrote:
>>
>> So there is no way for an unsigned script to exploit security holes in a
>> signed script?
>>
> Of course there's a way. But by the same token, there's a way a signed
> script can exploit security holes in another signed script. Signing itself
> doesn't establish any trust, or security.
>
>
>> Funny you mention crypto currencies as an idea to get inspiration
>> from..."Trust but verify" is detached from that... a browser can monitor
>> what the signed scripts are doing and if it detects a potentially malicious
>> pattern it can halt the execution of the script and let the user decide if
>> they want to continue...
>>
> That's not working for a variety of reasons. The first reason is that
> identifying what a piece of software does intelligently is one of those
> really hard problems. As in Strong-AI hard. Failing that, you can monitor
> what APIs a piece of software makes use of, and restrict access to those.
> However, that's already satisfied without signing by sandboxing.
> Furthermore, it doesn't entirely solve the problem, as any Android user will
> know. You get a ginormous list of permissions a given piece of software
> would like to use and the user just clicks "yes". Alternatively, you get
> malware that's not trustworthy, that nobody managed to properly review,
> because the untrustworthy part was buried/hidden by the author somewhere deep
> down, to activate only long after trust extension by fiat has happened.
>
> But even if you'd assume that this somehow would be an acceptable model,
> what do you define as "malicious"? Reformatting your machine would be
> malicious, but so would be posting on your Facebook wall. What constitutes
> a malicious pattern is actually more of a social than a technical problem.
>


Re: What I am missing

2014-11-19 Thread ☻Mike Samuel
Browser signature checking gives you nothing that CSP doesn't, as far
as the security of pages composed from a mixture of content from
different providers is concerned.
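For context, the CSP mechanism being compared against here is a response header along these lines (the origins are of course illustrative):

```http
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; object-src 'none'
```

The page then refuses to load scripts from any origin not whitelisted, regardless of how the markup was tampered with.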

As Florian points out, signing only establishes provenance, not any
interesting security properties.

I can always write a page that runs an interpreter on data loaded from
a third-party source even if that data is not loaded as script, so a
signed application is always open to confused deputy problems.
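A toy illustration of that point (hypothetical code, not from the thread): even if every script in a page is signed, the page can still turn unsigned data into behavior.

```javascript
// A signed page can still act as an interpreter for unsigned data.
// Here the "third-party data" is just a string; in a real page it would
// arrive via fetch()/XHR and be loaded as data, not as a <script>.
function runUserFormula(data) {
  // The signature covered this function, not the data it executes.
  return Function('"use strict"; return (' + data + ');')();
}

const thirdPartyData = "2 + 3"; // imagine this arrived over the network
console.log(runUserFormula(thirdPartyData)); // prints 5: the data decided what ran
```

The signature verified the interpreter; the attacker only needs to control the data it consumes, which is the confused-deputy shape described above.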

Signing is a red herring which distracts from the real problem: the
envelope model, in which secrets are scoped to the whole content of the
envelope, makes it hard to decompose a document into multiple trust
domains that reflect the fact that the document is really composed of
content with multiple different provenances.

By the envelope model, I mean the assumption that the user-agent
receives a document and a wrapper, and makes trust decisions for the
whole of the document based on the content of the wrapper.

The envelope does all of the following:
1. establishes a trust domain, the origin;
2. bundles secrets, usually in cookies and headers;
3. bundles content, possibly from multiple sources.

Ideally our protocols would be written in such a way that secrets can
be scoped to content with a way to allow nesting without inheritance.

This can be kludged on top of iframes, but only at the cost of a lot
of engineering effort.

On Wed, Nov 19, 2014 at 10:27 AM, Michaela Merz
 wrote:
>
> I don't disagree. But what is wrong with the notion of introducing an
> _additional_ layer of certification? Signed script and/or html would most
> certainly make it way harder to de-face a website or sneak malicious code
> into an environment.  I strongly believe that just for this reason alone, we
> should think about signed content - even without additional potentially
> unsafe functionality.
>
> Michaela
>
>
>
> On 11/19/2014 09:21 AM, Pradeep Kumar wrote:
>
> Michaela,
>
> As Josh said earlier, signing the code (somehow) will not enhance security.
> It will open doors for more threats. It's better and more open, transparent
> and in sync with the spirit of open web to give the control to end user and
> not making them to relax today on behalf of other signing authorities.
>
> On 19-Nov-2014 8:44 pm, "Michaela Merz"  wrote:
>>
>> You are correct. But all those services are (thankfully) sand boxed or
>> read only. In order to make a browser into something even more useful, you
>> have to relax these security rules a bit. And IMHO that *should* require
>> signed code - in addition to the users consent.
>>
>> Michaela
>>
>>
>>
>> On 11/19/2014 09:09 AM, Pradeep Kumar wrote:
>>
>> Even today, browsers ask for permission for geolocation, local storage,
>> camera etc... How it is different from current scenario?
>>
>> On 19-Nov-2014 8:35 pm, "Michaela Merz" 
>> wrote:
>>>
>>>
>>> That is relevant and also not so. Because Java applets silently grant
>>> access to a out of sandbox functionality if signed. This is not what I am
>>> proposing. I am suggesting a model in which the sandbox model remains intact
>>> and users need to explicitly agree to access that would otherwise be
>>> prohibited.
>>>
>>> Michaela
>>>
>>>
>>>
>>>
>>>
>>> On 11/19/2014 12:01 AM, Jeffrey Walton wrote:

 On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
  wrote:
>
> Well .. it would be a "all scripts signed" or "no script signed" kind
> of a
> deal. You can download malicious code everywhere - not only as scripts.
> Signed code doesn't protect against malicious or bad code. It only
> guarantees that the code is actually from the the certificate owner ..
> and
> has not been altered without the signers consent.

 Seems relevant: "Java’s Losing Security Legacy",
 http://threatpost.com/javas-losing-security-legacy and "Don't Sign
 that Applet!", https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

 Dormann advises "don't sign" so that the code can't escape its sandbox
 and it stays restricted (malware regularly signs to do so).
>>>
>>>
>>>
>>
>



Re: What I am missing

2014-11-19 Thread Michaela Merz


First: You don't have to sign your code. Second: We rely on
"centralization" for TLS as well. Third: Third-party verification can be
done within the community itself
(https://www.eff.org/deeplinks/2014/11/certificate-authority-encrypt-entire-web).


Michaela


On 11/19/2014 09:41 AM, Anne van Kesteren wrote:

On Wed, Nov 19, 2014 at 4:27 PM, Michaela Merz
 wrote:

I don't disagree. But what is wrong with the notion of introducing an
_additional_ layer of certification?

Adding an additional layer of centralization.







Re: What I am missing

2014-11-19 Thread Anne van Kesteren
On Wed, Nov 19, 2014 at 4:27 PM, Michaela Merz
 wrote:
> I don't disagree. But what is wrong with the notion of introducing an
> _additional_ layer of certification?

Adding an additional layer of centralization.


-- 
https://annevankesteren.nl/



Re: What I am missing

2014-11-19 Thread Michaela Merz


I don't disagree. But what is wrong with the notion of introducing an
_additional_ layer of certification? Signed script and/or HTML would
most certainly make it way harder to deface a website or sneak
malicious code into an environment. I strongly believe that, just for
this reason alone, we should think about signed content - even without
additional potentially unsafe functionality.


Michaela


On 11/19/2014 09:21 AM, Pradeep Kumar wrote:


Michaela,

As Josh said earlier, signing the code (somehow) will not enhance
security. It will open doors for more threats. It's better, more
open and transparent, and in sync with the spirit of the open web to give
control to the end user rather than making them rely on third-party
signing authorities.


On 19-Nov-2014 8:44 pm, "Michaela Merz" wrote:


You are correct. But all those services are (thankfully) sandboxed
or read-only. In order to make the browser into something even
more useful, you have to relax these security rules a bit. And
IMHO that *should* require signed code - in addition to the user's
consent.

Michaela



On 11/19/2014 09:09 AM, Pradeep Kumar wrote:


Even today, browsers ask for permission for geolocation, local
storage, camera etc... How it is different from current scenario?

On 19-Nov-2014 8:35 pm, "Michaela Merz" <michaela.m...@hermetos.com> wrote:


That is relevant and also not so, because Java applets
silently grant access to out-of-sandbox functionality if
signed. This is not what I am proposing. I am suggesting a
model in which the sandbox model remains intact and users
need to explicitly agree to access that would otherwise be
prohibited.

Michaela





On 11/19/2014 12:01 AM, Jeffrey Walton wrote:

On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz <michaela.m...@hermetos.com> wrote:

Well .. it would be an "all scripts signed" or "no scripts signed"
kind of a deal. You can download malicious code everywhere -
not only as scripts. Signed code doesn't protect against
malicious or bad code. It only guarantees that the code is
actually from the certificate owner .. and has not been altered
without the signer's consent.

Seems relevant: "Java's Losing Security Legacy",
http://threatpost.com/javas-losing-security-legacy and "Don't Sign
that Applet!", https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

Dormann advises "don't sign" so that the code can't escape its
sandbox and it stays restricted (malware regularly signs to do so).









Re: What I am missing

2014-11-19 Thread Pradeep Kumar
Michaela,

As Josh said earlier, signing the code (somehow) will not enhance security.
It will open doors for more threats. It's better, more open and transparent,
and in sync with the spirit of the open web to give control to the end user
rather than making them rely on third-party signing authorities.
 On 19-Nov-2014 8:44 pm, "Michaela Merz"  wrote:

>  You are correct. But all those services are (thankfully) sand boxed or
> read only. In order to make a browser into something even more useful, you
> have to relax these security rules a bit. And IMHO that *should* require
> signed code - in addition to the users consent.
>
> Michaela
>
>
>
> On 11/19/2014 09:09 AM, Pradeep Kumar wrote:
>
> Even today, browsers ask for permission for geolocation, local storage,
> camera etc... How it is different from current scenario?
> On 19-Nov-2014 8:35 pm, "Michaela Merz" 
> wrote:
>
>>
>> That is relevant and also not so. Because Java applets silently grant
>> access to a out of sandbox functionality if signed. This is not what I am
>> proposing. I am suggesting a model in which the sandbox model remains
>> intact and users need to explicitly agree to access that would otherwise be
>> prohibited.
>>
>> Michaela
>>
>>
>>
>>
>>
>> On 11/19/2014 12:01 AM, Jeffrey Walton wrote:
>>
>>> On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
>>>  wrote:
>>>
 Well .. it would be a "all scripts signed" or "no script signed" kind
 of a
 deal. You can download malicious code everywhere - not only as scripts.
 Signed code doesn't protect against malicious or bad code. It only
 guarantees that the code is actually from the the certificate owner ..
 and
 has not been altered without the signers consent.

>>> Seems relevant: "Java's Losing Security Legacy",
>>> http://threatpost.com/javas-losing-security-legacy and "Don't Sign
>>> that Applet!", https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.
>>>
>>> Dormann advises "don't sign" so that the code can't escape its sandbox
>>> and it stays restricted (malware regularly signs to do so).
>>>
>>
>>
>>
>


Re: What I am missing

2014-11-19 Thread Michaela Merz
You are correct. But all those services are (thankfully) sandboxed or
read-only. In order to make the browser into something even more useful,
you have to relax these security rules a bit. And IMHO that *should*
require signed code - in addition to the user's consent.


Michaela



On 11/19/2014 09:09 AM, Pradeep Kumar wrote:


Even today, browsers ask for permission for geolocation, local 
storage, camera etc... How it is different from current scenario?


On 19-Nov-2014 8:35 pm, "Michaela Merz" wrote:



That is relevant and also not so, because Java applets silently
grant access to out-of-sandbox functionality if signed. This is
not what I am proposing. I am suggesting a model in which the
sandbox model remains intact and users need to explicitly agree to
access that would otherwise be prohibited.

Michaela





On 11/19/2014 12:01 AM, Jeffrey Walton wrote:

On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz <michaela.m...@hermetos.com> wrote:

Well .. it would be an "all scripts signed" or "no scripts
signed" kind of a deal. You can download malicious code
everywhere - not only as scripts. Signed code doesn't protect
against malicious or bad code. It only guarantees that the code
is actually from the certificate owner .. and has not been
altered without the signer's consent.

Seems relevant: "Java's Losing Security Legacy",
http://threatpost.com/javas-losing-security-legacy and "Don't Sign
that Applet!", https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

Dormann advises "don't sign" so that the code can't escape its
sandbox and it stays restricted (malware regularly signs to do so).







Re: What I am missing

2014-11-19 Thread Pradeep Kumar
Even today, browsers ask for permission for geolocation, local storage,
camera etc... How it is different from current scenario?
On 19-Nov-2014 8:35 pm, "Michaela Merz"  wrote:

>
> That is relevant and also not so. Because Java applets silently grant
> access to a out of sandbox functionality if signed. This is not what I am
> proposing. I am suggesting a model in which the sandbox model remains
> intact and users need to explicitly agree to access that would otherwise be
> prohibited.
>
> Michaela
>
>
>
>
>
> On 11/19/2014 12:01 AM, Jeffrey Walton wrote:
>
>> On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
>>  wrote:
>>
>>> Well .. it would be a "all scripts signed" or "no script signed" kind of
>>> a
>>> deal. You can download malicious code everywhere - not only as scripts.
>>> Signed code doesn't protect against malicious or bad code. It only
>>> guarantees that the code is actually from the the certificate owner ..
>>> and
>>> has not been altered without the signers consent.
>>>
>> Seems relevant: "Java's Losing Security Legacy",
>> http://threatpost.com/javas-losing-security-legacy and "Don't Sign
>> that Applet!", https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.
>>
>> Dormann advises "don't sign" so that the code can't escape its sandbox
>> and it stays restricted (malware regularly signs to do so).
>>
>
>
>


Re: What I am missing

2014-11-19 Thread Michaela Merz


That is relevant and also not so, because Java applets silently grant
access to out-of-sandbox functionality if signed. This is not what I
am proposing. I am suggesting a model in which the sandbox model remains
intact and users need to explicitly agree to access that would otherwise
be prohibited.


Michaela





On 11/19/2014 12:01 AM, Jeffrey Walton wrote:

On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
 wrote:

Well .. it would be an "all scripts signed" or "no scripts signed" kind of a
deal. You can download malicious code everywhere - not only as scripts.
Signed code doesn't protect against malicious or bad code. It only
guarantees that the code is actually from the certificate owner .. and
has not been altered without the signer's consent.

Seems relevant: "Java’s Losing Security Legacy",
http://threatpost.com/javas-losing-security-legacy and "Don't Sign
that Applet!", https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

Dormann advises "don't sign" so that the code can't escape its sandbox
and it stays restricted (malware regularly signs to do so).





Re: What I am missing

2014-11-19 Thread Michaela Merz
I am not sure if I understand your question. Browsers can't be code
servers, at least not today.


Michaela



On 11/19/2014 08:43 AM, Pradeep Kumar wrote:


How can browsers be code servers? Could you please explain a
little more...


On 19-Nov-2014 7:51 pm, "Michaela Merz" wrote:


Thank you Jonas. I was actually thinking about the security model of
FirefoxOS or Android apps. We write powerful "webapps" nowadays. And
with "webapps" I mean regular web pages with a lot of script/html5
functionality. The browsers are fast enough to do a variety of things:
from running a Linux kernel to playing DOS games, doing crypto, and
decoding and streaming MP3. I understand a browser to be an operating
system on top of an operating system. But the need to protect the user
is a problem if you want to go beyond what is possible today.

I am asking to consider a model where a signed script package notifies
a user about its origin and signer, and may even ask the user for
special permissions like direct file system access or raw networking
sockets or anything else that would, for safety reasons, not be
possible today.

The browser would remember the origin IP and the signature of the script
package and would re-ask for permission if something changes. It would
refuse to run if the signature isn't valid or has expired.
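The decision logic being proposed here can be sketched in a few lines (a hypothetical shape; no such browser API exists, and the field names are invented):

```javascript
// Toy model of the proposal: the browser pins (origin, signature) for a
// script package, re-prompts the user when either changes, and refuses
// to run when the signature is invalid or expired.
function decide(pinned, incoming, now) {
  if (!incoming.signatureValid || incoming.expires <= now) {
    return "refuse";            // invalid or expired signature: never run
  }
  if (!pinned) {
    return "prompt";            // first visit: ask the user
  }
  if (pinned.origin !== incoming.origin ||
      pinned.signature !== incoming.signature) {
    return "prompt";            // something changed: re-ask for permission
  }
  return "run";                 // same signed package as before
}

const pinned = { origin: "203.0.113.7", signature: "abc123" };
const same = { origin: "203.0.113.7", signature: "abc123",
               signatureValid: true, expires: 2000 };
console.log(decide(pinned, same, 1000));                           // "run"
console.log(decide(pinned, { ...same, signature: "def456" }, 1000)); // "prompt"
console.log(decide(pinned, same, 3000));                           // "refuse"
```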

It wouldn't change a thing in regard to updates. You would just have to
re-sign your code before you make it available. I used to work a lot
with Java applets (signed and unsigned) in the old days; I am working
with Android apps today. Signing is just another step in the work
chain.

Signed code is the missing last element in the CSP/TLS environment.
Let's make the browser into something that can truly be seen as an
alternative operating system on top of an operating system.

Michaela

On 11/19/2014 08:33 AM, Jonas Sicking wrote:
> On Tue, Nov 18, 2014 at 7:40 PM, Boris Zbarsky <bzbar...@mit.edu> wrote:
>> On 11/18/14, 10:26 PM, Michaela Merz wrote:
>>> First: We need signed script code.
>> For what it's worth, Gecko supported this for a while.  See
>>

.
>> In practice, people didn't really use it, and it made the
security model a
>> _lot_ more complicated and hard to reason about, so the feature
was dropped.
>>
>> It would be good to understand how proposals along these lines
differ from
>> what's already been tried and failed.
> The way we did script signing back then was nutty in several ways. The
> signing we do in FirefoxOS is *much* simpler. Simple enough that no
> one has complained about the complexity that it has added to Gecko.
>
> Sadly, enhanced security models that use signing by a trusted party
> inherently lose a lot of the advantages of the web. It means that
> you can't publish a new version of your website by simply uploading
> files to your webserver whenever you want. And it means that you can't
> generate the script and markup that make up your website dynamically
> on your webserver.
>
> So I'm by no means arguing that FirefoxOS has the problem of
signing solved.
>
> Unfortunately no one has been able to solve the problem of how to
> grant web content access to capabilities like raw TCP or UDP sockets
> in order to access legacy hardware and protocols, or how to get
> read/write access to your photo library in order to build a photo
> manager, without relying on signing.
>
> Which has meant that the web so far is unable to "compete with native"
> in those areas.
>
> / Jonas
>







Re: What I am missing

2014-11-19 Thread Michaela Merz

Perfect is the enemy of good. I understand the principles and problems
of cryptography. And in the same way we rely on TLS and its security
model today, we would be able to put some trust into the same
architecture for signing scripts.

FYI: Here's how signing works for Java applets: You need to get a
software signing key from a CA, who will do some form of verification in
regard to your organization. You use that key to sign your code package.

Will this forever prevent the spread of malware? Of course not. Yes -
you can distribute malware if you get somebody's signing key into your
possession, or even buy a signing key and sign your malware yourself.
But as long as the user is made aware of the fact that the code has
permission to access the file system, he can decide for himself if he
wants to allow that or not. Much in the same way that he decides to
download a possibly malicious piece of software today. There will never
be absolute security - especially not for users who don't give a damn
about what they are doing. Should that prevent us from continuing the
evolution of the web environment? We might as well kill ourselves
because we are afraid to die ;)

One last thing: Most Android malware was spread on systems where owners
disabled the security of the system, e.g. by rooting the devices.

Michaela

> Suppose you get a piece of signed content, over whatever way it was
> delivered. Suppose also that this content you got has the ability to
> read all your private data, or reformat your machine. So it's
> basically about trust. You need to establish a secure channel of
> communication to obtain a public key that matches a signature, in such
> a way that an attacker's attempt to self-sign malicious content is
> foiled. And you need to have a way to discover (after having
> established that the entity is the one who was intended and that the
> signature is correct), that you indeed trust that entity.
>
> These are two very old problems in cryptography, and they cannot be
> solved by cryptography. There are various approaches to this problem
> in use today:
>
>   * TLS and its web of trust: The basic idea being that there is a
> hierarchy of signatories. It works like this. An entity provides a
> certificate for the connection, signing it with their private key.
> Since you cannot establish a connection without a public key that
> matches the private key, verifying the certificate is easy. This
> entity in turn, refers to another entity which provided the
> signature for that private key. They refer to another one, and so
> forth, until you arrive at the root. You implicitly trust root.
> This works, but it has some flaws. At the edge of the web, people
> are not allowed to self-sign, so they obtain their (pricey) key
> from the next tier up. But the next tier up can't go and bother
> the next tier up every time they need to provide a new set of keys
> to the edge. So they get blanket permission to self-sign, implying
> that it's possible for the next tier up to establish and maintain
> a trust relationship to them. As is easily demonstrable, this
> can, and often does, go wrong, where some CA gets compromised.
> This is always bad news to whomever obtained a certificate from
> them, because now a malicious party can pass themselves off as them.
>   * App-stores and trust royalty: This is really easy to describe, the
> app store you obtain something from signs the content, and you
> trust the app-store, and therefore you trust the content. This
> can, and often does go wrong, as android/iOS malware amply
> demonstrates.
>
> TLS cannot work perfectly, because it is built on implied trust along
> the chain, and this can get compromised. App-stores cannot work
> perfectly because the ability to review content is quickly exceeded by
> the flood of content. Even if app-stores were provided with the full
> source, they would have no time to perform a proper review, and so
> time and time again malware slips through the net.
>
> You can have a technical solution for signing, and you still haven't
> solved any bit of how to trust a piece of content. About the only way
> that'd be remotely feasible is if you piggyback on an existing
> implementation of a trust mechanism/transport security layer, to
> deliver the signature. For instance, many websites that allow you to
> d/l an executable provide you with a checksum of the content. The idea
> being that if the page is served up over TLS, then you've established
> that the checksum is delivered by the entity, which is also supposed
> to deliver the content. However, this has a hidden trust model that
> established trust without a technical solution. It means a user
> assumes he trusts that website, because he arrived there on his own
> volition. The same cannot be said about side-channel delivered pieces
> of content that composite into a larger whole (like signed scripts).
> For instan

Re: What I am missing

2014-11-19 Thread Pradeep Kumar
How can the browsers be code servers? Could you please explain a little
more...
On 19-Nov-2014 7:51 pm, "Michaela Merz"  wrote:

> Thank you Jonas. I was actually thinking about the security model of
> FirefoxOS or Android apps. We write powerful "webapps" nowadays. And
> with "webapps" I mean regular web pages with a lot of script/html5
> functionality. The browsers are fast enough to do a variety of things:
> from running a linux kernel, to playing dos-games,  doing crypto,
> decoding and streaming mp3. I understand a browser to be an operating
> system on top of an operating system. But the need to protect the user
> is a problem if you want to go beyond what is possible today.
>
> I am asking to consider a model, where a signed script package notifies
> a user about its origin and signer and even may ask the user for special
> permissions like direct file system access or raw networking sockets or
> anything else that would, for safety reasons, not be possible today.
>
> The browser would remember the origin ip and the signature of the script
> package and would re-ask for permission if something changes. It would
> refuse to run if the signature isn't valid or expired.
>
> It wouldn't change a thing in regard to updates. You would just have to
> re-sign your code before you make it available. I used to work a lot
> with java applets (signed and un-signed) in the old days, I am working
> with android apps today. Signing is just another step in the work chain.
>
> Signed code is the missing last element in the CSP-TLS environment.
> Let's make the browser into something that can truly be seen as an
> alternative operating system on top of an operating system.
>
> Michaela
>
> On 11/19/2014 08:33 AM, Jonas Sicking wrote:
> > On Tue, Nov 18, 2014 at 7:40 PM, Boris Zbarsky  wrote:
> >> On 11/18/14, 10:26 PM, Michaela Merz wrote:
> >>> First: We need signed script code.
> >> For what it's worth, Gecko supported this for a while.  See
> >> <
> http://www-archive.mozilla.org/projects/security/components/signed-scripts.html
> >.
> >> In practice, people didn't really use it, and it made the security
> model a
> >> _lot_ more complicated and hard to reason about, so the feature was
> dropped.
> >>
> >> It would be good to understand how proposals along these lines differ
> from
> >> what's already been tried and failed.
> > The way we did script signing back then was nutty in several ways. The
> > signing we do in FirefoxOS is *much* simpler. Simple enough that no
> > one has complained about the complexity that it has added to Gecko.
> >
> > Sadly enhanced security models that use signing by a trusted party
> > inherently loses a lot of the advantages of the web. It means that
> > you can't publish a new version of your website by simply uploading
> > files to your webserver whenever you want. And it means that you can't
> > generate the script and markup that make up your website dynamically
> > on your webserver.
> >
> > So I'm by no means arguing that FirefoxOS has the problem of signing
> solved.
> >
> > Unfortunately no one has been able to solve the problem of how to
> > grant web content access to capabilities like raw TCP or UDP sockets
> > in order to access legacy hardware and protocols, or how to get
> > read/write access to your photo library in order to build a photo
> > manager, without relying on signing.
> >
> > Which has meant that the web so far is unable to "compete with native"
> > in those areas.
> >
> > / Jonas
> >
>
>
>
>


Re: What I am missing

2014-11-19 Thread Michaela Merz
Thank you Jonas. I was actually thinking about the security model of
FirefoxOS or Android apps. We write powerful "webapps" nowadays. And
with "webapps" I mean regular web pages with a lot of script/html5
functionality. The browsers are fast enough to do a variety of things:
from running a linux kernel, to playing dos-games,  doing crypto,
decoding and streaming mp3. I understand a browser to be an operating
system on top of an operating system. But the need to protect the user
is a problem if you want to go beyond what is possible today.

I am asking to consider a model, where a signed script package notifies
a user about its origin and signer and even may ask the user for special
permissions like direct file system access or raw networking sockets or
anything else that would, for safety reasons, not be possible today.

The browser would remember the origin ip and the signature of the script
package and would re-ask for permission if something changes. It would
refuse to run if the signature isn't valid or expired.
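One way that remember-and-re-ask step could look, as a purely illustrative trust-on-first-use sketch (the function name, return values, and policy are invented here, not any browser's actual behavior):

```javascript
// Purely illustrative sketch of the "remember and re-ask" pinning idea
// (trust-on-first-use). Names and policy are invented for illustration.
const pins = new Map(); // origin -> remembered package signature fingerprint

function checkPackage(origin, fingerprint) {
  if (!pins.has(origin)) {
    pins.set(origin, fingerprint);
    return 'ask-user';          // first visit: prompt for permissions
  }
  if (pins.get(origin) === fingerprint) {
    return 'run';               // unchanged: run with remembered grants
  }
  pins.set(origin, fingerprint);
  return 're-ask-user';         // signer or package changed: prompt again
}

console.log(checkPackage('https://example.com', 'ab:cd'));  // 'ask-user'
console.log(checkPackage('https://example.com', 'ab:cd'));  // 'run'
console.log(checkPackage('https://example.com', 'ff:00'));  // 're-ask-user'
```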

It wouldn't change a thing in regard to updates. You would just have to
re-sign your code before you make it available. I used to work a lot
with java applets (signed and un-signed) in the old days, I am working
with android apps today. Signing is just another step in the work chain.

Signed code is the missing last element in the CSP-TLS environment.
Let's make the browser into something that can truly be seen as an
alternative operating system on top of an operating system.

Michaela

On 11/19/2014 08:33 AM, Jonas Sicking wrote:
> On Tue, Nov 18, 2014 at 7:40 PM, Boris Zbarsky  wrote:
>> On 11/18/14, 10:26 PM, Michaela Merz wrote:
>>> First: We need signed script code.
>> For what it's worth, Gecko supported this for a while.  See
>> .
>> In practice, people didn't really use it, and it made the security model a
>> _lot_ more complicated and hard to reason about, so the feature was dropped.
>>
>> It would be good to understand how proposals along these lines differ from
>> what's already been tried and failed.
> The way we did script signing back then was nutty in several ways. The
> signing we do in FirefoxOS is *much* simpler. Simple enough that no
> one has complained about the complexity that it has added to Gecko.
>
> Sadly enhanced security models that use signing by a trusted party
> inherently loses a lot of the advantages of the web. It means that
> you can't publish a new version of your website by simply uploading
> files to your webserver whenever you want. And it means that you can't
> generate the script and markup that make up your website dynamically
> on your webserver.
>
> So I'm by no means arguing that FirefoxOS has the problem of signing solved.
>
> Unfortunately no one has been able to solve the problem of how to
> grant web content access to capabilities like raw TCP or UDP sockets
> in order to access legacy hardware and protocols, or how to get
> read/write access to your photo library in order to build a photo
> manager, without relying on signing.
>
> Which has meant that the web so far is unable to "compete with native"
> in those areas.
>
> / Jonas
>





Re: What I am missing

2014-11-19 Thread Frederik Braun
On 19.11.2014 04:26, Michaela Merz wrote:

> First: We need signed script code. We are doing a lot of stuff with
> script - we could safely do even more, if we would be able to safely
> deliver script that has some kind of a trust model. I am thinking about
> signed JAR files - just like we did with java applets not too long ago.
> Maybe as an extension to the CSP environment .. and a nice frame around
> the browser telling the user that the site is providing trusted / signed
> code. Signed code could allow more openness, like true full screen, or
> simpler ajax downloads. 

Well, you can't sign or verify with Subresource Integrity (SRI), but SRI
allows you to make sure that a script has not been tampered with on the
hosting side: 



Re: What I am missing

2014-11-18 Thread Florian Bösch
It gives you at least a sandboxed file system, which is about all you can
offer without a central authority to make infallible decisions, decisions
you'd pay to get.

On Wed, Nov 19, 2014 at 8:35 AM, Jonas Sicking  wrote:

> On Tue, Nov 18, 2014 at 9:38 PM, Florian Bösch  wrote:
> >> or direct file access
> >
> > http://www.html5rocks.com/en/tutorials/file/filesystem/
>
> This is no more "direct file access" than IndexedDB is. IndexedDB also
> allows you to store File objects, but also doesn't allow you to access
> things like your photo or music library.
>
> / Jonas
>


Re: What I am missing

2014-11-18 Thread Jonas Sicking
On Tue, Nov 18, 2014 at 9:38 PM, Florian Bösch  wrote:
>> or direct file access
>
> http://www.html5rocks.com/en/tutorials/file/filesystem/

This is no more "direct file access" than IndexedDB is. IndexedDB also
allows you to store File objects, but also doesn't allow you to access
things like your photo or music library.

/ Jonas



Re: What I am missing

2014-11-18 Thread Jonas Sicking
On Tue, Nov 18, 2014 at 7:40 PM, Boris Zbarsky  wrote:
> On 11/18/14, 10:26 PM, Michaela Merz wrote:
>>
>> First: We need signed script code.
>
> For what it's worth, Gecko supported this for a while.  See
> .
> In practice, people didn't really use it, and it made the security model a
> _lot_ more complicated and hard to reason about, so the feature was dropped.
>
> It would be good to understand how proposals along these lines differ from
> what's already been tried and failed.

The way we did script signing back then was nutty in several ways. The
signing we do in FirefoxOS is *much* simpler. Simple enough that no
one has complained about the complexity that it has added to Gecko.

Sadly enhanced security models that use signing by a trusted party
inherently loses a lot of the advantages of the web. It means that
you can't publish a new version of your website by simply uploading
files to your webserver whenever you want. And it means that you can't
generate the script and markup that make up your website dynamically
on your webserver.

So I'm by no means arguing that FirefoxOS has the problem of signing solved.

Unfortunately no one has been able to solve the problem of how to
grant web content access to capabilities like raw TCP or UDP sockets
in order to access legacy hardware and protocols, or how to get
read/write access to your photo library in order to build a photo
manager, without relying on signing.

Which has meant that the web so far is unable to "compete with native"
in those areas.

/ Jonas



Re: What I am missing

2014-11-18 Thread Florian Bösch
On Wed, Nov 19, 2014 at 7:54 AM, Marc Fawzi  wrote:
>
> So there is no way for an unsigned script to exploit security holes in a
> signed script?
>
Of course there's a way. But by the same token, there's a way a signed
script can exploit security holes in another signed script. Signing itself
doesn't establish any trust or security.


> Funny you mention crypto currencies as an idea to get inspiration
> from..."Trust but verify" is detached from that... a browser can monitor
> what the signed scripts are doing and if it detects a potentially malicious
> pattern it can halt the execution of the script and let the user decide if
> they want to continue...
>
That's not working for a variety of reasons. The first reason is that
identifying what a piece of software does intelligently is one of those
really hard problems. As in Strong-AI hard. Failing that, you can monitor
what APIs a piece of software makes use of, and restrict access to those.
However, that's already satisfied without signing by sandboxing.
Furthermore, it doesn't entirely solve the problem as any android user will
know. You get a ginormous list of permissions a given piece of software
would like to use and the user just clicks "yes". Alternatively, you get
malware that's not trustworthy, that nobody managed to properly review,
because the non-trusty part was buried/hidden by the author somewhere deep
down, to activate only long after trust extension by fiat has happened.

But even if you'd assume that this somehow would be an acceptable model,
what do you define as "malicious"? Reformatting your machine would be
malicious, but so would be posting on your facebook wall. What constitutes
a malicious pattern is actually more of a social than a technical problem.


Re: What I am missing

2014-11-18 Thread Marc Fawzi
So there is no way for an unsigned script to exploit security holes in a
signed script?

Funny you mention crypto currencies as an idea to get inspiration
from..."Trust but verify" is detached from that... a browser can monitor
what the signed scripts are doing and if it detects a potentially malicious
pattern it can halt the execution of the script and let the user decide if
they want to continue...


On Tue, Nov 18, 2014 at 10:34 PM, Florian Bösch  wrote:

> There are some models that are a bit better than trust by royalty
> (app-stores) and trust by hierarchy (TLS). One of them is trust flowing
> along flow limited edges in a graph (as in Advogato). This model however
> isn't free from fault, as when a highly trusted entity gets compromised,
> there's no quick or easy way to revoke that trust for that entity. Also, a
> trust graph such as this doesn't solve the problem of stake. We trust say,
> the twitter API, because we know that twitter has staked a lot into it. If
> they violate that trust, they suffer proportionally more. A graph doesn't
> solve that problem, because it cannot offer a proof of stake.
>
> Interestingly, there are ways to provide a proof of stake (see various
> cryptocurrencies that attempt to do that). Of course proof of stake
> cryptocurrencies have their own problems, but that doesn't entirely
> invalidate the idea. If you can prove you have a stake of a given size,
> then you can enhance a flow limited trust graph insofar as to make it less
> likely an entity gets compromised. The difficulty with that approach of
> course is, it would make acquiring high levels of trust prohibitively
> expensive (as in getting the privilege to access the filesystem could run
> you into millions of $ of stake shares).
>
>


Re: What I am missing

2014-11-18 Thread Florian Bösch
There are some models that are a bit better than trust by royalty
(app-stores) and trust by hierarchy (TLS). One of them is trust flowing
along flow limited edges in a graph (as in Advogato). This model however
isn't free from fault, as when a highly trusted entity gets compromised,
there's no quick or easy way to revoke that trust for that entity. Also, a
trust graph such as this doesn't solve the problem of stake. We trust say,
the twitter API, because we know that twitter has staked a lot into it. If
they violate that trust, they suffer proportionally more. A graph doesn't
solve that problem, because it cannot offer a proof of stake.

Interestingly, there are ways to provide a proof of stake (see various
cryptocurrencies that attempt to do that). Of course proof of stake
cryptocurrencies have their own problems, but that doesn't entirely
invalidate the idea. If you can prove you have a stake of a given size,
then you can enhance a flow limited trust graph insofar as to make it less
likely an entity gets compromised. The difficulty with that approach of
course is, it would make acquiring high levels of trust prohibitively
expensive (as in getting the privilege to access the filesystem could run
you into millions of $ of stake shares).
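A greatly simplified sketch of trust flowing along flow-limited edges: the real Advogato metric uses network flow, while this just propagates a capacity-limited level along the widest path from a seed (all names and capacities invented here):

```javascript
// Greatly simplified sketch of flow-limited trust along graph edges.
// The real Advogato metric uses network flow; this only propagates a
// capacity-limited trust level along the widest path from a seed.
const edges = {
  // from -> [[to, capacity], ...]; names invented for illustration
  root:  [['alice', 3], ['bob', 2]],
  alice: [['carol', 2]],
  bob:   [['carol', 1], ['mallory', 2]],
  carol: [['dave', 1]],
};

function trustLevels(seed) {
  const level = { [seed]: Infinity };
  const queue = [seed];
  while (queue.length) {
    const node = queue.shift();
    for (const [next, cap] of edges[node] || []) {
      const granted = Math.min(level[node], cap);
      if (granted > (level[next] || 0)) {
        level[next] = granted;
        queue.push(next);
      }
    }
  }
  return level;
}

console.log(trustLevels('root'));
// → { root: Infinity, alice: 3, bob: 2, carol: 2, mallory: 2, dave: 1 }
// Note mallory still receives trust through bob, illustrating the
// revocation problem: a compromised-but-trusted edge keeps flowing.
```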


Re: What I am missing

2014-11-18 Thread Florian Bösch
On Wed, Nov 19, 2014 at 6:35 AM, Michaela Merz 
wrote:

>  Well .. it would be a "all scripts signed" or "no script signed" kind of
> a deal. You can download malicious code everywhere - not only as scripts.
> Signed code doesn't protect against malicious or bad code. It only
> guarantees that the code is actually from the certificate owner .. and
> has not been altered without the signer's consent.
>

On Wed, Nov 19, 2014 at 5:00 AM, Michaela Merz 
 wrote:
>
> it would make sense. Signed code would make script much more resistant to
> manipulation and therefore would help in environments where trust and/or
> security is important.
>
> We use script for much, much more than we did just a year or so ago.
>

On Wed, Nov 19, 2014 at 6:41 AM, Michaela Merz 
 wrote:
>
> TLS doesn't protect you against code that has been altered server side -
> without the signer's consent. It would alert the user if unsigned updates
> were made available.


Signing allows you to verify that an entity did produce a run of bytes, and
not another entity. Entity here meaning the holder of the private key who
put his signature onto that run of bytes. How do you know this entity did
that? Said entity also broadcast their public key, so that the recipient
can compare.

TLS solves this problem somewhat by securing the delivery channel. It
doesn't sign content, but via TLS it is (at least proverbially) impossible
for a third party to deliver content over a channel you control.

> Ajax downloads still require a download link (with the bloburl) to be
> displayed requiring an additional click. User clicks download .. ajax
> downloads the data, creates blob url as src which the user has to click to
> 'copy' the blob onto the userspace drive. Would be better to skip the final
> part.

Signing, technically, would have an advantage where you wish to deliver
content over a channel that you cannot control, such as over WebRTC, from
files, and so forth.

> In regard to accept: I wasn't aware of the fact that I can accept a socket
> on port 80 to serve a HTTP session. You're saying I could with what's
> available today?

You cannot. You can however let the browser accept an incoming connection
under the condition that they're browsing the same origin. The port doesn't
matter as much, as WebRTC largely relegates it to an implementation detail
of the channel negotiator so that two of the same origins can communicate.



Suppose you get a piece of signed content, over whatever way it was
delivered. Suppose also that this content you got has the ability to read
all your private data, or reformat your machine. So it's basically about
trust. You need to establish a secure channel of communication to obtain a
public key that matches a signature, in such a way that an attacker's
attempt to self-sign malicious content is foiled. And you need to have a
way to discover (after having established that the entity is the one who
was intended and that the signature is correct), that you indeed trust that
entity.

These are two very old problems in cryptography, and they cannot be solved
by cryptography. There are various approaches to this problem in use today:

   - TLS and its web of trust: The basic idea being that there is a
   hierarchy of signatories. It works like this. An entity provides a
   certificate for the connection, signing it with their private key. Since
   you cannot establish a connection without a public key that matches the
   private key, verifying the certificate is easy. This entity in turn, refers
   to another entity which provided the signature for that private key. They
   refer to another one, and so forth, until you arrive at the root. You
   implicitly trust root. This works, but it has some flaws. At the edge of
   the web, people are not allowed to self-sign, so they obtain their (pricey)
   key from the next tier up. But the next tier up can't go and bother the
   next tier up every time they need to provide a new set of keys to the edge.
   So they get blanket permission to self-sign, implying that it's possible
   for the next tier up to establish and maintain a trust relationship to
   them. As is easily demonstrable, this can, and often does, go wrong,
   where some CA gets compromised. This is always bad news to whomever
   obtained a certificate from them, because now a malicious party can pass
   themselves off as them.
   - App-stores and trust royalty: This is really easy to describe, the app
   store you obtain something from signs the content, and you trust the
   app-store, and therefore you trust the content. This can, and often does go
   wrong, as android/iOS malware amply demonstrates.

TLS cannot work perfectly, because it is built on implied trust along the
chain, and this can get compromised. App-stores cannot work perfectly
because the ability to review content is quickly exceeded by the flood of
content. Even if app-stores were provided with the full source, they would
have no time to perform a proper review, 

Re: What I am missing

2014-11-18 Thread Marc Fawzi
<<
Signed code doesn't protect against malicious or bad code. It only
guarantees that the code is actually from the certificate owner
>>

if I trust you and allow your signed script the permissions it asks for and
you can't guarantee that it won't be used by some malicious 3rd party site
to hack me (i.e. the security holes in your script get turned against me)
then there is just too much risk in allowing the permissions

the concern is that the average user will not readily grasp the risk
involved in granting certain powerful permissions to some insecure script
from a trusted source

On Tue, Nov 18, 2014 at 9:35 PM, Michaela Merz 
wrote:

>  Well .. it would be a "all scripts signed" or "no script signed" kind of
> a deal. You can download malicious code everywhere - not only as scripts.
> Signed code doesn't protect against malicious or bad code. It only
> guarantees that the code is actually from the certificate owner .. and
> has not been altered without the signer's consent.
>
> Michaela
>
>
>
>
> On 11/19/2014 06:14 AM, Marc Fawzi wrote:
>
> "Allowing this script to run may open you to all kinds of malicious
> attacks by 3rd parties not associated with the party whom you're
> trusting."
>
>  If I give App XYZ super power to do anything, and XYZ gets
> compromised/hacked then I'll be open to all sorts of attacks.
>
>  It's not an issue of party A trusting party B. It's an issue of trusting
> that party B has no security holes in their app whatsoever, and that is one
> of the hardest things to guarantee.
>
>
> On Tue, Nov 18, 2014 at 8:00 PM, Michaela Merz  wrote:
>
>>
>> Yes Boris - I know. As long as it doesn't have advantages for the user
>> or the developer - why bother with it? If signed code would allow
>> special features - like true fullscreen or direct file access  - it
>> would make sense. Signed code would make script much more resistant to
>> manipulation and therefore would help in environments where trust and/or
>> security is important.
>>
>> We use script for much, much more than we did just a year or so ago.
>>
>> Michaela
>>
>>
>>
>> On 11/19/2014 04:40 AM, Boris Zbarsky wrote:
>> > On 11/18/14, 10:26 PM, Michaela Merz wrote:
>> >> First: We need signed script code.
>> >
>> > For what it's worth, Gecko supported this for a while.  See
>> > <
>> http://www-archive.mozilla.org/projects/security/components/signed-scripts.html
>> >.
>> >  In practice, people didn't really use it, and it made the security
>> > model a _lot_ more complicated and hard to reason about, so the
>> > feature was dropped.
>> >
>> > It would be good to understand how proposals along these lines differ
>> > from what's already been tried and failed.
>> >
>> > -Boris
>> >
>>
>>
>>
>>
>
>


Re: What I am missing

2014-11-18 Thread Jeffrey Walton
On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
 wrote:
> Well .. it would be a "all scripts signed" or "no script signed" kind of a
> deal. You can download malicious code everywhere - not only as scripts.
> Signed code doesn't protect against malicious or bad code. It only
> guarantees that the code is actually from the certificate owner .. and
> has not been altered without the signer's consent.

Seems relevant: "Java’s Losing Security Legacy",
http://threatpost.com/javas-losing-security-legacy and "Don't Sign
that Applet!", https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

Dormann advises "don't sign" so that the code can't escape its sandbox
and it stays restricted (malware regularly signs to do so).



Re: What I am missing

2014-11-18 Thread Michaela Merz

TLS doesn't protect you against code that has been altered server side -
without the signer's consent. It would alert the user if unsigned
updates were made available.

Ajax downloads still require a download link (with the bloburl) to be
displayed requiring an additional click. User clicks download .. ajax
downloads the data, creates blob url as src which the user has to click
to 'copy' the blob onto the userspace drive. Would be better to skip the
final part.

In regard to accept: I wasn't aware of the fact that I can accept a
socket on port 80 to serve a HTTP session. You're saying I could with
what's available today?

Michaela



On 11/19/2014 06:34 AM, Florian Bösch wrote:
> On Wed, Nov 19, 2014 at 4:26 AM, Michaela Merz
> <michaela.m...@hermetos.com> wrote:
>
> First: We need signed script code. We are doing a lot of stuff with
> script - we could safely do even more, if we would be able to safely
> deliver script that has some kind of a trust model.
>
> TLS exists.
>  
>
> I am thinking about
> signed JAR files - just like we did with java applets not too long
> ago.
> Maybe as an extension to the CSP environment .. and a nice frame around
> the browser telling the user that the site is providing trusted /
> signed
> code.
>
> Which is different than TLS how?
>  
>
> Signed code could allow more openness, like true full screen, 
>
> Fullscreen is possible
> today, 
> https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Using_full_screen_mode
>  
>
> or simpler ajax downloads.
>
> Simpler how?
>  
>
> Second: It would be great to finally be able to accept incoming
> connections.
>
> WebRTC allows the browser to accept incoming connections. The WebRTC
> data channel covers both TCP and UDP connectivity.
>  
>
> There's access to cameras and microphones - why not allow
> us the ability to code servers in the browser?
>
> You can. There's even P2P overlay networks being done with WebRTC.
> Although they're mostly hampered by the existing support for WebRTC
> data channels, which isn't great yet.



Re: What I am missing

2014-11-18 Thread Florian Bösch
On Wed, Nov 19, 2014 at 5:00 AM, Michaela Merz 
wrote:
>
> If signed code would allow
> special features - like true fullscreen

https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Using_full_screen_mode



> or direct file access

http://www.html5rocks.com/en/tutorials/file/filesystem/


Re: What I am missing

2014-11-18 Thread Michaela Merz
Well .. it would be a "all scripts signed" or "no script signed" kind of
a deal. You can download malicious code everywhere - not only as
scripts. Signed code doesn't protect against malicious or bad code. It
only guarantees that the code is actually from the certificate owner
.. and has not been altered without the signer's consent.

Michaela
 


On 11/19/2014 06:14 AM, Marc Fawzi wrote:
> "Allowing this script to run may open you to all kinds of malicious
> attacks by 3rd parties not associated with the party whom you're
> trusting." 
>
> If I give App XYZ super power to do anything, and XYZ gets
> compromised/hacked then I'll be open to all sorts of attacks.
>
> It's not an issue of party A trusting party B. It's an issue of
> trusting that party B has no security holes in their app whatsoever,
> and that is one of the hardest things to guarantee.
>
>
> On Tue, Nov 18, 2014 at 8:00 PM, Michaela Merz
> <michaela.m...@hermetos.com> wrote:
>
>
> Yes Boris - I know. As long as it doesn't have advantages for the user
> or the developer - why bother with it? If signed code would allow
> special features - like true fullscreen or direct file access  - it
> would make sense. Signed code would make script much more resistant to
> manipulation and therefore would help in environments where trust
> and/or
> security is important.
>
> We use script for much, much more than we did just a year or so ago.
>
> Michaela
>
>
>
> On 11/19/2014 04:40 AM, Boris Zbarsky wrote:
> > On 11/18/14, 10:26 PM, Michaela Merz wrote:
> >> First: We need signed script code.
> >
> > For what it's worth, Gecko supported this for a while.  See
> >
> 
> .
> >  In practice, people didn't really use it, and it made the security
> > model a _lot_ more complicated and hard to reason about, so the
> > feature was dropped.
> >
> > It would be good to understand how proposals along these lines
> differ
> > from what's already been tried and failed.
> >
> > -Boris
> >
>
>
>
>



Re: What I am missing

2014-11-18 Thread Florian Bösch
On Wed, Nov 19, 2014 at 4:26 AM, Michaela Merz wrote:

> First: We need signed script code. We are doing a lot of stuff with
> script - we could safely do even more, if we would be able to safely
> deliver script that has some kind of a trust model.

TLS exists.


> I am thinking about
> signed JAR files - just like we did with java applets not too long ago.
> Maybe as an extension to the CSP environment .. and a nice frame around
> the browser telling the user that the site is providing trusted / signed
> code.

Which is different than TLS how?


> Signed code could allow more openness, like true full screen,

Fullscreen is possible today,
https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Using_full_screen_mode
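
For reference, a minimal fullscreen request circa late 2014 had to probe the
vendor-prefixed variants; this is a browser-only sketch (the prefixed method
names are the ones browsers were shipping at the time):

```javascript
// Request fullscreen on an element, falling back through the
// vendor-prefixed variants that browsers shipped in 2014.
// Returns false when no variant is available (e.g. outside a browser).
function enterFullscreen(el) {
  const request = el.requestFullscreen
    || el.mozRequestFullScreen
    || el.webkitRequestFullscreen
    || el.msRequestFullscreen;
  if (typeof request === 'function') {
    request.call(el);
    return true;
  }
  return false;
}
```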


> or simpler ajax downloads.
>
Simpler how?


> Second: It would be great to finally be able to accept incoming
> connections.

WebRTC allows the browser to accept incoming connections. The WebRTC data
channel offers both reliable (TCP-like) and unreliable (UDP-like) delivery.
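
The "server in the browser" idea can be sketched on top of a data channel.
Signaling (exchanging offers/answers) is assumed to happen out of band, and
`handleRequest` is a hypothetical request handler (browser-only code):

```javascript
// Sketch of a browser-side "server" endpoint over a WebRTC data channel.
// The remote peer creates the channel; this side just accepts it and
// answers each incoming message via the supplied handler.
function listenOnDataChannel(peerConnection, handleRequest) {
  peerConnection.ondatachannel = (event) => {
    const channel = event.channel;
    channel.onmessage = (msg) => {
      // Treat each incoming message as a "request" and reply in kind.
      channel.send(handleRequest(msg.data));
    };
  };
}
```

Each incoming message is treated as a request and answered on the same
channel, which is roughly what a WebDAV-style "mount" would need underneath.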


> There's access to cameras and microphones - why not allow
> us the ability to code servers in the browser?

You can. There are even P2P overlay networks being built with WebRTC,
although they're mostly hampered by the current state of WebRTC data
channel support, which isn't great yet.


Re: What I am missing

2014-11-18 Thread Marc Fawzi
"Allowing this script to run may open you to all kinds of malicious attacks
by 3rd parties not associated with the party whom you're trusting."

If I give App XYZ super power to do anything, and XYZ gets
compromised/hacked then I'll be open to all sorts of attacks.

It's not an issue of party A trusting party B. It's an issue of trusting
that party B has no security holes in their app whatsoever, and that is one
of the hardest things to guarantee.


On Tue, Nov 18, 2014 at 8:00 PM, Michaela Merz wrote:

>
> Yes Boris - I know. As long as it doesn't have advantages for the user
> or the developer - why bother with it? If signed code would allow
> special features - like true fullscreen or direct file access  - it
> would make sense. Signed code would make script much more resistant to
> manipulation and therefore would help in environments where trust and/or
> security is important.
>
> We use script for much, much more than we did just a year or so ago.
>
> Michaela
>
>
>
> On 11/19/2014 04:40 AM, Boris Zbarsky wrote:
> > On 11/18/14, 10:26 PM, Michaela Merz wrote:
> >> First: We need signed script code.
> >
> > For what it's worth, Gecko supported this for a while.  See
> > <http://www-archive.mozilla.org/projects/security/components/signed-scripts.html>.
> >  In practice, people didn't really use it, and it made the security
> > model a _lot_ more complicated and hard to reason about, so the
> > feature was dropped.
> >
> > It would be good to understand how proposals along these lines differ
> > from what's already been tried and failed.
> >
> > -Boris
> >
>
>
>
>


Re: What I am missing

2014-11-18 Thread Michaela Merz

Yes Boris - I know. As long as it doesn't have advantages for the user
or the developer - why bother with it? If signed code would allow
special features - like true fullscreen or direct file access  - it
would make sense. Signed code would make script much more resistant to
manipulation and therefore would help in environments where trust and/or
security is important.

We use script for much, much more than we did just a year or so ago.

Michaela



On 11/19/2014 04:40 AM, Boris Zbarsky wrote:
> On 11/18/14, 10:26 PM, Michaela Merz wrote:
>> First: We need signed script code.
>
> For what it's worth, Gecko supported this for a while.  See
> <http://www-archive.mozilla.org/projects/security/components/signed-scripts.html>.
>  In practice, people didn't really use it, and it made the security
> model a _lot_ more complicated and hard to reason about, so the
> feature was dropped.
>
> It would be good to understand how proposals along these lines differ
> from what's already been tried and failed.
>
> -Boris
>





Re: What I am missing

2014-11-18 Thread Boris Zbarsky

On 11/18/14, 10:26 PM, Michaela Merz wrote:

First: We need signed script code.


For what it's worth, Gecko supported this for a while.  See 
<http://www-archive.mozilla.org/projects/security/components/signed-scripts.html>.
 In practice, people didn't really use it, and it made the security 
model a _lot_ more complicated and hard to reason about, so the feature 
was dropped.


It would be good to understand how proposals along these lines differ 
from what's already been tried and failed.


-Boris



What I am missing

2014-11-18 Thread Michaela Merz

Hi there:

Though I am not part of the browser developing community, I have been doing
web development since before the days of Marc Andreessen - when we had
neither scripts nor text flowing around images. So you may understand
how much I enjoy what you are doing and that I can't wait for new
functionality to emerge. WebRTC, WebSockets, file systems, IndexedDB, you
name it - cool stuff that enables folks like me to change the web into
something truly awesome. Thanks a lot for your work.

But .. there's something missing. Actually: There are two things
missing. Maybe .. just maybe .. I am able to convince you to at least
think about what I am suggesting.

First: We need signed script code. We are doing a lot of stuff with
script - we could safely do even more, if we were able to safely
deliver script that has some kind of a trust model. I am thinking about
signed JAR files - just like we did with Java applets not too long ago.
Maybe as an extension to the CSP environment .. and a nice frame around
the browser telling the user that the site is providing trusted / signed
code. Signed code could allow more openness, like true full screen, or
simpler AJAX downloads.
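
For comparison, CSP as it stood could already pin exactly which scripts may
run, via hash or nonce source expressions - a weaker cousin of full code
signing. An illustrative header (the hash value is made up):

```http
Content-Security-Policy: script-src 'sha256-BASE64HASHOFSCRIPT...'; default-src 'self'
```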

Second: It would be great to finally be able to accept incoming
connections. There's access to cameras and microphones - why not allow
us the ability to code servers in the browser? Maybe in combination with
my suggestion above? Websites would be able to offer WebDAV simply by
'mounting' the browser (no pun intended) and the browser would do
caching/forwarding/encrypting ..  Imagine being able to directly access
files on a web site without a web download.

I could go on for another hour or two. But I don't want to sound
ungrateful.

Thank you for reading this.

Michaela