[Bug 27366] New: Define how shadow DOM should be handled when host is adopted to a new document

2014-11-19 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=27366

Bug ID: 27366
   Summary: Define how shadow DOM should be handled when host is
adopted to a new document
   Product: WebAppsWG
   Version: unspecified
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P2
 Component: Component Model
  Assignee: dglaz...@chromium.org
  Reporter: b...@pettay.fi
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org

The spec doesn't say what should happen to the shadow DOM
if the host is adopted into a new document.

The spec does say "The ownerDocument property of a node in a shadow tree must
refer to the document of the shadow host which hosts the shadow tree,"
but that misses the 'adopting steps' from the DOM spec which HTML relies on.
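For concreteness, the undefined case can be sketched in a browser console (a hypothetical illustration using the 2014-era createShadowRoot() API; the point of the bug is that the spec does not yet define what actually happens here):

```javascript
// Hypothetical browser-console sketch of the undefined case.
var host = document.createElement('div');
var shadow = host.createShadowRoot();
shadow.appendChild(document.createElement('span'));

var other = document.implementation.createHTMLDocument('other');
other.adoptNode(host); // runs the DOM spec's "adopting steps" for host only

// Per the quoted sentence, shadow-tree nodes should follow the host:
console.log(shadow.firstChild.ownerDocument === other);
// ...but without adopting steps for the shadow tree, HTML's per-node
// adoption hooks never run for shadow.firstChild.
```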

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: What I am missing

2014-11-19 Thread Michaela Merz
Thank you Jonas. I was actually thinking about the security model of
FirefoxOS or Android apps. We write powerful webapps nowadays. And
with webapps I mean regular web pages with a lot of script/HTML5
functionality. The browsers are fast enough to do a variety of things:
from running a Linux kernel, to playing DOS games, to doing crypto, to
decoding and streaming MP3. I understand a browser to be an operating
system on top of an operating system. But the need to protect the user
is a problem if you want to go beyond what is possible today.

I am asking you to consider a model where a signed script package notifies
a user about its origin and signer, and may even ask the user for special
permissions like direct file system access or raw networking sockets or
anything else that would, for safety reasons, not be possible today.

The browser would remember the origin IP and the signature of the script
package and would re-ask for permission if something changes. It would
refuse to run the package if the signature isn't valid or has expired.

It wouldn't change a thing in regard to updates. You would just have to
re-sign your code before you make it available. I used to work a lot
with Java applets (signed and unsigned) in the old days, and I am working
with Android apps today. Signing is just another step in the work chain.

Signed code is the missing last element in the CSP/TLS environment.
Let's make the browser into something that can truly be seen as an
alternative operating system on top of an operating system.

Michaela

On 11/19/2014 08:33 AM, Jonas Sicking wrote:
 On Tue, Nov 18, 2014 at 7:40 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 11/18/14, 10:26 PM, Michaela Merz wrote:
 First: We need signed script code.
 For what it's worth, Gecko supported this for a while.  See
 http://www-archive.mozilla.org/projects/security/components/signed-scripts.html.
 In practice, people didn't really use it, and it made the security model a
 _lot_ more complicated and hard to reason about, so the feature was dropped.

 It would be good to understand how proposals along these lines differ from
 what's already been tried and failed.
 The way we did script signing back then was nutty in several ways. The
 signing we do in FirefoxOS is *much* simpler. Simple enough that no
 one has complained about the complexity that it has added to Gecko.

 Sadly, enhanced security models that use signing by a trusted party
 inherently lose a lot of the advantages of the web. It means that
 you can't publish a new version of your website by simply uploading
 files to your webserver whenever you want. And it means that you can't
 generate the script and markup that make up your website dynamically
 on your webserver.

 So I'm by no means arguing that FirefoxOS has the problem of signing solved.

 Unfortunately no one has been able to solve the problem of how to
 grant web content access to capabilities like raw TCP or UDP sockets
 in order to access legacy hardware and protocols, or how to get
 read/write access to your photo library in order to build a photo
 manager, without relying on signing.

 Which has meant that the web so far is unable to compete with native
 in those areas.

 / Jonas






Re: CfC: publish WG Note of UI Events; deadline November 14

2014-11-19 Thread Arthur Barstow

On 11/7/14 10:28 AM, Arthur Barstow wrote:
During WebApps' October 27 meeting, the participants agreed to stop 
work on the UI Events spec and to publish it as a WG Note (see 
[Mins]). As such, this is a formal Call for Consensus (CfC) to:


a) Stop work on this spec

b) Publish a gutted WG Note of the spec; see [Draft-Note]

c) Gut the ED (this will be done if/when this CfC passes)

d) Prefix the spec's [Bugs] with "HISTORICAL" and turn off creating 
new bugs


e) Travis will move all bugs that are relevant to D3E to the D3E bug 
component



Although there appears to be agreement that work on the [uievents] spec 
should stop, the various replies raise sufficient questions that I 
consider this CfC (as written) to have failed.


Travis, Gary - would you please make a specific proposal for these two 
specs? In particular, what is the title and shortname for each document, 
and which spec/shortname becomes the WG Note?


After we have agreed on a way forward, I'll start a new CfC.

(I believe the Principle of Least Surprise here means considering specs 
that currently reference [uievents] or [DOM-Level-3-Events]. F.ex., I 
suppose a document titled "UI Events" with a shortname of 
"DOM-Level-3-Events" could be a bit confusing to some, although strictly 
speaking it could be done.)


-Thanks, AB

[uievents] http://www.w3.org/TR/uievents/
[DOM-Level-3-Events] http://www.w3.org/TR/DOM-Level-3-Events/



[Mins] http://www.w3.org/2014/10/27-webapps-minutes.html#item05
[Draft-Note] https://dvcs.w3.org/hg/d4e/raw-file/default/tr.html
[Bugs] 
https://www.w3.org/Bugs/Public/buglist.cgi?product=WebAppsWG&component=UI%20Events&resolution=---
[Discuss] 
http://lists.w3.org/Archives/Public/public-webapps/2014OctDec/0262.html





Re: CfC: publish WG Note of UI Events; deadline November 14

2014-11-19 Thread Anne van Kesteren
On Wed, Nov 19, 2014 at 3:20 PM, Arthur Barstow art.bars...@gmail.com wrote:
 Although there appears to be agreement that work on the [uievents] spec
 should stop, the various replies raise sufficient questions that I consider
 this CfC (as written) as failed.

 Travis, Gary - would you please make a specific proposal for these two
 specs? In particular, what is the title and shortname for each document, and
 which spec/shortname becomes the WG Note?

 After we have agreed on a way forward, I'll start a new CfC.

 (I believe the Principle of Least Surprise here means considering specs that
 currently reference [uievents] or [DOM-Level-3-Events]. F.ex., I suppose a
 document titled UI Events with a shortname of DOM-Level-3-Events could be
 a bit confusing to some, although strictly speaking could be done.)

My proposal would be to update UI Events with the latest editor's
draft of DOM Level 3 Events (title renamed, of course) and have the
DOM Level 3 Events URL redirect to UI Events. That would communicate
clearly what happened.


-- 
https://annevankesteren.nl/



Re: CfC: publish WG Note of UI Events; deadline November 14

2014-11-19 Thread Arthur Barstow

On 11/19/14 9:35 AM, Anne van Kesteren wrote:

My proposal would be to update UI Events with the latest editor's
draft of DOM Level 3 Events (title renamed, of course) and have the
DOM Level 3 Events URL redirect to UI Events. That would communicate
clearly what happened.


Yves, Philippe - can Anne's proposal be done?





Re: What I am missing

2014-11-19 Thread Pradeep Kumar
How can the browsers be code servers? Could you please explain a little
more...





Re: CfC: publish WG Note of UI Events; deadline November 14

2014-11-19 Thread Pradeep Kumar
+1




Re: What I am missing

2014-11-19 Thread Michaela Merz

Perfect is the enemy of good. I understand the principles and problems
of cryptography. And in the same way we rely on TLS and its security
model today, we would be able to put some trust into the same
architecture for signing scripts.

FYI: Here's how signing works for Java applets: You need to get a
software signing key from a CA, who will do some form of verification in
regard to your organization. You use that key to sign your code package.

Will this forever prevent the spread of malware? Of course not. Yes -
you can distribute malware if you get somebody's signing key into your
possession, or even buy a signing key and sign your malware yourself.
But as long as the user is made aware of the fact that the code has the
permission to access the file system, he can decide for himself if he
wants to allow that or not. Much in the same way that he decides to
download a possibly malicious piece of software today. There will never
be absolute security - especially not for users who don't give a damn
about what they are doing. Should that prevent us from continuing the
evolution of the web environment? We might as well kill ourselves
because we are afraid to die ;)

One last thing: Most Android malware was spread on systems where owners
disabled the security of the system, e.g. by rooting the devices.

Michaela

 Suppose you get a piece of signed content, over whatever way it was
 delivered. Suppose also that this content you got has the ability to
 read all your private data, or reformat your machine. So it's
 basically about trust. You need to establish a secure channel of
 communication to obtain a public key that matches a signature, in such
 a way that an attacker's attempt to self-sign malicious content is
 foiled. And you need to have a way to discover (after having
 established that the entity is the one who was intended and that the
 signature is correct), that you indeed trust that entity.

 These are two very old problems in cryptography, and they cannot be
 solved by cryptography. There are various approaches to this problem
 in use today:

   * TLS and its web of trust: The basic idea being that there is a
 hierarchy of signatories. It works like this. An entity provides a
 certificate for the connection, signing it with their private key.
 Since you cannot establish a connection without a public key that
 matches the private key, verifying the certificate is easy. This
 entity in turn, refers to another entity which provided the
 signature for that private key. They refer to another one, and so
 forth, until you arrive at the root. You implicitly trust root.
 This works, but it has some flaws. At the edge of the web, people
 are not allowed to self-sign, so they obtain their (pricey) key
 from the next tier up. But the next tier up can't go and bother
 the next tier up every time they need to provide a new set of keys
 to the edge. So they get blanket permission to self-sign, implying
 that it's possible for the next tier up to establish and maintain
 a trust relationship with them. As is easily demonstrable, this
 can, and often does, go wrong, when some CA gets compromised.
 This is always bad news to whomever obtained a certificate from
 them, because now a malicious party can pass themselves off as them.
   * App-stores and trust royalty: This is really easy to describe, the
 app store you obtain something from signs the content, and you
 trust the app-store, and therefore you trust the content. This
 can, and often does go wrong, as android/iOS malware amply
 demonstrates.

 TLS cannot work perfectly, because it is built on implied trust along
 the chain, and this can get compromised. App-stores cannot work
 perfectly because the ability to review content is quickly exceeded by
 the flood of content. Even if app-stores were provided with the full
 source, they would have no time to perform a proper review, and so
 time and time again malware slips through the net.

 You can have a technical solution for signing, and you still haven't
 solved any bit of how to trust a piece of content. About the only way
 that'd be remotely feasible is if you piggyback on an existing
 implementation of a trust mechanism/transport security layer, to
 deliver the signature. For instance, many websites that allow you to
 d/l an executable provide you with a checksum of the content. The idea
 being that if the page is served up over TLS, then you've established
 that the checksum is delivered by the entity, which is also supposed
 to deliver the content. However, this has a hidden trust model that
 establishes trust without a technical solution. It means a user
 assumes he trusts that website because he arrived there of his own
 volition. The same cannot be said about side-channel delivered pieces
 of content that compose into a larger whole (like signed scripts).
 For instance, assume you'd have a script file (like say the twitter
 

Re: What I am missing

2014-11-19 Thread Michaela Merz
I am not sure I understand your question. Browsers can't be code 
servers - at least not today.


Michaela



On 11/19/2014 08:43 AM, Pradeep Kumar wrote:


How the browsers can be code servers? Could you please explain a 
little more...








Re: What I am missing

2014-11-19 Thread Michaela Merz


That is relevant, and also not so, because Java applets silently grant 
access to out-of-sandbox functionality if signed. This is not what I 
am proposing. I am suggesting a model in which the sandbox model remains 
intact and users need to explicitly agree to access that would otherwise 
be prohibited.


Michaela





On 11/19/2014 12:01 AM, Jeffrey Walton wrote:

On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
michaela.m...@hermetos.com wrote:

Well .. it would be an "all scripts signed or no scripts signed" kind of
deal. You can download malicious code everywhere - not only as scripts.
Signed code doesn't protect against malicious or bad code. It only
guarantees that the code is actually from the certificate owner .. and
has not been altered without the signer's consent.

Seems relevant: "Java's Losing Security Legacy",
http://threatpost.com/javas-losing-security-legacy and "Don't Sign
that Applet!", https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

Dormann advises "don't sign" so that the code can't escape its sandbox
and stays restricted (malware regularly signs to do so).





Re: What I am missing

2014-11-19 Thread Pradeep Kumar
Even today, browsers ask for permission for geolocation, local storage,
camera, etc. How is it different from the current scenario?





Re: What I am missing

2014-11-19 Thread Anne van Kesteren
On Wed, Nov 19, 2014 at 4:27 PM, Michaela Merz
michaela.m...@hermetos.com wrote:
 I don't disagree. But what is wrong with the notion of introducing an
 _additional_ layer of certification?

Adding an additional layer of centralization.


-- 
https://annevankesteren.nl/



Re: What I am missing

2014-11-19 Thread Michaela Merz


First: You don't have to sign your code. Second: We rely on 
centralization for TLS as well. Third: Third-party verification can be 
done within the community itself 
(https://www.eff.org/deeplinks/2014/11/certificate-authority-encrypt-entire-web).


Michaela


On 11/19/2014 09:41 AM, Anne van Kesteren wrote:

On Wed, Nov 19, 2014 at 4:27 PM, Michaela Merz
michaela.m...@hermetos.com wrote:

I don't disagree. But what is wrong with the notion of introducing an
_additional_ layer of certification?

Adding an additional layer of centralization.







Re: What I am missing

2014-11-19 Thread ☻Mike Samuel
Browser signature checking gives you nothing that CSP doesn't, as far
as the security of pages composed from a mixture of content from
different providers goes.

As Florian points out, signing only establishes provenance, not any
interesting security properties.

I can always write a page that runs an interpreter on data loaded from
a third-party source even if that data is not loaded as script, so a
signed application is always open to confused deputy problems.

Signing is a red herring which distracts from the real problem: the
envelope model, in which secrets are scoped to the whole content of the
envelope, makes it hard to decompose a document into multiple trust
domains that reflect the fact that the document is really composed of
content with multiple different provenances.

By the envelope model, I mean the assumption that the user-agent
receives a document and a wrapper, and makes trust decisions for the
whole of the document based on the content of the wrapper.

The envelope does all of the following:
1. establishes a trust domain, the origin
2. bundles secrets, usually in cookies and headers
3. bundles content, possibly from multiple sources

Ideally our protocols would be written in such a way that secrets can
be scoped to content with a way to allow nesting without inheritance.

This can be kludged on top of iframes, but only at the cost of a lot
of engineering effort.
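For comparison, the CSP mechanism referred to above already scopes script provenance per page without any signing; a minimal policy header might look like this (the CDN hostname is a placeholder):

```http
Content-Security-Policy: default-src 'self'; script-src 'self' https://trusted.cdn.example; object-src 'none'
```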

On Wed, Nov 19, 2014 at 10:27 AM, Michaela Merz
michaela.m...@hermetos.com wrote:

 I don't disagree. But what is wrong with the notion of introducing an
 _additional_ layer of certification? Signed script and/or HTML would most
 certainly make it way harder to deface a website or sneak malicious code
 into an environment. I strongly believe that just for this reason alone, we
 should think about signed content - even without additional, potentially
 unsafe functionality.

 Michaela



 On 11/19/2014 09:21 AM, Pradeep Kumar wrote:

 Michaela,

 As Josh said earlier, signing the code (somehow) will not enhance security.
 It will open doors for more threats. It's better, more open and transparent,
 and in sync with the spirit of the open web to give control to the end user
 and not make them rely on other signing authorities.

 On 19-Nov-2014 8:44 pm, Michaela Merz michaela.m...@hermetos.com wrote:

 You are correct. But all those services are (thankfully) sandboxed or
 read-only. In order to make a browser into something even more useful, you
 have to relax these security rules a bit. And IMHO that *should* require
 signed code - in addition to the user's consent.

 Michaela









Re: What I am missing

2014-11-19 Thread Marc Fawzi


 So there is no way for an unsigned script to exploit security holes in a
 signed script?

Of course there's a way. But by the same token, there's a way a signed
script can exploit security holes in another signed script. Signing itself
doesn't establish any trust, or security.


Yup, that's also what I meant. Signing does not imply secure, but to the
average non-technical user a signed app from a trusted party may convey
both trust and security, so they wouldn't think twice about installing such
a script even if it asked for some powerful permissions that can be
exploited by another script.



 Funny you mention crypto currencies as an idea to get inspiration
 from...Trust but verify is detached from that... a browser can monitor
 what the signed scripts are doing and if it detects a potentially malicious
 pattern it can halt the execution of the script and let the user decide if
 they want to continue...

That's not working for a variety of reasons. The first reason is that
identifying what a piece of software does intelligently is one of those
really hard problems. As in Strong-AI hard.


Well, the user can set up the rules of what is considered a malicious
action, and there would be ready-made configurations (best practices
codified in config) that would be the default in the browser. And then
they can exempt certain scripts.

I realize this is an open-ended problem and no solution is going to address
it 100% ... It's the nature of open systems to be open to attacks, but it's
how the system deals with the attack that differentiates it. It's a wide-open
area of research, I think, or should be.

But do we want a security model that's not extensible and not flexible? The
answer is most likely NO.





On Tue, Nov 18, 2014 at 11:03 PM, Florian Bösch pya...@gmail.com wrote:

 On Wed, Nov 19, 2014 at 7:54 AM, Marc Fawzi marc.fa...@gmail.com wrote:

 So there is no way for an unsigned script to exploit security holes in a
 signed script?

 Of course there's a way. But by the same token, there's a way a signed
 script can exploit security holes in another signed script. Signing itself
 doesn't establish any trust, or security.


 Funny you mention crypto currencies as an idea to get inspiration
 from...Trust but verify is detached from that... a browser can monitor
 what the signed scripts are doing and if it detects a potentially malicious
 pattern it can halt the execution of the script and let the user decide if
 they want to continue...

 That's not working for a variety of reasons. The first reason is that
 identifying what a piece of software does intelligently is one of those
 really hard problems. As in Strong-AI hard. Failing that, you can monitor
 what APIs a piece of software makes use of, and restrict access to those.
 However, that's already satisfied without signing by sandboxing.
 Furthermore, it doesn't entirely solve the problem, as any Android user will
 know. You get a ginormous list of permissions a given piece of software
 would like to use and the user just clicks yes. Alternatively, you get
 malware that's not trustworthy, that nobody managed to properly review,
 because the untrustworthy part was buried/hidden by the author somewhere deep
 down, to activate only long after trust extension by fiat has happened.

 But even if you'd assume that this somehow would be an acceptable model,
 what do you define as malicious? Reformatting your machine would be
 malicious, but so would be posting on your facebook wall. What constitutes
 a malicious pattern is actually more of a social than a technical problem.
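The restriction Florian describes - gating which APIs a script may touch, rather than trying to classify its behavior after the fact - can be sketched in a few lines of JavaScript. The API surface and permission names below (`getLocation`, `writeFile`) are hypothetical stand-ins, not any real browser interface:

```javascript
// Sketch: gate access to privileged APIs behind an explicit grant list,
// instead of trying to detect "malicious patterns" at runtime.
// All API and permission names here are hypothetical.

function makeSandboxedApi(realApi, grantedPermissions) {
  return new Proxy(realApi, {
    get(target, prop) {
      if (!grantedPermissions.has(prop)) {
        throw new Error(`Permission denied: ${String(prop)}`);
      }
      return target[prop];
    },
  });
}

// A stand-in for the privileged surface a browser might expose.
const realApi = {
  getLocation: () => ({ lat: 0, lon: 0 }),
  writeFile: (path, data) => `wrote ${data.length} bytes to ${path}`,
};

// The user granted only geolocation, not file access.
const api = makeSandboxedApi(realApi, new Set(['getLocation']));

console.log(api.getLocation());        // allowed
try {
  api.writeFile('/tmp/x', 'payload');  // denied at the API boundary
} catch (e) {
  console.log(e.message);
}
```

The denial happens at the boundary, before the call runs - which is why this model needs no intelligence about what the script "means" to do.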



Re: What I am missing

2014-11-19 Thread Michaela Merz
How would an unsigned script be able to exploit functionality from a 
signed script if it's an either/or case - you have either all scripts 
signed or no extended features? And think about this: a website can be 
totally safe today and deliver exploits tomorrow without the user even 
noticing. It happened before and it will happen again. Signed content 
would prevent this by warning the user about missing or wrong signatures 
- even if the signed script did not add a single extended function. I 
understand that signing code does not solve every evil. But it would 
add another layer that needs to be broken if somebody gains access to 
a website and starts to modify code.


Michaela


Re: What I am missing

2014-11-19 Thread Michaela Merz
Yes - it establishes provenance and protects against unauthorized 
manipulation. CSP is only as good as the content it protects. If the 
content has been manipulated server-side - e.g. through unauthorized 
access - CSP is worthless.


Michaela



On 11/19/2014 10:03 AM, ☻Mike Samuel wrote:

Browser signature checking gives you nothing that CSP doesn't as far
as the security of pages composed from a mixture of content from
different providers.
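For concreteness, the CSP mechanism being compared to signing can pin scripts by hash, so a server-side modification fails to load much as a broken signature would. The hash value below is a placeholder, not a real digest:

```
Content-Security-Policy: default-src 'none';
    script-src 'sha256-BASE64_HASH_OF_EXPECTED_SCRIPT'
```

A script whose SHA-256 digest does not match the policy is simply not executed by the browser.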

As Florian points out, signing only establishes provenance, not any
interesting security properties.

I can always write a page that runs an interpreter on data loaded from
a third-party source even if that data is not loaded as script, so a
signed application is always open to confused deputy problems.
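The interpreter point can be shown in miniature. The outer function stands in for reviewed, signed page code; the string it evaluates stands in for data fetched from a third party, which no signature on the page ever covered:

```javascript
// A (hypothetically signed) page that evaluates third-party data is an
// interpreter: the signature covers none of the behavior the data
// introduces. The fetched payload is simulated as an inline string.

const signedPageCode = (thirdPartyData) => {
  // This line was reviewed and signed; the data it runs was not.
  return new Function(`return (${thirdPartyData});`)();
};

// Benign payload today...
console.log(signedPageCode('1 + 1'));                 // 2
// ...arbitrary behavior tomorrow, under the same valid signature.
console.log(signedPageCode('"steal" + " cookies"'));  // "steal cookies"
```

This is the confused-deputy shape: the trusted code lends its authority to whatever the untrusted data asks of it.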

Signing is a red herring which distracts from the real problem : the
envelope model in which secrets are scoped to the whole content of the
envelope, makes it hard to decompose a document into multiple trust
domains that reflect the fact that the document is really composed of
content with multiple different provenances.

By the envelope model, I mean the assumption that the user-agent
receives a document and a wrapper, and makes trust decisions for the
whole of the document based on the content of the wrapper.

The envelope does all of
1. establishing a trust domain, the origin
2. bundles secrets, usually in cookies and headers
3. bundles content, possibly from multiple sources.

Ideally our protocols would be written in such a way that secrets can
be scoped to content with a way to allow nesting without inheritance.

This can be kludged on top of iframes, but only at the cost of a lot
of engineering effort.

On Wed, Nov 19, 2014 at 10:27 AM, Michaela Merz
michaela.m...@hermetos.com wrote:

I don't disagree. But what is wrong with the notion of introducing an
_additional_ layer of certification? Signed script and/or HTML would most
certainly make it way harder to deface a website or sneak malicious code
into an environment.  I strongly believe that, just for this reason alone, we
should think about signed content - even without additional, potentially
unsafe functionality.

Michaela



On 11/19/2014 09:21 AM, Pradeep Kumar wrote:

Michaela,

As Josh said earlier, signing the code (somehow) will not enhance security.
It will open doors for more threats. It's better - and more open, transparent,
and in sync with the spirit of the open web - to give the control to the end
user and not make them rely on other signing authorities.

On 19-Nov-2014 8:44 pm, Michaela Merz michaela.m...@hermetos.com wrote:

You are correct. But all those services are (thankfully) sandboxed or
read-only. In order to make a browser into something even more useful, you
have to relax these security rules a bit. And IMHO that *should* require
signed code - in addition to the user's consent.

Michaela



On 11/19/2014 09:09 AM, Pradeep Kumar wrote:

Even today, browsers ask for permission for geolocation, local storage,
camera etc... How is it different from the current scenario?

On 19-Nov-2014 8:35 pm, Michaela Merz michaela.m...@hermetos.com
wrote:


That is relevant and also not so, because Java applets silently grant
access to out-of-sandbox functionality if signed. This is not what I am
proposing. I am suggesting a model in which the sandbox model remains intact
and users need to explicitly agree to access that would otherwise be
prohibited.

Michaela





On 11/19/2014 12:01 AM, Jeffrey Walton wrote:

On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
michaela.m...@hermetos.com wrote:

Well .. it would be an all-scripts-signed or no-scripts-signed kind of
deal. You can download malicious code everywhere - not only as scripts.
Signed code doesn't protect against malicious or bad code. It only
guarantees that the code is actually from the certificate owner .. and
has not been altered without the signer's consent.

Seems relevant: Java’s Losing Security Legacy,
http://threatpost.com/javas-losing-security-legacy and Don't Sign
that Applet!, https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

Dormann advises "don't sign" so that the code can't escape its sandbox
and stays restricted (malware regularly signs in order to escape it).