Hi all,

Thinker et al. and I had an interesting lunch conversation around Hosted
Packaged Apps.

The first question to ask is about the mixture of versioning/atomic
update/packaging vs. signing trusted content. Web developers were told
that Service Worker is the Web Platform feature that would allow full
control over offline/versioning/atomic update, and the latest solution
to the sad story of HTTP Cache and AppCache. Yet Hosted Packaged Apps
introduce yet another layer of complexity on top of that solution --
we achieve one goal (delivering trusted content from the Web instead
of from FOTA images) at the expense of denying the intended uses of
Service Worker caches/AppCache and even URLs. It also forces us to
figure out, once again, how to update apps delivered in packages, by
introducing delta package formats and the like.

I wonder if there is a better solution out there that could split
signing and deploy/offline/update into two independent problems and
deal only with the first. By doing so we could leverage the existing
solutions for the latter.

The second question we had is whether or not HTTPS+CSP could establish
the same level of trust. The answer is probably no, partly because of
the ~300 root CAs in NSS (CNNIC among them), but also because HTTPS
encrypts and signs the streamed files on the fly -- you would not be
able to establish true accountability between the code executed and
the certificate without recording the whole HTTPS session. It also
denies the possibility of third-party review beforehand.

So the lunch conversation boiled down to one question: can we sign the
resources individually? Either by inserting an X-App-Signature HTTP
header or by putting some signature string in the manifest. These
ideas sound utterly familiar from the Privileged Hosted Apps proposal
by the one and only Ben Francis :) but I can't find any written
discussion, nor remember any conversation, that prompted us to decide
on Hosted Packaged Apps over anything based on that proposal. The
thread "Proposal: Privileged Hosted Apps" doesn't seem to address the
proposal directly -- and Hosted Packaged Apps do not solve many of the
problems raised in that thread either (for some of them, like creating
walled gardens, we may never be able to).
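To make the second option concrete, here is a minimal sketch (in
Python, purely illustrative -- the manifest fields, digest format, and
helper function are all made up for this email, not taken from any
proposal): only the manifest itself would carry a signature, and it
pins a digest for every resource it covers, so each resource can still
be fetched and verified individually over plain HTTP.

```python
# Sketch of the "signature string in the manifest" idea from above.
# Assumption (mine, not from the thread): the manifest alone is signed,
# and it pins a SHA-256 digest for every covered resource, so resources
# can be verified one by one as they stream in.
import hashlib
import hmac

# Hypothetical signed manifest for an app; the signature over the
# manifest bytes themselves is omitted here for brevity.
manifest = {
    "name": "RSS Reader 2000",
    "resources": {
        "/index.html": "sha256-" + hashlib.sha256(b"<html>...</html>").hexdigest(),
        "/js/app.js": "sha256-" + hashlib.sha256(b"console.log('hi');").hexdigest(),
    },
}

def verify_resource(path, body):
    """Check a fetched resource against the digest pinned in the signed manifest."""
    expected = manifest["resources"].get(path)
    if expected is None:
        return False  # resource is not covered by the signature at all
    actual = "sha256-" + hashlib.sha256(body).hexdigest()
    return hmac.compare_digest(expected, actual)  # constant-time comparison
```

The point of the sketch: the delivery side (HTTP cache, AppCache,
Service Worker) is left completely untouched; only a trust check is
layered on top, which is exactly the decoupling argued for above.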

Again, the intention here is to figure out a way to decouple
offline/versioning/atomic update from signing, because we already have
enough solutions -- and complexity -- for the first problem.

I wish we had had this conversation back in Mountain View, but I guess
we will have to do it over e-mail here first ...

If there is enough traction, we can try to put forward a proposal
based on the discussion.


Tim

On Sat, Apr 4, 2015 at 9:12 PM, Jonas Sicking <[email protected]> wrote:
> On Thu, Apr 2, 2015 at 5:23 PM, Benjamin Francis <[email protected]> wrote:
>> I have some comments on Jonas' proposed new security model for B2G.
>> Apologies to Jonas if this is a work in progress and not ready for
>> discussion, but it's been on the wiki for a few days now and I think Tim
>> linked to it in his blog post so I figured it was fair game ;)
>>
>> https://wiki.mozilla.org/FirefoxOS/New_security_model
>
> Doh! I forgot to send this to the b2g list. It's definitely up for discussion.
>
>> URLs
>>
>> The proposal says that "The format used for the packaging will be the one
>> defined in the W3C packaging spec draft". In addition to a packaging format
>> that spec [1] proposes a different URL format than the !// system which is
>> discussed here.
>>
>> In the W3C proposal the package is specified in a <link rel="package"
>> href="..." scope="..."> link relation and is an alternative way to fetch a
>> packaged version of a bunch of URLs within a defined URL scope in a single
>> HTTP request. Before trying to separately GET any resources which fall
>> within that scope, the user agent should first check inside the package to
>> see if the resource is included.
>
> I don't see how the W3C proposal would let us put the HTML file in the
> package. It seems to be mainly geared towards putting resources like
> CSS files, scripts and images into a package which is then used by a
> freestanding HTML page.
>
> In particular, in order to load something from a package, the package
> must first be declared using a <meta> tag in the HTML file. But in
> order to see that <meta> tag we must first load the HTML file, which
> must thus be loaded from outside the package.
>
> As far as I can tell, the W3C proposal also makes URLs significantly
> more complex. Each resource now has two URLs. One URL like
> "https://website.com/RSSReader2000/picture.jpg", which is the URL that
> the JS and the DOM see; the second URL is something like
> "https://website.com/RSSReader2000.pak#url=index.html", which is the
> URL that can actually be used to load something from
> the server.
>
> This duality of URLs seems like it's a huge source of complexity. If I
> have a URL, how do I know which type it is? How big are the changes
> needed in all the pieces of software that use URLs, in order to track
> which type of URL is used where? It also seems like a lot of
> logic overhead for developers.
>
> In short, I think the W3C draft defines a good packaging format. But
> its URL handling seems pretty broken.
>
>> Signing
>>
>> The Streamable Package Format is designed to be consistent with multipart
>> media types. Should we therefore consider using the multipart/signed format
>> from RFC 1847 [2] (e.g. [3]) rather than include separate signature files?
> Note that this would allow the HTTP headers of the resources to be
> part of the signed resource.
>>
>> Are you expecting individual parts (files) in the package to be signed (each
>> part could be a multipart/signed), or for the package to be signed as a
>> whole (the package would be a multipart/signed)? If it's the whole package,
> would that affect its streamability? Does the whole package have to be
>> verified before any of the resources can be used?
>
> I don't really have an opinion about rfc 1847 vs. what we use today.
> But a requirement is that each resource is individually signed. Since,
> as you note, the package can't be streamed unless we can verify each
> part separately.
>
> I'd prefer to leave the signing format up to the crypto team.
>
>> Scope
>>
>> You point out that a signed app should run in its own process and its iframe
>> should only be allowed to navigate to URLs which return signed resources.
>> This sounds tricky, but it also sounds potentially related to the
>> "navigation scope" we have discussed around web apps.
>>
>> Note that it might not be as simple as only allowing the browsing context to
>> navigate to URLs of resources which came from the package. We may want to
>> create Gaia apps which have dynamic URLs like
>> http://contacts.firefox.com/contacts/123 which the Service Worker intercepts
>> and generates a page for, but which were not one of the static resources
>> included in the package.
>
> Keep in mind that in order for a SW to be initiated the navigated-to
> URL needs to be in the scope of the SW. And I expect that for signed
> packages the scope will be that of the package. Otherwise I don't see
> any big problems. I.e. I think we can allow navigating to
> "https://contacts.firefox.com/contacts.pak!//contact/123" even if no
> "contact/123" resource exists in the package.
>
> The only issue is that such a URL is device specific. I.e. the user
> couldn't share that URL with a friend and have the friend load it.
> That's not package specific in any way, but is always the case if you
> use a SW to generate URLs that don't exist and don't have a server
> to generate the same URLs.
>
>> Permissions
>>
>> If we want to grant permissions to apps which are not installed, I think we
>> need to at least re-visit all the permissions which have the default granted
>> permission as "Allow" [4].
>
> Indeed.
>
>> Currently our permissions system assumes an
>> implicit level of trust from the user from the act of installing an app.
>> Allowing a permission to be used simply by navigating a web page removes
>> this implicit opt-in from the user and puts a lot more responsibility on
>> code reviewers at Mozilla.
>
> I don't agree with this. We never intended the act of installing as
> something which the user should think of as making a security
> decision. I.e. the user was never intended to think "is this safe"
> before clicking the "yes" button. Which is why the install UI doesn't
> inform the user of anything security related.
>
>> Updates
>>
>> If the cache header of a particular resource has expired, could Gecko be
>> smart enough to just download that one resource from the package (streaming
>> the package until it comes across the resource it's looking for) rather than
>> having to download the whole package or use some complex incremental diffing
>> system?
>
> It's important that we don't mix and match content from different
> versions of a package. This isn't signing specific, but important for
> packages in general.
>
>> Origin
>>
>> Are signed resources from the package always considered cross-origin with
>> other resources on the same server?
>>
>> Could a static signed JavaScript file at
>> http://contacts.firefox.com/js/script.js do an XHR to a dynamic REST API URL
>> at http://contacts.firefox.com/contacts/123?
>
> Some of this stuff needs to be figured out still. But we definitely
> can't allow unsigned *pages* on the same server to be considered
> same-origin. Since that would mean that an unsigned page could reach
> in to a signed page and evaluate code there. Thus gaining the
> permissions of the signed page.
>
> But I would indeed like the signed content to be able to do
> same-origin XHR requests to the server they were served from without
> having to use CORS.
>
>> Manifest
>>
>> Could these new Firefox Apps maybe use the W3C web app manifest format as a
>> base, and use vendor prefixed proprietary properties where necessary? (e.g.
>> moz_permissions). Seems like an opportunity for a fresh start.
>
> I don't think this is a fresh start since my intent is to convert all
> the existing packaged content in our marketplace to this new package
> format.
>
> But I do indeed think that we should support both W3C manifests and
> our own manifest everywhere where we support manifests.
>
> / Jonas
> _______________________________________________
> dev-b2g mailing list
> [email protected]
> https://lists.mozilla.org/listinfo/dev-b2g