Re: Fingerprinting Guidance for Web Specification Authors

2015-12-01 Thread Jeffrey Walton
On Tue, Dec 1, 2015 at 11:52 AM, Arthur Barstow  wrote:
> Editors, All - please see "Fingerprinting Guidance for Web Specification
> Authors" 
> and reflect it in your spec, accordingly.

Tracking can be a tricky problem because it occurs at many layers in a
typical stack. It starts when the first TCP SYN is sent. If browsers
use an underlying transport like TCP/IP, should something be said
about it (i.e., is it in scope or out of scope)?

ETags were not mentioned in the document. In the context of the PING
(Privacy Interest Group), I would expect to see a treatment.
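
For anyone who hasn't seen the trick, here is a minimal sketch of how
ETag revalidation can be abused as a persistent identifier (the server
framework, port and identifier scheme are mine, purely illustrative):

    // Illustrative only: a server that (ab)uses the ETag cache validator as a
    // per-visitor identifier. On the first visit it mints a unique ETag; on
    // revalidation the browser echoes it back in If-None-Match.
    import { createServer } from "node:http";
    import { randomUUID } from "node:crypto";

    createServer((req, res) => {
      const echoed = req.headers["if-none-match"];
      const visitorId = typeof echoed === "string" ? echoed : `"${randomUUID()}"`;
      // visitorId can now be correlated across visits without any cookies
      res.setHeader("ETag", visitorId);
      res.setHeader("Cache-Control", "private, max-age=0, must-revalidate");
      res.end("cacheable resource");
    }).listen(8080);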

Jeff



Re: The futile war between Native and Web

2015-02-19 Thread Jeffrey Walton
  I am not sure about that...

Data has three states:

  (1) Data in storage
  (2) Data on display
  (3) Data in transit

Because browsers can't authenticate servers with any degree of
certainty, they have lost the data-in-transit state. That leaves a
poor choice of options, like side-loading over location-limited
channels. Side-loading and location-limited channels do not scale
well.

Another option is to allow the browser to handle the lower value data
and accept the risk. That's what US financial institutions do so they
don't lose customers.

The final option is to put your trust in the browser platform.
That's what many people are happy to do. But it's not one size fits
all, and it has gaps that become pain points when data sensitivity is
above trivial or low.

 I think it is possible to make a web site safe.  In
 order to achieve that, we need to make sure that

 a) the (script) code doesn't misbehave (=CSP);
 b) the integrity of the (script) code is secured on the server and while
 in transit;

I think these are necessary preconditions, but they are not sufficient
on their own.

For what it's worth, I'm just the messenger. There are entire
organizations with Standard Operating Procedures (SOPs) built around
the stuff I'm talking about. I'm telling you what they do based on my
experiences.

Jeff

On Thu, Feb 19, 2015 at 3:55 PM, Michaela Merz
michaela.m...@hermetos.com wrote:

 I am not sure about that. Based on the premise that the browser itself
 doesn't leak data, I think it is possible to make a web site safe.  In
 order to achieve that, we need to make sure that

 a) the (script) code doesn't misbehave (=CSP);
 b) the integrity of the (script) code is secured on the server and while
 in transit;

 I believe both of those imperative necessities are achievable.

 Michaela


 On 02/19/2015 01:43 PM, Jeffrey Walton wrote:
 On Thu, Feb 19, 2015 at 1:44 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:
 * Jeffrey Walton wrote:
 Here's yet another failure that Public Key Pinning should have
 stopped, but the browser's rendition of HPKP could not stop because of
 the broken security model:
 http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/.
 In this story the legitimate user with full administrative access to the
 systems is Lenovo. I do not really see how actual user agents could have
 stopped anything here. Timbled agents that act on behalf of someone
 other than the user might have denied users their right to modify their
 system as Lenovo did here, but that is clearly out of scope of browsers.
 Like I said, the security model is broken and browser based apps can
 only handle low value data.



Re: The futile war between Native and Web

2015-02-19 Thread Jeffrey Walton
On Mon, Feb 16, 2015 at 3:34 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Sun, Feb 15, 2015 at 10:59 PM, Jeffrey Walton noloa...@gmail.com wrote:
 For the first point, Pinning with Overrides
 (tools.ietf.org/html/draft-ietf-websec-key-pinning) is a perfect
 example of the wrong security model. The organizations I work with did
 not drink the Web 2.0 Kool-Aid, and it's not acceptable to them that an
 adversary can so easily break the secure channel.

 What would you suggest instead?

Sorry to dig up an old thread.

Here's yet another failure that Public Key Pinning should have
stopped, but the browser's rendition of HPKP could not stop because of
the broken security model:
http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/.

Jeff



Re: The futile war between Native and Web

2015-02-19 Thread Jeffrey Walton
On Thu, Feb 19, 2015 at 12:15 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Feb 19, 2015 at 6:10 PM, Jeffrey Walton noloa...@gmail.com wrote:
 On Mon, Feb 16, 2015 at 3:34 AM, Anne van Kesteren ann...@annevk.nl wrote:
 What would you suggest instead?

 Sorry to dig up an old thread.

 Here's yet another failure that Public Key Pinning should have
 stopped, but the browser's rendition of HPKP could not stop because of
 the broken security model:
 http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/.

 That does not really answer my questions though.

Good point.

Stop letting externalities control critical security parameters
unmolested, since an externality is neither the origin nor the user.

HPKP has a reporting mode, but a pinset broken by a locally installed
trust anchor is a MUST NOT report. Broken pinsets should be reported
to the user and the origin so the browser is no longer complicit in
covering up for the attacker.
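
To make the reporting idea concrete, here is a rough sketch of serving
HPKP in report-only mode (the pin-sha256 values and report endpoint are
placeholders, not real pins):

    // Sketch only: emit HPKP in report-only mode so pin-validation failures are
    // reported to the origin instead of being silently swallowed.
    import { createServer } from "node:http";

    createServer((req, res) => {
      res.setHeader(
        "Public-Key-Pins-Report-Only",
        'pin-sha256="base64+primary+pin+placeholder="; ' +
          'pin-sha256="base64+backup+pin+placeholder="; ' +
          'max-age=5184000; report-uri="https://example.com/hpkp-report"'
      );
      res.end("ok");
    }).listen(8443);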

Jeff



Re: The futile war between Native and Web

2015-02-19 Thread Jeffrey Walton
On Thu, Feb 19, 2015 at 4:31 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Feb 19, 2015 at 10:05 PM, Jeffrey Walton noloa...@gmail.com wrote:
 For what it's worth, I'm just the messenger. There are entire
 organizations with Standard Operating Procedures (SOPs) built around
 the stuff I'm talking about. I'm telling you what they do based on my
 experiences.

 From your arguments though it sounds like they would be fine with
 buying PCs from Lenovo with installed spyware, which makes it all
 rather dubious. You can't cite the Lenovo case as a failure of
 browsers when it's a compromised client.


No :) The organizations I work with have SOPs in place to address
that. They would not be running an unapproved image in the first
place.

*If* the user installed a CA for interception purposes, then yes, I
would blame the platform. The user does not set organizational
policies, and it's not acceptable that the browser allows the secure
channel to be subverted by an externality.

I think the secret ingredient that is missing from the browser secret
sauce is a Key Usage of INTERCEPTION. This way, a user who installs a
certificate without INTERCEPTION won't be able to use it for
interception, because the browser won't break a known-good pinset
without it. And users who install one with INTERCEPTION will know what
they are getting. I know it sounds like Steve Bellovin's Evil Bit RFC
(the April Fools' Day RFC), but that's what the security model forces
us into, because we can't differentiate between the good bad guys and
the bad guys.
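
Purely as an illustration of the idea (no such key usage exists today;
the names and data shapes below are invented), the check would look
roughly like this:

    // Hypothetical sketch: only let a user-installed trust anchor override a
    // known-good pinset when its certificate carries an (invented) INTERCEPTION
    // key usage. Nothing like this exists in X.509 or in any browser today.
    interface LocalTrustAnchor {
      userInstalled: boolean;
      keyUsages: string[];   // e.g. parsed from the certificate's extensions
    }

    function mayOverridePinset(anchor: LocalTrustAnchor): boolean {
      return anchor.userInstalled && anchor.keyUsages.includes("INTERCEPTION");
    }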

In native apps (and sometimes hybrid apps), we place a control to
ensure that does not happen. We are not encumbered by the broken
security model.

Jeff



Re: The futile war between Native and Web

2015-02-16 Thread Jeffrey Walton
On Mon, Feb 16, 2015 at 11:19 AM, Anders Rundgren
anders.rundgren@gmail.com wrote:
 ...

 You would anyway end up with proprietary AppStores with granted Apps and
 then I don't really see the point of insisting on using web technology anymore.
 General code-signing like that used in Windows applications doesn't help, it is
 just one more OK button to click before running.

The interesting thing about app stores and code signing is that they
provide vendor lock-in and maintain revenue streams. So while they have
[sometimes] questionable value, they are something we are going to see
a lot more of as vendors unofficially use them as a tool to further
their internal agendas.

Installable Web Apps effectively turn any web server into an app
store. Look at how many vendors are supporting them: the answer is
Mozilla and Chrome.

Jeff



Re: The futile war between Native and Web

2015-02-16 Thread Jeffrey Walton
On Mon, Feb 16, 2015 at 3:17 AM, Florian Bösch pya...@gmail.com wrote:
 On Mon, Feb 16, 2015 at 9:08 AM, Jeffrey Walton noloa...@gmail.com wrote:

 I'd hardly consider an account holder's data as high value. Medium at
 best and likely low value. But that's just me.

 Of course if the data is compromised it means that an attacker can also
 remote-control your e-banking interface, and issue payments and so forth.
 I'm sure that's not high value either?

No, that's definitely not high value, based on my experience with three
US financial firms. In US financial services, those losses are simply
passed on to shareholders. Risk is democratized, reward is privatized.

Perhaps you should talk to other security architects with experience
in financial services and see what they have to say.

Jeff



Re: The futile war between Native and Web

2015-02-16 Thread Jeffrey Walton
On Mon, Feb 16, 2015 at 2:15 AM, Florian Bösch pya...@gmail.com wrote:
 On Mon, Feb 16, 2015 at 8:09 AM, Anders Rundgren
 anders.rundgren@gmail.com wrote:

 Unfortunately this is wrong and is why I started this thread. Mobile
 banking applications in Europe are usually featured as Apps.
 This has multiple reasons; one is that there's no way to deal with
 client-side PKI and secure key storage in the mobile web.


 Well Postfinance, Credit Suisse and UBS all have browser based e-banking
 solutions. Some of them have Apps (usually on iOS/Android) in the
 app-stores, but these are usually just web-widgets put in a container so
 they can put it on the store.

I'd hardly consider an account holder's data as high value. Medium at
best and likely low value. But that's just me.

Jeff



Re: The futile war between Native and Web

2015-02-15 Thread Jeffrey Walton
 In practice this has proved to be wrong although the reasons vary from lack
 of standards for
 the platform feature to support,

I find there are two problems with browser-based apps. The first is the
security model, and the second is anemic security opportunities.

For the first point, Pinning with Overrides
(tools.ietf.org/html/draft-ietf-websec-key-pinning) is a perfect
example of the wrong security model. The organizations I work with did
not drink the Web 2.0 Kool-Aid, and it's not acceptable to them that an
adversary can so easily break the secure channel.

For the second point, and as a security architect, I regularly reject
browser-based apps that operate on medium- and high-value data because
we can't place the security controls needed to handle the data.
Browser-based apps are fine for low-value data.

An example of the lack of security controls is device provisioning and
client authentication. We don't have protected or isolated storage,
browsers can't safely persist provisioning shared secrets, secret
material is extractable (even if marked non-extractable), browsers
can't handle client certificates, browsers are more than happy to
cough up a secret to any server with a certificate or public key (even
the wrong ones), ...
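
As one concrete example of the "marked non-extractable" point, this is
the WebCrypto pattern I mean (a browser-side sketch; the algorithm and
message are arbitrary):

    // Sketch: generate a key marked non-extractable. crypto.subtle.exportKey()
    // will reject for it, but that is an API-level promise, not protected or
    // isolated storage for the provisioning secret.
    async function demo(): Promise<void> {
      const key = (await crypto.subtle.generateKey(
        { name: "HMAC", hash: "SHA-256" },
        false,                         // extractable: false
        ["sign", "verify"]
      )) as CryptoKey;
      const sig = await crypto.subtle.sign(
        "HMAC",
        key,
        new TextEncoder().encode("provisioning message")
      );
      console.log(new Uint8Array(sig).byteLength, "byte MAC");
      // crypto.subtle.exportKey("raw", key) rejects with InvalidAccessError
    }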

For medium- and high-value data, that usually leaves hybrid and native
apps. Even with high-value data and a native app, there's often
non-trivial residual risk that forces the app into risk acceptance.

 Yet another difficulty is that the browser vendors and the market
 occasionally have diverging
 interests and priorities, leaving the latter lot in a very unfavorable
 situation w.r.t. innovation.

Guys like me don't have a dog in that fight. We don't care about the
bells and whistles. We just want to place security controls
commensurate with the data sensitivity level.

Jeff

On Sun, Feb 15, 2015 at 6:19 AM, Anders Rundgren
anders.rundgren@gmail.com wrote:
 In theory browsers can support any kind of platform-related function, right?

 In practice this has proved to be wrong although the reasons vary from lack
 of standards for
 the platform feature to support, to security and trust models
 involving other parties
 than the user and the site connected to.   In addition, the concept of
 trusted web code still
 doesn't exist and personally I doubt that it will be here anytime soon, if
 ever.  Permissions do
 not address code trustability either.

 Yet another difficulty is that the browser vendors and the market
 occasionally have diverging
 interests and priorities, leaving the latter lot in a very unfavorable
 situation w.r.t. innovation.

 To avoid TL;DR.  A browser can do things the native level cannot but this is
 equally applicable
 the other way round so an obvious solution is burying the hatchet and
 rather try to figure out
 how these great systems could work in concert!  Here is a concrete
 suggestion:

 https://lists.w3.org/Archives/Public/public-web-intents/2015Feb/.html




Re: =[xhr]

2014-11-27 Thread Jeffrey Walton
 I think there are several different scenarios under consideration.

 1. The author says Content-Length 100, writes 50 bytes, then closes the 
 stream.
 2. The author says Content-Length 100, writes 50 bytes, and never closes the 
 stream.
 3. The author says Content-Length 100, writes 150 bytes, then closes the 
 stream.
 4. The author says Content-Length 100 , writes 150 bytes, and never closes 
 the stream.

Using a technique similar to (2) will cause some proxies to hang.
http://www.google.com/search?q=proxy+hang+content-length+wrong
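
A minimal sketch of scenario (2), which is the case that trips proxies
up (the server framework and port are just for illustration):

    // Sketch of scenario (2): declare Content-Length: 100, write 50 bytes, and
    // never close the stream. A proxy that trusts the header will wait for the
    // missing 50 bytes until its own timeout fires.
    import { createServer } from "node:http";

    createServer((req, res) => {
      res.setHeader("Content-Type", "text/plain");
      res.setHeader("Content-Length", "100");
      res.write("x".repeat(50));   // half the promised body
      // res.end() intentionally never called
    }).listen(8080);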



Re: What I am missing

2014-11-18 Thread Jeffrey Walton
On Wed, Nov 19, 2014 at 12:35 AM, Michaela Merz
michaela.m...@hermetos.com wrote:
 Well .. it would be a all scripts signed or no script signed kind of a
 deal. You can download malicious code everywhere - not only as scripts.
 Signed code doesn't protect against malicious or bad code. It only
 guarantees that the code is actually from the certificate owner .. and
 has not been altered without the signer's consent.

Seems relevant: Java’s Losing Security Legacy,
http://threatpost.com/javas-losing-security-legacy and Don't Sign
that Applet!, https://www.cert.org/blogs/certcc/post.cfm?EntryID=158.

Dormann advises not signing, so that the code can't escape its sandbox
and stays restricted (malware regularly signs in order to escape).



Re: PSA: publishing new WD of Clipboard API and events on Sept 18

2014-09-16 Thread Jeffrey Walton
On Mon, Sep 15, 2014 at 9:06 PM, Daniel Cheng dch...@google.com wrote:
 Again, what are you trying to defend against? Why is it beneficial to try to
 block this?
At minimum, it's information leakage. If the data has value such that
at least one leg requires HTTPS, then traversing some legs with HTTP
is a security defect. That would be CWE-311
(http://cwe.mitre.org/data/definitions/311.html).

The benefits are customary confidentiality and privacy protections. In
addition, unauthorized parties will be restrained from injecting into
the clipboard.

In a post-Snowden era, we have a good idea of how widespread some of
these problems are. So addressing the problem is consistent with the
web design principles, in particular 3.1. Solve Real Problems [0].

I also believe it violates at least two web design principles. First
is 3.2. Priority of Constituencies [1], and second is 3.3. Secure
By Design [2]. It violates the first principle because the
user asked for a specific feature, but it was not delivered. It
violates the second because it's not secure by design.

To volley it back over the net, can you think of users or
organizations who classify their data as valuable, and then say it's OK
to send it over HTTP? (I often use the contrapositive to
envision something from a different view.)

And to be clear: I'm not begging for HTTPS everywhere (though I would
like to :). I ask that if HTTPS is selected on some legs, then it
should be used on all legs. Don't mix and match because a third party
cares less about the data than the user or organization.
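
Conceptually, all I'm asking for is a check like the following before
clipboard data leaves the page (a sketch; the function name and policy
are mine, not anything from the spec):

    // Sketch: before shipping clipboard data off the page, refuse any leg that
    // would downgrade an HTTPS origin to plain HTTP.
    function clipboardEgressAllowed(destination: string): boolean {
      const dest = new URL(destination, location.href);
      if (location.protocol === "https:" && dest.protocol !== "https:") {
        return false;   // https://www.example.com -> http://www.ads.com is rejected
      }
      return true;
    }

If every leg honors the scheme the origin chose, the user gets the
consistent policy described above.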

 Again, what are you trying to defend against? Why is it beneficial to try to
 block this?

And to address the potential block: please don't do that because of
me. Move the security concerns along with the feature set.

[0] http://www.w3.org/TR/html-design-principles/#solve-real-problems
[1] http://www.w3.org/TR/html-design-principles/#priority-of-constituencies
[2] http://www.w3.org/TR/html-design-principles/#secure-by-design

 On Sep 15, 2014 3:18 PM, Jeffrey Walton noloa...@gmail.com wrote:

 On Mon, Sep 15, 2014 at 5:26 PM, Hallvord R. M. Steen
 hst...@mozilla.com wrote:
http://dev.w3.org/2006/webapi/clipops/clipops.html
  Please forgive my ignorance. But I don't see a requirement that data
  egressed from the local machine to be protected with SSL/TLS.
 
  I can certainly add a note *encouraging* encryption, but it's not
  something we can require in a meaningful sense - it's several layers away
  from the parts of the process the spec is about.
 
  Also, if the origin uses a secure scheme like HTTPS, then shouldn't
  the script's also require the same? That is, shouldn't the spec avoid
  fetching from https://www.example.com and then shipping clipboard data
  off to http://www.ads.com?
 
  As an end user, I would go absolutely nuts if a computer was behaving
  inconsistently in apparently random ways like that. I'm pretty sure that no
  matter how security conscious you are, you probably copy and paste data
  between HTTPS and HTTP pages several times every month.. Having the browser
  block that because it pretends to know that some random data is important
  when I know it's not doesn't sound user friendly at all.

 Well, usually the attacker has to work for a downgrade attack :)

 Wouldn't it be better for the user if a consistent policy were applied
 across the board when handling their data? If one leg of the
 connection uses HTTPS, then all legs must use it. If I were a user and
 visited a site with HTTPS, then that's what I would expect when moving
 my data around.

 Proper handling of the data shifts the onus to the webmaster and
 developers, but webmasters and developers are in a better position to
 manage these sorts of things. Its not really a burden on the
 technology folks - they just have to pay attention to the details. I
 don't think that's asking too much.

 And the clipboard standard is new, so its a great opportunity to avoid
 the patching used to address gaps. If the gaps are addressed early,
 then they won't be an issue in the future.



Re: PSA: publishing new WD of Clipboard API and events on Sept 18

2014-09-15 Thread Jeffrey Walton
On Mon, Sep 15, 2014 at 4:27 AM, Arthur Barstow art.bars...@gmail.com wrote:
 This is a heads-up Hallvord intends to publish a WD of Clipboard API and
 events and he is targeting a publication date of September 18. The ED

   http://dev.w3.org/2006/webapi/clipops/clipops.html

 If anyone has any comments or concerns about this plan, please let us know
 before the 18th.
Please forgive my ignorance, but I don't see a requirement that data
egressed from the local machine be protected with SSL/TLS.

Also, if the origin uses a secure scheme like HTTPS, then shouldn't the
scripts also be required to do the same? That is, shouldn't the spec
avoid fetching from https://www.example.com and then shipping clipboard
data off to http://www.ads.com?

Is this intended?



Re: PSA: publishing new WD of Clipboard API and events on Sept 18

2014-09-15 Thread Jeffrey Walton
On Mon, Sep 15, 2014 at 5:26 PM, Hallvord R. M. Steen
hst...@mozilla.com wrote:
   http://dev.w3.org/2006/webapi/clipops/clipops.html
 Please forgive my ignorance. But I don't see a requirement that data
 egressed from the local machine to be protected with SSL/TLS.

 I can certainly add a note *encouraging* encryption, but it's not something 
 we can require in a meaningful sense - it's several layers away from the 
 parts of the process the spec is about.

 Also, if the origin uses a secure scheme like HTTPS, then shouldn't
 the script's also require the same? That is, shouldn't the spec avoid
 fetching from https://www.example.com and then shipping clipboard data
 off to http://www.ads.com?

 As an end user, I would go absolutely nuts if a computer was behaving 
 inconsistently in apparently random ways like that. I'm pretty sure that no 
 matter how security conscious you are, you probably copy and paste data 
 between HTTPS and HTTP pages several times every month.. Having the browser 
 block that because it pretends to know that some random data is important 
 when I know it's not doesn't sound user friendly at all.

Well, usually the attacker has to work for a downgrade attack :)

Wouldn't it be better for the user if a consistent policy were applied
across the board when handling their data? If one leg of the
connection uses HTTPS, then all legs must use it. If I were a user and
visited a site with HTTPS, then that's what I would expect when moving
my data around.

Proper handling of the data shifts the onus to the webmaster and
developers, but webmasters and developers are in a better position to
manage these sorts of things. It's not really a burden on the
technology folks - they just have to pay attention to the details. I
don't think that's asking too much.

And the clipboard standard is new, so it's a great opportunity to avoid
the patching used to address gaps. If the gaps are addressed early,
then they won't be an issue in the future.



Re: Proposal for a Permissions API

2014-09-04 Thread Jeffrey Walton
On Thu, Sep 4, 2014 at 4:24 PM, Florian Bösch pya...@gmail.com wrote:
 On Thu, Sep 4, 2014 at 10:18 PM, Marcos Caceres mar...@marcosc.com wrote:

 This sets up an unrealistic straw-man. Are there any real sites that would
 need to show all of the above all at the same time?

 Let's say you're writing a video editor, you'd like:

 To get access to the locations API so that you can geotag the videos
 Get access to the notifications API so that you can inform the user when
 rendering has finished.
 Get user media to capture material
 Put a window in fullscreen (perhaps on a second monitor) or to view footage
 without other decorations

 Of course it's a bit contrived, but it's an example of where we're steering
 to. APIs don't stop being introduced as of today, and some years down the
 road, I'm sure more APIs that require permissions will be introduced, which
 increases the likelihood of moving such an example from the realm of
 unlikely to pretty common.
This could make a good case study.

A site that continually prompts the user could negatively affect the
user experience. If the designers of the site appreciate that fact,
then they might ask for fewer permissions. They might even segregate
functionality into different areas of the site with different
permission requirements to lessen the burden on the user. It's kind of
like forced attrition toward the principle of least privilege.
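
As a sketch of what that segregation could look like in page code
(assuming a query-style call such as navigator.permissions.query(); the
exact shape may differ from the proposal, and the permission name is
just an example):

    // Sketch (not from the proposal itself): check permission state up front and
    // only prompt when the user actually reaches the feature that needs it.
    async function maybeGeotag(): Promise<GeolocationPosition | null> {
      const status = await navigator.permissions.query({ name: "geolocation" });
      if (status.state === "denied") return null;   // don't nag the user
      return new Promise((resolve) =>
        navigator.geolocation.getCurrentPosition(resolve, () => resolve(null))
      );
    }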

If there are no hurdles or obstacles, then sites will ask for
everything whether they need it or not. The web will degenerate into
an Android flashlight app.

Given that browsers are going to be executing high-value code and
handling high-value data (cf. secure origins), and given the two
choices above, I think I would rather have the prompts.



Re: Blocking message passing for Workers

2014-08-11 Thread Jeffrey Walton
On Mon, Aug 11, 2014 at 7:52 PM, David Bruant bruan...@gmail.com wrote:
 Le 12/08/2014 00:40, Glenn Maynard a écrit :

 On Sat, Aug 9, 2014 at 9:12 AM, David Bruant bruan...@gmail.com wrote:

 This topic is on people minds [1]. My understanding of where we're at is
 that ECMAScript 7 will bring syntax (async/await keywords [2]) that looks
 like sync syntax, but acts asynchronously. This should eliminate the need
 for web devs for blocking message passing primitives for workers.


 Syntax sugar around async is not a replacement for synchronous APIs.

 I have yet to find a use case for hand-written code that requires sync APIs
 and cannot be achieved with async programming.

Async complicates diagramming and modelling because you need a state
machine instead of a simple ladder diagram.

One of the reasons cited for the Heartbleed failure was
standards-imposed complexity. Forcing async when sync will suffice
surely complicates some programs.

I also find the sync code easier to audit.
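
For contrast, here is a rough sketch of the async pattern being
advocated (the worker protocol and names are mine): the call site reads
like a ladder, but every await is still a yield point into a hidden
state machine.

    // Sketch: wrap postMessage round-trips in a Promise so await gives
    // ladder-style call sites. The underlying execution is still asynchronous.
    function callWorker(worker: Worker, payload: unknown): Promise<unknown> {
      return new Promise((resolve) => {
        worker.addEventListener("message", (e) => resolve(e.data), { once: true });
        worker.postMessage(payload);
      });
    }

    async function pipeline(worker: Worker): Promise<void> {
      const parsed = await callWorker(worker, { op: "parse" });
      const result = await callWorker(worker, { op: "transform", input: parsed });
      console.log(result);   // looks sequential, runs event-driven
    }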

Jeff



Re: [clipboard] Semi-Trusted Events Alternative

2014-07-26 Thread Jeffrey Walton
On Sat, Jul 26, 2014 at 9:19 AM, Perry Smith pedz...@gmail.com wrote:
 Sorry if this is a lame question but I never understood the dangers of Copy 
 and Paste that the web is trying to avoid.  Can someone explain that to me?

It's a point of data egress. You don't want sensitive information from
one program scraped and egressed by another.

The first program could be a browser and the second program could be
malware. In this case, the malware looks for data placed on the
clipboard by the browser (and hopes to get a username, password,
sensitive document, etc).

Or the data could come from another program, with the browser scraping
it and hauling it off to a site.

Jeff



Re: [clipboard] Semi-Trusted Events Alternative

2014-07-26 Thread Jeffrey Walton
On Sat, Jul 26, 2014 at 9:34 AM, Perry Smith pedz...@gmail.com wrote:

 On Jul 26, 2014, at 8:26 AM, Jeffrey Walton noloa...@gmail.com wrote:

 On Sat, Jul 26, 2014 at 9:19 AM, Perry Smith pedz...@gmail.com wrote:
 Sorry if this is a lame question but I never understood the dangers of Copy 
 and Paste that the web is trying to avoid.  Can someone explain that to me?

 Its a point of data egress. You don't want sensitive information from
 one program scraped and egressed by another.

 The first program could be a browser and the second program could be
 malware. In this case, the malware looks for data placed on the
 clipboard by the browser (and hopes to get a username, password,
 sensitive document, etc).

 Or, it could be another program with the browser scraping the data and
 hauling it off to a site.

 I thought about that.  So it is not so much the Copy and Paste operations as 
 much as being able to get the content of the clipboard. ?

Yes, I believe so. The clipboard is a shared resource with little to
no restrictions.

One of the check boxes on a security evaluation is how a program
handles the clipboard and copy/paste (or at least on the ones I used
when doing security architecture work). It's one of those dataflows
that could carry higher-than-expected data sensitivity, like a
single sign-on password.

Also, "data egress" may have been a bad choice of words. In this case,
I think it's more about data collection. It's hard to stop a web
browser from opening a socket ;)

Two additional clipboard features that would be nice are: (1) one-shot
copy/paste: delete the password from the clipboard after
retrieving it from the password manager and pasting it into a password
box; and (2) timed copy/paste: expire the data after 10 seconds or
so. Both should allow the legitimate use cases, and narrow the window
for the abuse cases.
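
A rough sketch of (2), assuming an asynchronous clipboard API along the
lines of navigator.clipboard.writeText()/readText() (the helper name
and TTL are mine):

    // Sketch of "timed copy/paste": put a secret on the clipboard, then clear it
    // after a short window, but only if nothing else has replaced it.
    async function copyWithExpiry(secret: string, ttlMs = 10_000): Promise<void> {
      await navigator.clipboard.writeText(secret);
      setTimeout(async () => {
        const current = await navigator.clipboard.readText().catch(() => "");
        if (current === secret) {
          await navigator.clipboard.writeText("");   // best-effort expiry
        }
      }, ttlMs);
    }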

Jeff



WebApp installation via the browser

2014-05-30 Thread Jeffrey Walton
I have a question about Use Cases for Installable WebApps located at
https://w3c-webmob.github.io/installable-webapps/.

Under section Add to Homescreen, the document states:

... giving developers the choice to tightly integrate their web
applications into the OS directly from the Web browser is
still somewhat new...

... [Installable WebApps] are different in that the
applications are installed directly from the browser itself
rather than from an app store...

It sounds to me like the idea is to allow any site on the internet to
become an app store. My observation is that the various app stores
provide vendor lock-in and ensure a revenue stream. It's architected
into the platform. Companies like Apple, Microsoft and RIM likely
*won't* give up the lock-in or the revenue stream (at least not without
a fight).

Are there any platforms providing the feature? Has the feature gained
any traction among the platform vendors?

Thanks in advance.



Re: WebApp installation via the browser

2014-05-30 Thread Jeffrey Walton
On Fri, May 30, 2014 at 9:04 PM, Brendan Eich bren...@mozilla.org wrote:
 Jeffrey Walton wrote:

 Are there any platforms providing the feature? Has the feature gained
 any traction among the platform vendors?

 Firefox OS wants this.
Thanks Brendan.

As a second related question, is an Installable WebApp considered a
side-loaded app?