Re: Intent to implement: CSS Grid Layout Module Level 1

2015-02-03 Thread L. David Baron
On Tuesday 2015-02-03 19:40 +, Mats Palmgren wrote:
 On 02/03/2015 04:51 PM, Jonas Sicking wrote:
 Can we also expose this to certified apps so that our Gaia developers can
 start experimenting with using grids?
 
 There isn't any useful layout code in the tree yet.  The ETA for
 anything that's worth experimenting with is still 3-4 weeks or so.
 
 I think we can enable the pref for Nightly/Aurora at that point.

Getting user feedback on this is great.  But we should make sure to
communicate clearly about how stable the code is, to avoid
dependencies on code that's likely to change.  It's also good to set
clear expectations about when code that's on nightly/aurora is
likely to end up on beta and release.

-David

-- 
  L. David Baron                         http://dbaron.org/
  Mozilla                          https://www.mozilla.org/
 Before I built a wall I'd ask to know
 What I was walling in or walling out,
 And to whom I was like to give offense.
   - Robert Frost, Mending Wall (1914)




Re: PSA: Non-unified builds no longer occurring on central/inbound and friends

2015-02-03 Thread ISHIKAWA,chiaki
On 2015/02/03 15:24, Mike Hommey wrote:
 On Tue, Feb 03, 2015 at 02:27:52PM +0900, ISHIKAWA,chiaki wrote:
 I did a non-unified build and saw the expected failure.
 This is a summary of what I saw.

 Background:

 I may need to modify and debug basic I/O routines on my local PC, and so
 I want to avoid unnecessary compilation. I use ccache locally to avoid
 recompiling C++ source files that are touched but not modified (files get
 touched yet remain unmodified when I run hg qpop and hg qpush in
 succession to work on different patches; without ccache I would have to
 recompile many files, so ccache helps a lot).

 There are different perspectives on unified compilation.

 Compiler farm users:
 One-time fast compilation is what matters most, so unified compilation
 is a win. (I suspect precompiled headers, -pch, would be a good win, too.)

 Developers who repeatedly edit a small set of files, then compile and link
 many times on a local PC:

 Such a developer may modify only a few files and wants a quick turnaround
 for compiling those few files and for linking.

 Unified compilation actually compiles more lines than he/she wants,
 because of the extra source lines in the unified source files that also
 include his/her modified files. (Correct? Am I missing something here?)
 So he/she may not like unified compilation in such a scenario.
 
 Here's my take on this: yes, we should optimize for build times when
 code is modified.
 
 But here's the thing: in most directories, unified compilation shouldn't
 be making a huge difference. That is, compiling one unified source vs.
 compiling one source shouldn't make a big difference. If it does (and it
 does in some directories like js/src), then the number of unified
 sources in the directory where it's a problem should be adjusted.
 
 Mike

Mike, thank you for the comment.
I suspect this is indeed the case in many directories.
(I mean, unless a change to a single file causes 20 or 30 files to be
compiled as part of a unified source, the overhead is modest. So far, the
upper bound for a single-file change is less than a couple of minutes,
including the link with -gsplit-dwarf.)

I will report back if I find a file that, when touched, causes an
extraordinarily long compilation time by pulling many source files into
its unified compilation.

By the way, I saw Unified_binding_*.cpp files during compilation, and I
suspect they are a different type of unified compilation, since this
unified-bindings compilation seems to occur no matter what the setting of
FILES_PER_UNIFIED_FILE is.
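
(For reference, a minimal sketch of the per-directory knob Mike mentions;
FILES_PER_UNIFIED_FILE is a real moz.build variable, but the file names and
the value below are made up:)

    # moz.build -- hypothetical directory where unified rebuilds are too slow.
    # Lowering FILES_PER_UNIFIED_FILE shrinks each generated Unified_*.cpp,
    # so touching one source file recompiles fewer concatenated neighbours.
    FILES_PER_UNIFIED_FILE = 4  # assumed value; tune per directory

    UNIFIED_SOURCES += [
        'Bar.cpp',
        'Foo.cpp',
    ]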

TIA

CI




Re: Content Vault Feasibility RFC

2015-02-03 Thread Eric Rescorla
This kind of feature comes up frequently, but to the best of my knowledge
(which
I believe is fairly up to date) it is not known how to build a robust
version of this.

To generalize the problem a bit, we have two pieces of software running on
the
user's computer:

A: A confined process running in some sandbox but which has access to
some secret information.
B: A non-confined process running on the same machine but without access
to the secret information.

Both of these processes are attacker controlled and presumed malicious. The
attacker's job is to exfiltrate the secret information from A to B.

You'll note that when phrased this way we're back to the classic confinement
problems and MLS (see, for instance Lampson's A Note on the Confinement
Problem). What makes this so difficult is that even if you close *all* the
explicit
channels between the two processes, we have to contend with covert channels
of which the browser has many.

It's worth noting that we have at best partial solutions in two rather
easier settings:

- We regularly have to contend with cross-origin leakage situations in the
  browser even when the two sides are *not* cooperating, for instance CSS
  history sniffing [0]

- Multitenanted processes where (a) you have much tighter system control and
  (b) the processes aren't cooperating (see, for instance, Ristenpart et al.
  from 2009 [1]).

Given this and the generally low entropy of the data which needs to be
exfiltrated
in these settings, I'm not very enthusiastic about the prospects of this
working.

-Ekr

[0]
https://blog.mozilla.org/security/2010/03/31/plugging-the-css-history-leak/
[1] http://cseweb.ucsd.edu/~hovav/papers/rtss09.html



On Tue, Feb 3, 2015 at 7:44 AM, Olivier Yiptong oyipt...@mozilla.com
wrote:

 [Olivier's RFC quoted in full; snipped -- see the original message below.]

Re: Intent to implement: CSS Grid Layout Module Level 1

2015-02-03 Thread Jonas Sicking
Yes!

Can we also expose this to certified apps so that our Gaia developers can
start experimenting with using grids?

/ Jonas
On Feb 2, 2015 2:25 PM, Mats Palmgren m...@mozilla.com wrote:

 Summary:
 CSS Grid defines a two-dimensional grid-based layout system, optimized for
 user interface design. In the grid layout model, the children of a grid
 container can be positioned into arbitrary slots in a flexible or fixed
 predefined layout grid.

 Bug:
 https://bugzilla.mozilla.org/show_bug.cgi?id=css-grid

 Link to standard:
 CSS Grid Layout Module Level 1
 http://dev.w3.org/csswg/css-grid/

 Platform coverage:
 All Gecko platforms and products.

 Estimated or target release:
 TBD

 Preference behind which this will be implemented:
 layout.css.grid.enabled


 /Mats


Re: Content Vault Feasibility RFC

2015-02-03 Thread rbarnes
It would be helpful if you could comment more on the use cases here.  It's not
entirely clear to me what's motivating this proposal.  In your message, the
"Inline" use case seems to have been truncated, and the "Mobile Applications"
use case seems perfectly well addressed by iframes.

Let's focus on the "Dynamic Tiles" case. As I understand it, the proposal here
is to have a piece of content that is displayed differently based on private
information (e.g., a user's preference ordering).

Given that, it seems like what you're asking for is a black box into which the
untrusted content can write a request saying, "please customize this and render
it without letting me see it."

The problem with this is that the black box needs to be really black.  Nothing 
externally observable about it can change, and it cannot be allowed to emit any 
information itself.  As EKR points out, this is really, really hard.  Even if 
you load everything into a sandbox with no network connectivity, JS can still 
exfiltrate secrets to other JS on the box, by doing things like running the CPU.
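
(To make that concrete, here is a toy sketch -- nothing like real browser
code -- of a CPU-contention channel: one thread modulates load to encode
bits while another times a fixed workload to recover them. All names and
parameters are invented.)

    import threading
    import time

    SLOT = 0.05  # seconds per transmitted bit (assumed)

    def sender(bits):
        # "A": encode each bit as one slot of high or low CPU load.
        for b in bits:
            end = time.perf_counter() + SLOT
            if b:
                while time.perf_counter() < end:  # busy-loop: hog the CPU
                    pass
            else:
                time.sleep(SLOT)                  # idle slot

    def receiver(n, timings):
        # "B": once per slot, time a fixed workload; contention slows '1' slots.
        for _ in range(n):
            t0 = time.perf_counter()
            sum(i * i for i in range(20000))
            elapsed = time.perf_counter() - t0
            timings.append(elapsed)
            time.sleep(max(0.0, SLOT - elapsed))

    secret = [1, 0, 1, 1, 0, 0, 1]
    timings = []
    t = threading.Thread(target=receiver, args=(len(secret), timings))
    t.start()
    sender(secret)
    t.join()
    cutoff = (min(timings) + max(timings)) / 2  # crude midpoint threshold
    print([1 if x > cutoff else 0 for x in timings])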

In other words, the only possible safe way to inject private information into a
page is if there is no dynamism in the page at all -- you just hand the
resources to the browser and say "render as you please."

Given that that sounds pretty far from what you're asking for, I agree with EKR
that this idea is unlikely to be workable.

--Richard


On Tuesday, February 3, 2015 at 11:29:37 AM UTC-5, Eric Rescorla wrote:
 [EKR's message, itself quoting Olivier's RFC, snipped -- see above.]

Content Vault Feasibility RFC

2015-02-03 Thread Olivier Yiptong
# The Content Vault

The purpose of this document is to gather your comments about the feasibility 
of the idea of a Content Vault (CV). After gathering your comments, a more 
formal RFC will be drafted.

I’d like to use your comments to colour the proposal from platform, security,
and privacy perspectives, and from others if needed.

This is about a new Firefox feature that will allow access to privileged
information in a content jail that tries not to let information leak out.
Another descriptive term for this is a "privacy vault".

The state within that vault may be changed, perhaps in a global or per-domain
manner. This sandbox will allow the transformation of DOM elements prior to
rendering, while remaining unavailable to the parent page.

The basic idea is to create a new kind of iframe, with special privileges and
limitations. In some ways, this may be considered the opposite of the HTML5
sandbox (http://www.html5rocks.com/en/tutorials/security/sandboxed-iframes),
whose focus is primarily on integrity; the focus of our solution is on
confidentiality or privacy.

The idea of the content vault was brought to me by Ben Livshits, a Research 
Scientist at Microsoft Research. Ben’s interests are broad, and include 
Security and Privacy. Ben wishes to be involved in this project; we will have 
his input on the matter.

Ben can be found online: http://research.microsoft.com/en-us/um/people/livshits/

## Rationale

Today’s Internet user expects a great level of personalization. Websites 
achieve this personalization by building a relationship with that user, and 
sometimes through third parties. Those websites commonly create a profile for 
that user, append new data with each interaction and often enrich that corpus 
by buying additional data from brokers.

The act of personalization is not inherently wrong and is often desired. User
experiments show that personalization increases user engagement and
satisfaction in the long run. We, after all, expect our computers to be useful
devices, and that involves a degree of personalization. However, it often
comes at the expense of privacy and/or security.

With the idea of a content vault, we may be able to achieve some level of 
personalization while keeping the data within the control of the user agent, 
thus preventing data leaks.

## The Content Vault

This vault would:
• not be accessible from the parent page (similar to x-domain iframes)
• have limited capabilities (e.g. no network access)
• have access to privileged data stored in the UA
• do decisioning in UA without leaking externally
• expose an API only accessible inside a sandbox (e.g. declaratively 
allow for certain lists of items to be re-ordered)

### Privileged Data

At this point, the data the CV has access to is not that relevant.
For illustration purposes, here are some examples of data that would show the 
sensitive nature and utility of such data:
• product purchase history
• content preferences (e.g. +ve or -ve signals for topics)
• absence or presence of signals gathered on the internet

These pieces of data could inform the rendering of the contents of the CV in a
way that keeps the data within the UA. This data would not be otherwise
accessible.

### Vault limitations

The CV would have limited capabilities. For instance, certain API endpoints
will be closed off, e.g. XHR. The idea is to make the runtime for this content
completely self-contained, aside from the rendering to the user.

The vault would only be allowed to do transformations to the DOM content and 
perhaps to modify state within the UA that is only accessible via another vault.

In the same vein as CSP, resources and capabilities for the CV could be
declared ahead of time.

To mitigate information leakage, for instance, resources could be required to 
be declared in advance. Those resources would be loaded and perhaps 
pre-rendered prior to being selected and drawn.

### Vault API

To aid in personalizing content, an API will be made available within the 
vault. This API will only be made available within the CV and may declare 
certain domain permissions.

An example of a potential declarative API:

<ul personalizable="true">
  <li topic="business">...</li>
  <li topic="baseball">...</li>
  <li topic="foobarwidget">...</li>
</ul>

This could trigger the UA to re-order the list based on the user's
preferences, with the most preferred on top and blacklisted topics hidden. The
goal of the surrounding CV is to prevent nosy JavaScript from discerning the
user's preferences from the DOM state.

JavaScript APIs could also be offered.
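
(A toy sketch of the decisioning described above, as it might run inside the
UA; the preference data, topics, and scoring are invented for illustration:)

    # Re-order page-supplied topics by private preference scores, hiding
    # blacklisted ones; the scores stay inside the UA and are never exposed
    # to page JavaScript. All data below is made up.
    prefs = {'baseball': 0.9, 'business': 0.4}   # private to the UA
    blacklist = {'foobarwidget'}

    topics = ['business', 'baseball', 'foobarwidget']  # from the <li> elements
    visible = [t for t in topics if t not in blacklist]
    ordered = sorted(visible, key=lambda t: prefs.get(t, 0.0), reverse=True)
    print(ordered)  # ['baseball', 'business']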

## Application

### Inline
The CV could be used embedded in pages, or in what is considered browser chrome.
An example

### Tiles
It could be used to implement the idea of “Dynamic Tiles” in Firefox (an idea 
coming from Doug Turner’s team). Those tiles would be defined as page fragments 
and potentially scripts obtained from the 

Re: Evaluating the performance of new features

2015-02-03 Thread Gabriele Svelto
On 01/02/2015 13:18, Howard Chu wrote:
 People may say I'm biased since I'm the author of LMDB but I have only
 ever posted objective, reproducible comparisons of LMDB to other
 alternatives. http://symas.com/mdb/#bench
 
 If your typical record sizes are smaller than 1KB and you have more
 writes than reads, LMDB may be a poor choice. If you need to support
 large DBs on 32-bit processors LMDB will be a bad choice.

I had come across LMDB myself some time ago and I remember thinking that
it looked like a very good fit for implementing IndexedDB. I didn't
really dig too much into it as my knowledge of our storage code is
limited and I know I wouldn't have the time to try and hack together a
prototype. I think it would be pretty interesting though.

 If you have heavy reads, LMDB may be an ideal choice. Nothing else
 matches it for small footprint, nothing else matches it for reliability,
 and nothing else matches it for read performance.

It sounds like it would be perfect for practically all the IndexedDB
databases I've come across while working on FxOS.
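
(For anyone who wants to poke at it, a minimal read-heavy sketch using the
Python lmdb binding; the path and sizes are made up, and this only
illustrates the API shape, not an actual storage-layer design:)

    import lmdb

    # Open (or create) an environment; map_size caps the database size.
    env = lmdb.open('/tmp/lmdb-demo', map_size=10 * 1024 * 1024)

    # One write transaction populates a few records...
    with env.begin(write=True) as txn:
        for i in range(100):
            txn.put(b'key:%d' % i, b'value-%d' % i)

    # ...after which reads are cheap: readers take no locks and do not
    # block (or get blocked by) the single writer.
    with env.begin() as txn:
        print(txn.get(b'key:42'))  # b'value-42'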

 Gabriele





Test Informant Report - Week ending Feb 01

2015-02-03 Thread Test Informant

Test Informant report for 2015-02-01.

State of test manifests at revision 940118b1adcd.
Using revision fa91879c8428 as a baseline for comparisons.
Showing tests enabled or disabled between 2015-01-25 and 2015-02-01.

85% of tests across all suites and configurations are enabled.

Summary
---
marionette                    - ↑0↓0   - 92%
mochitest-a11y                - ↑0↓0   - 99%
mochitest-browser-chrome      - ↑67↓18 - 94%
mochitest-browser-chrome-e10s - ↑150↓4 - 61%
mochitest-chrome              - ↑24↓0  - 96%
mochitest-plain               - ↑123↓0 - 84%
mochitest-plain-e10s          - ↑50↓4  - 79%
xpcshell                      - ↑16↓0  - 86%

Full Report
---
http://brasstacks.mozilla.com/testreports/weekly/2015-02-01.informant-report.html




Re: Content Vault Feasibility RFC

2015-02-03 Thread Bobby Holley
Looping in Deian, who was working (is working?) on something like this.

On Tue, Feb 3, 2015 at 9:35 AM, Monica Chew m...@mozilla.com wrote:

 [Monica's message, itself quoting Olivier's RFC, snipped -- see below.]

Re: Content Vault Feasibility RFC

2015-02-03 Thread Monica Chew
Hi Olivier,

I agree with ekr and Richard. There has been a lot of research lately about
how to do personalization in a privacy-preserving manner.

Bloom Cookies: Web Search Personalization without User Tracking, NDSS 2015
http://research.microsoft.com/pubs/238114/BloomCookies.pdf

RePriv: Re-Imagining Content Personalization and In-Browser Privacy, S&P 2011
(Livshits is a co-author on this one)
http://research.microsoft.com/en-us/um/people/livshits/papers/pdf/oakland11.pdf

Privad: Practical Privacy in Online Advertising, NSDI 2011
http://static.usenix.org/event/nsdi11/tech/full_papers/Guha.pdf

Adnostic: Privacy Preserving Targeted Advertising, NDSS 2010
http://crypto.stanford.edu/adnostic/

None of these require what is essentially the equivalent of limited XSS on
behalf of ad networks. Have you evaluated if these work for you?
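
(As a concrete taste of the first approach, a toy Bloom-filter profile in the
spirit of the Bloom Cookies paper; the sizes and hashing below are invented,
and the real scheme additionally injects noise for deniability:)

    import hashlib

    M, K = 256, 4  # bit-array size and hash count (assumed values)

    def positions(item):
        h = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(h[2*i:2*i + 2], 'big') % M for i in range(K)]

    profile = [0] * M
    for topic in ('baseball', 'privacy'):  # the user's actual interests
        for p in positions(topic):
            profile[p] = 1

    # A server sees only the bit array; membership tests admit false
    # positives, which is what gives the user plausible deniability.
    print(all(profile[p] for p in positions('baseball')))  # True
    print(all(profile[p] for p in positions('business')))  # almost surely False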

Thanks,
Monica

On Tue, Feb 3, 2015 at 7:44 AM, Olivier Yiptong oyipt...@mozilla.com
wrote:

 [Olivier's RFC quoted in full; snipped -- see the original message above.]

MemShrink Meeting - Today, 3 Feb 2015 at 2:00pm PST

2015-02-03 Thread Jet Villegas
Today's MemShrink meeting is brought to you by areweslimyet.com:
https://bugzilla.mozilla.org/show_bug.cgi?id=1100253

The wiki page for this meeting is at:
   https://wiki.mozilla.org/Performance/MemShrink

Agenda:
* Prioritize unprioritized MemShrink bugs.
* Discuss how we measure progress.
* Discuss approaches to getting more data.

Meeting details:

* Tue, 3 Feb 2015, 2:00 PM PST
*
http://arewemeetingyet.com/Los%20Angeles/2015-02-03/14:00/MemShrink%20Meeting
* Vidyo: Memshrink
* Dial-in Info:
   - In office or soft phone: extension 92
   - US/INTL: 650-903-0800 or 650-215-1282 then extension 92
   - Toll-free: 800-707-2533 then password 369
   - Conference num 98802


Re: Intent to implement: CSS Grid Layout Module Level 1

2015-02-03 Thread Mats Palmgren

On 02/03/2015 04:51 PM, Jonas Sicking wrote:

Can we also expose this to certified apps so that our Gaia developers can
start experimenting with using grids?


There isn't any useful layout code in the tree yet.  The ETA for
anything that's worth experimenting with is still 3-4 weeks or so.

I think we can enable the pref for Nightly/Aurora at that point.
I'll send an update when that happens.

/Mats



Re: Content Vault Feasibility RFC

2015-02-03 Thread Ben Livshits
Hi Monica,


Thanks for your comments. We've been thinking about this for a while. Yes, we
are very familiar with this work, but the setting is a little different. In
RePriv, for example, we had to analyze the personalization code, which is fine
for a research idea but won't work so well in practice. Fundamentally, the
challenge is that malicious active content can leak the results of
personalization, possibly to third parties. How much of an issue that is we
can certainly debate, but this is the reason we're trying to figure out how to
use runtime enforcement to create such a privacy vault.


The ad-focused efforts often solve these challenges through some form of 
multiplexing batches of ads, which is sort of difficult for arbitrary HTML 
content...


Thanks,

Ben



From: Monica Chew m...@mozilla.com
Sent: Tuesday, February 3, 2015 9:35 AM
To: Olivier Yiptong
Cc: dev-platform; Sid Stamm; Marcos Caceres; Ehsan Akhgari; Christoph 
Kerschbaumer; Ed Lee; Ben Livshits
Subject: Re: Content Vault Feasibility RFC

[Monica's message, itself quoting Olivier's RFC, snipped -- see above.]

Re: Intent to implement: CSS Grid Layout Module Level 1

2015-02-03 Thread Jonas Sicking
On Tue, Feb 3, 2015 at 11:40 AM, Mats Palmgren m...@mozilla.com wrote:
 On 02/03/2015 04:51 PM, Jonas Sicking wrote:

 Can we also expose this to certified apps so that our Gaia developers can
 start experimenting with using grids?

 There isn't any useful layout code in the tree yet.  The ETA for
 anything that's worth experimenting with is still 3-4 weeks or so.

 I think we can enable the pref for Nightly/Aurora at that point.
 I'll send an update when that happens.

Great! Very much looking forward to that.

/ Jonas


Completed FYI: Re: Current recovery plan for gecko-dev and git (Was: Re: gecko-dev and Git replication will be broken for a little while)

2015-02-03 Thread Hal Wine
All work was completed on Monday and services fully restored. If you are 
experiencing any issues, please contact #vcs.


NOTE: if you use the remote git.mozilla.org/integration/gecko-dev.git 
please keep reading.


If you use this particular remote, and pulled during the problem window 
(roughly Jan 28-Jan 30), you may have picked up some bad shas. This 
remote now has the correct shas, and you can rebase any local work on that.


The other public remote for this repository, github.com/mozilla/gecko-dev.git,
was not affected. (The bad shas were not pushed to that instance.)


--Hal

On 2015-02-01 19:35, Laura Thomson wrote:

7.30 PST update:
Work is largely complete (with two exceptions, and no force pushes
required) and we are currently re-enabling automation.

We have left gecko-projects and integration/gecko-dev for now and will pick
these up in the morning.

Thank you for your patience while the team worked through this outage. We
will have a postmortem next week and post a link to the writeup here.

Best,

Laura



On Sun, Feb 1, 2015 at 8:00 PM, Laura Thomson lthom...@mozilla.com wrote:


5pm PST update:
Work is progressing smoothly.  We are currently at step 8 in the detailed
plan posted earlier. That is, all commits have been manually processed and
the head of both systems matches.

We'll send an update when work is complete, or at 9am PST, whichever is
sooner.

Best,

Laura

On Sun, Feb 1, 2015 at 2:42 PM, Laura Thomson lthom...@mozilla.com
wrote:


Here is an update on our plans and status.

= Overview =
gps and hwine will implement the plan, which is, in summary, manually
playing back the problematic merges one by one to ensure both systems are
in agreement.

gps has point and hwine is online for peer review of the work.

No further tree closures are needed for this plan.

= Procedure in detail =
1) Make backup copy of SHA-1 mapfiles on both systems (in progress)
2) Manually iterate through Mercurial commits starting at 8991b10184de
and run gexport on that commit
3) Compare resulting SHA-1s in Git across conversion systems
4) Manually Git cherry-pick and update the mapfiles as needed
** go/no-go point (work-to-completion is guaranteed diminishing from here)
5) Prune entries from mapfiles newer than and including 8991b10184de (the
first merge in central)
6) After bfa194d93aed has been converted to Git with the same SHA-1,
proceed to convert remaining commits via `hg gexport`.
7) Verify new head matches in both systems
8) Manually push this new head to the master branch from both systems
(non-force)
** if force push on legacy, notify downstream partners
9) Turn on automated conversion again

= Success conditions =
* Legacy and modern vcs-sync are producing same shas
* Legacy and modern vcs-sync can push fast-forward to gecko.git and
gecko-dev.git (respectively)
* Modern vcs-sync also has sha agreement with gecko-projects.git

= Next update =
The next update to lists, etc, will be when it's fixed, if things change
significantly, or at 5pm PST, whichever comes first.

Let me know if you have questions.

Best,

Laura



On Fri, Jan 30, 2015 at 8:01 PM, Gregory Szorc g...@mozilla.com wrote:


I figured people would like an update.

There were multiple, independent failures in the replication systems
(there
are 2 systems that replicate Mercurial to Git).

At least one system wasn't DAG aware. It was effectively using the tip
commit of the Mercurial repositories (the most recently committed
changeset) to constitute the Git branch head when it should have been
using
the latest commit on the default branch. It is a minor miracle this
hasn't broken before, as all anybody needed to do was push to an older
head
to create a non-fast-forward push.

The other system got in a really wonky state when processing some merge
commits in mozilla-central. Instead of converting a handful of commits in
the 2nd merge parent, it converted all commits down to Mercurial
revision 0
and merged in an unrelated DAG head with tens of thousands of commits!
It's
a good thing GitHub rejected a malformed author line, or the gecko-dev
repository would be epically whacky right now and would almost certainly
require a hard reset / force push to correct.

Both systems are replicating Firefox Mercurial commits to Git. And the
SHA-1s need to be consistent between them. We're capable of fixing at
least
one of these systems now. But we're hesitant to fix one unless we are
pretty sure both systems agree about SHA-1s. We have obligations with
partners to not force push. And, you don't like force pushing either. So
caution is needed before bringing any system back online.

There is currently no ETA for service restoration. But people are working
on it. I wish I had better news to report.

On Thu, Jan 29, 2015 at 1:06 AM, Gregory Szorc g...@mozilla.com wrote:


Git replication is currently broken due to a mistake of mine when mass
closing branches earlier today.

Don't expect restoration before 1200 PDT.

Bug 927219.



Re: Content Vault Feasibility RFC

2015-02-03 Thread Deian Stefan

Hi all,

We are still working on COWL on two fronts. First, I am currently
drafting up the FPWD for COWL. Second, we have also started thinking about
COWL in the context of extensions, which may share some similarities
with the proposed CV. A draft position paper on this is available [1],
but we have also started hacking on it---I'd be happy to share more
details and see if there are commonalities.

From the description below, I think that COWL's confinement can be used
to get some of the desired properties. In particular, if you create an
iframe whose label is a unique origin---but don't give it the privilege
for this origin (hence different from HTML5 iframe sandbox)---then the
code in the iframe can effectively only manipulate the DOM. (This was
the idea behind the fresh privileges.)

COWL adds more restrictions on top of SOP and CSP, so we don't expose
any APIs for accessing more privileged data, but I can't tell from your
description if this is actually something that you want to do.

Ben: are there particular things that you have in mind that COWL doesn't 
address?

Best,
Deian

[1] http://www.scs.stanford.edu/~deian/pubs/heule:2015:the-most.pdf

Ben Livshits livsh...@microsoft.com writes:

 Looking forward to hearing more from Deian! I wonder if Bobby means the COWL 
 paper. Best.

 -Ben
 Sent from a mobile device

 [The rest of the quoted thread (Bobby, Monica, and Olivier's RFC) snipped --
 see the messages above.]