Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-14 Thread Martin Thomson

 On 2014-11-13, at 21:25, Henri Sivonen hsivo...@hsivonen.fi wrote:
 Your argument relies on there being no prior session that was not 
 intermediated by the attacker.  I’ll concede that this is a likely situation 
 for a large number of clients, and not all servers will opt for protection 
 against that school of attack.
 
 What protection are you referring to?

HTTP-TLS (which seems to be confused with Alt-Svc in some of the discussion 
I’ve seen).  If you ever get a clean session, you can commit to being 
authenticated and thereby avoid any MitM until that timer lapses.  I appreciate 
that you think that this is worthless, and it may well be of marginal or even 
no use.  That’s why it’s labelled as an experiment.

 
 I haven't been to the relevant IETF sessions myself, but assume that
 https://twitter.com/sleevi_/status/509954820300472320 is true.
 
 That’s pure FUD as far as I can tell.
 
 How so given that
 http://tools.ietf.org/html/draft-loreto-httpbis-trusted-proxy20-01
 exists and explicitly seeks to defeat the defense that TLS traffic
 arising from https and TLS traffic arising from already-upgraded OE
 http look pretty much alike to an operator?

That is a direct attempt to water down the protections of the opportunistic 
security model to make MitM feasible by signaling its use.  That received a 
strongly negative reaction and E/// and operators have since distanced 
themselves from that line of solution.

 What about
 http://arstechnica.com/security/2014/10/verizon-wireless-injects-identifiers-link-its-users-to-web-requests/
 ? What about 
 http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
 ?

Opportunistic security is a small part of our response to that.  I don’t 
understand why this is difficult to comprehend.  A simple server upgrade with 
no administrator intervention is very easy, and the protection that affords is, 
for small-time attacks like these, what I consider to be an effective 
countermeasure.

 I’ve been talking regularly to operators and they are concerned about 
 opportunistic security.  It’s less urgent for them given that we are the 
 only ones who have announced an intent to deploy it (and its current status).
 
 Concerned in what way? (Having concerns suggests they aren't seeking
 to merely carry IP packets unaltered.)

Concerned in the same way that they are about all forms of increasing use of 
encryption.  They want in.  To enhance content.  To add services.  To collect 
information.  To decorate traffic to include markers for their partners.  To do 
all the things they are used to doing with cleartext traffic.  You suggest that 
they can just strip this stuff off if we add it.  It’s not that easy.




Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-14 Thread Henri Sivonen
On Fri, Nov 14, 2014 at 10:51 AM, Martin Thomson m...@mozilla.com wrote:
 How so given that
 http://tools.ietf.org/html/draft-loreto-httpbis-trusted-proxy20-01
 exists and explicitly seeks to defeat the defense that TLS traffic
 arising from https and TLS traffic arising from already-upgraded OE
 http look pretty much alike to an operator?

 That is a direct attempt to water down the protections of the opportunistic 
 security model to make MitM feasible by signaling its use.  That received a 
 strongly negative reaction and E/// and operators have since distanced 
 themselves from that line of solution.

Seems to be an indication of what some operators want nonetheless.

 What about
 http://arstechnica.com/security/2014/10/verizon-wireless-injects-identifiers-link-its-users-to-web-requests/
 ? What about 
 http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
 ?

 Opportunistic security is a small part of our response to that.  I don’t 
 understand why this is difficult to comprehend.  A simple server upgrade with 
 no administrator intervention is very easy, and the protection that affords 
 is, for small-time attacks like these, what I consider to be an effective 
 countermeasure.

The part that's hard to accept is this: why is the countermeasure
considered effective against attacks like these, when the MITM needs to be
less active to foil it (by inhibiting the upgrade through tampering with
the initial HTTP/1.1 headers) than these MITMs already are when they
inject new HTTP/1.1 headers or inject JS into HTML?

 I’ve been talking regularly to operators and they are concerned about 
 opportunistic security.  It’s less urgent for them given that we are the 
 only ones who have announced an intent to deploy it (and its current 
 status).

 Concerned in what way? (Having concerns suggests they aren't seeking
 to merely carry IP packets unaltered.)

 Concerned in the same way that they are about all forms of increasing use of 
 encryption.  They want in.  To enhance content.  To add services.  To collect 
 information.  To decorate traffic to include markers for their partners.  To 
 do all the things they are used to doing with cleartext traffic.  You suggest 
 that they can just strip this stuff off if we add it.  It’s not that easy.

Why isn't stripping the HTTP/1.1 headers that signal the upgrade that
easy? Rendering the upgrade signaling headers unrecognizable without
stretching or contracting the bytes seems easier than adding HTTP
headers or adding JS.
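
For concreteness, here is a sketch of the kind of in-band signal being
discussed, assuming the Alt-Svc response header field from the httpbis
alt-svc draft is what advertises the opportunistic h2 endpoint; the header
name and parameters actually deployed may differ:

  HTTP/1.1 200 OK
  Content-Type: text/html
  Alt-Svc: h2=":443"; ma=86400

An intermediary that already rewrites headers or bodies could overwrite
those header bytes in place without changing the message length.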

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/


Spotty sheriff coverage today

2014-11-14 Thread Ryan VanderMeulen
FYI, due to a combination of PTO and other commitments, full-time sheriff 
coverage is going to be spotty today. Please make an extra effort to keep an 
eye on any pushes you make so we hopefully avoid any big bustage pile-ups right 
before the weekend.

Thanks!

-Ryan


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-11-14 Thread Patrick McManus
On Thu, Nov 13, 2014 at 11:16 PM, Henri Sivonen hsivo...@hsivonen.fi
wrote:

 The part that's hard to accept is this: why is the countermeasure
 considered effective against attacks like these, when the MITM needs to be
 less active to foil it (by inhibiting the upgrade through tampering with
 the initial HTTP/1.1 headers) than these MITMs already are when they
 inject new HTTP/1.1 headers or inject JS into HTML?



There are a few pieces here -
1] I totally expect what you describe about signalling stripping to happen
to some subset of the traffic, but an active, cleartext, carrier-based MITM
is not the only opponent. Many of these systems are tee'd, read-only
dragnets, especially in the less sophisticated scenarios.
1a] not all of the signalling happens in band, especially wrt mobility.
2] When the basic ciphertext technology is proven, I expect to see other
ways to signal its use.

I casually mentioned a TOFU pin yesterday and you were rightly concerned
about pin fragility - but in this case the pin needn't be hard-fail (and
pin was a poor word choice) - it's an indicator to try OE. That can be
downgraded if you start actively resetting 443, sure - but that's a much
bigger step to take, one that may result in generally giving users of your
network a bad experience.

And if you go down this road you find all manner of other interesting ways
to bootstrap OE - especially if what you are bootstrapping is an
opportunistic effort that looks a lot like https on the wire: gossip
distribution of known origins, optimistic attempts on your top-N frecency
sites, DNS (sec?)... even h2 https sessions can be used to carry
http-schemed traffic (the h2 protocol finally carries the scheme explicitly
as part of each transaction instead of making all transactions on the same
connection carry the same scheme), which might be a very good thing for
folks with mixed-content problems. Most of this can be explored
asynchronously at the cost of some plaintext usage in the interim. It's
opportunistic, after all.
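
To illustrate the per-transaction scheme point: in h2 each request carries
its scheme explicitly in the pseudo-header fields, so http-schemed and
https-schemed requests can in principle share one TLS-protected connection.
A sketch of such a request (example.com is a placeholder; framing and
header compression are omitted):

  :method    = GET
  :scheme    = http
  :authority = example.com
  :path      = /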

There is certainly some cat and mouse here - as Martin says, it's really
just a small piece. I don't think of it as more than replacing some
plaintext with some encryption - that's not perfection, but I really do
think it's significant.


Help make the build faster by porting make rules to the misc tier

2014-11-14 Thread Gregory Szorc
We added a capability to the Firefox build system that will ultimately 
lead to speeding up the build and we need your help to realize its 
potential.


Background
==========

The Firefox build executes as a pipeline of stages called tiers. The 
tiers are currently export -> compile -> misc -> libs -> tools. The 
build system iterates the tiers and builds directories in those tiers.


The export, compile, and misc tiers are mostly concurrent. If you 
execute with |make -j32| or more, they should saturate your cores. If 
you have access to a modern, multi-core machine, you realize the 
benefits of that machine.


The libs and tools tiers, by contrast, aren't concurrent. This is mostly 
due to historical reasons. Back in the day, the traversal order of the 
directories was hard-coded by the order directories were registered in 
(DIRS += foo bar). If you had a cross-directory dependency, you 
registered one directory before the other.


Furthermore, back when we only had the export, libs, and tools tiers, these 
tiers were conceptually pre-build, build, and post-build stages. 
libs became a dumping ground for most build tasks. Fast forward a few 
years and libs is still where most of the one-off build rules live. libs 
still executes in the order directories are defined in.


Mission Misc
============

The libs tier slows down the build because it isn't executed 
concurrently. By how much? As always, it depends. But on my machine, 
traversing the libs tier on a no-op build takes ~50% of the wall time of 
the build (~22s/45s). The inefficiency of the libs tier is even more 
pronounced during a full build (multi-core usage drops off a cliff). 
*The libs tier is clownshoes.*


Our goal is to slowly kill the libs tier by moving things elsewhere, 
where they can execute with much higher concurrency and where 
cross-directory dependencies are properly captured. That will leave us 
with a more efficient build system that executes faster.


This is where we need your help.

The tree has tons of build tasks that run during the libs tier. Many of 
them have no dependencies and can be safely moved into the misc tier.


*We need your help writing patches to move things to the misc tier.*

How You Can Help
================

1) Search your Makefile.in files for build rules that execute in the libs 
tier/target.
2) Examine the dependencies.
3) If the rule doesn't depend on anything else in the libs tier/target and 
doesn't depend on things outside the current directory, it is probably safe 
to move it to misc. (See also the warnings below.)
4) Try converting the rule to the misc tier (see the sketch below). Add 
|HAS_MISC_RULE = True| to the moz.build in that directory.
5) Test the build locally and push it to try.
6) File a bug blocking bug 1099232. Initiate a code review and flag a 
build peer (glandium, mshal, gps are preferred) for review.
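
As a rough illustration of step 4, here is a hedged sketch of such a
conversion; the rule, script, and output file are made up for the example,
and real rules in the tree will differ:

Before (Makefile.in):

  libs::
  	$(PYTHON) $(srcdir)/generate_data.py > $(DIST)/bin/data.dat

After (Makefile.in):

  misc::
  	$(PYTHON) $(srcdir)/generate_data.py > $(DIST)/bin/data.dat

After (moz.build in the same directory):

  HAS_MISC_RULE = True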


For an example conversion, see 
https://hg.mozilla.org/mozilla-central/rev/b1a9e41d3f4b


For a log showing what all gets built in libs, see 
https://people.mozilla.org/~gszorc/libs.log (this is an OS X build). We 
ideally want this log to be empty.


INSTALL_TARGETS and PP_TARGETS
------------------------------

The default rule/tier for INSTALL_TARGETS and PP_TARGETS is libs. Unless 
you explicitly declare a *_TARGET variable, the rule will execute during 
libs and will be slower.


*Merely grepping for libs is thus not sufficient for identifying rules 
that execute during libs.*


Here is how you move an INSTALL_TARGETS rule from libs to misc.

Before:

  foo_FILES := foo.js
  foo_DEST := $(DIST)/bin
  INSTALL_TARGETS += foo

After:

  foo_FILES := foo.js
  foo_DEST := $(DIST)/bin
  foo_TARGET := misc
  INSTALL_TARGETS += foo

Things to Watch Out For
-----------------------

When evaluating whether you can convert something from libs to misc:

* Stay away from things referencing $(XPI_NAME)
* Stay away from things related to JAR manifest parsing
* Stay away from things related to packaging
* Stay away from things related to l10n

There be dragons around all of these things. It is best to let a build 
peer handle it.


If you aren't sure, don't hesitate to ask a build peer. We (the build 
peers) want help tackling low-hanging fruit, so don't waste your time on 
something beyond your skill level; you likely won't derive anything but 
pain. If it doesn't work right away, seriously consider moving on and 
punting to someone with more domain knowledge.


Cross-Directory Dependencies
============================

Did you find a rule that depends on another directory? For things in 
moz.build, this should just work. But for Makefile.in rules, you need 
to explicitly declare that cross-directory dependency or the build is 
susceptible to race conditions.


To declare a cross-directory dependency for the misc tier, you'll want 
to add an entry at the bottom of config/recurse.mk.


For example, say you have a misc rule in xpcom/system that depends on 
a misc rule in xpcom/base. You would add the following to config/recurse.mk.
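
A minimal sketch, assuming recurse.mk expresses these dependencies as plain
make prerequisites between directory/tier targets (check the existing
entries at the bottom of that file for the exact form used in the tree):

  xpcom/system/misc: xpcom/base/misc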

Re: Intent to ship: Web Speech API - Speech Recognition with Pocketsphinx

2014-11-14 Thread Sandip Kamat
Hi Andre, I suggest we update the wiki with these sizes (as well as the other 
questions in this thread) so we can use it as a central place of info. 

-Sandip 

- Original Message -

 From: Andre Natal ana...@gmail.com
 To: smaug sm...@welho.com
 Cc: Sandip Kamat ska...@mozilla.com, dev-platform@lists.mozilla.org
 Sent: Saturday, November 8, 2014 8:50:44 PM
 Subject: Re: Intent to ship: Web Speech API - Speech Recognition with
 Pocketsphinx

 Hi Olli,

  How much does Pocketsphinx increase binary size? or download size?

 In the past it was suggested to avoid shipping the models with the packages,
 and instead to create a preferences panel in the apps to allow the user to
 download the models he wants.

 About the size of the pocketsphinx libraries themselves, on Mac OS they sum
 to ~2.3 MB [1]. I don't know which type of compression the build system uses
 when compiling/packaging, but it should be efficient enough.

 [1]
 MacBook-Air-de-AndreNatal:gecko-dev andrenatal$ ls -lsa
 /usr/local/lib/libsphinxbase.a
 2184 -rw-r--r-- 1 root admin 1114840 Jul 7 14:39
 /usr/local/lib/libsphinxbase.a
 MacBook-Air-de-AndreNatal:gecko-dev andrenatal$ ls -lsa
 /usr/local/lib/libpocketsphinx.a
 2352 -rw-r--r-- 1 root admin 1201240 Jul 7 14:52
 /usr/local/lib/libpocketsphinx.a

  When the pref is enabled, how much does it use memory on desktop, what
  about on b2g?

 On B2G, it uses memory only after the decoder is activated and the models
 are loaded. I did a profile on a ZTE Open C; here is the report [2] and
 here the exact snapshot [3]. It seems ~21 MB is used after loading the
 models.

 On desktop Mac OS Nightly, the memory usage was ~11 MB.

 [2] https://www.dropbox.com/s/cf1drl3thkf6mp1/memory-reports?dl=0
 [3] https://www.dropbox.com/s/1rt6z9t5h30whn0/Vaani_b2g_openc.png?dl=0

   -Olli

   On 10/31/2014 01:18 AM, Andre Natal wrote:

  I've been researching speech recognition in Firefox for two years. First
  SpeechRTC, then emscripten, and now Web Speech API with CMU pocketsphinx
  [1] embedded in the Gecko C++ layer, a project that I had the luck to
  develop for Google Summer of Code with the mentoring of Olli Pettay,
  Guilherme Gonçalves, Steven Lee, Randell Jesup plus others, and with the
  management of Sandip Kamat.

  The implementation already works in B2G, Fennec and all FF desktop
  versions, and the first language supported will be English. The API and
  implementation are in conformity with the W3C standard [2]. The preference
  to enable it is: media.webspeech.service.default = pocketsphinx

  The required patches to achieve this are:

  - Import pocketsphinx sources in Gecko. Bug 1051146 [3]
  - Embed English models. Bug 1065911 [4]
  - Change SpeechGrammarList to store grammars inside SpeechGrammar objects.
    Bug 1088336 [5]
  - Creation of a SpeechRecognitionService for Pocketsphinx. Bug 1051148 [6]

  Also, other important features that we don't have patches for yet:

  - Relax the VAD strategy to be less strict and avoid stopping in the
    middle of speech when speaking low-volume phonemes [7]
  - Integrate or develop a grapheme-to-phoneme algorithm for realtime
    generation when compiling grammars [8]
  - Include and build models for other languages [9]
  - Continuous and word-spotting recognition [10]

  The WIP repo is here [11], and this Air Mozilla video [12] plus this wiki
  [13] have more detailed info.

  At this comment you can see the CPU usage on a Flame while recognition is
  happening [14].

  I wish to hear your comments.

  Thanks,

  Andre Natal

  [1] http://cmusphinx.sourceforge.net/
  [2] https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html
  [3] https://bugzilla.mozilla.org/show_bug.cgi?id=1051146
  [4] https://bugzilla.mozilla.org/show_bug.cgi?id=1065911
  [5] https://bugzilla.mozilla.org/show_bug.cgi?id=1088336
  [6] https://bugzilla.mozilla.org/show_bug.cgi?id=1051148
  [7] https://bugzilla.mozilla.org/show_bug.cgi?id=1051604
  [8] https://bugzilla.mozilla.org/show_bug.cgi?id=1051554
  [9] https://bugzilla.mozilla.org/show_bug.cgi?id=1065904 and
      https://bugzilla.mozilla.org/show_bug.cgi?id=1051607
  [10] https://bugzilla.mozilla.org/show_bug.cgi?id=967896
  [11] https://github.com/andrenatal/gecko-dev
  [12] https://air.mozilla.org/mozilla-weekly-project-meeting-20141027/
       (Jump to 12:00)
  [13] https://wiki.mozilla.org/SpeechRTC_-_Speech_enabling_the_open_web
  [14] https://bugzilla.mozilla.org/show_bug.cgi?id=1051148#c14
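
For readers who haven't seen the API, here is a minimal usage sketch of the
SpeechRecognition interface as defined in the W3C spec cited above [2]; it
assumes the pocketsphinx service has been enabled via the
media.webspeech.service.default pref mentioned earlier, and the JSGF grammar
string is only an illustrative placeholder:

  // Constrain recognition to a few commands with a (placeholder) JSGF grammar.
  var grammars = new SpeechGrammarList();
  grammars.addFromString('#JSGF V1.0; grammar cmd; public <cmd> = open | close | stop ;', 1);

  var recognition = new SpeechRecognition();
  recognition.grammars = grammars;
  recognition.lang = 'en-US';

  recognition.onresult = function (event) {
    // Log the best transcript of the first result.
    console.log(event.results[0][0].transcript);
  };
  recognition.onerror = function (event) {
    console.log('recognition error: ' + event.error);
  };

  // Start capturing audio and recognizing (prompts for microphone access).
  recognition.start();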

Re: Intent to ship: Web Speech API - Speech Recognition with Pocketsphinx

2014-11-14 Thread Sandip Kamat
Hi Olli, In general for FxOS devices, the thought is to let the OEMs decide 
which language models they would like to ship preloaded. That way there is a 
partner choice based on regions, but the users could also directly download 
the packages they like. For now, since we are at a very early stage, we just 
have English support. We need help building and testing other language models 
in parallel. 

Sandip 

- Original Message -

 From: Andre Natal ana...@gmail.com
 To: smaug sm...@welho.com
 Cc: Sandip Kamat ska...@mozilla.com, dev-platform@lists.mozilla.org
 Sent: Saturday, November 8, 2014 8:50:44 PM
 Subject: Re: Intent to ship: Web Speech API - Speech Recognition with
 Pocketsphinx


Intent to ship: object-fit & object-position CSS properties

2014-11-14 Thread Daniel Holbert
As of sometime early next week (say, Nov 17th 2014), I intend to turn on
support for the object-fit & object-position CSS properties by default.

They have been developed behind the
layout.css.object-fit-and-position.enabled preference. (The layout
patches for these properties are actually just about to land; they're
well-tested enough [via included automated tests] that I'm confident
turning the pref on soon after that lands.)

Other UAs who are already shipping this feature:
  * Blink (Chrome, Opera)
  * WebKit (Safari) -- supports object-fit, but not object-position,
according to http://caniuse.com/#search=object-fit


This feature was previously discussed in this intent to implement
thread:
https://groups.google.com/forum/#!searchin/mozilla.dev.platform/object-fit/mozilla.dev.platform/sZx-5uYx6h8/dxcjm4yvCCEJ

Bug to turn on by default:
 https://bugzilla.mozilla.org/show_bug.cgi?id=1099450
Link to standard:
  http://dev.w3.org/csswg/css-images-3/#the-object-fit
  http://dev.w3.org/csswg/css-images-3/#the-object-position
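
For anyone who hasn't used the properties, a minimal usage sketch (the
selector and box size are made up for illustration):

  img.thumbnail {
    width: 200px;
    height: 200px;
    object-fit: cover;        /* scale the image to fill the box, cropping as needed */
    object-position: 25% 75%; /* pick which part of the image stays visible */
  }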


Re: Intent to implement: object-fit & object-position CSS properties

2014-11-14 Thread Daniel Holbert
Here's the intent to ship thread, for reference:
https://groups.google.com/forum/#!topic/mozilla.dev.platform/DK_AyuGfFhg


Re: Intent to ship: object-fit & object-position CSS properties

2014-11-14 Thread Kyle Huey
On Fri, Nov 14, 2014 at 4:30 PM, Daniel Holbert dholb...@mozilla.com wrote:
 As of sometime early next week (say, Nov 17th 2014), I intend to turn on
 support for the object-fit & object-position CSS properties by default.

 They have been developed behind the
 layout.css.object-fit-and-position.enabled preference. (The layout
 patches for these properties are actually just about to land; they're
 well-tested enough [via included automated tests] that I'm confident
 turning the pref on soon after that lands.)

 Other UAs who are already shipping this feature:
   * Blink (Chrome, Opera)
   * WebKit (Safari) -- supports object-fit, but not object-position,
 according to http://caniuse.com/#search=object-fit


 This feature was previously discussed in this intent to implement
 thread:
 https://groups.google.com/forum/#!searchin/mozilla.dev.platform/object-fit/mozilla.dev.platform/sZx-5uYx6h8/dxcjm4yvCCEJ

 Bug to turn on by default:
  https://bugzilla.mozilla.org/show_bug.cgi?id=1099450
 Link to standard:
   http://dev.w3.org/csswg/css-images-3/#the-object-fit
   http://dev.w3.org/csswg/css-images-3/#the-object-position

Does it make sense to wait a release (meaning one week on trunk) here?
 Not judging this, just making sure you're aware of the dates.

- Kyle


Re: Intent to ship: object-fit & object-position CSS properties

2014-11-14 Thread Daniel Holbert
On 11/14/2014 05:30 PM, Kyle Huey wrote:
 Does it make sense to wait a release (meaning one week on trunk) here?
  Not judging this, just making sure you're aware of the dates.

Thanks -- yup, I'm aware that we're branching soon.

I don't think we'd gain much by holding off on this until the next
release cycle (giving it 6 weeks of preffed-on baking on Nightly,
instead of 1 week).  The spec is stable, the feature is pretty
straightforward & includes a large automated test-suite, so there's low
risk for regressions & spec-changes.  (And it'll get more/better testing
on Aurora/Beta than it'd get on Nightly, too. And it's easy enough to
turn off, in the unlikely event that we find out it's not ready for
shipping to our release users.)

Moreover, other UAs are already shipping this, and authors want it, so
it's better to get this on track to ship sooner rather than later.

~Daniel


How to move the $(INSTALL) instruction in makefile.in to moz.build?

2014-11-14 Thread Yonggang Luo

There are a lot of $(INSTALL) invocations left in the remaining Makefile.in
files, and I have a strong intention to replace them with moz.build.