Re: Intent to deprecate: Insecure HTTP

2015-05-05 Thread Florian Bösch
On Tue, May 5, 2015 at 12:03 AM, Daniel Holbert dholb...@mozilla.com
wrote:

 Without getting too deep into the exact details about animation /
 notifications / permissions, it sounds like Florian's concern RE
 browsers want to disable fullscreen if you are not serving the website
 over HTTPS may be unfounded, then.

 (Unless Florian or Martin have some extra information that we're missing.)

I responded to the OP's comment about restricting features (such as
fullscreen); I have no more information than that.

Yes, if the permission dialog could be done away with altogether, an
appropriate UX made the fullscreen change difficult to miss, and that made
fullscreen functionality possible regardless of HTTP or HTTPS, that would
make me happy.

It would also take care of another UX concern of mine (permission-dialog
creep), particularly in the case where an iframe with fullscreen
functionality is embedded and, for instance, the YouTube player re-polls
permission to go fullscreen on every domain it's embedded in (which, from a
user's point of view, just doesn't make any sense).
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: document.execCommand(cut/copy)

2015-05-05 Thread Tantek Çelik
On Wed, May 6, 2015 at 12:51 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
 On 2015-05-05 6:31 PM, Daniel Holbert wrote:

 On 05/05/2015 02:51 PM, Ehsan Akhgari wrote:

 Sites such as Github currently use Flash in order to
 allow people to copy text to the clipboard by clicking a button in their
 UI.

First, this is awesome and I can't wait to try it out.

Second, cut is potentially destructive to user data; have you
considered enabling this only for secure connections? Either way it
would be good to know the reasoning behind your decision.

Tantek


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Jonas Sicking
On Tue, May 5, 2015 at 4:34 PM, Nicholas Nethercote
n.netherc...@gmail.com wrote:
 On Wed, May 6, 2015 at 2:12 AM, Bill McCloskey wmcclos...@mozilla.com wrote:

 Regarding process-per-core or process-per-domain or whatever, I just want
 to point out that responsiveness will improve even beyond process-per-core.

 You're probably right, but as you increase the number of processes the
 responsiveness improvements will hit diminishing returns at some point
 whereas the memory usage will likely scale linearly.

Is the diminishing returns part true even if you lower the priority
of processes in background tabs?

/ Jonas


Re: Using rust in Gecko. rust-url compatibility

2015-05-05 Thread Valentin Gosu
On 6 May 2015 at 04:58, Doug Turner do...@mozilla.com wrote:


  On May 5, 2015, at 12:55 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Thu, Apr 30, 2015 at 3:34 PM, Valentin Gosu valentin.g...@gmail.com
 wrote:
  As some of you may know, Rust is approaching its 1.0 release in a
 couple of
  weeks. One of the major goals for Rust is using a rust library in Gecko.
  The specific one I'm working at the moment is adding rust-url as a safer
  alternative to nsStandardURL.
 
  Will this affect our ability to make nsStandardURL thread-safe?


Unfortunately this project does not make nsStandardURL thread safe. It is
only meant to prove that Rust can be used in mozilla-central, and to bring a
little extra safety to our URL parsing.


 
  Right now there are various things, mainly related to addons, that are
  forcing us to only use nsIURI on the main thread. However this is a
  major pain as we move more code off the main thread and as we do more
  IPC stuff.


This involves changing the API used to create and change URIs and would
probably require all the consumers of nsIURI to alter their code.


 
  We've made some, small, steps towards providing better addon APIs
  which would enable us to make nsIURI parsing and handling possible to
  do from any thread.
 
  Would this affect this work moving forward? Or would a Rust
  implemented URL-parser/nsIURI be possible to use from other threads
  once the other blockers for that are removed?


The rust code is meant to be a drop-in replacement for our current parsing
code. A thread safe URI implementation should be able to use the rust code
without any issues.


 Valentin, if you’re serious about this effort, lets start by collecting
 requirements of this rewrite.  Thread safety is important.  Compat and
 correctness (talk to Anne!).  Performance


Thread safety has been on Necko's wish list for a while now, but this
project isn't going to fix it.
Compatibility with our current implementation is my main concern.
Performance might also benefit from this project, as our current
implementation does multiple iterations to parse a URI. I expect a slight
improvement.
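To make the "multiple iterations" point concrete, here is a minimal single-pass split sketched in Python. This is purely illustrative; it is neither rust-url's nor nsStandardURL's actual algorithm, and real URL parsers handle far more cases (userinfo, ports, fragments, percent-encoding, relative references, and so on):

```python
def parse_url_single_pass(url):
    """Split scheme://host/path?query in one left-to-right scan.
    Illustrative only -- real URL parsers handle far more cases."""
    scheme, sep, rest = url.partition("://")
    if not sep:
        raise ValueError("missing scheme")
    # Each partition() continues from where the previous one left off,
    # so the string is effectively walked once rather than re-scanned
    # per component.
    host, _, rest = rest.partition("/")
    path, _, query = ("/" + rest).partition("?")
    return {"scheme": scheme, "host": host, "path": path, "query": query}

print(parse_url_single_pass("https://example.org/a/b?x=1"))
# {'scheme': 'https', 'host': 'example.org', 'path': '/a/b', 'query': 'x=1'}
```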


Re: Using rust in Gecko. rust-url compatibility

2015-05-05 Thread Boris Zbarsky

On 5/5/15 9:58 PM, Doug Turner wrote:

Performance.


Note that performance has been a recurring problem in our current URI 
code.  It's a bit (10%) slower than Chrome's, about 2x slower than 
Safari's, and shows up a good bit in profiles.  Some of this may be due 
to XPCOM strings, of course, and the fact that there's a lot of 
back-and-forth between UTF-16 and UTF-8 involved, but some is just the 
way the URI parsers work.  So yeah, if we can get better performance 
that would be nice.  ;)
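As a tiny illustration of the UTF-16/UTF-8 back-and-forth Boris mentions (Python stands in for XPCOM's UTF-16 string storage here, and the URL is invented): every crossing of that boundary is a full copy plus transcode of the string.

```python
url = "https://example.org/søk?q=π"  # non-ASCII so the encodings differ

utf16 = url.encode("utf-16-le")  # how a UTF-16 string store holds it
utf8 = url.encode("utf-8")       # what a UTF-8 parser consumes

# Each hop across the boundary transcodes the whole string:
roundtrip = utf16.decode("utf-16-le").encode("utf-8")
assert roundtrip == utf8
print(len(utf16), len(utf8))  # 54 29
```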


-Boris



Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Mike Hommey
On Tue, May 05, 2015 at 05:10:42PM -0700, Bill McCloskey wrote:
 On Tue, May 5, 2015 at 4:34 PM, Nicholas Nethercote n.netherc...@gmail.com
 wrote:
 
  resident-unique (only available on Linux, alas) is probably the most
  interesting measurement in this case.
 
 
 Usually I see resident-unique pretty consistently about 15-20MB higher than
 explicit for content processes.
 
 I wonder if we track IPDL shared memory in about:memory. I would imagine we
 do, but I don't see it anywhere.

As long as we exec() new processes for each new content process, we're
going to have a lot of wasted memory because of relocations in data
segments in libxul. Ironically, even though it doesn't have fork(), I
think Windows doesn't have this problem (I /think/ it shares memory for
relocated read-only data segments across processes, as long as it
doesn't have to re-relocate because it couldn't get the same address.)

Nuwa, AIUI, can somewhat help here, but possibly the best option is
actually to just not have a separate executable and fork() the main
process. (I didn't say this was going to be easy.)

Mike


Re: Using rust in Gecko. rust-url compatibility

2015-05-05 Thread Doug Turner

 On May 5, 2015, at 12:55 PM, Jonas Sicking jo...@sicking.cc wrote:
 
 On Thu, Apr 30, 2015 at 3:34 PM, Valentin Gosu valentin.g...@gmail.com 
 wrote:
 As some of you may know, Rust is approaching its 1.0 release in a couple of
 weeks. One of the major goals for Rust is using a rust library in Gecko.
 The specific one I'm working at the moment is adding rust-url as a safer
 alternative to nsStandardURL.
 
 Will this affect our ability to make nsStandardURL thread-safe?
 
 Right now there are various things, mainly related to addons, that are
 forcing us to only use nsIURI on the main thread. However this is a
 major pain as we move more code off the main thread and as we do more
 IPC stuff.
 
 We've made some, small, steps towards providing better addon APIs
 which would enable us to make nsIURI parsing and handling possible to
 do from any thread.
 
 Would this affect this work moving forward? Or would a Rust
 implemented URL-parser/nsIURI be possible to use from other threads
 once the other blockers for that are removed?


Valentin, if you’re serious about this effort, let's start by collecting 
requirements of this rewrite.  Thread safety is important.  Compat and 
correctness (talk to Anne!).  Performance.



Re: Intent to implement and ship: document.execCommand(cut/copy)

2015-05-05 Thread Ehsan Akhgari
On Tue, May 5, 2015 at 7:34 PM, Tantek Çelik tan...@cs.stanford.edu wrote:

 On Wed, May 6, 2015 at 12:51 AM, Ehsan Akhgari ehsan.akhg...@gmail.com
 wrote:
  On 2015-05-05 6:31 PM, Daniel Holbert wrote:
 
  On 05/05/2015 02:51 PM, Ehsan Akhgari wrote:
 
  Sites such as Github currently use Flash in order to
  allow people to copy text to the clipboard by clicking a button in
 their
  UI.

 First, this is awesome and can't wait to try it out.

 Second, cut is potentially destructive to user data, have you
 considered enabling this only for secure connections? Either way it
 would be good to know the reasoning behind your decision.


Hmm, what would that protect against, though?  A web page could just use the
normal DOM APIs to destroy the user data (e.g., something like the contents
of a blog post the user is writing in a blogging web app).  Is this what
you had in mind?

-- 
Ehsan


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Nicholas Nethercote
On Wed, May 6, 2015 at 2:12 AM, Bill McCloskey wmcclos...@mozilla.com wrote:

 Regarding process-per-core or process-per-domain or whatever, I just want
 to point out that responsiveness will improve even beyond process-per-core.

You're probably right, but as you increase the number of processes the
responsiveness improvements will hit diminishing returns at some point
whereas the memory usage will likely scale linearly.

 So I do think we're going to want more than just 4 or 8 content processes.
 We're just going to have to work really hard on the memory usage. I've
 noticed a lot of waste for really small content processes. One of my
 content processes (for mxr) looks like this right now: explicit 50MB,
 heap-overhead 19MB, js-non-window 14MB, heap-unclassified 12MB. The actual
 window is only 0.89 MB.

resident-unique (only available on Linux, alas) is probably the most
interesting measurement in this case.

Nick


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Bill McCloskey
On Tue, May 5, 2015 at 4:34 PM, Nicholas Nethercote n.netherc...@gmail.com
wrote:

 resident-unique (only available on Linux, alas) is probably the most
 interesting measurement in this case.


Usually I see resident-unique pretty consistently about 15-20MB higher than
explicit for content processes.

I wonder if we track IPDL shared memory in about:memory. I would imagine we
do, but I don't see it anywhere.

-Bill


Re: Using rust in Gecko. rust-url compatibility

2015-05-05 Thread Ehsan Akhgari
On Tue, May 5, 2015 at 10:10 PM, Valentin Gosu valentin.g...@gmail.com
wrote:

 On 6 May 2015 at 04:58, Doug Turner do...@mozilla.com wrote:

 
   On May 5, 2015, at 12:55 PM, Jonas Sicking jo...@sicking.cc wrote:
  
   On Thu, Apr 30, 2015 at 3:34 PM, Valentin Gosu 
 valentin.g...@gmail.com
  wrote:
   As some of you may know, Rust is approaching its 1.0 release in a
  couple of
   weeks. One of the major goals for Rust is using a rust library in
 Gecko.
   The specific one I'm working at the moment is adding rust-url as a
 safer
   alternative to nsStandardURL.
  
   Will this affect our ability to make nsStandardURL thread-safe?
 

 Unfortunatelly this project does not make nsStandardURL thread safe. It is
 only meant to prove Rust can be used in mozilla-central, and bring a little
 extra safety to our URL parsing.


Note that the URL parser used by nsStandardURL (that is, nsStdURLParser)
itself is thread-safe.  If I understand things correctly, this project aims
to replace nsStdURLParser with a Rust implementation, therefore it needs to
keep being thread-safe.  We are already depending on the thread-safety of
the URL parser in Gecko.

-- 
Ehsan


Web Speech API Installation Build Flags

2015-05-05 Thread kdavis
We would like some feedback on build flags for the Web Speech API installation.

More specifically, we are planning to land an initial version of the Web Speech 
API[1] into Gecko. However, due to a number of factors, model size being one of 
them, we plan to introduce various build flags which install, or do not install, 
parts of the Web Speech API for various build targets.

Our current plan for B2G is as follows:

1. Introduce a flag to control installation of the Web Speech API.
2. Introduce a flag to control installation of Pocketsphinx[2], the STT/TTS 
engine.
3. Introduce a script to allow installation of models, so that developers can 
test the Web Speech API once they've made a build with the previous two flags 
on.

Our question is related to desktop and Fennec. Our current plan is to:

1. Introduce a flag to control installation of the Web Speech API + 
Pocketsphinx + English model[3]

The question is: Is this a good plan for desktop and Fennec? Should there be 
more or less fine-grained control over installation there?
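For concreteness, the desktop/Fennec plan might look something like this in a mozconfig. The option names below are hypothetical placeholders, since the actual build flags had not been chosen yet:

```sh
# Hypothetical mozconfig sketch -- these option names are placeholders,
# not real configure flags.
ac_add_options --enable-webspeech               # install the Web Speech API
ac_add_options --enable-webspeech-pocketsphinx  # bundle the Pocketsphinx engine
ac_add_options --enable-webspeech-model=en-US   # bundle the English model
```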

[1] https://dvcs.w3.org/hg/speech-api/raw-file/tip/webspeechapi.html
[2] http://cmusphinx.sourceforge.net/
[3] Initially we will work only with English and introduce a mechanism to 
install other models later.


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Ehsan Akhgari

On 2015-05-05 10:30 AM, Mike Conley wrote:

The e10s team is currently only focused on getting things to work with a
single content process at this time. We eventually want to work with
multiple content processes (as others have pointed out, the exact number
to work with is not clear), but we're focused on working with a single
process because we believe this is a strictly easier backdrop to develop
against.

Once we've got single content process nailed down, we can then start
exploring multiple content processes.


I, like roc and some other colleagues, have also set the process count 
pref to 10 for a while and have not noticed any issues that were not 
present with one content process.  However, I realize that a few 
people's anecdotes cannot be a good enough reason to change what we're 
planning to do here.  :-)


Is there a more detailed description of what the issues with multiple 
content processes are that e10s itself doesn't suffer from?



We might revisit that decision as things stabilize, but in the meantime
I want to stress something: if you're cranking up dom.ipc.processCount
to avoid the spinner, I suspect you are probably wallpapering over the
issue, and robbing us of data. We've just landed Telemetry probes to get
tab switch and spinner times[1]. If you bump up dom.ipc.processCount to
avoid the spinner, we miss out on knowing when you _would_ have seen the
spinner. This makes it harder for us to know whether or not things are
improving or getting worse. It makes it harder for us to identify
patterns about what conditions cause the spinner to appear.


One of the extremely common cases where I used to get the spinner was 
when my system was under load (outside of Firefox).  I doubt that we're 
going to be able to fix anything in our code to prevent showing the 
spinner in such circumstances.  Another common case would be one 
CPU-hungry tab.


Are we planning on strategies to mitigate that general issue?  It's not 
clear to me that replacing the (possibly invisible, or at least only 
visible in the UI animations) jank that we have without e10s when 
switching tabs with a very visible spinner icon in the middle of the 
content area is the right trade-off.  Even though I _know_ the reason 
why we show the spinner, with my user hat on, Firefox with one content 
process seems more sluggish when switching tabs than no-e10s; or rather, 
the visual effect of the spinner with e10s gives me a more sluggish 
experience than a few small animations in the UI stuttering.


Cheers,
Ehsan


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Mike Conley
 Is there a more detailed description of what the issues with multiple 
 content processes are that e10s itself doesn't suffer from?

I'm interpreting this as, What are the problems with multiple content
processes that single process does not have, from the user's perspective?

This is mostly unknown, simply because dom.ipc.processCount > 1 is not
well tested. Most (if not all) e10s tests test a single content process.
As a team, when a bug is filed and we see that it's only reproducible
with dom.ipc.processCount > 1, the priority immediately drops, because
we're just not focusing on it.

So the issues with dom.ipc.processCount are mostly unknown - although a
few have been filed:
https://bugzilla.mozilla.org/buglist.cgi?quicksearch=processCount&list_id=12230722

 One of the extremely common cases where I used to get the spinner was when 
 my system was under load (outside of Firefox.)  I doubt that we're going to 
 be able to fix anything in our code to prevent showing the spinner in such 
 circumstances. 

Yes, I experience that too - I often see the spinner when I have many
tabs open and I'm doing a build in the background.

I think it's worth diving in here and investigating what is occurring in
that case. I have my suspicions that
https://bugzilla.mozilla.org/show_bug.cgi?id=1161166 is a big culprit on
OS X, but have nothing but profiles to back that up. My own experience
is that the spinner is far more prevalent in the many-tabs-and-build
case on OS X than on other platforms, which makes me suspect that we're
just doing something wrong somewhere - with bug 1161166 being my top
suspect.

 Another such common case would be one CPU hungry tab.

I think this falls more under the domain of UX. With single-process
Firefox, the whole browser locks up, and we (usually) show a modal
dialog asking the user if they want to stop the script. In other cases,
we just jank and fail to draw frames until the process is ready.

With a content process, the UI remains responsive, but we get this
bigass spinner. That's not an amazing trade-off - it's much uglier and
louder, IMO, than the whole browser locking up. The big spinner was just
an animation that we tossed in so that it was clear that a frame was not
ready (and to avoid just painting random video memory), but I should
emphasize that it was never meant to ship.

If we ship the current appearance of the spinner to our release
population... it would mean that my heels have been ground down to nubs,
because I will fight tooth and nail to prevent that from happening. I
suspect UX feels the same.

So for the case where the content process is being blocked by heavy
content, we might need to find better techniques to communicate to the
user what's going on and to give them options. I suspect / hope that bug
1106527 will carry that work.

Here's what I know:

1) Folks are reporting that they _never_ see the spinner when they crank
dom.ipc.processCount up past 1. This is true even when they're doing lots
of work in the background, like building.

Here's what I suspect:

1) I suspect that given the same group of CPU heavy tabs, single-process
Firefox will currently perform better than e10s with a single content
process. I suspect we can reach parity here.

2) I suspect that OS X is where most of the pain is, and I suspect bug
1161166 is a big part of it.

Here's what I suggest:

1) Wait for Telemetry data to come in to get a better sense of who is
being affected and what conditions they are under. Hopefully, the
population who have dom.ipc.processCount > 1 are small enough that we
have useful data for the dom.ipc.processCount = 1 case.

2) Send me profiles for when you see it.

3) Be patient as we figure out what is slow and iron it out. Realize
that we're just starting to look at performance, as we've been focusing
on stability and making browser features work up until now.

4) Trust that we're not going to ship The Spinner Experience, because
shipping it as-is is beyond ill-advised. :D

-Mike

On 05/05/2015 10:49 AM, Ehsan Akhgari wrote:
 On 2015-05-05 10:30 AM, Mike Conley wrote:
 The e10s team is currently only focused on getting things to work with a
 single content process at this time. We eventually want to work with
 multiple content processes (as others have pointed out, the exact number
 to work with is not clear), but we're focused on working with a single
 process because we believe this is a strictly easier backdrop to develop
 against.

 Once we've got single content process nailed down, we can then start
 exploring multiple content processes.
 
 I like roc and some other colleagues have also set the process count
 pref to 10 for a while and have not noticed any issues that were not
 present with one content process.  However, I realize that a few
 people's anecdotes cannot be a good enough reason to change what we're
 planning to do here.  :-)
 
 Is there a more detailed description of what the issues with multiple
 content processes are that e10s itself doesn't 

Re: what is new in talos, what is coming up

2015-05-05 Thread Joel Maher
Great question, Brian!

I believe you are asking if we would generate alerts based on individual
tests instead of the summary of tests.  The short answer is no.  In looking
at reporting the subtest results as new alerts, we found there was a lot of
noise (especially in svgx, tp5o, dromaeo, v8) as compared to the summary
alerts.  cart/tart has a bit of noise too; it might be more realistic to
report on specific pages there, and we could investigate that if there is a
strong need for it.

Late last year we cleaned up our summary reporting to be a geometric mean
of all the pages/subtests which were run.  This means that we do a better
job of reporting a regression of the test summary when a specific test is
the cause.
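To illustrate why a geometric-mean summary picks up a single-subtest regression, here is a small sketch. The page names and times are invented, and this is not the actual Talos code:

```python
import math

def summarize(times):
    """Geometric mean of per-page results, as the summary uses."""
    return math.prod(times.values()) ** (1.0 / len(times))

baseline = {"page_a": 100.0, "page_b": 200.0, "page_c": 400.0}
regressed = dict(baseline, page_c=800.0)  # one page regresses 2x

# A 2x regression on one of three pages moves the summary by
# 2 ** (1/3) ~= 1.26x -- damped, but still visible as a regression.
print(summarize(baseline), summarize(regressed))
```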

One thing we have been working on is a compare mode for Perfherder, which
replaces compare-talos.  This is live (
https://treeherder.mozilla.org/perf.html#/comparechooser - although
changing rapidly) and will do a great job of telling you which specific
test caused a regression.

Do you have concerns about a lack of subtest reporting, or about tooling to
make finding the results easier?  Suggestions are welcome; this is something
we work on regularly and improve to make our lives easier!

-Joel


On Mon, May 4, 2015 at 2:08 PM, Brian Grinstead bgrinst...@mozilla.com
wrote:

 The upcoming changes sound great!  Is there currently a way (or plans to
 add a way) to track regressions / improvements for a single measurement
 within a test?  I see that in perfherder I can add these measurements to a
 graph (http://mzl.la/1E17Zyo) but it’s hard to distinguish between normal
 variation across runs and an actual regression by looking at the graph.

 Brian




Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Mike Conley
Funny folks should bring this up - I recently wrote a blog post about this:

http://mikeconley.ca/blog/2015/05/04/electrolysis-and-the-big-tab-spinner-of-doom/

Funny how things cluster. :) I suggest reading that top to bottom before
you continue reading this post.

...

Welcome back! :D

The e10s team is currently only focused on getting things to work with a
single content process at this time. We eventually want to work with
multiple content processes (as others have pointed out, the exact number
to work with is not clear), but we're focused on working with a single
process because we believe this is a strictly easier backdrop to develop
against.

Once we've got single content process nailed down, we can then start
exploring multiple content processes.

We might revisit that decision as things stabilize, but in the meantime
I want to stress something: if you're cranking up dom.ipc.processCount
to avoid the spinner, I suspect you are probably wallpapering over the
issue, and robbing us of data. We've just landed Telemetry probes to get
tab switch and spinner times[1]. If you bump up dom.ipc.processCount to
avoid the spinner, we miss out on knowing when you _would_ have seen the
spinner. This makes it harder for us to know whether or not things are
improving or getting worse. It makes it harder for us to identify
patterns about what conditions cause the spinner to appear.

We are just starting to get to the point where we're looking at
performance, and I think what would be very valuable is for people to
set dom.ipc.processCount to 1, and give us profiles for when they see
the spinner. Upload the profiles, and paste a link to them in [2]. I
will happily look at them or forward them to people smarter than I to
look at them.

There's a video in my blog post demonstrating how to get and use the
profiler, if you've never used it before.

Profiling has already been fruitful! Just yesterday, we identified [3]
as a pretty brutal bottleneck for tab switching on OS X.

So don't just run away from the spinner - help us drive a flaming stake
through its heart with profiles. I think that'd be the best course of
action on this issue.

-Mike

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1156592
[2]: https://bugzilla.mozilla.org/show_bug.cgi?id=1135719
[3]: https://bugzilla.mozilla.org/show_bug.cgi?id=1161166

On 05/05/2015 6:53 AM, Ted Mielczarek wrote:
 On Tue, May 5, 2015, at 02:53 AM, Leman Bennett (Omega X) wrote:
 On 5/5/2015 12:23 AM, Robert O'Callahan wrote:
 On Tue, May 5, 2015 at 10:29 AM, Leman Bennett (Omega X) 
 Redacted.For.Spam@request.contact wrote:

 Inquiring minds would like to know.

 At the moment, e10s tabs is still somewhat slower than non-e10s. Multiple
 content processes would go a long way for more responsive navigation and
 less stalls on the one content process. That stall spinner is getting a LOT
 of hate at the moment.


 I don't know, but I've enabled multiple content processes, and I haven't
 noticed any problems --- and the spinner does seem to be shown a lot less.

 Rob


 The issue I've seen with dom.ipc.processCount beyond one process is that 
 they're not dynamic. Those instances will stay open for the entire 
 session and not unload themselves after a time which can mean double the 
 memory use.

 I heard that there was rumor of a plan to limit process count spawn to 
 per-domain. But I've not seen offhand of a bug filed for it or anything 
 else that relates to achieving more than one content process instance.
 
 There's a bug filed[1], but every time I've asked about it I've been
 told it's not currently on the roadmap. I, too, find that single-process
 e10s is worse for responsiveness, which is unfortunate. Last time I
 tried to use dom.ipc.processCount > 1 I found that window.open was
 broken (and also target=_blank on links) which made actual browsing
 difficult, but I haven't tested it recently.
 
 I also filed a couple of bugs[2][3] about being smarter about multiple
 content processes which would make things a bit nicer.
 
 -Ted
 
 1. https://bugzilla.mozilla.org/show_bug.cgi?id=641683
 2. https://bugzilla.mozilla.org/show_bug.cgi?id=1066789
 3. https://bugzilla.mozilla.org/show_bug.cgi?id=1066792


Re: Intent to deprecate: Insecure HTTP

2015-05-05 Thread Mike Hoye

On 2015-05-05 4:59 AM, sn...@arbor.net wrote:

 Encryption should be activated only after BOTH parties have mutually
 authenticated. Why establish an encrypted transport to an unknown
 attacker?

A web you have to uniquely identify yourself to participate in is really 
not open or free for an awful lot of people. And if we had a reliable 
way of identifying attacks and attackers at all, much less in some 
actionable way, this would all be a much simpler problem.


It is, just as one example among thousands, impossible to know if your 
wifi is being sniffed or not.


- mhoye


Re: Intent to implement and ship: document.execCommand(cut/copy)

2015-05-05 Thread Jet Villegas
\o/ Great to see this come through. Shumway was already using this but
needed chrome privilege to do so. It's nice to open it up.

--Jet

On Tue, May 5, 2015 at 2:51 PM, Ehsan Akhgari ehsan.akhg...@gmail.com
wrote:

 Summary: We currently disallow programmatic copying and cutting from JS for
 Web content, which has forced web sites to rely on Flash in order to
 copy content to the clipboard.  We are planning to relax this restriction
 to allow it when execCommand is called in response to a user event.  This
 restriction mimics what we do for other APIs, such as FullScreen.

 Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1012662

 Link to standard: This is unfortunately not specified very precisely.
 There is a rough spec here: 

 https://dvcs.w3.org/hg/editing/raw-file/tip/editing.html#miscellaneous-commands
 
 and the handling of clipboard events is specified here: 
 https://w3c.github.io/clipboard-apis/.  Sadly, the editing spec is not
 actively edited.  We will strive for cross browser interoperability, of
 course.

 Platform coverage: All platforms.

 Target release: Firefox 40.

 Preference behind which this will be implemented: This won't be hidden
 behind a preference, as the code changes required are not big, and can be
 easily reverted.

 DevTools bug: N/A

 Do other browser engines implement this: IE 10 and Chrome 43 both implement
 this.  Opera has adopted this from Blink as of version 29.

 Security & Privacy Concerns: We have discussed this rather extensively
 before: http://bit.ly/1zynBg7, and have decided that restricting these
 functions to only work in response to user events is enough to prevent
 abuse here.  Note that we are not going to enable the paste command which
 would give applications access to the contents of the clipboard.

 Web designer / developer use-cases: This feature has been rather popular
 among web sites.  Sites such as Github currently use Flash in order to
 allow people to copy text to the clipboard by clicking a button in their
 UI.

 Cheers,
 --
 Ehsan


Re: Intent to implement and ship: document.execCommand(cut/copy)

2015-05-05 Thread Ehsan Akhgari

On 2015-05-05 6:31 PM, Daniel Holbert wrote:

On 05/05/2015 02:51 PM, Ehsan Akhgari wrote:

Sites such as Github currently use Flash in order to
allow people to copy text to the clipboard by clicking a button in their UI.


Bugzilla does this, too! [1] (If you enable "experimental user
interface" in your account's General Preferences.)


Good point!  Filed bug 1161797 for that.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Steve Fink

On 05/05/2015 12:42 AM, Nicholas Nethercote wrote:

On Mon, May 4, 2015 at 11:53 PM, Leman Bennett (Omega X)
Redacted.For.Spam@request.contact wrote:

I heard that there was rumor of a plan to limit process count spawn to
per-domain. But I've not seen offhand of a bug filed for it or anything else
that relates to achieving more than one content process instance.

There are multiple competing factors when it comes to choosing the
right number of processes.

- For security, more processes is better; one per tab is probably ideal.

- For crash protection, ditto.

- For responsiveness, one process per CPU core is probably best.


I'm not so sure of that. Fewer processes than #CPUs will underutilize 
compute resources, but matching processes to cores doesn't guarantee 
that half those processes won't end up waiting on I/O. It's nice to have 
a few extra processes to time-slice in so that they can make progress 
when another process blocks. (Then again, if there are worker threads 
in the mix, they may or may not eat a core themselves...) That's more of 
a throughput argument than a responsiveness one, but it can impact 
responsiveness if the one tab you care about ends up waiting on 
something else in the same process that could have been split out.


Having more processes exposes more parallelism to where the scheduler 
can see it.


(At the cost of memory, security, and crashes, as you said, as well as 
scheduler and locking overhead, cache thrashing, etc.)




- For memory usage, one process is probably best.

I'd be loath to use as many processes as Chrome does, which is
something like one per domain (AIUI). I've heard countless times that
Chrome basically falls over once you get past 60 or 70 tabs, and I've
always assumed that this is due to memory usage, or possibly some kind
of IPC scaling issue. In contrast, plenty of Firefox users have 100+
tabs, and there are some that even have 1000+. I think it's crucial
that we continue working well for such users.

With all that in mind, my hope/guess is that we'll end up one day with
one process per CPU core, because it's the middle ground.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Bill McCloskey
The only issues I'm aware of with dom.ipc.processCount > 1 are:
1. The devtools Browser Content Toolbox doesn't work.
2. Some printing related stuff doesn't work.
3. There's a theoretical issue where plugins can hang when processCount > 1
and when more than one plugin is running.
The first two should be easy to fix. The last one is harder. I think the
easiest fix would be to run all plugins from the same plugin process.

I remember the issue Ted mentioned, and it was fixed by bug 567058.

One major issue I've noticed with the spinner is that the content process
doesn't get much CPU time when the system is under heavy load, at least
under Linux. I think we're probably not doing a good job of respecting
Linux's interactivity heuristic or something. The main process seems to get
scheduled more frequently, so non-e10s performs better in that situation.
Other OSs probably have similar issues. Increasing processCount doesn't
help with this problem.

The main reason we haven't increased processCount is because we want to
ship as soon as possible and this seems like the way to do it. I guess it
really comes down to which is easier:
- getting content process memory usage low enough that we can increase
processCount,
- or getting performance of processCount=1 good enough that we're at parity
with non-e10s.
I suspect the latter will be easier, but we'll see. We haven't focused much
on performance yet.

Regarding process-per-core or process-per-domain or whatever, I just want
to point out that responsiveness will improve even beyond process-per-core.
So I do think we're going to want more than just 4 or 8 content processes.
We're just going to have to work really hard on the memory usage. I've
noticed a lot of waste for really small content processes. One of my
content processes (for mxr) looks like this right now: explicit 50MB,
heap-overhead 19MB, js-non-window 14MB, heap-unclassified 12MB. The actual
window is only 0.89 MB.

-Bill


On Tue, May 5, 2015 at 8:41 AM, Mike Conley mcon...@mozilla.com wrote:

  Is there a more detailed description of what the issues with multiple
 content processes are that e10s itself doesn't suffer from?

 I'm interpreting this as, "What are the problems with multiple content
 processes that single process does not have, from the user's perspective?"

 This is mostly unknown, simply because dom.ipc.processCount > 1 is not
 well tested. Most (if not all) e10s tests test a single content process.
 As a team, when a bug is filed and we see that it's only reproducible
 with dom.ipc.processCount > 1, the priority immediately drops, because
 we're just not focusing on it.

 So the issues with dom.ipc.processCount are mostly unknown - although a
 few have been filed:

 https://bugzilla.mozilla.org/buglist.cgi?quicksearch=processCountlist_id=12230722

  One of the extremely common cases where I used to get the spinner was
 when my system was under load (outside of Firefox.)  I doubt that we're
 going to be able to fix anything in our code to prevent showing the
 spinner in such circumstances.

 Yes, I experience that too - I often see the spinner when I have many
 tabs open and I'm doing a build in the background.

 I think it's worth diving in here and investigating what is occurring in
 that case. I have my suspicions that
 https://bugzilla.mozilla.org/show_bug.cgi?id=1161166 is a big culprit on
 OS X, but have nothing but profiles to back that up. My own experience
 is that the spinner is far more prevalent in the many-tabs-and-build
 case on OS X than on other platforms, which makes me suspect that we're
 just doing something wrong somewhere - with bug 1161166 being my top
 suspect.

  Another such common case would be one CPU hungry tab.

 I think this falls more under the domain of UX. With single-process
 Firefox, the whole browser locks up, and we (usually) show a modal
 dialog asking the user if they want to stop the script. In other cases,
 we just jank and fail to draw frames until the process is ready.

 With a content process, the UI remains responsive, but we get this
 bigass spinner. That's not an amazing trade-off - it's much uglier and
 louder, IMO, than the whole browser locking up. The big spinner was just
 an animation that we tossed in so that it was clear that a frame was not
 ready (and to avoid just painting random video memory), but I should
 emphasize that it was never meant to ship.

 If we ship the current appearance of the spinner to our release
 population... it would mean that my heels have been ground down to nubs,
 because I will fight tooth and nail to prevent that from happening. I
 suspect UX feels the same.

 So for the case where the content process is being blocked by heavy
 content, we might need to find better techniques to communicate to the
 user what's going on and to give them options. I suspect / hope that bug
 1106527 will carry that work.

 Here's what I know:

 1) Folks are reporting that they _never_ see the spinner when they crank
 up 

Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Mike Conley
 My question is: after a decent period of time picking the low-hanging 
 fruit, if there is still non-trivial spinner time for processCount=1, would 
the team consider shifting efforts to getting processCount>1 ship-worthy 
 instead of resorting to heroics to get processCount=1 ship-worthy?

I can't speak for the whole team, and I'm not really a shot-caller for
the project, but I would definitely consider this in the scenario you've
laid out.

On 05/05/2015 12:11 PM, Luke Wagner wrote:
 It definitely makes sense to start your performance investigation with
 processCount=1 since that will likely highlight the low-hanging fruit
 which should be fixed regardless of processCount.
 
 My question is: after a decent period of time picking the low-hanging
 fruit, if there is still non-trivial spinner time for processCount=1,
 would the team consider shifting efforts to getting processCount>1
 ship-worthy instead of resorting to heroics to get processCount=1
 ship-worthy?
 
 On Tue, May 5, 2015 at 10:41 AM, Mike Conley mcon...@mozilla.com
 mailto:mcon...@mozilla.com wrote:
 
  Is there a more detailed description of what the issues with multiple 
 content processes are that e10s itself doesn't suffer from?
 
 I'm interpreting this as, "What are the problems with multiple content
 processes that single process does not have, from the user's
 perspective?"
 
 This is mostly unknown, simply because dom.ipc.processCount > 1 is not
 well tested. Most (if not all) e10s tests test a single content process.
 As a team, when a bug is filed and we see that it's only reproducible
 with dom.ipc.processCount > 1, the priority immediately drops, because
 we're just not focusing on it.
 
 So the issues with dom.ipc.processCount are mostly unknown - although a
 few have been filed:
 
 https://bugzilla.mozilla.org/buglist.cgi?quicksearch=processCountlist_id=12230722
 
  One of the extremely common cases where I used to get the spinner was 
 when my system was under load (outside of Firefox.)  I doubt that we're 
 going to be able to fix anything in our code to prevent showing the spinner 
 in such circumstances.
 
 Yes, I experience that too - I often see the spinner when I have many
 tabs open and I'm doing a build in the background.
 
 I think it's worth diving in here and investigating what is occurring in
 that case. I have my suspicions that
 https://bugzilla.mozilla.org/show_bug.cgi?id=1161166 is a big culprit on
 OS X, but have nothing but profiles to back that up. My own experience
 is that the spinner is far more prevalent in the many-tabs-and-build
 case on OS X than on other platforms, which makes me suspect that we're
 just doing something wrong somewhere - with bug 1161166 being my top
 suspect.
 
  Another such common case would be one CPU hungry tab.
 
 I think this falls more under the domain of UX. With single-process
 Firefox, the whole browser locks up, and we (usually) show a modal
 dialog asking the user if they want to stop the script. In other cases,
 we just jank and fail to draw frames until the process is ready.
 
 With a content process, the UI remains responsive, but we get this
 bigass spinner. That's not an amazing trade-off - it's much uglier and
 louder, IMO, than the whole browser locking up. The big spinner was just
 an animation that we tossed in so that it was clear that a frame was not
 ready (and to avoid just painting random video memory), but I should
 emphasize that it was never meant to ship.
 
 If we ship the current appearance of the spinner to our release
 population... it would mean that my heels have been ground down to nubs,
 because I will fight tooth and nail to prevent that from happening. I
 suspect UX feels the same.
 
 So for the case where the content process is being blocked by heavy
 content, we might need to find better techniques to communicate to the
 user what's going on and to give them options. I suspect / hope that bug
 1106527 will carry that work.
 
 Here's what I know:
 
 1) Folks are reporting that they _never_ see the spinner when they crank
 up dom.ipc.processCount > 1. This is true even when they're doing lots
 of work in the background, like building.
 
 Here's what I suspect:
 
 1) I suspect that given the same group of CPU heavy tabs, single-process
 Firefox will currently perform better than e10s with a single content
 process. I suspect we can reach parity here.
 
 2) I suspect that OS X is where most of the pain is, and I suspect bug
 1161166 is a big part of it.
 
 Here's what I suggest:
 
 1) Wait for Telemetry data to come in to get a better sense of who is
 being affected and what conditions they are under. Hopefully, the
 population who have dom.ipc.processCount > 1 are small enough that we
 have useful data for the 

Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Luke Wagner
It definitely makes sense to start your performance investigation with
processCount=1 since that will likely highlight the low-hanging fruit which
should be fixed regardless of processCount.

My question is: after a decent period of time picking the low-hanging
fruit, if there is still non-trivial spinner time for processCount=1, would
the team consider shifting efforts to getting processCount>1 ship-worthy
instead of resorting to heroics to get processCount=1 ship-worthy?

On Tue, May 5, 2015 at 10:41 AM, Mike Conley mcon...@mozilla.com wrote:

  Is there a more detailed description of what the issues with multiple
 content processes are that e10s itself doesn't suffer from?

 I'm interpreting this as, "What are the problems with multiple content
 processes that single process does not have, from the user's perspective?"

 This is mostly unknown, simply because dom.ipc.processCount > 1 is not
 well tested. Most (if not all) e10s tests test a single content process.
 As a team, when a bug is filed and we see that it's only reproducible
 with dom.ipc.processCount > 1, the priority immediately drops, because
 we're just not focusing on it.

 So the issues with dom.ipc.processCount are mostly unknown - although a
 few have been filed:

 https://bugzilla.mozilla.org/buglist.cgi?quicksearch=processCountlist_id=12230722

  One of the extremely common cases where I used to get the spinner was
 when my system was under load (outside of Firefox.)  I doubt that we're
 going to be able to fix anything in our code to prevent showing the
 spinner in such circumstances.

 Yes, I experience that too - I often see the spinner when I have many
 tabs open and I'm doing a build in the background.

 I think it's worth diving in here and investigating what is occurring in
 that case. I have my suspicions that
 https://bugzilla.mozilla.org/show_bug.cgi?id=1161166 is a big culprit on
 OS X, but have nothing but profiles to back that up. My own experience
 is that the spinner is far more prevalent in the many-tabs-and-build
 case on OS X than on other platforms, which makes me suspect that we're
 just doing something wrong somewhere - with bug 1161166 being my top
 suspect.

  Another such common case would be one CPU hungry tab.

 I think this falls more under the domain of UX. With single-process
 Firefox, the whole browser locks up, and we (usually) show a modal
 dialog asking the user if they want to stop the script. In other cases,
 we just jank and fail to draw frames until the process is ready.

 With a content process, the UI remains responsive, but we get this
 bigass spinner. That's not an amazing trade-off - it's much uglier and
 louder, IMO, than the whole browser locking up. The big spinner was just
 an animation that we tossed in so that it was clear that a frame was not
 ready (and to avoid just painting random video memory), but I should
 emphasize that it was never meant to ship.

 If we ship the current appearance of the spinner to our release
 population... it would mean that my heels have been ground down to nubs,
 because I will fight tooth and nail to prevent that from happening. I
 suspect UX feels the same.

 So for the case where the content process is being blocked by heavy
 content, we might need to find better techniques to communicate to the
 user what's going on and to give them options. I suspect / hope that bug
 1106527 will carry that work.

 Here's what I know:

 1) Folks are reporting that they _never_ see the spinner when they crank
 up dom.ipc.processCount > 1. This is true even when they're doing lots
 of work in the background, like building.

 Here's what I suspect:

 1) I suspect that given the same group of CPU heavy tabs, single-process
 Firefox will currently perform better than e10s with a single content
 process. I suspect we can reach parity here.

 2) I suspect that OS X is where most of the pain is, and I suspect bug
 1161166 is a big part of it.

 Here's what I suggest:

 1) Wait for Telemetry data to come in to get a better sense of who is
 being affected and what conditions they are under. Hopefully, the
 population who have dom.ipc.processCount > 1 are small enough that we
 have useful data for the dom.ipc.processCount = 1 case.

 2) Send me profiles for when you see it.

 3) Be patient as we figure out what is slow and iron it out. Realize
 that we're just starting to look at performance, as we've been focusing
 on stability and making browser features work up until now.

 4) Trust that we're not going to ship The Spinner Experience, because
 shipping it as-is is beyond ill-advised. :D

 -Mike

 On 05/05/2015 10:49 AM, Ehsan Akhgari wrote:
  On 2015-05-05 10:30 AM, Mike Conley wrote:
  The e10s team is currently only focused on getting things to work with a
  single content process at this time. We eventually want to work with
  multiple content processes (as others have pointed out, the exact number
  to work with is not clear), but we're focused on working with a single
  process 

Re: No more binary components in extensions

2015-05-05 Thread Benjamin Smedberg



On 5/4/2015 6:53 PM, Philipp Kewisch wrote:


So to be clear, this is just removed/disabled for Firefox? Other
projects like Thunderbird are not affected?


Followups to dev-extensions please!

That is incorrect. This is currently disabled for all gecko applications.

B2G has asked that binary component support be restored for 
distribution/bundles only, and that is being done in bug 1161212.


As I said on the other list, I will review a patch which makes this 
configurable for Thunderbird, but I don't plan to write that patch myself.


--BDS

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Jonas Sicking
On Tue, May 5, 2015 at 12:42 AM, Nicholas Nethercote
n.netherc...@gmail.com wrote:
 On Mon, May 4, 2015 at 11:53 PM, Leman Bennett (Omega X)
 Redacted.For.Spam@request.contact wrote:

 I heard that there was rumor of a plan to limit process count spawn to
 per-domain. But I've not seen offhand of a bug filed for it or anything else
 that relates to achieving more than one content process instance.

 There are multiple competing factors when it comes to choosing the
 right number of processes.

 - For security, more processes is better; one per tab is probably ideal.

 - For crash protection, ditto.

 - For responsiveness, one process per CPU core is probably best.

 - For memory usage, one process is probably best.

Actually, what you generally want to do is to switch process when the
user navigates from one website to another in a given tab. And then
kill off the old process.

One big benefit that this provides is that it cleans up any memory
leaks. So there are memory advantages to this approach.

Another approach is that we should eventually move to a
process-per-origin model, where we can actually use the sandbox to
ensure that different websites can't steal each other's data, even if
we have a buffer overflow somewhere in gecko.

/ Jonas
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Leman Bennett (Omega X)

On 5/5/2015 12:23 AM, Robert O'Callahan wrote:

On Tue, May 5, 2015 at 10:29 AM, Leman Bennett (Omega X) 
Redacted.For.Spam@request.contact wrote:


Inquiring minds would like to know.

At the moment, e10s tabs are still somewhat slower than non-e10s. Multiple
content processes would go a long way toward more responsive navigation and
fewer stalls on the one content process. That stall spinner is getting a LOT
of hate at the moment.



I don't know, but I've enabled multiple content processes, and I haven't
noticed any problems --- and the spinner does seem to be shown a lot less.

Rob



The issue I've seen with dom.ipc.processCount beyond one process is that 
they're not dynamic. Those instances will stay open for the entire 
session and not unload themselves after a time which can mean double the 
memory use.


I heard that there was rumor of a plan to limit process count spawn to 
per-domain. But I've not seen offhand of a bug filed for it or anything 
else that relates to achieving more than one content process instance.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Nicholas Nethercote
On Mon, May 4, 2015 at 11:53 PM, Leman Bennett (Omega X)
Redacted.For.Spam@request.contact wrote:

 I heard that there was rumor of a plan to limit process count spawn to
 per-domain. But I've not seen offhand of a bug filed for it or anything else
 that relates to achieving more than one content process instance.

There are multiple competing factors when it comes to choosing the
right number of processes.

- For security, more processes is better; one per tab is probably ideal.

- For crash protection, ditto.

- For responsiveness, one process per CPU core is probably best.

- For memory usage, one process is probably best.

I'd be loath to use as many processes as Chrome does, which is
something like one per domain (AIUI). I've heard countless times that
Chrome basically falls over once you get past 60 or 70 tabs, and I've
always assumed that this is due to memory usage, or possibly some kind
of IPC scaling issue. In contrast, plenty of Firefox users have 100+
tabs, and there are some that even have 1000+. I think it's crucial
that we continue working well for such users.

With all that in mind, my hope/guess is that we'll end up one day with
one process per CPU core, because it's the middle ground.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-05 Thread snash
The additional expense of HTTPS arises from the significantly higher cost to 
the service owner of protecting it against attack, to maintain service 
Availability (that third side of the security CIA triangle that gets 
forgotten). 

Encryption should be activated only after BOTH parties have mutually 
authenticated.
Why establish an encrypted transport to an unknown attacker?

This might be done with mutual authentication in TLS (which nobody does), by 
creating a separate connection after identities are authenticated, or by using 
an app with embedded identity.

I'll be at RIPE70. Steve

On Monday, April 13, 2015 at 3:57:58 PM UTC+1, Richard Barnes wrote:
 There's pretty broad agreement that HTTPS is the way forward for the web.
 In recent months, there have been statements from IETF [1], IAB [2], W3C
 [3], and even the US Government [4] calling for universal use of
 encryption, which in the case of the web means HTTPS.
 
 In order to encourage web developers to move from HTTP to HTTPS, I would
 like to propose establishing a deprecation plan for HTTP without security.
 Broadly speaking, this plan would entail limiting new features to secure
 contexts, followed by gradually removing legacy features from insecure
 contexts.  Having an overall program for HTTP deprecation makes a clear
 statement to the web community that the time for plaintext is over -- it
 tells the world that the new web uses HTTPS, so if you want to use new
 things, you need to provide security.  Martin Thomson and I drafted a
 one-page outline of the plan with a few more considerations here:
 
 https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing
 
 Some earlier threads on this list [5] and elsewhere [6] have discussed
 deprecating insecure HTTP for powerful features.  We think it would be a
 simpler and clearer statement to avoid the discussion of which features are
 powerful and focus on moving all features to HTTPS, powerful or not.
 
 The goal of this thread is to determine whether there is support in the
 Mozilla community for a plan of this general form.  Developing a precise
 plan will require coordination with the broader web community (other
 browsers, web sites, etc.), and will probably happen in the W3C.
 
 Thanks,
 --Richard
 
 [1] https://tools.ietf.org/html/rfc7258
 [2]
 https://www.iab.org/2014/11/14/iab-statement-on-internet-confidentiality/
 [3] https://w3ctag.github.io/web-https/
 [4] https://https.cio.gov/
 [5]
 https://groups.google.com/d/topic/mozilla.dev.platform/vavZdN4tX44/discussion
 [6]
 https://groups.google.com/a/chromium.org/d/topic/blink-dev/2LXKVWYkOus/discussion
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mercurial 3.4 / Reporting Mercurial Issues / Birthday Party

2015-05-05 Thread David Rajchenbach-Teller
Two cents: evolve is Good for You (tm).

Cheers,
 David

On 05/05/15 22:12, Gregory Szorc wrote:
 Mercurial 3.4 was released on May 1.
 
 If you are an evolve user, I highly recommend upgrading for performance
 reasons. Everyone else should consider upgrading. But you may want to
 wait for 3.4.1 in case there are any regressions.


-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Mike Hommey
On Tue, May 05, 2015 at 11:41:41AM -0400, Mike Conley wrote:
 With a content process, the UI remains responsive, but we get this
 bigass spinner. That's not an amazing trade-off - it's much uglier and
 louder, IMO, than the whole browser locking up. The big spinner was just
 an animation that we tossed in so that it was clear that a frame was not
 ready (and to avoid just painting random video memory), but I should
 emphasize that it was never meant to ship.

Last time I tried e10s, which was a while ago, tab switching did feel
weird with e10s *because* of that lack of the browser lock-up, because
now, the tab strip shows you've switched tabs, but the content is still
from before switching, until the spinner shows up or the new content
appears.

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using rust in Gecko. rust-url compatibility

2015-05-05 Thread Jonas Sicking
On Thu, Apr 30, 2015 at 3:34 PM, Valentin Gosu valentin.g...@gmail.com wrote:
 As some of you may know, Rust is approaching its 1.0 release in a couple of
 weeks. One of the major goals for Rust is using a rust library in Gecko.
 The specific one I'm working at the moment is adding rust-url as a safer
 alternative to nsStandardURL.

Will this affect our ability to make nsStandardURL thread-safe?

Right now there are various things, mainly related to addons, that are
forcing us to only use nsIURI on the main thread. However this is a
major pain as we move more code off the main thread and as we do more
IPC stuff.

We've made some, small, steps towards providing better addon APIs
which would enable us to make nsIURI parsing and handling possible to
do from any thread.

Would this affect this work moving forward? Or would a Rust
implemented URL-parser/nsIURI be possible to use from other threads
once the other blockers for that are removed?

/ Jonas
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Mercurial 3.4 / Reporting Mercurial Issues / Birthday Party

2015-05-05 Thread Gregory Szorc
Mercurial 3.4 was released on May 1.

If you are an evolve user, I highly recommend upgrading for performance
reasons. Everyone else should consider upgrading. But you may want to wait
for 3.4.1 in case there are any regressions.

More details with a link to the changelog and Mozilla-tailored upgrade
instructions are at [1].

While I'm here, I'd like to call attention to [2] for how to report issues
you are having with Mercurial. The Mercurial maintainers have made it clear
that Mozilla is an important user and that our concerns are important to
them, especially when it comes to performance. They'd really like to see
more feedback and bug reporting from Mozillians. If you see something, file
something!

Finally, there will be a 10th birthday party for Mercurial at Atlassian's
office in SF tomorrow [3]. Come and meet people responsible for maintaining
a tool that you rely on.

[1] http://gregoryszorc.com/blog/2015/05/04/mercurial-3.4-released/
[2]
https://mozilla-version-control-tools.readthedocs.org/en/latest/hgmozilla/issues.html
[3] http://www.meetup.com/Bay-Area-Mercurial-Meetup/events/222139518/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread George Wright

On 05/05/15 16:40, Mike Hommey wrote:

Last time I tried e10s, which was a while ago, tab switching did feel
weird with e10s *because* of that lack of the browser lock-up, because
now, the tab strip shows you've switched tabs, but the content is still
from before switching, until the spinner shows up or the new content
appears.

I changed that behaviour a while ago. Now we wait for the content to be 
ready, or for the spinner to show (if we've waited 300ms and content 
still isn't ready) before we update the tab strip.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to alter the coding style to not require the usage of 'virtual' where 'override' is used

2015-05-05 Thread Jeff Walden
Seeing this a touch late, commenting on things not noted yet.

On 04/27/2015 12:48 PM, Ehsan Akhgari wrote:
 I think we should change it to require the usage of exactly one of these
 keywords per *overridden* function: virtual, override, and final.  Here
 are the advantages:
 
 * It is a more succinct, as |virtual void foo() override;| doesn't convey
 more information than |void foo() override;|.
 * It makes it easier to determine what kind of function you are looking at
 by just looking at its declaration.  |virtual void foo();| means a virtual
 function that is not overridden, |void foo() override;| means an overridden
 virtual function, and |void foo() final;| means an overridden virtual
 function that cannot be further overridden.

All else equal, shorter is better.  But this concision hurts readability, even 
past the non-obvious final/override => virtual implication others have noted.  
(And yes, C++ can/should permit final/override on non-virtuals.  JSString and 
subclasses would be immediate users.)

Requiring removal of virtual from the signature for final/override prevents 
simply examining a declaration's start to determine whether the function is 
virtual.  You must read the entire declaration to know: a problem because 
final/override can blend in.  For longer (especially multiline) declarations 
this matters.  Consider these SpiderMonkey declarations:

 /* Standard internal methods. */
 virtual bool getOwnPropertyDescriptor(JSContext* cx, HandleObject proxy, HandleId id,
                                       MutableHandle<JSPropertyDescriptor> desc) const override;
 virtual bool defineProperty(JSContext* cx, HandleObject proxy, HandleId id,
                             Handle<JSPropertyDescriptor> desc,
                             ObjectOpResult& result) const override;
 virtual bool ownPropertyKeys(JSContext* cx, HandleObject proxy,
                              AutoIdVector& props) const override;
 virtual bool delete_(JSContext* cx, HandleObject proxy, HandleId id,
                      ObjectOpResult& result) const override;

|virtual| is extraordinarily clear in starting each declaration.  |override| 
and |final| alone would be obscured at the end of a long string of text, 
especially when skimming.  (And I disagree with leaning on syntax coloring 
here; that penalizes non-IDE users.)

I think ideal style requires as many of virtual/override/final as apply.  
|virtual| for easy reading.  |override| to be clear when overriding occurs.  
And |final| when that's what you want.  (Re dholbert's subtle point: 
|virtual void foo() final| loses strictness, but -Woverloaded-virtual gives 
it back.)
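Concretely, the style I'm arguing for looks like this (Shape/Polygon/Square are 
illustrative names, not real Gecko code):

```cpp
#include <cassert>

// Write every keyword that applies, so |virtual| leads each declaration
// and |override|/|final| state the override status explicitly.
struct Shape {
  virtual ~Shape() = default;
  virtual int sides() const { return 0; }           // virtual, not an override
};

struct Polygon : Shape {
  virtual int sides() const override { return 5; }  // virtual + override
};

struct Square final : Polygon {
  virtual int sides() const final { return 4; }     // virtual + final
};
```

Skimming the left edge of these declarations tells you immediately which 
functions are virtual; the trailing keyword refines that without replacing it.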

 * It will allow us to remove NS_IMETHODIMP, and use NS_IMETHOD instead.

A better alternative, one that exposes standard C++ and macro-hides only the 
non-standard gunk, would be to use NS_IMETHOD for everything and write |virtual| 
directly in declarations.  This point is no justification for changing the 
style rules.
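A minimal sketch of what that could look like (nsresult, NS_OK, and the classes 
here are stand-in definitions for illustration, not the real XPCOM ones):

```cpp
#include <cassert>

// Stand-ins for the real XPCOM definitions.
using nsresult = unsigned;
constexpr nsresult NS_OK = 0;

// The macro hides only the non-standard return-type/calling-convention gunk;
// |virtual| is written out in standard C++ at each declaration, so a single
// macro serves both declarations and definitions (no NS_IMETHODIMP twin).
#define NS_IMETHOD nsresult

class Widget {
 public:
  virtual ~Widget() = default;
  virtual NS_IMETHOD Init() = 0;
};

class Button final : public Widget {
 public:
  virtual NS_IMETHOD Init() override { return NS_OK; }
};
```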

Jeff


Intent to implement and ship: document.execCommand(cut/copy)

2015-05-05 Thread Ehsan Akhgari
Summary: We currently disallow programmatic copying and cutting from JS for
Web content, which has forced web sites to rely on Flash in order to copy
content to the clipboard.  We are planning to relax this restriction to allow
these commands when execCommand is called in response to a user event.  This
user-event restriction mimics what we do for other APIs, such as Fullscreen.

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1012662

Link to standard: This is unfortunately not specified very precisely.
There is a rough spec here: 
https://dvcs.w3.org/hg/editing/raw-file/tip/editing.html#miscellaneous-commands
and the handling of clipboard events is specified here: 
https://w3c.github.io/clipboard-apis/.  Sadly, the editing spec is not
actively edited.  We will strive for cross browser interoperability, of
course.

Platform coverage: All platforms.

Target release: Firefox 40.

Preference behind which this will be implemented: This won't be hidden
behind a preference, as the code changes required are not big, and can be
easily reverted.

DevTools bug: N/A

Do other browser engines implement this: IE 10 and Chrome 43 both implement
this.  Opera has adopted this from Blink as of version 29.

Security & Privacy Concerns: We have discussed this rather extensively
before: http://bit.ly/1zynBg7, and have decided that restricting these
functions to only work in response to user events is enough to prevent
abuse here.  Note that we are not going to enable the paste command which
would give applications access to the contents of the clipboard.

Web designer / developer use-cases: This feature has been rather popular
among web sites.  Sites such as Github currently use Flash in order to
allow people to copy text to the clipboard by clicking a button in their UI.
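As a rough sketch of the intended usage once this ships (the element ids and 
the copySelectionOf helper are illustrative, not any site's real code):

```javascript
// Select a node's text and copy it, from inside a user-event handler.
// document.execCommand("copy") returns false when invoked outside one.
function copySelectionOf(node) {
  var range = document.createRange();
  range.selectNodeContents(node);
  var sel = window.getSelection();
  sel.removeAllRanges();
  sel.addRange(range);
  var ok = document.execCommand("copy");
  sel.removeAllRanges();
  return ok;
}

// Wire it to a "copy to clipboard" button, GitHub-style.
if (typeof document !== "undefined") {
  document.getElementById("copy-btn").addEventListener("click", function () {
    copySelectionOf(document.getElementById("snippet"));
  });
}
```

No Flash shim is involved: the click is the user event that authorizes the copy.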

Cheers,
-- 
Ehsan


Re: Is there an e10s plan for multiple content processes?

2015-05-05 Thread Ted Mielczarek
On Tue, May 5, 2015, at 02:53 AM, Leman Bennett (Omega X) wrote:
 On 5/5/2015 12:23 AM, Robert O'Callahan wrote:
  On Tue, May 5, 2015 at 10:29 AM, Leman Bennett (Omega X) 
  Redacted.For.Spam@request.contact wrote:
 
  Inquiring minds would like to know.
 
  At the moment, e10s tabs is still somewhat slower than non-e10s. Multiple
  content processes would go a long way for more responsive navigation and
  less stalls on the one content process. That stall spinner is getting a LOT
  of hate at the moment.
 
 
  I don't know, but I've enabled multiple content processes, and I haven't
  noticed any problems --- and the spinner does seem to be shown a lot less.
 
  Rob
 
 
 The issue I've seen with dom.ipc.processCount beyond one process is that 
 they're not dynamic. Those instances will stay open for the entire 
 session and not unload themselves after a time which can mean double the 
 memory use.
 
 I heard that there was rumor of a plan to limit process count spawn to 
 per-domain. But I've not seen offhand of a bug filed for it or anything 
 else that relates to achieving more than one content process instance.

There's a bug filed[1], but every time I've asked about it I've been
told it's not currently on the roadmap. I, too, find that single-process
e10s is worse for responsiveness, which is unfortunate. Last time I
tried to use dom.ipc.processCount > 1 I found that window.open was
broken (and also target=_blank on links) which made actual browsing
difficult, but I haven't tested it recently.

I also filed a couple of bugs[2][3] about being smarter about multiple
content processes which would make things a bit nicer.

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=641683
2. https://bugzilla.mozilla.org/show_bug.cgi?id=1066789
3. https://bugzilla.mozilla.org/show_bug.cgi?id=1066792