Re: Leaving hg.mozilla bug.mozilla to github for the leaving complicated infrastructure of mozilla.

2015-12-04 Thread Yonggang Luo
On Fri, Dec 4, 2015 at 3:55 PM, R Kent James  wrote:

> Yonggang Luo, I've been trying to figure out how to understand what you
> are doing for over a year now without success. You've been doing some
> amazing work, and I get the vague impression that you are working on a
> Chinese fork of Thunderbird. Now you are proposing some specific changes to
> Thunderbird infrastructure, but you are really not a Thunderbird
> contributor. Could you give some sort of public statement of how your
> technical work fits into the broader picture of products?
>
Indeed, I've been trying to contribute back to the Thunderbird
community, but that's too hard for me: Thunderbird is always chasing
mozilla-central, while we (our company) need a stable version of
mozilla-central to release our product, which makes it hard for me to
contribute things back. We also use git rather than hg for version
control, which further complicates our contribution process.



> I've been trying to get people working on open-source email clients to
> talk to teach other and figure out ways to work together better. In my
> dreams, we all work together to make an amazing product rather than have
> small teams each duplicating each other's work. We'd love to be able to
> work with you, even if you are not specifically contributing to
> Thunderbird. But I have no context now to understand how.
>
While trying to ease the development burden of Thunderbird, I did two
things:
1. Built XULRunner independently, so that XULRunner is a stable version.
2. Built our customized Thunderbird without the mozilla source tree,
depending only on XULRunner.

When I try to submit a patch to the Thunderbird community, I always need
to create a bug on bugzilla.mozilla.org, which is very hard to use and
slow. The merging process is also very slow. That's why I hesitate to
contribute things back.
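For context, a XULRunner application of the kind described above is just a directory with an application.ini manifest that a prebuilt XULRunner can launch directly, with no mozilla-central checkout present. A minimal sketch (all vendor/name/ID values here are made up for illustration):

```ini
; application.ini -- minimal XULRunner app manifest (illustrative values)
[App]
Vendor=ExampleCorp
Name=MyMailApp
Version=1.0
BuildID=20151204
ID=mymailapp@example.com

[Gecko]
; the range of XULRunner versions this app is known to work with
MinVersion=38.0
MaxVersion=38.*
```

With a prebuilt XULRunner on the PATH, such an app is launched as `xulrunner application.ini`, which is the "stable runtime + independent app" split being described.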

>
> R. Kent James
> Chair, Thunderbird Council
>
>
> On 12/3/2015 8:52 PM, 罗勇刚(Yonggang Luo)  wrote:
>
>> I think the relationship between Thunderbird/SeaMonkey and
>> gecko-dev should be like the relationship between the Atom editor and
>> Electron, so there is no need for Thunderbird/SeaMonkey to chase
>> mozilla-central.
>> And considering how hard it is to contribute code to Thunderbird
>> through the mozilla infrastructure,
>> I suggest leaving the mozilla infrastructure completely.
>> To reach this target, we need to do the following things:
>> 1. Make all the repos git based and hosted on GitHub.
>> I've already written scripts to mirror all branches/tags of comm &
>> gecko-dev.
>> They are now placed at
>> https://github.com/mail-apps/comm
>> https://github.com/mail-apps/gecko-dev
>> I have the scripts to do that.
>> After the migration, we only need to maintain the gecko-dev mirror.
>> 2. Move Thunderbird/SeaMonkey bugs to GitHub.
>> I haven't written the scripts for this yet.
>> 3. Clean up the remaining Makefile.in files.
>> 4. Remove all the #if/#endif preprocessing in xul/js/mozilla files.
>> This is not a mandatory step; we could instead hack the mozbuild
>> system to generate install_manifests for those files.
>> 5. Move the C/C++ build system off moz.build.
>> This is very important, because the current moz.build files are too
>> tightly coupled to the underlying mozilla build infrastructure, and
>> if we want to leave the mozilla source tree, this is an unavoidable
>> step.
>> For building C/C++, we could choose gyp/cmake/qbs or another
>> self-contained ecosystem that doesn't depend on the complicated
>> mozbuild ecosystem.
>> For example, LLVM moved from autoconf to CMake.
>> Since Thunderbird/SeaMonkey is much smaller than gecko-dev, this
>> won't be a big deal.
>>
>> 6. Build all the Thunderbird C/C++ components as a single
>> thunderbird.dll or similar. I've already implemented that, and it
>> doesn't require modifying much of the source code.
>> 7. Get all the locales installed and built with pure Python.
>> This is necessary, and because of it we may also need to convert all
>> the locale repos from hg to git.
>> Or we could use a much simpler method: place all the locale files
>> into the comm source tree.
>>
>> 8. Build a XULRunner distribution for Thunderbird/Gecko like
>> https://github.com/atom/electron/releases, so we only need to
>> download those binaries.
>>
>> 9. Package the mozbuild build system into XULRunner, so that we
>> won't need the whole mozilla source tree.
>> 10. Have XULRunner track only stable versions of Firefox; unless
>> there is a strong need for new functionality, base XULRunner only on
>> ESR versions of Firefox.
>>
>>
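One practical wrinkle in step 1 of the quoted plan is that hg branch and tag names (which may contain spaces and other characters) have to be mapped onto valid git ref names when mirroring. A minimal sketch of that mapping; the function name is ours and the sanitization rules below are a simplification of git's actual ref-name restrictions, not taken from the scripts mentioned above:

```python
def hg_name_to_git_ref(name: str, kind: str = "heads") -> str:
    """Map an hg branch/tag name onto a valid git ref name.

    Simplified rules: git forbids spaces, "~", "^", ":", "?", "*", "[",
    the sequence "..", "@{", and a trailing "/" or "." in ref names.
    """
    ref = name.strip()
    for bad in [" ", "~", "^", ":", "?", "*", "["]:
        ref = ref.replace(bad, "_")      # substitute forbidden characters
    while ".." in ref:
        ref = ref.replace("..", ".")     # collapse forbidden ".." runs
    ref = ref.replace("@{", "@_")
    ref = ref.rstrip("/.")               # no trailing "/" or "."
    return "refs/%s/%s" % (kind, ref)
```

A mirroring script would apply this to every hg branch (to `refs/heads/`) and tag (to `refs/tags/`) before pushing to the GitHub mirror.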
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>



-- 
Best regards,
罗勇刚 (Yonggang Luo)

Re: Leaving hg.mozilla bug.mozilla to github for the leaving complicated infrastructure of mozilla.

2015-12-04 Thread R Kent James
Yonggang Luo, I've been trying to figure out how to understand what you 
are doing for over a year now without success. You've been doing some 
amazing work, and I get the vague impression that you are working on a 
Chinese fork of Thunderbird. Now you are proposing some specific changes 
to Thunderbird infrastructure, but you are really not a Thunderbird 
contributor. Could you give some sort of public statement of how your 
technical work fits into the broader picture of products?


I've been trying to get people working on open-source email clients to 
talk to teach other and figure out ways to work together better. In my 
dreams, we all work together to make an amazing product rather than have 
small teams each duplicating each other's work. We'd love to be able to 
work with you, even if you are not specifically contributing to 
Thunderbird. But I have no context now to understand how.


R. Kent James
Chair, Thunderbird Council

On 12/3/2015 8:52 PM, 罗勇刚(Yonggang Luo)  wrote:

I think the relationship between Thunderbird/SeaMonkey and
gecko-dev should be like the relationship between the Atom editor and Electron,
so there is no need for Thunderbird/SeaMonkey to chase mozilla-central.
And considering how hard it is to contribute code to Thunderbird through
the mozilla infrastructure,
I suggest leaving the mozilla infrastructure completely.
To reach this target, we need to do the following things:
1. Make all the repos git based and hosted on GitHub.
I've already written scripts to mirror all branches/tags of comm &
gecko-dev.
They are now placed at
https://github.com/mail-apps/comm
https://github.com/mail-apps/gecko-dev
I have the scripts to do that.
After the migration, we only need to maintain the gecko-dev mirror.
2. Move Thunderbird/SeaMonkey bugs to GitHub.
I haven't written the scripts for this yet.
3. Clean up the remaining Makefile.in files.
4. Remove all the #if/#endif preprocessing in xul/js/mozilla files.
This is not a mandatory step; we could instead hack the mozbuild system
to generate install_manifests for those files.
5. Move the C/C++ build system off moz.build.
This is very important, because the current moz.build files are too
tightly coupled to the underlying mozilla build infrastructure, and if
we want to leave the mozilla source tree, this is an unavoidable step.
For building C/C++, we could choose gyp/cmake/qbs or another
self-contained ecosystem that doesn't depend on the complicated mozbuild
ecosystem.
For example, LLVM moved from autoconf to CMake.
Since Thunderbird/SeaMonkey is much smaller than gecko-dev, this won't
be a big deal.

6. Build all the Thunderbird C/C++ components as a single
thunderbird.dll or similar. I've already implemented that, and it
doesn't require modifying much of the source code.
7. Get all the locales installed and built with pure Python.
This is necessary, and because of it we may also need to convert all the
locale repos from hg to git.
Or we could use a much simpler method: place all the locale files into
the comm source tree.

8. Build a XULRunner distribution for Thunderbird/Gecko like
https://github.com/atom/electron/releases, so we only need to download
those binaries.

9. Package the mozbuild build system into XULRunner, so that we won't
need the whole mozilla source tree.
10. Have XULRunner track only stable versions of Firefox; unless there
is a strong need for new functionality, base XULRunner only on ESR
versions of Firefox.





Re: WebUSB

2015-12-04 Thread Robert O'Callahan
On Thu, Dec 3, 2015 at 11:48 PM, Jonas Sicking  wrote:

> On Wed, Dec 2, 2015 at 2:13 PM, Robert O'Callahan 
> wrote:
> > 1) What I suggested: Whitelist vendor origins for access to their devices
> > and have vendor-hosted pages ("Web drivers"?) expose "safe" API to
> > third-party applications.
> > 2) Design a permissions API that one way or another lets users authorize
> > access to USB devices by third-party applications.
> > 3) Wrap USB devices in Web-exposed believed-to-be-safe standardized APIs
> > built into browsers.
>
> There's also
>
> 4) Design a new USB-protocol which enables USB devices to indicate
> that they are "web safe" and which lets the USB device know which
> website is talking to it. Then let the user authorize a website to use
> a given device.
>
> This is similar to what we did with TCP (through WebSocket), UDP
> (WebRTC) and HTTP (through CORS).


Yes, that's another possibility.

However, for USB the "Web driver" approach seems better than that, to me.
It makes it easy to update the vendor library to fix security bugs and
update the API. If the Web API is baked into the device firmware that's a
lot harder.

Rob
-- 
lbir ye,ea yer.tnietoehr  rdn rdsme,anea lurpr  edna e hnysnenh hhe uresyf
toD
selthor  stor  edna  siewaoeodm  or v sstvr  esBa  kbvted,t
rdsme,aoreseoouoto
o l euetiuruewFa  kbn e hnystoivateweh uresyf tulsa rehr  rdm  or rnea
lurpr
.a war hsrer holsa rodvted,t  nenh hneireseoouot.tniesiewaoeivatewt sstvr
esn


Re: Leaving hg.mozilla bug.mozilla to github for the leaving complicated infrastructure of mozilla.

2015-12-04 Thread Wayne

On 12/4/2015 3:06 AM, 罗勇刚(Yonggang Luo)  wrote:

When I try to submit a patch to the Thunderbird community, I always need
to create a bug on bugzilla.mozilla.org, which is very hard to use and
slow. The merging process is also very slow. That's why I hesitate to
contribute things back.


Do you have a guess as to _why_ they are slow?
How serious is the slowness?
How much faster would it need to be to be acceptable?




Re: WebUSB

2015-12-04 Thread Martin Thomson
On Fri, Dec 4, 2015 at 8:04 PM, Robert O'Callahan  wrote:
> However, for USB the "Web driver" approach seems better than that, to me.
> It makes it easy to update the vendor library to fix security bugs and
> update the API. If the Web API is baked into the device firmware that's a
> lot harder.

The only concession needed in option 4 would be some way of indicating
what was safe to use, which might be conditioned on origin.
Management of that might even be mediated by the browser rather than
the firmware.  In a sense, this is a more general form of option 1 if
the vendor's origin is whitelisted.
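Both option 1 and this browser-mediated variant of option 4 reduce to an origin check before the browser exposes a device. A toy sketch of that check; the data shapes and field names here are invented for illustration and are not from any WebUSB draft:

```python
# Toy model: a device advertises that it is "web safe" and which origins
# may talk to it (option 4), or the vendor's origin is whitelisted
# (option 1). Either way the browser mediates the check before exposing
# the device to a page.

def device_allowed_for_origin(device: dict, origin: str) -> bool:
    """device = {"vendor_origin": str, "web_safe": bool,
                 "allowed_origins": [str]}  (invented shape)."""
    if not device.get("web_safe", False):
        return False                 # device is never exposed to the web
    if origin == device.get("vendor_origin"):
        return True                  # option 1: vendor origin whitelisted
    return origin in device.get("allowed_origins", [])

dev = {"vendor_origin": "https://vendor.example",
       "web_safe": True,
       "allowed_origins": ["https://app.example"]}
```

The design question in the thread is essentially where this table lives: in device firmware (hard to update) or in a vendor-hosted "Web driver" / browser-side list (easy to update).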


Proposal to a) rewrite Gecko's encoding converters and b) to do it in Rust

2015-12-04 Thread Henri Sivonen
Hi,

I have written a proposal to a) rewrite Gecko's encoding converters
and b) to do it in Rust:
https://docs.google.com/document/d/13GCbdvKi83a77ZcKOxaEteXp1SOGZ_9Fmztb9iX22v0/edit

I'd appreciate comments--especially from the owners of the uconv
module and from people who have worked on encoding-related Rust code
and on Rust code that needs encoding converters and is on track to be
included in Gecko.

I've put the proposal on Google Docs in order to benefit from the GDoc
commenting feature that allows comments from multiple reviewers to be
attached to particular bits of text.

The document is rather long. The summary is:

I think we should rewrite Gecko's character encoding converters
such that conversion to and from both in-memory UTF-16 and UTF-8 is
supported, because

1) Currently, we can convert to and from UTF-16, which steers us to
write parsers that operate on UTF-16. This is bad: ideally parsers
would operate on UTF-8 to allow parsers to traverse a more compact
memory representation (even HTML has plenty of ASCII markup; other
formats we parse are even more ASCII-dominated). To make sure we don't
write more UTF-16-based parsers in the future, we should have
converters that can convert to and from UTF-8, too, but without paying
the footprint cost of two independent sets of converters.

2) The footprint of Gecko is still a relevant concern in the Fennec
case. (See e.g. the complications arising from Gecko developers being
blocked from including ICU [not its converters] into Gecko on
Android.) Our current converters are bloated due to optimizing the
encode operation for legacy encoding for speed at the expense of
lookup table size and we could make Gecko a bit smaller (i.e. make
some room for good stuff on Android) by being smarter about encoding
converter data tables. (Optimizing the relatively rare and performance
non-sensitive encode operation for legacy encodings for size instead
of speed.)
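The size-for-speed trade in point 2 can be illustrated with a toy single-byte codec that keeps only the decode table and encodes by searching that table in reverse, instead of carrying a second encode-optimized table. The table and mapping below are made up for illustration, not a real legacy encoding:

```python
# One decode table serves both directions (size over speed for encode).
DECODE_TABLE = ["\u0410", "\u0411", "\u0412"]  # byte 0x80+i -> code point

def decode_byte(b: int) -> str:
    """Fast path: decoding is a direct table lookup."""
    if b < 0x80:
        return chr(b)                  # ASCII passes through
    return DECODE_TABLE[b - 0x80]

def encode_char(c: str) -> int:
    """Slow path: encoding searches the decode table in reverse,
    avoiding a second, encode-optimized lookup table entirely."""
    if ord(c) < 0x80:
        return ord(c)
    for i in range(len(DECODE_TABLE) - 1, -1, -1):
        if DECODE_TABLE[i] == c:
            return 0x80 + i
    raise ValueError("unmappable code point: %r" % c)
```

Decode stays O(1); encode becomes a linear search, which is acceptable precisely because encoding to legacy encodings is rare and not performance sensitive.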

3) We should ensure the correctness of our converters and then stop
tweaking them.

4) ...But our current converters are so unmaintainable that making
these changes would be the easiest to accomplish via a rewrite.

Furthermore, I think the rewrite should be in Rust, because

a) Now that we have Rust and are starting to include Rust code in
Gecko, it doesn't make sense to write new C++ code when the component
is isolated enough to be suited for being written in Rust.

b) Importing a separate UTF-8-oriented conversion library written in
Rust for use by future Gecko components written in Rust (which would
ideally use UTF-8 internally, since Rust strings are UTF-8) would be a
footprint problem compared to a single conversion library designed for
both UTF-16 and UTF-8 with the same data tables. (For example, the URL
parser is being rewritten in Rust and the URL parser depends on the
rust-encoding library which doesn’t share data with our
UTF-16-oriented C++-based converters.)

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/


Re: Intent to ship: referrerpolicy attribute

2015-12-04 Thread Franziskus Kiefer
On Fri, Dec 4, 2015 at 8:57 AM, Jonas Sicking  wrote:

> I think our implementation still has the problem that specifying
> referrerpolicy="none-when-downgrade" on an element has no effect. This
> is because the ReferrerPolicy enum uses the same value for RP_Unset,
> RP_Default and RP_No_Referrer_When_Downgrade.
>

well, that might be the case, but "no-referrer-when-downgrade" is also not
a valid value for referrerpolicy attributes yet. This is a recent spec
change that should be implemented in bug 1178337.
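The aliasing problem Jonas describes is easy to reproduce in miniature. In this sketch the enum names are modeled loosely on the ones quoted above and the values are invented; because three names share one value, an explicitly set attribute is indistinguishable from no attribute at all:

```python
from enum import IntEnum

class ReferrerPolicy(IntEnum):  # illustrative, not Gecko's actual definition
    RP_Unset = 0
    RP_Default = 0                       # alias of RP_Unset
    RP_No_Referrer_When_Downgrade = 0    # alias again: this is the bug
    RP_Origin = 1

def attribute_was_set(policy: ReferrerPolicy) -> bool:
    """Naive check: anything other than RP_Unset counts as 'set'."""
    return policy != ReferrerPolicy.RP_Unset

# An element with an explicit no-referrer-when-downgrade policy
# collapses into "unset", so the attribute has no observable effect.
```

Giving each state a distinct value (and mapping the default at the point of use) is what makes the explicit attribute value observable.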


Re: Proposal to a) rewrite Gecko's encoding converters and b) to do it in Rust

2015-12-04 Thread Henri Sivonen
On Fri, Dec 4, 2015 at 5:54 PM, Henri Sivonen  wrote:
> On Fri, Dec 4, 2015 at 3:18 PM, Ted Mielczarek  wrote:
>> 2) Instead of a clean-room implementation, would it be possible to fix
>> the problems you see with rust-encoding so that it's suitable for our
>> use? Especially if Servo is already using it, it would be a shame to
>> wind up with two separate implementations.
>
> I'm shy to suggest to the rust-encoding maintainer that I should be
> able to come in and trample all over the rust-encoding internals
> because of Gecko and what I see as priorities from the Gecko-informed
> point of view.

I should have mentioned that Ms2ger filed
https://github.com/lifthrasiir/rust-encoding/issues/92

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/


Re: FYI: e10s will be enabled in beta 44/45

2015-12-04 Thread Ehsan Akhgari

On 2015-12-04 9:02 AM, jmath...@mozilla.com wrote:

Hey all,

FYI e10s will be enabled in beta/44 in a limited way for a short period of time 
through an experiment. [1] The purpose of this experiment is to collect e10s 
related performance measurements specific to beta.

The current plan is to then enable e10s in beta/45 for a respectable chunk of 
our beta population. This population will *exclude* users who have recently 
loaded accessibility and users who have a large number of addons installed.

If you know of serious e10s related bugs in your components that you feel 
should be fixed for the beta/45 rollout please get those bugs tracked for 45 
and owned. If you have any issues or questions you can tag the bug with 
tracking-e10s:? and the e10s team will come through to help triage.


Does this mean that not all tracking-e10s+ bugs will need to be fixed 
before we ship e10s?  What's the indicator that a bug "blocks" shipping 
e10s?




Re: Proposal to a) rewrite Gecko's encoding converters and b) to do it in Rust

2015-12-04 Thread Ms2ger
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 12/04/2015 04:54 PM, Henri Sivonen wrote:
> On Fri, Dec 4, 2015 at 3:18 PM, Ted Mielczarek 
> wrote:
>> 1) What does Servo do, just use rust-encoding directly?
> 
> That is my understanding, but it would be good if a Servo
> developer could confirm.

This is correct.

>> 2) Instead of a clean-room implementation, would it be possible
>> to fix the problems you see with rust-encoding so that it's
>> suitable for our use? Especially if Servo is already using it, it
>> would be a shame to wind up with two separate implementations.
> 
> I'm shy to suggest to the rust-encoding maintainer that I should
> be able to come in and trample all over the rust-encoding
> internals because of Gecko and what I see as priorities from the
> Gecko-informed point of view. It doesn't make sense to implement a
> Gecko-ish API where the caller allocates the output buffer (like
> I'm proposing) on top of rust-encoding. However, it would make
> sense to implement (at least the common call patterns of) the
> rust-encoding API on top of the kind of API that I'm proposing. But
> in order to do so, the internals of rust-encoding would end up
> replaced with the kind of library that I'm proposing.
> 
> As for whether any parts of the rust-encoding internals would be 
> reusable in the kind of library that I'm proposing, as noted in
> the proposal document, it generally makes no sense to adapt one 
> implementation strategy for the single-byte encodings to another.
> If you want a different implementation strategy for the
> single-byte encodings, it is simpler to just rewrite from scratch to
> the strategy that you want.
> 
> As for the multi-byte converters, the concern I have is that 
> rust-encoding implements them using a macro that generates a state 
> machine and this makes the code look different from the algorithms 
> from the spec. It might be really dumb of me to suggest not using
> that macro, but I think there is value to having the code look like
> the algorithms in the spec so that it's easy to compare the two. So
> in the case of multi-byte converters it's mainly a question of
> whether we prefer the (possibly really cool) macro or code that is
> easy to compare with the spec. I currently prefer code looking like
> the spec, but maybe I could be convinced otherwise. (Either way,
> I'd get rid of the encode-optimized lookup tables and search the
> decode tables in reverse instead.)
> 
> It would probably be worthwhile to copy and paste from the 
> rust-encoding UTF-8 and UTF-16 converters.

I agree that it is useful to have code looking like the spec in the
general case. However, if we were to get to the point where that was
the only argument against unifying your proposed library and
rust-encoding, I think we should probably go for the unification anyway.

HTH
Ms2ger

-BEGIN PGP SIGNATURE-

iQEcBAEBAgAGBQJWYblSAAoJEOXgvIL+s8n2uwQH/ipxhkqZpqZIEZIZcAezbKfw
1on3mC/0cnwJywu9yqqlTSXAoQJxONbdWeLJnRU9RvEgreT+EOp+0ktRgUubg34h
qAew1zdRhS7ldIZTWyePX4EOpUsvtIqXXpyJcw3Tl76bTx+skp3mov+lhZxTLS/3
ZsDayhHuYwhSB/h2KfYP09ee5i3AyKPjWkPyWIMw9jRXxJD+bWVcj++V1s2/3V9R
A4MnAJOB8Cqyhp+aMi1+mbx2QTYEqXqLak9ifKV0hHfF80+qI3aGkFQhr/fbhdgl
9rSBz7gI4lVzXtt8wSNaBGnRSf5mGvGmz5wQC7VyGtj2NMYKcmbCErXavhiSW6g=
=ixRO
-END PGP SIGNATURE-


Re: Proposal to a) rewrite Gecko's encoding converters and b) to do it in Rust

2015-12-04 Thread Henri Sivonen
On Fri, Dec 4, 2015 at 3:18 PM, Ted Mielczarek  wrote:
> 1) What does Servo do, just use rust-encoding directly?

That is my understanding, but it would be good if a Servo developer
could confirm.

> 2) Instead of a clean-room implementation, would it be possible to fix
> the problems you see with rust-encoding so that it's suitable for our
> use? Especially if Servo is already using it, it would be a shame to
> wind up with two separate implementations.

I'm shy to suggest to the rust-encoding maintainer that I should be
able to come in and trample all over the rust-encoding internals
because of Gecko and what I see as priorities from the Gecko-informed
point of view. It doesn't make sense to implement a Gecko-ish API
where the caller allocates the output buffer (like I'm proposing) on
top of rust-encoding. However, it would make sense to implement (at
least the common call patterns of) the rust-encoding API on top of the
kind of API that I'm proposing. But in order to do so, the internals
of rust-encoding would end up replaced with the kind of library that
I'm proposing.
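The API-shape difference is the crux here. A toy version of the caller-allocates contract being described, written in Python for illustration only (the function name and return shape are ours, not from the proposal document; the real design would be a Rust/C API over raw buffers):

```python
def convert_to_utf8(src: str, dst: bytearray) -> tuple:
    """Encode as many characters of `src` into the caller-supplied
    buffer `dst` as will fit. Returns (chars_consumed, bytes_written),
    so the caller can resume from src[chars_consumed:] with more space.
    """
    written = 0
    consumed = 0
    for ch in src:
        b = ch.encode("utf-8")
        if written + len(b) > len(dst):
            break                        # out of room: stop at a char boundary
        dst[written:written + len(b)] = b
        written += len(b)
        consumed += 1
    return consumed, written
```

The caller owns allocation and resumption, which is what makes wrapping a rust-encoding-style API (library-allocated output) on top of this shape feasible, but not the reverse.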

As for whether any parts of the rust-encoding internals would be
reusable in the kind of library that I'm proposing, as noted in the
proposal document, it generally makes no sense to adapt one
implementation strategy for the single-byte encodings to another. If
you want a different implementation strategy for the single-byte
encodings, it is simpler to just rewrite from scratch to the strategy
that you want.

As for the multi-byte converters, the concern I have is that
rust-encoding implements them using a macro that generates a state
machine and this makes the code look different from the algorithms
from the spec. It might be really dumb of me to suggest not using that
macro, but I think there is value to having the code look like the
algorithms in the spec so that it's easy to compare the two. So in the
case of multi-byte converters it's mainly a question of whether we
prefer the (possibly really cool) macro or code that is easy to
compare with the spec. I currently prefer code looking like the spec,
but maybe I could be convinced otherwise. (Either way, I'd get rid of
the encode-optimized lookup tables and search the decode tables in
reverse instead.)

It would probably be worthwhile to copy and paste from the
rust-encoding UTF-8 and UTF-16 converters.

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/


Re: Proposal to a) rewrite Gecko's encoding converters and b) to do it in Rust

2015-12-04 Thread Ted Mielczarek
On Fri, Dec 4, 2015, at 06:53 AM, Henri Sivonen wrote:
> Hi,
> 
> I have written a proposal to a) rewrite Gecko's encoding converters
> and b) to do it in Rust:
> https://docs.google.com/document/d/13GCbdvKi83a77ZcKOxaEteXp1SOGZ_9Fmztb9iX22v0/edit
> 
> I'd appreciate comments--especially from the owners of the uconv
> module and from people who have worked on encoding-related Rust code
> and on Rust code that needs encoding converters and is on track to be
> included in Gecko.

I don't really know anything about our encoding story, so I'll leave
that to others, but I'm generally in favor of writing new code in Rust
and replacing bits of Gecko with new Rust implementations. I don't know
that we've worked out all the kinks in including Rust code in Gecko
yet[1], but we're getting pretty close.

I have two questions:
1) What does Servo do, just use rust-encoding directly?
2) Instead of a clean-room implementation, would it be possible to fix
the problems you see with rust-encoding so that it's suitable for our
use? Especially if Servo is already using it, it would be a shame to
wind up with two separate implementations.

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=oxidation


FYI: e10s will be enabled in beta 44/45

2015-12-04 Thread jmathies
Hey all,

FYI e10s will be enabled in beta/44 in a limited way for a short period of time 
through an experiment. [1] The purpose of this experiment is to collect e10s 
related performance measurements specific to beta.

The current plan is to then enable e10s in beta/45 for a respectable chunk of 
our beta population. This population will *exclude* users who have recently 
loaded accessibility and users who have a large number of addons installed.

If you know of serious e10s related bugs in your components that you feel 
should be fixed for the beta/45 rollout please get those bugs tracked for 45 
and owned. If you have any issues or questions you can tag the bug with 
tracking-e10s:? and the e10s team will come through to help triage.

Thanks!

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1229104
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1218484


Re: FYI: e10s will be enabled in beta 44/45

2015-12-04 Thread Armen Zambrano G.
LastPass brings the browser to a crawl, making it almost impossible to
use. If we have users running LastPass in the beta population with e10s
enabled, we're going to have a lot of upset people.
https://bugzilla.mozilla.org/show_bug.cgi?id=1008768


On 15-12-04 10:44 AM, Ehsan Akhgari wrote:
> On 2015-12-04 9:02 AM, jmath...@mozilla.com wrote:
>> Hey all,
>>
>> FYI e10s will be enabled in beta/44 in a limited way for a short
>> period of time through an experiment. [1] The purpose of this experiment
>> is to collect e10s related performance measurements specific to beta.
>>
>> The current plan is to then enable e10s in beta/45 for a respectable
>> chunk of our beta population. This population will *exclude* users who
>> have recently loaded accessibility and users who have a large number
>> of addons installed.
>>
>> If you know of serious e10s related bugs in your components that you
>> feel should be fixed for the beta/45 rollout please get those bugs
>> tracked for 45 and owned. If you have any issues or questions you can
>> tag the bug with tracking-e10s:? and the e10s team will come through
>> to help triage.
> 
> Does this mean that not all tracking-e10s+ bugs will need to be fixed
> before we ship e10s?  What's the indicator that a bug "blocks" shipping
> e10s?
> 


-- 
Zambrano Gasparnian, Armen
Automation & Tools Engineer
http://armenzg.blogspot.ca


Re: Leaving hg.mozilla bug.mozilla to github for the leaving complicated infrastructure of mozilla.

2015-12-04 Thread luoyonggang
<<< text/html; charset=utf-8: Unrecognized >>>


Re: Leaving hg.mozilla bug.mozilla to github for the leaving complicated infrastructure of mozilla.

2015-12-04 Thread Yonggang Luo
On Saturday, December 5, 2015 at 12:33:56 AM UTC+8, Nathan Froyd wrote:
> On Fri, Dec 4, 2015 at 8:52 AM, luoyonggang  wrote:
> 
> > Do you have a guess as to _why_ they are slow?
> > How serious is the slowness?
> > How much faster would it need to be to be acceptable?
> >
> > Under win32, the link time of xul.dll is 5 minutes on an SSD machine;
> > that's not acceptable.
> >
> 
> How much RAM do you have?  And is this a debug build or an optimized build?
> 
> -Nathan
16GB of RAM with an i7; the debug build is a bit faster, the release build is much slower.



Re: Leaving hg.mozilla bug.mozilla to github for the leaving complicated infrastructure of mozilla.

2015-12-04 Thread Nathan Froyd
On Fri, Dec 4, 2015 at 8:52 AM, luoyonggang  wrote:

> Do you have a guess as to _why_ they are slow?
> How serious is the slowness?
> How much faster would it need to be to be acceptable?
>
> Under win32, the link time of xul.dll is 5 minutes on an SSD machine;
> that's not acceptable.
>

How much RAM do you have?  And is this a debug build or an optimized build?

-Nathan


Re: Intent to implement and ship: FIDO U2F API

2015-12-04 Thread smaug

On 12/04/2015 06:56 PM, smaug wrote:

Looks like the spec could be made implementable by fixing
https://fidoalliance.org/specs/fido-u2f-v1.0-nfc-bt-amendment-20150514/fido-u2f-javascript-api.html#high-level-javascript-api

"provide a namespace object u2f of the following interface" doesn't mean
anything as written. Either there is supposed to be an instance of the u2f
interface somewhere (on the Window object?), though it feels odd to expose
an interface called u2f and also have u2f as a property of Window.
Or perhaps the idea is that the interface has only static methods?
In that case register() and sign() should be marked as static, there
wouldn't be an instance of u2f, and one would just call those static
methods on the u2f interface.
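The static-methods reading would correspond to IDL roughly along these lines; this is an illustrative sketch only, with argument lists abbreviated and not copied from the FIDO spec:

```webidl
// Illustrative sketch, not the actual FIDO IDL.
// With static operations there is no u2f instance; callers invoke the
// operations on the interface object itself, e.g. u2f.register(...).
interface u2f {
  static void register(DOMString appId,
                       sequence<RegisterRequest> registerRequests,
                       sequence<RegisteredKey> registeredKeys,
                       U2FCallback callback);
  static void sign(DOMString appId,
                   DOMString challenge,
                   sequence<RegisteredKey> registeredKeys,
                   U2FCallback callback);
};
```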



But whether that is compatible with Chrome - no idea.





(Nit, the convention is that interfaces start with a capital letter. For some 
odd reason 'u2f' doesn't follow that.)



-Olli



On 12/02/2015 11:20 PM, smaug wrote:

On 12/02/2015 03:23 AM, Richard Barnes wrote:

The FIDO Alliance has been developing standards for hardware-based
authentication of users by websites [1].  Their work is getting significant
traction, so the Mozilla Foundation has decided to join the FIDO Alliance.
Work has begun in the W3C to create open standards using FIDO as a starting
point. We are proposing to implement the FIDO U2F API in Firefox in its
current form and then track the evolving W3C standard.

Background: The FIDO Alliance has been developing a standard for
hardware-based user authentication known as “Universal Two-Factor” or U2F
[2].  This standard allows a website to verify that a user is in possession
of a specific device by having the device sign a challenge with a private
key that is held on the hardware device.  The browser’s role is mainly (1)
to route messages between the website and the token, and (2) to add the
origin of the website to the message signed by the token (so that the
signature is bound to the site).

Several major websites now support U2F for authentication, including Google
[3], Dropbox [4], and Github [5].  Axel Nennker has filed a Bugzilla bug
for U2F support in Gecko [6].  The W3C has  begun the process of forming a
“WebAuthentication” working group that will work on a standard for enhanced
authentication using FIDO as a starting point [7].

Proposed: To implement the high-level U2F API described in the FIDO JS API
specification, with support for the USB HID token interface.


As I said in the other email,
I don't understand how this could be implemented when the spec has left the key 
piece undefined, as far as I see.
As the spec puts it "This specification does not describe how such a port is 
made available to RP web pages, as this is (for now) implementation and
browser dependent. "





Please send comments on this proposal to the list no later than Monday,
December 14, 2015.

-

Personally, I have some reservations about implementing this, but I still
think it’s worth doing, given the clear need for something to augment
passwords.

It’s unfortunate that the initial FIDO standards were developed in a closed
group, but there is good momentum building toward making FIDO more open.  I
have some specific concerns about the U2F API itself, but they’re
relatively minor.  For example, the whole system is highly vertically
integrated, so if we want to change any part of it (e.g., to use a curve
other than P-256 for signatures), we’ll need to build a whole new API.  But
these are issues that can be addressed in the W3C process.

We will continue to work on making standards for secure authentication more
open.  In the meantime, U2F is what’s here now, and there’s demonstrated
developer interest, so it makes sense for us to work on implementing it.

Thanks,
--Richard

[1] https://fidoalliance.org/
[2] https://fidoalliance.org/specifications/download/
[3] https://support.google.com/accounts/answer/6103523?hl=en
[4] https://blogs.dropbox.com/dropbox/2015/08/u2f-security-keys/
[5]
https://github.com/blog/2071-github-supports-universal-2nd-factor-authentication
[6] https://bugzilla.mozilla.org/show_bug.cgi?id=1065729
[7] http://w3c.github.io/websec/web-authentication-charter







___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: FIDO U2F API

2015-12-04 Thread smaug

Looks like the spec could be made implementable by fixing
https://fidoalliance.org/specs/fido-u2f-v1.0-nfc-bt-amendment-20150514/fido-u2f-javascript-api.html#high-level-javascript-api

"provide a namespace object u2f of the following interface" doesn't mean 
anything, so either there is supposed to be an instance of u2f interface
somewhere (in Window object?), but feels odd to expose interface called u2f and 
having u2f as a property of Window.
Perhaps the idea there is that the interface has only static methods?
Then register() and sign() should be marked as static and there wouldn't be an 
instance of u2f, but one would just call those static methods
on the u2f interface.



(Nit, the convention is that interfaces start with a capital letter. For some 
odd reason 'u2f' doesn't follow that.)
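Read that way, a page would invoke the methods directly on the window.u2f namespace object. A minimal sketch of what call sites look like under that interpretation (the stub below stands in for the browser-provided object; signatures are paraphrased from the high-level FIDO JS API and may differ between spec revisions):

```javascript
// Stub standing in for the browser-provided namespace object; in a real
// browser, window.u2f would be supplied by the implementation.
const u2f = {
  register(registerRequests, signRequests, callback, opt_timeoutSeconds) {
    // A real implementation would route the request to the token over USB HID
    // and add the caller's origin to the signed client data.
    callback({ errorCode: 0, registrationData: "...", clientData: "..." });
  },
  sign(signRequests, callback, opt_timeoutSeconds) {
    callback({ errorCode: 0, keyHandle: signRequests[0].keyHandle,
               signatureData: "...", clientData: "..." });
  },
};

// Call sites use the namespace object directly -- there is no constructor --
// which is why marking register()/sign() as static in the IDL would match
// actual usage.
let result;
u2f.sign([{ version: "U2F_V2", challenge: "c", keyHandle: "key-1",
            appId: "https://example.com" }],
         (response) => { result = response; });
```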



-Olli



On 12/02/2015 11:20 PM, smaug wrote:

On 12/02/2015 03:23 AM, Richard Barnes wrote:

The FIDO Alliance has been developing standards for hardware-based
authentication of users by websites [1].  Their work is getting significant
traction, so the Mozilla Foundation has decided to join the FIDO Alliance.
Work has begun in the W3C to create open standards using FIDO as a starting
point. We are proposing to implement the FIDO U2F API in Firefox in its
current form and then track the evolving W3C standard.

Background: The FIDO Alliance has been developing a standard for
hardware-based user authentication known as “Universal Two-Factor” or U2F
[2].  This standard allows a website to verify that a user is in possession
of a specific device by having the device sign a challenge with a private
key that is held on the hardware device.  The browser’s role is mainly (1)
to route messages between the website and the token, and (2) to add the
origin of the website to the message signed by the token (so that the
signature is bound to the site).

Several major websites now support U2F for authentication, including Google
[3], Dropbox [4], and GitHub [5].  Axel Nennker has filed a Bugzilla bug
for U2F support in Gecko [6].  The W3C has begun the process of forming a
“WebAuthentication” working group that will work on a standard for enhanced
authentication using FIDO as a starting point [7].

Proposed: To implement the high-level U2F API described in the FIDO JS API
specification, with support for the USB HID token interface.


As I said in the other email,
I don't understand how this could be implemented when the spec has left the key 
piece undefined, as far as I can see.
As the spec puts it "This specification does not describe how such a port is 
made available to RP web pages, as this is (for now) implementation and
browser dependent. "





Please send comments on this proposal to the list no later than Monday,
December 14, 2015.

-

Personally, I have some reservations about implementing this, but I still
think it’s worth doing, given the clear need for something to augment
passwords.

It’s unfortunate that the initial FIDO standards were developed in a closed
group, but there is good momentum building toward making FIDO more open.  I
have some specific concerns about the U2F API itself, but they’re
relatively minor.  For example, the whole system is highly vertically
integrated, so if we want to change any part of it (e.g., to use a curve
other than P-256 for signatures), we’ll need to build a whole new API.  But
these are issues that can be addressed in the W3C process.

We will continue to work on making standards for secure authentication more
open.  In the meantime, U2F is what’s here now, and there’s demonstrated
developer interest, so it makes sense for us to work on implementing it.

Thanks,
--Richard

[1] https://fidoalliance.org/
[2] https://fidoalliance.org/specifications/download/
[3] https://support.google.com/accounts/answer/6103523?hl=en
[4] https://blogs.dropbox.com/dropbox/2015/08/u2f-security-keys/
[5]
https://github.com/blog/2071-github-supports-universal-2nd-factor-authentication
[6] https://bugzilla.mozilla.org/show_bug.cgi?id=1065729
[7] http://w3c.github.io/websec/web-authentication-charter





___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: SPS Profiles are now captured for entire subprocess lifetime

2015-12-04 Thread Benoit Girard
Thanks Mike for your hard work pushing this through!

In theory it does let us profile e10s on Talos, but I'm sure we will find
more usability issues. It's unclear whether they will be blockers or not. If
there are outstanding issues, I don't think we know about them yet. Please file
them and CC me on any issues you run into. I want us to continue making
Talos profiling easier.

Once we've reached a good baseline we can have another look at how to
display 'comparative profiles' and make it easier to highlight
differences. For instance, it took us way too long to identify that the image
cache was disabled with e10s, and I'd like to get us to the point where this
type of regression would be trivial to spot from a before/after profile.
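The kind of before/after comparison described above can be approximated by diffing per-function sample counts between two profiles. A toy sketch (the profile shape here is invented for illustration, not the actual SPS profile format):

```javascript
// Each "profile" here is just a map of frame label -> sample count; real SPS
// profiles are much richer, but the comparison idea is the same.
function diffProfiles(before, after) {
  const labels = new Set([...Object.keys(before), ...Object.keys(after)]);
  const deltas = [];
  for (const label of labels) {
    const delta = (after[label] || 0) - (before[label] || 0);
    if (delta !== 0) deltas.push({ label, delta });
  }
  // Sort biggest regressions first so something like "image decoding suddenly
  // dominates" floats to the top.
  deltas.sort((x, y) => y.delta - x.delta);
  return deltas;
}

const before = { imgDecode: 10, paint: 50 };
const after  = { imgDecode: 400, paint: 55 };
const top = diffProfiles(before, after)[0];
// The regression in imgDecode is immediately visible at the top of the diff.
```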

On Thu, Dec 3, 2015 at 5:27 PM, Kartikaya Gupta  wrote:

> \o/
>
> Does this get us all the way to "profile talos runs with e10s
> enabled", or are there still pieces missing for that? IIRC this set of
> patches was a prerequisite for being able to do that.
>
> On Thu, Dec 3, 2015 at 4:52 PM, Mike Conley  wrote:
> > Just a heads up that there have been recent developments with regards to
> > gathering SPS profiles from multiple processes.
> >
> > Bug 1103094[1] recently landed in mozilla-central, which makes it so that
> > if a subprocess starts up _after_ profiling has already been started in
> the
> > parent, then the subprocess will start profiling as well using the same
> > features and settings as the parent.
> >
> > Bug 1193838[2], which is still on inbound, will make it so that if we are
> > profiling while a process exits, we will hold onto that profile until the
> > profiles are all requested by the parent process for analysis. Right now
> we
> > hold these "exit profiles" in a circular buffer that's hardcoded at an
> > arbitrary limit of 5 profiles.
> >
> > The upshot is that in many cases, if you start profiling, you'll not lose
> > any profiles for subprocesses that start or finish before you choose to
> > analyze the profiles. \o/
> >
> > Just wanted to point those out. Thanks to BenWa for the reviews! Happy
> > profiling,
> >
> > -Mike
> >
> > [1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1103094
> > [2]: https://bugzilla.mozilla.org/show_bug.cgi?id=1193838
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


Re: SPS Profiles are now captured for entire subprocess lifetime

2015-12-04 Thread Mike Conley
Part of the way. The last bit is to de-duplicate all of the Profiler.js
scripts in Talos, and get them to use the asynchronous mechanisms for
profile gathering and writing (since they're currently using
dumpProfileToFile, which prevents us from getting out-of-process profiles).

That'll be in bug 1182595, I think.

Almost there. :)



On 3 December 2015 at 17:27, Kartikaya Gupta  wrote:

> \o/
>
> Does this get us all the way to "profile talos runs with e10s
> enabled", or are there still pieces missing for that? IIRC this set of
> patches was a prerequisite for being able to do that.
>
> On Thu, Dec 3, 2015 at 4:52 PM, Mike Conley  wrote:
> > Just a heads up that there have been recent developments with regards to
> > gathering SPS profiles from multiple processes.
> >
> > Bug 1103094[1] recently landed in mozilla-central, which makes it so that
> > if a subprocess starts up _after_ profiling has already been started in
> the
> > parent, then the subprocess will start profiling as well using the same
> > features and settings as the parent.
> >
> > Bug 1193838[2], which is still on inbound, will make it so that if we are
> > profiling while a process exits, we will hold onto that profile until the
> > profiles are all requested by the parent process for analysis. Right now
> we
> > hold these "exit profiles" in a circular buffer that's hardcoded at an
> > arbitrary limit of 5 profiles.
> >
> > The upshot is that in many cases, if you start profiling, you'll not lose
> > any profiles for subprocesses that start or finish before you choose to
> > analyze the profiles. \o/
> >
> > Just wanted to point those out. Thanks to BenWa for the reviews! Happy
> > profiling,
> >
> > -Mike
> >
> > [1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1103094
> > [2]: https://bugzilla.mozilla.org/show_bug.cgi?id=1193838
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
>
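The "exit profiles" mechanism quoted above can be pictured as a small ring buffer of finished child profiles (a toy sketch; the real storage in bug 1193838 is in C++ and considerably richer than this):

```javascript
// Toy model of holding the last N "exit profiles": when a child process exits
// while profiling, its profile is stashed; once capacity is reached, the
// oldest profile is dropped (the hardcoded limit mentioned above is 5).
class ExitProfileBuffer {
  constructor(capacity = 5) {
    this.capacity = capacity;
    this.profiles = [];
  }
  push(profile) {
    if (this.profiles.length === this.capacity) this.profiles.shift();
    this.profiles.push(profile);
  }
  // Called when the parent gathers all profiles for analysis.
  drain() {
    const all = this.profiles;
    this.profiles = [];
    return all;
  }
}

const buf = new ExitProfileBuffer(5);
for (let i = 1; i <= 7; i++) buf.push(`child-${i}`);
const gathered = buf.drain();
// Only the 5 most recent exit profiles survive: child-3 .. child-7.
```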


Re: WebUSB

2015-12-04 Thread Eric Rescorla

On Fri, Dec 4, 2015 at 2:25 PM, Robert O'Callahan 
wrote:

> On Fri, Dec 4, 2015 at 1:56 PM, Eric Rescorla  wrote:
>
>> On Wed, Dec 2, 2015 at 2:13 PM, Robert O'Callahan 
>> wrote:
>>
>>> There are three possible approaches I can see to expose USB devices to
>>> third-party applications:
>>> 1) What I suggested: Whitelist vendor origins for access to their
>>> devices and have vendor-hosted pages ("Web drivers"?) expose "safe" API to
>>> third-party applications.
>>> 2) Design a permissions API that one way or another lets users authorize
>>> access to USB devices by third-party applications.
>>> 3) Wrap USB devices in Web-exposed believed-to-be-safe standardized APIs
>>> built into browsers.
>>>
>>
>> I can think of at least one more:
>> (4) Have the APIs hidden behind access controls that need to be enabled
>> by an extension
>> (but a trivial one). Perhaps you think this is #2.
>>
>
> Yeah it seems like a version of #2.
>
> I think we should definitely support #1. Trusting device vendor code with
>>> access to their devices is no worse than loading their device driver, in
>>> most respects. Once we support such whitelisting device vendors can expose
>>> their own APIs to third party applications even with no further effort from
>>> us.
>>>
>>
>>
>> Color me unconvinced. One of the major difficulties with consumer
>> electronics devices
>> that are nominally connectable to your computer is that the vendors do a
>> bad job
>> of making it possible for third party vendors to talk to them. Sometimes
>> this is done
>> intentionally in the name of lock-in and sometimes it's done
>> unintentionally through
>> laziness, but in either case it's bad. However, at least in those cases,
>> the third party
>> vendor can at least in principle produce some compatible downloadable
>> driver
>> for the device, and it's not much harder to install that than to install
>> the OEM driver.
>>
>> I don't think it's good for the Web for the browser to be in the business of
>> enforcing vendor
>> lock-in by radically increasing the gap between the access the vendor has
>> to the
>> device and the access third parties do.
>>
>
> I see your point, I just don't think it's as important as you do.
>

Sure. Conversely, I don't find myself convinced by your position.

Would be happy to talk about this live if you think that's useful.

-Ekr


Re: WebUSB

2015-12-04 Thread Robert O'Callahan
On Fri, Dec 4, 2015 at 1:56 PM, Eric Rescorla  wrote:

> On Wed, Dec 2, 2015 at 2:13 PM, Robert O'Callahan 
> wrote:
>
>> There are three possible approaches I can see to expose USB devices to
>> third-party applications:
>> 1) What I suggested: Whitelist vendor origins for access to their devices
>> and have vendor-hosted pages ("Web drivers"?) expose "safe" API to
>> third-party applications.
>> 2) Design a permissions API that one way or another lets users authorize
>> access to USB devices by third-party applications.
>> 3) Wrap USB devices in Web-exposed believed-to-be-safe standardized APIs
>> built into browsers.
>>
>
> I can think of at least one more:
> (4) Have the APIs hidden behind access controls that need to be enabled by
> an extension
> (but a trivial one). Perhaps you think this is #2.
>

Yeah it seems like a version of #2.

I think we should definitely support #1. Trusting device vendor code with
>> access to their devices is no worse than loading their device driver, in
>> most respects. Once we support such whitelisting device vendors can expose
>> their own APIs to third party applications even with no further effort from
>> us.
>>
>
>
> Color me unconvinced. One of the major difficulties with consumer
> electronics devices
> that are nominally connectable to your computer is that the vendors do a
> bad job
> of making it possible for third party vendors to talk to them. Sometimes
> this is done
> intentionally in the name of lock-in and sometimes it's done
> unintentionally through
> laziness, but in either case it's bad. However, at least in those cases,
> the third party
> vendor can at least in principle produce some compatible downloadable
> driver
> for the device, and it's not much harder to install that than to install
> the OEM driver.
>
> I don't think it's good for the Web for the browser to be in the business of
> enforcing vendor
> lock-in by radically increasing the gap between the access the vendor has
> to the
> device and the access third parties do.
>

I see your point, I just don't think it's as important as you do.

Rob
-- 
lbir ye,ea yer.tnietoehr  rdn rdsme,anea lurpr  edna e hnysnenh hhe uresyf
toD
selthor  stor  edna  siewaoeodm  or v sstvr  esBa  kbvted,t
rdsme,aoreseoouoto
o l euetiuruewFa  kbn e hnystoivateweh uresyf tulsa rehr  rdm  or rnea
lurpr
.a war hsrer holsa rodvted,t  nenh hneireseoouot.tniesiewaoeivatewt sstvr
esn


Re: WebUSB

2015-12-04 Thread Eric Rescorla
On Wed, Dec 2, 2015 at 2:13 PM, Robert O'Callahan 
wrote:

> On Wed, Dec 2, 2015 at 10:00 AM, Eric Rescorla  wrote:
>
>> On Wed, Dec 2, 2015 at 9:53 AM, Robert O'Callahan 
>> wrote:
>>
>>> I'd really like to see WebUSB where USB device IDs are bound to specific
>>> origins (through a registry for legacy devices and through the USB protocol
>>> extensions defined in that spec) so that vendors can host apps that access
>>> their devices --- and so that vendor pages in an iframe can define and
>>> vend "safe" APIs to any third-party application.
>>>
>>
>> This seems to be roughly the API contemplated by the WebUSB spec.
>> To be honest, I'm not very excited about that design. Having a system
>> where the only people who can talk to USB device X are the manufacturers
>> and the browser is just a conduit for that interaction doesn't really
>> seem that
>> great for the Open Web.
>>
>
> There are three possible approaches I can see to expose USB devices to
> third-party applications:
> 1) What I suggested: Whitelist vendor origins for access to their devices
> and have vendor-hosted pages ("Web drivers"?) expose "safe" API to
> third-party applications.
> 2) Design a permissions API that one way or another lets users authorize
> access to USB devices by third-party applications.
> 3) Wrap USB devices in Web-exposed believed-to-be-safe standardized APIs
> built into browsers.
>

I can think of at least one more:
(4) Have the APIs hidden behind access controls that need to be enabled by
an extension
(but a trivial one). Perhaps you think this is #2.


I see pros and cons to all three. I don't think #3 is sustainable over the
> long term for more than a minority of device types, though I'm sure we'll
> do it for some specific cases. It inevitably leads to bloated browsers,
> interop issues, and the Web platform lagging behind native.
>

I agree with this. It's also not clear that such a safe system can be built.


I think we should definitely support #1. Trusting device vendor code with
> access to their devices is no worse than loading their device driver, in
> most respects. Once we support such whitelisting device vendors can expose
> their own APIs to third party applications even with no further effort from
> us.
>

Color me unconvinced. One of the major difficulties with consumer
electronics devices
that are nominally connectable to your computer is that the vendors do a
bad job
of making it possible for third party vendors to talk to them. Sometimes
this is done
intentionally in the name of lock-in and sometimes it's done
unintentionally through
laziness, but in either case it's bad. However, at least in those cases,
the third party
vendor can at least in principle produce some compatible downloadable driver
for the device, and it's not much harder to install that than to install the
OEM driver.

I don't think it's good for the Web for the browser to be in the business of
enforcing vendor
lock-in by radically increasing the gap between the access the vendor has
to the
device and the access third parties do.
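To make approach #1 above concrete: the "Web driver" idea is that a vendor-origin page, embedded by a third-party app, would vend a restricted API (for example over postMessage) while keeping raw USB access to itself. A hypothetical sketch of the vendor-side dispatch logic, with the USB backend faked out and all names invented for illustration:

```javascript
// Hypothetical vendor-side handler: the vendor page has raw USB access (via a
// whitelisted origin) and exposes only vetted, high-level commands to
// embedding third-party pages. Nothing here is a real API.
function makeDriverHandler(usb) {
  const SAFE_COMMANDS = {
    // Each safe command maps to a fixed, vendor-audited USB transfer.
    readTemperature: () => usb.controlTransfer({ request: 0x01 }),
    blinkLed: () => usb.controlTransfer({ request: 0x02 }),
  };
  return function onMessage(msg) {
    const handler = SAFE_COMMANDS[msg.command];
    // Raw transfers are never reachable from the embedding page.
    if (!handler) return { error: "unsupported command" };
    return { result: handler() };
  };
}

// Fake USB backend standing in for real device access.
const fakeUsb = { controlTransfer: (req) => `ok:${req.request}` };
const onMessage = makeDriverHandler(fakeUsb);
const good = onMessage({ command: "readTemperature" });
const bad  = onMessage({ command: "rawTransfer" });
```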


I see that #2 could have some value for the open Web by allowing authors to
> bypass whatever API restrictions device vendors impose (including not
> exposing an API at all). But in practice, just like it's normal for people
> to use vendor-supplied device drivers, I think we should expect and
> encourage vendor-provided Web APIs, so I'd prioritize #1 over #2. Plus the
> design choices for #2 are all suboptimal.
>

I don't agree with this assessment of the tradeoffs for the reasons listed
above.

-Ekr


Re: WebUSB

2015-12-04 Thread Robert O'Callahan
On Fri, Dec 4, 2015 at 2:43 PM, Eric Rescorla  wrote:

>
> Sure. Conversely, I don't find myself convinced by your position.
>
> Would be happy to talk about this live if you think that's useful.
>

Probably not ... these are judgement calls that are difficult to resolve.

Rob
-- 
lbir ye,ea yer.tnietoehr  rdn rdsme,anea lurpr  edna e hnysnenh hhe uresyf
toD
selthor  stor  edna  siewaoeodm  or v sstvr  esBa  kbvted,t
rdsme,aoreseoouoto
o l euetiuruewFa  kbn e hnystoivateweh uresyf tulsa rehr  rdm  or rnea
lurpr
.a war hsrer holsa rodvted,t  nenh hneireseoouot.tniesiewaoeivatewt sstvr
esn


Re: FYI: e10s will be enabled in beta 44/45

2015-12-04 Thread jmathies
On Friday, December 4, 2015 at 9:44:36 AM UTC-6, Ehsan Akhgari wrote:
> On 2015-12-04 9:02 AM, jmath...@mozilla.com wrote:
> > Hey all,
> >
> > FYI e10s will be enabled in beta/44 in a limited way for a short period of 
> > time through an experiment. [1] The purpose of this experiment is to 
> > collect e10s related performance measurements specific to beta.
> >
> > The current plan is to then enable e10s in beta/45 for a respectable chunk 
> > of our beta population. This population will *exclude* users who have 
> > recently loaded accessibility and users who have a large number of addons 
> > installed.
> >
> > If you know of serious e10s related bugs in your components that you feel 
> > should be fixed for the beta/45 rollout please get those bugs tracked for 
> > 45 and owned. If you have any issues or questions you can tag the bug with 
> > tracking-e10s:? and the e10s team will come through to help triage.
> 
> Does this mean that not all tracking-e10s+ bugs will need to be fixed 
> before we ship e10s?  What's the indicator that a bug "blocks" shipping 
> e10s?

That's a flag for bugs we want to keep track of, but it does not imply that a 
bug blocks rollout. We currently have two blocking lists: our ongoing 
milestone-8 [1] and P1 performance bugs [2].

[1] http://is.gd/gL9Qfj
[2] http://is.gd/N4Kska

Jim


Re: FYI: e10s will be enabled in beta 44/45

2015-12-04 Thread jmathies
On Friday, December 4, 2015 at 11:08:08 AM UTC-6, Armen Zambrano G. wrote:
> LastPass brings the browser to a crawl, making it almost impossible to
> use. If we have users using LastPass on the beta population using e10s
> we're going to have a lot of people upset.

Not an issue, since the initial rollout to beta and release will be to users who 
do not have add-ons installed.

Jim