Re: Support for non-UTF-8 platform charset

2013-11-25 Thread Simon Montagu
On 11/25/2013 01:46 PM, Henri Sivonen wrote:

> Questions:
>
>  * On Windows, do we really need to pay homage to the pre-NT legacy
> when doing Save As? How about we just use UTF-8 for "HTML Page,
> complete" reserialization like on Mac?

Do you mean Save As Text? Do we really use the platform charset when
saving as HTML Page, complete? For Save As Text I would have thought
that the time was ripe by now to go over to using UTF-8.

>  * Do we (or gtk) really still support non-UTF-8 platform charset
> values on *nix? (MXR turns up so little that it makes me wonder
> whether non-UTF-8 support might have already gone away for practical
> purposes.)

You wonder correctly. There are probably vestigial remnants of support
for other platform charsets still around, but I can testify that any
code that I contributed in the last 5 years or more only supports UTF-8
on *nix.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there any reason not to shut down bonsai?

2013-11-25 Thread Phil Ringnalda
On 11/21/13, 11:43 AM, Laura Thomson wrote:
> If you don't know what that is--and few people do, which is even more
> reason to shut it off--it's a search engine for some of our CVS
> repositories, of which I think none are in active development.

Thanks for the reminder that it still exists - I just used it to
successfully figure out what to do in the heat of tree-bustage-battle
(and no, I wouldn't have liked to install git, clone an enormous git
repo, and look up some complicated git command to do approximately the
same thing instead).


Re: A/B testing with telemetry

2013-11-25 Thread Taras Glek



Henri Sivonen wrote:

Do we have a formalized way to do A/B testing with telemetry? That is,
assuming that there is a telemetry probe that measures problem
symptoms and a boolean pref for  turning on a potential solution, is
there a way the declare the pref as something that telemetry queries
can be constrained by so that it would be possible to compare the
symptom probe with and without the potential solution?


We don't have anything formal like that. We leave it up to authors (e.g. 
their input telemetry + analysis scripts) to decide on optimal A/B 
patterns. Once we have enough such patterns, we might formalize a few 
common ones.




If not, is there a better way to do this than duplicating probes and
checking the pref to see which probe should be fed?

I'm a bit late here. This is one 'correct' way of doing A/B testing.

Alternatively you can send the value of that pref in the .info section 
of telemetry. See one of many fields in 
http://mxr.mozilla.org/mozilla-central/source/toolkit/components/telemetry/TelemetryPing.js#332
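
A minimal sketch, in Python, of the analysis-side split this suggests, assuming the pref's value is submitted in the ping's info section (the field name "experimentPref" and probe name "SYMPTOM_FOO" are invented for illustration, not the real telemetry schema):

```python
# Split telemetry submissions on a pref value carried in the ping's
# "info" section, so the symptom probe can be compared between the
# with-fix and without-fix populations. All field names are invented.
def split_by_pref(pings, pref_field="experimentPref"):
    groups = {True: [], False: []}
    for ping in pings:
        enabled = bool(ping["info"].get(pref_field, False))
        groups[enabled].append(ping["histograms"]["SYMPTOM_FOO"])
    return groups

# Two made-up submissions: one with the potential fix enabled, one not.
pings = [
    {"info": {"experimentPref": True}, "histograms": {"SYMPTOM_FOO": 3}},
    {"info": {}, "histograms": {"SYMPTOM_FOO": 7}},
]
groups = split_by_pref(pings)
```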



Taras


Re: Reacting more strongly to low-memory situations in Firefox 25

2013-11-25 Thread Bas Schouten




- Original Message -
> From: "Benjamin Smedberg" 
> To: dev-platform@lists.mozilla.org, "Bas Schouten" , 
> "David Major" ,
> "Nathan Froyd" , "Firefox Dev" 
> Sent: Monday, November 25, 2013 5:02:50 PM
> Subject: Reacting more strongly to low-memory situations in Firefox 25
> 
> 
> Unfortunately, often when we are out of memory crash reports come back
> as empty minidumps (because the crash reporter has to allocate memory
> and/or VM space to create minidumps). We believe that most of the
> empty-minidump crashes present on crash-stats are in fact also
> out-of-memory crashes.
> 
> I've been creating reports about OOM crashes using crash-stats and found
> some startling data:
> Looking just at the Windows crashes from last Friday (22-Nov):
> * probably not OOM: 91565
> * probably OOM: 57841
> * unknown (not enough data because they are running an old version of
> Windows that doesn't report VM information in crash reports): 150874
> 
> The criteria for "probably OOM" are:
> * Has an OOMAnnotationSize marking, meaning jemalloc aborted an
> infallible allocation
> * Has "ABORT: OOM" in the app notes, meaning XPCOM aborted in infallible
> string/hashtable/array code
> * Has <50MB of contiguous free VM space
> 
> This data seems to indicate that almost 40% of our Firefox crashes are
> due to OOM conditions.
> 
> Because one of the long-term possibilities discussed for solving this
> issue is releasing a 64-bit version of Firefox, I additionally broke
> down the "OOM" crashes into users running a 32-bit version of Windows
> and users running a 64-bit version of Windows:
> 
> OOM,win64,15744
> OOM,win32,42097
> 
> I did this by checking the "TotalVirtualMemory" annotation in the crash
> report: if it reports 4G of TotalVirtualMemory, then the user has a
> 64-bit Windows, and if it reports either 2G or 3G, the user is running a
> 32-bit Windows. So I do not expect that doing Firefox for win64 will
> help users who are already experiencing memory issues, although it may
> well help new users and users who are running memory-intensive
> applications such as games.
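
A rough sketch of the classification quoted above (the real logic lives in jydoop's oom-classifier.py; the keys here are simplified stand-ins for the actual crash-report annotations):

```python
# Classify a crash report as "probably OOM" and infer Windows bitness
# from total virtual memory, per the heuristics described in the post.
# Dictionary keys are simplified stand-ins, not the real annotations.
MiB = 1024 * 1024
GiB = 1024 * MiB

def probably_oom(report):
    return (report.get("oom_allocation_size") is not None   # jemalloc aborted
            or "ABORT: OOM" in report.get("app_notes", "")  # XPCOM aborted
            or report.get("largest_free_vm_block", GiB) < 50 * MiB)

def windows_bitness(report):
    # A 32-bit Firefox sees 4 GB of address space on 64-bit Windows,
    # and 2-3 GB on 32-bit Windows.
    tvm = report.get("total_virtual_memory")
    if tvm == 4 * GiB:
        return "win64"
    if tvm in (2 * GiB, 3 * GiB):
        return "win32"
    return "unknown"
```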

I'm a little confused: when I force OOM in my Firefox build on 64-bit Windows, 
it -definitely- goes down before it reaches more than 3GB of working set. Am I 
missing something here?

> 
> Scripts for this analysis at
> https://github.com/mozilla/jydoop/blob/master/scripts/oom-classifier.py
> if you want to see what it's doing.
> 
> = Next Steps =
> 
> As far as I can tell, there are several basic problems that we should be
> tackling. For now, I'm going to brainstorm some ideas and hope that
> people will react to or take on some of these items.
> 

...

> 
> == Graphics Solutions ==
> 
> The issues reported in bug 930797 at least appear to be related to HTML5
>  rendering. The STR aren't precise, but it seems that we should
> try and understand and fix the issue reported by that user. Disabling
> hardware acceleration does not appear to help.
> 
> Bas has a bunch of information in bug 859955 about degenerate behavior
> of graphics drivers: they often map textures into the Firefox process,
> and sometimes cache the latest N textures (N=200 in one test) no matter
> what the texture size is. I have a feeling that we need to do something
> here, but it's not clear what. Perhaps it's driver-specific workarounds,
> or blacklisting old driver versions, or working with driver vendors to
> have better behavior.

I should highlight something here: caching the last N textures only occurs in 
drivers which do -not- map into your address space, as far as I have seen in my 
testing. Intel stock drivers seem to map into your address space, but do -not- 
seem to do any caching. The most likely cause of the OOM here is simply that 
currently we keep both the texture and a RAM copy of any image in our image 
cache that has been drawn. This means that for users using Direct2D with these 
drivers, an image will use twice as much address space as for users using 
software rendering. We should probably alter imagelib to discard the RAM copy 
when we have hardware acceleration; in that case actual address space usage 
should be the same for users with and without hardware acceleration.

For what it's worth, just to add some info to this: in my own experience on my 
machines, in most cases Firefox climbs to about 1.1-1.3 GB of memory usage 
fairly quickly (i.e. < 2 days of keeping it open) and sort of stabilizes around 
that number. Usually when I run about:memory in this case, it reports about 
500 MB+ in JS and a surprising amount (150 MB) in VRAM usage for DrawTargets 
(this would be in our address space on the affected Intel machines); we should 
investigate the latter. This is usually with about 20 tabs open.

Bas


Re: PSA: Shutting down my mozilla git mirror in three weeks

2013-11-25 Thread Gregory Szorc
It looks like GitHub decided to disable Ehsan's repo and all forks 
without notice. https://bugzilla.mozilla.org/show_bug.cgi?id=943132 
tracks it from our end.


On 11/25/13, 11:39 PM, Ehsan Akhgari wrote:

Dear all,

For the past two and a half years I have been maintaining a git mirror
of mozilla-central plus a lot of the other branches that people found
useful at https://github.com/mozilla/mozilla-central.  Over the years
this proved to be too much work for me to keep up with, and given the
existence of the new git mirrors that are supported by RelEng, I'm
planning to shut down the update jobs for this repository on Friday, Dec
13, 2013 and take the repository down.

I strongly suggest that if you have been using and relying on this
repository before, please consider switching to the RelEng repositories
as soon as possible. https://github.com/mozilla/gecko-dev is the main
repository where you can find branches such as trunk/aurora/b2g
branches/etc and https://github.com/mozilla/gecko-projects is a
repository which contains project branches and twigs.  (Note that my
repository hosts all of these branches in a single location, but from
now on if you use both of these branch subsets you will need to have two
upstream remotes added in your local git clone.)

The RelEng repositories have a similar history line including the CVS
history but they have completely different commit SHA1s, so pulling that
repo into your existing clones is probably not what you want.  If you
don't have a lot of non-merged branches, your safest bet is to clone from
scratch and then port your existing branches manually.  If you have a
lot of local branches, you may want to wait for the script that John
Schoenick is working on in bug 929338 which will assist you in rebasing
those branches on top of the new history line.

Last but not least, I really hate to have to disrupt your workflow like
this.  I do hope that three weeks is enough advance notice for everybody
to successfully switch over to the new mirrors, but if you have a reason
for me to delay this please let me know and I will do my best to
accommodate.

Cheers,
--
Ehsan



___
firefox-dev mailing list
firefox-...@mozilla.org
https://mail.mozilla.org/listinfo/firefox-dev





Re: Analyze C++/compiler usage and code stats easily

2013-11-25 Thread Gregory Szorc
The repo was hiding that old changeset because it has been obsoleted 
(Mercurial changeset evolution magic). I updated the server config to 
expose hidden changesets, so the link works again.


However, the changeset has been obsoleted, so you'll want to fetch the 
newest one.


The best way to fetch it is to pull the bookmark tracking it:

  $ hg pull -B gps/build/measure-compiler 
http://hg.gregoryszorc.com/gecko-collab


Or, look for that bookmark at 
http://hg.gregoryszorc.com/gecko-collab/bookmarks and track down the 
changeset.


(Also, while looking at the Mercurial source code to see if I could 
expose hidden changesets through the web UI, I noticed that the official 
resource/URL for linking to changesets is "changeset" not "rev." So, all 
our Bugzilla URLs using /rev/ are using an alias. I'm curious if 
this is intentional or an accident.)


On 11/25/13, 9:35 PM, Ehsan Akhgari wrote:

This patch doesn't seem to exist any more.  Do you have another copy of
it lying somewhere?

Thanks!

--
Ehsan



On Fri, Nov 15, 2013 at 12:43 AM, Gregory Szorc <g...@mozilla.com> wrote:

C++ developers,

Over 90% of the CPU time required to build the tree is spent
compiling or linking C/C++. So, anything we can do to make that
faster will make the overall build faster.

I put together a quick patch [1] to make it rather simple to extract
compiler resource usage and very basic code metrics during builds.

Simply apply that patch and build with `mach build
--profile-compiler` and your machine will produce all kinds of
potentially interesting measurements. They will be stuffed into
objdir/.mozbuild/compilerprof/. If you don't feel like waiting (it
will take about 5x longer than a regular build because it performs
separate preprocessing, ast, and codegen compiler invocations 3
times each), grab an archive of an OS X build I just performed from
[2] and extract it to objdir/.mozbuild/.

I put together an extremely simple `mach compiler-analyze` command
to sift through the results. e.g.

$ mach compiler-analyze preprocessor-relevant-lines
$ mach compiler-analyze codegen-sizes
$ mach compiler-analyze codegen-total-times

Just run `mach help compiler-analyze` to see the full list of what
can be printed. Or, write your own code to analyze the produced JSON
files.

I'm sure people who love getting down and dirty with C++ will be
interested in this data. I have no doubt that there are compiler-time and
code-size wins waiting to be discovered through this data. We may
even uncover a perf issue or two. Who knows.

Here are some questions I have after casually looking at the data:

* The mean ratio of .o size to lines from preprocessor is 16.35
bytes/line. Why do 38/4916 (0.8%) files have a ratio over 100? Why
are a lot of these in js/ and gfx?

* What's in the 150 files that have 100k+ lines after preprocessing
that makes them so large?

* Why does lots of js/'s source code gravitate towards the "bad"
extreme for most of the metrics (code size, compiler time,
preprocessor size)?
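
For what it's worth, the bytes-per-preprocessed-line outlier check could look something like this (the record fields are invented examples; the real JSON schema is whatever the patch emits):

```python
# Flag files whose object-size to preprocessed-line ratio exceeds a
# threshold, as in the 16.35 bytes/line mean vs. >100 outliers above.
# The record layout is a made-up example, not the patch's JSON schema.
def outliers(records, threshold=100):
    return [r["file"] for r in records
            if r["pp_lines"] and r["obj_size"] / r["pp_lines"] > threshold]

records = [
    {"file": "a.cpp", "obj_size": 16350, "pp_lines": 1000},   # ratio 16.35
    {"file": "b.cpp", "obj_size": 500000, "pp_lines": 2000},  # ratio 250
]
```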

Disclaimer: This patch is currently hacked together. If there is an
interest in getting this checked into the tree, I can clean it up
and do it. Just file a bug in Core :: Build Config and I'll make it
happen when I have time. Or, if an interested party wants to
champion getting it landed, I'll happily hand it off :)

[1] http://hg.gregoryszorc.com/gecko-collab/rev/741f3074e313

[2] https://people.mozilla.org/~gszorc/compiler_profiles.tar.bz2








Re: W3C Proposed Recommendations: Performance Timeline, User Timing, JSON-LD

2013-11-25 Thread Boris Zbarsky

On 11/25/13 7:07 PM, L. David Baron wrote:

   http://www.w3.org/TR/performance-timeline/
   Performance Timeline

   http://www.w3.org/TR/user-timing/
   User Timing


Have we had anyone at all review these specs?  My past experience with 
that working group and set of editors doesn't make me sanguine about 
them producing specs that can actually be implemented without 
reverse-engineering IE or Chrome.


If we _haven't_ had someone look at these before, we should do that now.
And we probably need someone whose job description includes
sanity-checking the stuff this working group produces.


-Boris


Re: Intent to replace Promise.jsm and promise.js with DOM Promises

2013-11-25 Thread Bobby Holley
On Mon, Nov 25, 2013 at 3:51 PM,  wrote:

> Based on bholley's comments on bug 939636, chrome code will need to do
>
> Cu.importGlobalProperties(["Promise"]);
>
> to use DOM Promises.
>

Well, more specifically, chrome code running in non-Window scopes (so
JS-implemented Components, JSMs, privileged sandboxes, etc).

bholley


Re: W3C Proposed Recommendations: XQuery, XPath, XSLT, EXI, API for Media Resources

2013-11-25 Thread L. David Baron
On Tuesday 2013-10-29 12:01 +0200, Henri Sivonen wrote:
> On Tue, Oct 29, 2013 at 1:39 AM, Ralph Giles  wrote:
> > On 2013-10-28 2:11 PM, L. David Baron wrote:
> >>   API for Media Resources 1.0
> >>   http://www.w3.org/TR/mediaont-api-1.0/
> ...
> > Thus I think we can be positive about this recommendation
> 
> I would reply "abstain" and "don't plan to implement" for this API and
> on the XML related specs in this batch.

I did so for the API spec; I managed to miss the deadline for the
XML-related specs, though.  Sorry about that.

-David

-- 
π„ž   L. David Baron http://dbaron.org/   𝄂
𝄒   Mozilla  https://www.mozilla.org/   𝄂
 Before I built a wall I'd ask to know
 What I was walling in or walling out,
 And to whom I was like to give offense.
   - Robert Frost, Mending Wall (1914)




Re: Proposed W3C Charters: Data Activity, {Data, CSV} on the Web

2013-11-25 Thread L. David Baron
On Tuesday 2013-10-29 12:22 +0200, Henri Sivonen wrote:
> On Mon, Oct 28, 2013 at 11:28 PM, L. David Baron  wrote:
> > W3C is proposing a data activity (an area of work)
> ...
> > Please reply to this thread if you think
> > there's something we should say.
> 
> Maybe it would make sense to abstain explicitly, since this Activity
> doesn't seem particularly relevant to Mozilla.

I managed to miss the deadline here (I had a note in my todo list,
but the note was too cryptic for me to remember what it meant).
Sorry about that; I'll try to get better about circling back to
these.

-David

-- 
π„ž   L. David Baron http://dbaron.org/   𝄂
𝄒   Mozilla  https://www.mozilla.org/   𝄂
 Before I built a wall I'd ask to know
 What I was walling in or walling out,
 And to whom I was like to give offense.
   - Robert Frost, Mending Wall (1914)




W3C Proposed Recommendations: Performance Timeline, User Timing, JSON-LD

2013-11-25 Thread L. David Baron
W3C recently published the following proposed recommendations (the
stage before W3C's final stage, Recommendation):

  http://www.w3.org/TR/performance-timeline/
  Performance Timeline

  http://www.w3.org/TR/user-timing/
  User Timing

  http://www.w3.org/TR/json-ld/
  JSON-LD 1.0: A JSON-based Serialization for Linked Data

  http://www.w3.org/TR/json-ld-api/
  JSON-LD 1.0 Processing Algorithms and API

There's a call for review to W3C member companies (of which Mozilla
is one) open until November 28 (for the first two) and December 5
(for the later two).

If there are comments you think Mozilla should send as part of the
review, or if you think Mozilla should voice support or opposition
to the specification, please say so in this thread.  (I'd note,
however, that there have been many previous opportunities to make
comments, so it's somewhat bad form to bring up fundamental issues
for the first time at this stage.)

-David

-- 
π„ž   L. David Baron http://dbaron.org/   𝄂
𝄒   Mozilla  https://www.mozilla.org/   𝄂
 Before I built a wall I'd ask to know
 What I was walling in or walling out,
 And to whom I was like to give offense.
   - Robert Frost, Mending Wall (1914)




Re: Intent to replace Promise.jsm and promise.js with DOM Promises

2013-11-25 Thread nsm . nikhil
On Tuesday, November 19, 2013 10:48:24 AM UTC-8, Brandon Benvie wrote:
> There are two mostly orthogonal concerns here. The first is the
> sync/async issue:
>
>     console.log(1);
>     promise.resolve().then(() => {
>       console.log(2);
>     });
>     console.log(3);
>
> Addon SDK Promises print 1, 2, 3. Every other Promise library prints
> 1, 3, 2. This introduces subtle timing dependencies that are very hard
> to track down, as they tend to span almost completely unrelated code.
> It's very difficult to track down these bugs, but the changes are
> usually quite minor in terms of amount of code changed.
>
> The second concern is switching from Deferreds, which are used in most
> existing Promise libraries, to using the Promise constructor:
>
> promises/a+:
>
>     let deferred = Promise.defer();
>     async(() => {
>       deferred.resolve();
>     });
>     return deferred.promise;
>
> DOM Promises:
>
>     return new Promise((resolve, reject) => {
>       async(() => {
>         resolve();
>       });
>     });
>
> This same conversion has to happen for code relying on either
> Promise.jsm or Addon SDK Promises. It's an easy, mostly mechanical
> change, but it involves touching a huge amount of code. This
> conversion also requires the sync -> async conversion for Addon SDK
> Promises.
>
> I think it makes sense to break these up, first doing the sync ->
> async conversion and then doing the Deferred to Promise constructor
> conversion. As soon as a given file has been converted to Promise.jsm
> it can then be converted to DOM Promises, so these efforts can be done
> in parallel, as there is a lot of existing Promise.jsm usage in the
> tree.


Succinctly expressed, and a good point! It would be great if devs could hang 
bugs off bug 939636 for known tests/implementations where code relying on 
'promise.js resolve() is synchronous' needs fixing to work with 'DOM Promise 
resolve() is always async'; without that fix, some code would break. Most code 
does not seem to be like that.

My status: I've gotten a few places in dom/ and browser/ updated, but my 
current focus is to get toolkit/ up to speed to use DOM Promise exclusively. 
I've made progress on toolkit/modules/, where Task.jsm and Sqlite.jsm were 
slightly tricky. Will have patches up soon.

Based on bholley's comments on bug 939636, chrome code will need to do

Cu.importGlobalProperties(["Promise"]);

to use DOM Promises.

Nikhil


Re: Intent to replace Promise.jsm and promise.js with DOM Promises

2013-11-25 Thread nsm . nikhil
On Monday, November 25, 2013 12:25:00 PM UTC-8, jsan...@gmail.com wrote:
> Is there a consensus on removing Promise.jsm completely? As Benvie said, the 
> majority of work will be migrating over from `sdk/core/promise.js` (sync) to 
> the async Promise.jsm, which share APIs. Converting Promise.jsm to consume 
> DOM Promises is pretty straight forward, and would still give us our current 
> defer/all utilities, which are used extensively. 
> 
> 

The spec provides all() and race(), and they should land in m-c soon [1]. 
Deferreds, while intuitive, offer no advantages over the new callback syntax. 
The callback syntax forces better code in most situations. In places where the 
'deferred resolution out of Promise callback' pattern is required since the 
resolve/reject may happen at a different call site, it is easy to get a 
reference to the resolve/reject functions of the promise as member variables.

FooUtil.prototype = {
  _rejectTransaction: null,
  beginTransaction: function() {
return new Promise((resolve, reject) => {
  this._rejectTransaction = reject;
  asyncStuff(resolve);
});
  },
  close: function() {
    if (this._rejectTransaction) {
      this._rejectTransaction("close() called!");
    }
  }
};

[1]: https://bugzil.la/939332

As for promise utilities from other libraries, any library that conforms to the 
Promise/A+ spec can accept a DOM Promise and operate on it with no issue.

I would like to get rid of Promise implementations that reimplement stuff in 
the spec, but I'm not against adding more utilities as PromiseUtils.jsm or 
similar if it is required.

Nikhil


Re: [b2g] PSA: Shutting down my mozilla git mirror in three weeks

2013-11-25 Thread Nicholas Nethercote
On Mon, Nov 25, 2013 at 1:46 PM, Aki Sasaki  wrote:
> Github has closed access to https://github.com/mozilla/mozilla-central :
>
> This repository has been disabled.
> Access to this repository has been disabled by GitHub staff.
> Contact support to restore access to this repository.
>
> We're currently trying to reach github support to reverse this decision;
> it would be much better to do a scheduled rollout than have to scramble
> like this.

Are there plans to move Gaia from Github onto Mozilla servers?  Having
portions of our primary products on third-party servers is a terrible
idea.

Nick


Re: Reacting more strongly to low-memory situations in Firefox 25

2013-11-25 Thread Mike Hommey
On Mon, Nov 25, 2013 at 12:02:50PM -0500, Benjamin Smedberg wrote:
> Note that in many cases, the user hasn't actually run out of memory:
> they have plenty of physical memory and page file available. In most
> cases they also have enough available VM space! Often, however, this
> VM space is fragmented to the point where normal allocations (64k
> jemalloc heap blocks, or several-megabyte graphics or network
> buffers)

jemalloc heap blocks are 1MB.

Mike


Re: Platform-specific nsICollation implementations

2013-11-25 Thread Ehsan Akhgari

On 11/25/2013, 6:42 AM, Axel Hecht wrote:

On 11/25/13 12:16 PM, Henri Sivonen wrote:

Now that we have ICU in the tree, do we still need platform-specific
nsICollation implementations? In particular, the Windows and Unix back
ends look legacy enough that it seems that making nsICollation
ICU-backed would be a win.



We don't build ICU yet on many of our platforms,
https://bugzilla.mozilla.org/show_bug.cgi?id=912371 being the biggest
blocker on the way there, I guess (cross-compiling broken)


The other blocker is bug 915735 which will hopefully allow us to use ICU 
from libxul.


Cheers,
Ehsan



Re: Reacting more strongly to low-memory situations in Firefox 25

2013-11-25 Thread David Major
It seems that the 12MB reservation was aborting due to an invalid parameter. 
I've filed bug 943051.

- Original Message -
From: "Benjamin Smedberg" 
To: "Ehsan Akhgari" , dev-platform@lists.mozilla.org
Sent: Monday, November 25, 2013 9:18:02 AM
Subject: Re: Reacting more strongly to low-memory situations in Firefox 25

On 11/25/2013 12:11 PM, Ehsan Akhgari wrote:
>
> Do we know how much memory we tend to use during the minidump 
> collection phase?
No, we don't. It seems that the Windows code maps all of the DLLs into 
memory again in order to extract information from them.
>   Does it make sense to try to reserve an address space range large 
> enough for those allocations, and free it up right before trying to 
> collect a crash report to make sure that the crash reporter would not 
> run out of (V)memory in most cases?
We already do this with a 12MB reservation, which had no apparent effect 
(bug 837835).

--BDS



Re: Support for non-UTF-8 platform charset

2013-11-25 Thread Karl Tomlinson
Henri Sivonen writes:

> On *nix platforms, it's not clear to me what exactly the platform
> charset is used for these days. An MXR search turns up surprisingly
> little.

>  * Do we (or gtk) really still support non-UTF-8 platform charset
> values on *nix? (MXR turns up so little that it makes me wonder
> whether non-UTF-8 support might have already gone away for practical
> purposes.)
>
>  * If we do, we want to / need to support *nix variants with an
> encoding other than UTF-8 as the system encoding?

I think this might be used for interpreting and generating native
file names, but it is entirely possible that much code using this
assumes UTF-8 and so doesn't work well with non-UTF-8 charsets.

I'm guessing it would be no great loss to treat filenames as
UTF-8.

The one thing we really want to know is: if native filenames
are converted to Unicode and back to native, do we get the
same as the original filename?

Does Unicode provide a way to represent broken UTF-8 sequences?
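
One existing answer, for illustration only (Python's PEP 383, not anything Gecko currently does): the "surrogateescape" error handler maps each undecodable byte to a lone surrogate in U+DC80-U+DCFF, so arbitrary native filename bytes survive a bytes -> str -> bytes round trip even when they are not valid UTF-8:

```python
# Round-trip a non-UTF-8 filename through Unicode without loss using
# the surrogateescape error handler (PEP 383).
raw = b"caf\xe9.txt"  # a Latin-1 filename; the \xe9 byte is invalid UTF-8
name = raw.decode("utf-8", "surrogateescape")    # \xe9 becomes U+DCE9
restored = name.encode("utf-8", "surrogateescape")
assert restored == raw  # lossless round trip
```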


Re: Proposed changes to RelEng's OSX build and test infrastructure

2013-11-25 Thread Armen Zambrano G.
On 11/22/2013, 3:44 PM, Johnathan Nightingale wrote:
> On Nov 22, 2013, at 12:29 PM, Ted Mielczarek wrote:
> 
>> On 11/21/2013 4:56 PM, John O'Duinn wrote:
>>> 6) If a developer lands a patch that works on 10.9, but it fails somehow
>>> on 10.7 or 10.8, it is unlikely that we would back out the fix, and we
>>> would instead tell users to upgrade to 10.9 anyways, for the security
>>> fixes.
> 
>> This seems to go against our historical policy. While it's true that we
>> might not back a patch out for 10.7/10.8 failures (since we won't have
>> automated test coverage), if they're still supported platforms then we
>> would still look to fix the bug. That might require backing a patch out
>> or landing a new fix. I don't think we need to over-rotate on this, this
>> is no different than any of the myriad of regressions or bugs we have
>> reported by users with software configurations different than what we're
>> able to run tests on.
>>
>> I would instead simply say "10.7 and 10.8 will remain supported OSes,
>> and bugs affecting only those platforms will be considered and
>> prioritized as necessary". It sounds a little weasely when I write it
>> that way, but I don't think we should WONTFIX bugs just because they're
>> on a supported platform without test coverage, we'd simply treat them as
>> we would any other bug a user reports: something we ought to fix,
>> prioritized as is seen fit by developers.
> 
> 
> I agree - we have not decided to mark 10.7 or 10.8 as tier 2 or otherwise 
> less supported. I don't mind assuming that 10.6/10.9 tests oughta catch most 
> of the problems, but if they miss one and we break 10.7/10.8, I'd expect us 
> to find a solution for that, or back out if the bustage is significant and 
> not easily fixable.
> 
> J
> 
> ---
> Johnathan Nightingale
> VP Firefox
> @johnath
> 

Thanks, Johnathan. Yes, that approach makes sense to me.


Re: Proposed changes to RelEng's OSX build and test infrastructure

2013-11-25 Thread Armen Zambrano G.
On 11/22/2013, 12:29 PM, Ted Mielczarek wrote:
> 
> I think this plan is generally sound. Users are moving en masse to 10.9
> with the free update, so we should focus our resources there, and keep
> 10.6 around to support those users that can't update for hardware
> reasons. I just have one point of contention with what you've written.
> 
> On 11/21/2013 4:56 PM, John O'Duinn wrote:
>> 6) If a developer lands a patch that works on 10.9, but it fails somehow
>> on 10.7 or 10.8, it is unlikely that we would back out the fix, and we
>> would instead tell users to upgrade to 10.9 anyways, for the security
>> fixes.
> 
> This seems to go against our historical policy. While it's true that we
> might not back a patch out for 10.7/10.8 failures (since we won't have
> automated test coverage), if they're still supported platforms then we
> would still look to fix the bug. That might require backing a patch out
> or landing a new fix. I don't think we need to over-rotate on this, this
> is no different than any of the myriad of regressions or bugs we have
> reported by users with software configurations different than what we're
> able to run tests on.
> 
> I would instead simply say "10.7 and 10.8 will remain supported OSes,
> and bugs affecting only those platforms will be considered and
> prioritized as necessary". It sounds a little weasely when I write it
> that way, but I don't think we should WONTFIX bugs just because they're
> on a supported platform without test coverage, we'd simply treat them as
> we would any other bug a user reports: something we ought to fix,
> prioritized as is seen fit by developers.
> 
> -Ted
> 
> 

Yes, this is better worded.
Thanks, Ted.


Re: Intent to replace Promise.jsm and promise.js with DOM Promises

2013-11-25 Thread jsantell
Is there a consensus on removing Promise.jsm completely? As Benvie said, the 
majority of work will be migrating over from `sdk/core/promise.js` (sync) to 
the async Promise.jsm, which share APIs. Converting Promise.jsm to consume DOM 
Promises is pretty straight forward, and would still give us our current 
defer/all utilities, which are used extensively. 

The argument seems to be that the DOM Promise API produces easier-to-read 
code, which sometimes is the case (IMO), but at other times we really need to 
use deferreds and utilities like `all`, and perhaps other utilities from 
popular promise libraries that we may find useful in the future.

Not having a shared promise utility library seems like a step backwards.
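For what it's worth, the defer/all utilities are thin layers over any bare 
promise/future primitive. A minimal Python sketch of the pattern (illustrative 
only — the names `Deferred` and `wait_all` are made up here, and this is not 
the Promise.jsm or SDK API):

```python
from concurrent.futures import Future

class Deferred:
    """Expose resolve/reject handles next to the underlying future,
    a rough analogue of the defer() utility discussed above."""
    def __init__(self):
        self.future = Future()

    def resolve(self, value):
        self.future.set_result(value)

    def reject(self, error):
        self.future.set_exception(error)

def wait_all(futures):
    """Crude, blocking analogue of an all() combinator."""
    return [f.result() for f in futures]

d1, d2 = Deferred(), Deferred()
d1.resolve(1)
d2.resolve(2)
print(wait_all([d1.future, d2.future]))  # [1, 2]
```

The point is only that these helpers are small enough to layer on top of 
whatever base promise type wins, rather than needing a parallel implementation.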


Re: Reacting more strongly to low-memory situations in Firefox 25

2013-11-25 Thread Boris Zbarsky

On 11/25/13 1:31 PM, Ryan VanderMeulen wrote:

What does the OOM rate of 32bit builds on Win64 have to do with whether
or not making 64bit Firefox builds would help?


I think Benjamin's point was that only about 1/4 of the OOM crashes were 
on Win64 [1].  The rest were on Win32, and hence would not be helped by 
a 64-bit Firefox for Win64, since they're not on Win64 to start with.


-Boris

The numbers:

OOM,win64,15744
OOM,win32,42097

Both sets of users are running 32-bit Firefox builds.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Reacting more strongly to low-memory situations in Firefox 25

2013-11-25 Thread Ryan VanderMeulen

On 11/25/2013 1:26 PM, Benjamin Smedberg wrote:

This is an analysis of the crash reports from a particular day, so it's
almost all going to be 32-bit Firefox because that's the only thing we
release. 64-bit Firefox Nightly crashes due to OOM would be lumped into
the OOM-win64 bucket in the analysis, but are probably not relevant.


> So I do not expect that doing Firefox for win64 will
> help users who are already experiencing memory issues

What does the OOM rate of 32bit builds on Win64 have to do with whether 
or not making 64bit Firefox builds would help?




Re: Reacting more strongly to low-memory situations in Firefox 25

2013-11-25 Thread Benjamin Smedberg

On 11/25/2013 1:09 PM, Ryan VanderMeulen wrote:


So we're clear, this analysis is of 32bit Firefox builds running a 
64bit Windows OS, right? So the process is still limited to 4GB of 
address space. Wouldn't a native 64bit Firefox build have 
significantly higher address space available to it?
This is an analysis of the crash reports from a particular day, so it's 
almost all going to be 32-bit Firefox because that's the only thing we 
release. 64-bit Firefox Nightly crashes due to OOM would be lumped into 
the OOM-win64 bucket in the analysis, but are probably not relevant.


--BDS



Re: Reacting more strongly to low-memory situations in Firefox 25

2013-11-25 Thread Ryan VanderMeulen
On 11/25/2013 12:02 PM, Benjamin Smedberg wrote:
> Because one of the long-term possibilities discussed for solving this

> issue is releasing a 64-bit version of Firefox, I additionally broke
> down the "OOM" crashes into users running a 32-bit version of Windows
> and users running a 64-bit version of Windows:
>
> OOM,win64,15744
> OOM,win32,42097
>
> I did this by checking the "TotalVirtualMemory" annotation in the crash
> report: if it reports 4G of TotalVirtualMemory, then the user has a
> 64-bit Windows, and if it reports either 2G or 3G, the user is running a
> 32-bit Windows. So I do not expect that doing Firefox for win64 will
> help users who are already experiencing memory issues, although it may
> well help new users and users who are running memory-intensive
> applications such as games.

So we're clear, this analysis is of 32bit Firefox builds running on a 
64bit Windows OS, right? So the process is still limited to 4GB of 
address space. Wouldn't a native 64bit Firefox build have significantly 
more address space available to it?



Re: Reacting more strongly to low-memory situations in Firefox 25

2013-11-25 Thread Ryan VanderMeulen

On 11/25/2013 12:02 PM, Benjamin Smedberg wrote:

Because one of the long-term possibilities discussed for solving this
issue is releasing a 64-bit version of Firefox, I additionally broke
down the "OOM" crashes into users running a 32-bit version of Windows
and users running a 64-bit version of Windows:

OOM,win64,15744
OOM,win32,42097

I did this by checking the "TotalVirtualMemory" annotation in the crash
report: if it reports 4G of TotalVirtualMemory, then the user has a
64-bit Windows, and if it reports either 2G or 3G, the user is running a
32-bit Windows. So I do not expect that doing Firefox for win64 will
help users who are already experiencing memory issues, although it may
well help new users and users who are running memory-intensive
applications such as games.


So we're clear, this analysis is of 32bit Firefox builds running on a 64bit 
Windows OS, right? So the process is still limited to 4GB of address 
space. Wouldn't a native 64bit Firefox build have significantly higher 
address space available to it?



Re: Reacting more strongly to low-memory situations in Firefox 25

2013-11-25 Thread Benjamin Smedberg

On 11/25/2013 12:11 PM, Ehsan Akhgari wrote:


Do we know how much memory we tend to use during the minidump 
collection phase?
No, we don't. It seems that the Windows code maps all of the DLLs into 
memory again in order to extract information from them.
  Does it make sense to try to reserve an address space range large 
enough for those allocations, and free it up right before trying to 
collect a crash report to make sure that the crash reporter would not 
run out of (V)memory in most cases?
We already do this with a 12MB reservation, which had no apparent effect 
(bug 837835).


--BDS



Re: Reacting more strongly to low-memory situations in Firefox 25

2013-11-25 Thread Ehsan Akhgari

On 2013-11-25 12:02 PM, Benjamin Smedberg wrote:

Unfortunately, often when we are out of memory, crash reports come back
as empty minidumps (because the crash reporter has to allocate memory
and/or VM space to create minidumps). We believe that most of the
empty-minidump crashes present on crash-stats are in fact also
out-of-memory crashes.


Do we know how much memory we tend to use during the minidump collection 
phase?  Does it make sense to try to reserve an address space range 
large enough for those allocations, and free it up right before trying 
to collect a crash report to make sure that the crash reporter would not 
run out of (V)memory in most cases?  That could be easier to implement 
than an OOP crash reporter.


Cheers,
Ehsan


Reacting more strongly to low-memory situations in Firefox 25

2013-11-25 Thread Benjamin Smedberg
In crashkill we have been tracking crashes that occur in low-memory 
situations for a while. However, we are seeing a troubling uptick of 
issues in Firefox 23 and then 25. I believe that some people may not be 
able to use Firefox because of these bugs, and I think that we should be 
reacting more strongly to diagnose and solve these issues and get any 
fixes that already exist sent up the trains.


Followup to dev-platform, please.

= Data and Background =

See, as some anecdotal evidence:

Bug 930797 is a user who just upgraded to Firefox 25 and is seeing these 
a lot.
Bug 937290 is another user who just upgraded to Firefox 25 and is seeing 
a bunch of crashes, some of which are empty-dump and some of which are 
all over the place (maybe OOM crashes).
See also a recent thread "How to track down why Firefox is crashing so 
much." in firefox-dev, where two additional users are reporting 
consistent issues (one mac, one windows).


Note that in many cases, the user hasn't actually run out of memory: 
they have plenty of physical memory and page file available. In most 
cases they also have enough available VM space! Often, however, this VM 
space is fragmented to the point where normal allocations (64k jemalloc 
heap blocks, or several-megabyte graphics or network buffers) cannot be 
made. Because of work done during the recent tree closure, we now have 
this measurement in about:memory (on Windows) as vsize-max-contiguous. 
It is also being computed for Windows crashes on crash-stats for clients 
that are new enough (win7+).
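The measurement amounts to scanning the address space for the largest free gap 
between allocated regions. A toy illustration of the idea (not the actual 
VirtualQuery-based code; `max_contiguous_free` and the region list are 
invented for the example):

```python
def max_contiguous_free(regions, space_size):
    """Largest free gap, in bytes, in an address space of space_size bytes,
    given non-overlapping allocated (start, end) ranges sorted by start."""
    largest, cursor = 0, 0
    for start, end in regions:
        largest = max(largest, start - cursor)  # gap before this region
        cursor = max(cursor, end)
    return max(largest, space_size - cursor)    # gap after the last region

MB = 2**20
# A fragmented 2 GB space: 1 MB allocations every 64 MB leave no gap
# larger than 63 MB even though almost all of the space is free.
regions = [(i * 64 * MB, i * 64 * MB + MB) for i in range(32)]
print(max_contiguous_free(regions, 2048 * MB) // MB)  # 63
```

This is why "plenty of free VM" and "cannot satisfy a several-megabyte 
allocation" can both be true at once.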


Unfortunately, often when we are out of memory, crash reports come back 
as empty minidumps (because the crash reporter has to allocate memory 
and/or VM space to create minidumps). We believe that most of the 
empty-minidump crashes present on crash-stats are in fact also 
out-of-memory crashes.


I've been creating reports about OOM crashes using crash-stats and found 
some startling data:

Looking just at the Windows crashes from last Friday (22-Nov):
* probably not OOM: 91565
* probably OOM: 57841
* unknown (not enough data because they are running an old version of 
Windows that doesn't report VM information in crash reports): 150874


The criteria for "probably OOM" are:
* Has an OOMAnnotationSize marking, meaning jemalloc aborted in an 
infallible allocation
* Has "ABORT: OOM" in the app notes meaning XPCOM aborted in infallible 
string/hashtable/array code

* Has <50MB of contiguous free VM space

This data seems to indicate that almost 40% of our Firefox crashes are 
due to OOM conditions.


Because one of the long-term possibilities discussed for solving this 
issue is releasing a 64-bit version of Firefox, I additionally broke 
down the "OOM" crashes into users running a 32-bit version of Windows 
and users running a 64-bit version of Windows:


OOM,win64,15744
OOM,win32,42097

I did this by checking the "TotalVirtualMemory" annotation in the crash 
report: if it reports 4G of TotalVirtualMemory, then the user has a 
64-bit Windows, and if it reports either 2G or 3G, the user is running a 
32-bit Windows. So I do not expect that doing Firefox for win64 will 
help users who are already experiencing memory issues, although it may 
well help new users and users who are running memory-intensive 
applications such as games.
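The classification heuristic can be sketched like this (a simplified stand-in 
for the actual analysis script; the report dicts and function names are 
hypothetical, not the crash-stats schema):

```python
from collections import Counter

def classify_windows_bitness(total_vm_bytes):
    """Infer the Windows bitness of a 32-bit Firefox process from the
    TotalVirtualMemory crash annotation, per the heuristic above:
    ~4 GB of address space implies 64-bit Windows, 2-3 GB implies 32-bit."""
    gb = total_vm_bytes // 2**30
    if gb >= 4:
        return "win64"
    if gb in (2, 3):
        return "win32"
    return "unknown"

def tally_oom_crashes(reports):
    """Count OOM crash reports by inferred OS bitness."""
    return Counter(classify_windows_bitness(r["TotalVirtualMemory"])
                   for r in reports)

# Hypothetical annotations, mirroring the 4G / 2-3G split described above.
sample = [{"TotalVirtualMemory": 4 * 2**30},
          {"TotalVirtualMemory": 2 * 2**30},
          {"TotalVirtualMemory": 3 * 2**30}]
print(tally_oom_crashes(sample))  # win32: 2, win64: 1
```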


Scripts for this analysis at 
https://github.com/mozilla/jydoop/blob/master/scripts/oom-classifier.py 
if you want to see what it's doing.


= Next Steps =

As far as I can tell, there are several basic problems that we should be 
tackling. For now, I'm going to brainstorm some ideas and hope that 
people will react to or take ownership of these items.


== Measurement ==

* Move minidump collection out of the Firefox process. This is something 
we've been talking about for a while but apparently never filed, so it's 
now filed as https://bugzilla.mozilla.org/show_bug.cgi?id=942873
* Develop a tool/instructions for users to profile the VM allocations in 
their Firefox process. We know that many of the existing VM problems are 
graphics-related, but we're not sure exactly who is making the 
allocations, and whether they are leaks, cached textures, or other 
things, and whether it's Firefox code, Windows code, or driver code 
causing problems. I know dmajor is working on some xperf logging for 
this, and we should probably try to expand that out into something that 
we can ask end users who are experiencing problems to run.
* The about:memory patches which add contiguous-vm measurement should 
probably be uplifted to Fx26, along with any other measurement tools that 
would be valuable diagnostics.


== VM fragmentation ==

Bug 941837 identified a bad VM allocation pattern in our JS code which 
was causing 1MB VM fragmentation. Getting this patch uplifted seems 
important. But I know that several other things landed as a part of 
fixing the recent tree closure: has anyone identified whether any of the 
other patches here could be affecting release users and should be uplifted?


=

Re: A/B testing with telemetry

2013-11-25 Thread Lawrence Mandel
- Original Message -
> Do we have a formalized way to do A/B testing with telemetry? That is,
> assuming that there is a telemetry probe that measures problem
> symptoms and a boolean pref for  turning on a potential solution, is
> there a way to declare the pref as something that telemetry queries
> can be constrained by so that it would be possible to compare the
> symptom probe with and without the potential solution?

No, but I think this is desirable. 
> 
> If not, is there a better way to do this than duplicating probes and
> checking the pref to see which probe should be fed?

A probe is not restricted to boolean values. You can define a histogram that 
maps values to conditions. As such, you can have a single probe that captures 
all of the required data. Depending on your use case, this structure may be 
more difficult to read after the data is aggregated on the server.
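Concretely, the value-to-condition mapping can be as simple as an enumerated 
histogram whose bucket index encodes both the pref state and the symptom 
(a hedged sketch of the encoding only — not a real Telemetry API; the plain 
list stands in for a histogram object):

```python
# Four buckets of one enumerated histogram encode (pref on/off) x
# (symptom seen/not seen), so a single probe captures both conditions.
def bucket_index(pref_enabled, symptom_occurred):
    return (2 if pref_enabled else 0) + (1 if symptom_occurred else 0)

def accumulate(histogram, pref_enabled, symptom_occurred):
    histogram[bucket_index(pref_enabled, symptom_occurred)] += 1

hist = [0, 0, 0, 0]
accumulate(hist, pref_enabled=True, symptom_occurred=True)
accumulate(hist, pref_enabled=True, symptom_occurred=False)
accumulate(hist, pref_enabled=False, symptom_occurred=True)
print(hist)  # [0, 1, 1, 1]
```

Comparing symptom rates with the pref on vs. off is then a matter of comparing 
bucket pairs (2, 3) against (0, 1) server-side.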

Lawrence


Re: Support for non-UTF-8 platform charset

2013-11-25 Thread Yuri Dario
Hi,

>  * Is the OS/2 port still alive and supported as an in-tree port? The
> latest releases I've seen are 10.x ESR.

the OS/2 port is alive; we already have a beta release for 17.x, and a 
more current version will follow.


-- 
Bye,

Yuri Dario

/*
 * OS/2 open source software
 * http://web.os2power.com/yuri
 * http://www.netlabs.org
*/



PSA: Shutting down my mozilla git mirror in three weeks

2013-11-25 Thread Ehsan Akhgari
Dear all,

For the past two and a half years I have been maintaining a git mirror of
mozilla-central plus a lot of the other branches that people found useful
at https://github.com/mozilla/mozilla-central.  Over the years this proved
to be too much work for me to keep up with, and given the existence of the
new git mirrors that are supported by RelEng, I'm planning to shut down the
update jobs for this repository on Friday, Dec 13, 2013 and take the
repository down.

I strongly suggest that if you have been using and relying on this
repository before, please consider switching to the RelEng repositories as
soon as possible.  https://github.com/mozilla/gecko-dev is the main
repository where you can find branches such as trunk/aurora/b2g
branches/etc and https://github.com/mozilla/gecko-projects is a repository
which contains project branches and twigs.  (Note that my repository hosts
all of these branches in a single location, but from now on if you use both
of these branch subsets you will need to have two upstream remotes added in
your local git clone.)

The RelEng repositories have a similar history line including the CVS
history but they have completely different commit SHA1s, so pulling that
repo into your existing clones is probably not what you want.  If you don't
have a lot of non-merged branches, your safest bet is cloning from scratch
and then porting your existing branches manually.  If you have a lot of local
branches, you may want to wait for the script that John Schoenick is
working on in bug 929338 which will assist you in rebasing those branches
on top of the new history line.

Last but not least, I really hate to have to disrupt your workflow like
this.  I do hope that three weeks is enough advance notice for everybody to
successfully switch over to the new mirrors, but if you have a reason for
me to delay this please let me know and I will do my best to accommodate.

Cheers,
--
Ehsan



Re: Unified builds

2013-11-25 Thread Ehsan Akhgari

On 2013-11-23 7:22 AM, ISHIKAWA,chiaki wrote:

(2013/11/23 1:41), Ehsan Akhgari wrote:

On 2013-11-21 1:12 PM, ISHIKAWA,chiaki wrote:

(2013/11/22 2:17), Ehsan Akhgari wrote:


FWIW if this proves to be common, it's a huge problem since it would
affect our crash stats etc... :(


Well, if the problem is related to the symptom
that I observed with the use of -gsplit-dwarf option
with ccache, then we may be in for a big surprise.

But in that case, the debug information had a file name that
was produced by the initial CPP invocation (*.i), and despite the
#file, #line information, the invocation of gcc using this intermediate
file as input produced gdb debug information using "*.i" file name, and
thus later gdb could not locate source files at all.

Here we are NOT producing such a strange intermediate file, correct?

Hmm, but I think I see some trouble coming up on the horizon :-(
(A few tweaks to the front end of compilers affected may be necessary.)


Hmm, to the best of my knowledge, we don't generate the *.i files unless
you explicitly request them.  Is that what you did in that build?



No, the build in question is from an automated periodic test.
(So I have no control over how it is compiled, etc. It should be using
the standard OSX build toolchain with default settings, etc.)


It's hard to say exactly what's going on here without knowing more about 
how this build is produced.  It would be really great if you could file 
a bug about this with more details on how to reproduce this broken build.


Thanks!
Ehsan



Re: Analyze C++/compiler usage and code stats easily

2013-11-25 Thread Ehsan Akhgari
This patch doesn't seem to exist any more.  Do you have another copy of it
lying somewhere?

Thanks!

--
Ehsan



On Fri, Nov 15, 2013 at 12:43 AM, Gregory Szorc  wrote:

> C++ developers,
>
> Over 90% of the CPU time required to build the tree is spent compiling or
> linking C/C++. So, anything we can do to make that faster will make the
> overall build faster.
>
> I put together a quick patch [1] to make it rather simple to extract
> compiler resource usage and very basic code metrics during builds.
>
> Simply apply that patch and build with `mach build --profile-compiler` and
> your machine will produce all kinds of potentially interesting
> measurements. They will be stuffed into objdir/.mozbuild/compilerprof/.
> If you don't feel like waiting (it will take about 5x longer than a regular
> build because it performs separate preprocessing, ast, and codegen compiler
> invocations 3 times each), grab an archive of an OS X build I just
> performed from [2] and extract it to objdir/.mozbuild/.
>
> I put together an extremely simple `mach compiler-analyze` command to sift
> through the results. e.g.
>
> $ mach compiler-analyze preprocessor-relevant-lines
> $ mach compiler-analyze codegen-sizes
> $ mach compiler-analyze codegen-total-times
>
> Just run `mach help compiler-analyze` to see the full list of what can be
> printed. Or, write your own code to analyze the produced JSON files.
>
> I'm sure people who love getting down and dirty with C++ will be
> interested in this data. I have no doubt that there are compiler time and code
> size wins waiting to be discovered through this data. We may even uncover a
> perf issue or two. Who knows.
>
> Here are some questions I have after casually looking at the data:
>
> * The mean ratio of .o size to lines from preprocessor is 16.35
> bytes/line. Why do 38/4916 (0.8%) files have a ratio over 100? Why are a
> lot of these in js/ and gfx?
>
> * What's in the 150 files that have 100k+ lines after preprocessing that
> makes them so large?
>
> * Why does lots of js/'s source code gravitate towards the "bad" extreme
> for most of the metrics (code size, compiler time, preprocessor size)?
>
> Disclaimer: This patch is currently hacked together. If there is an
> interest in getting this checked into the tree, I can clean it up and do
> it. Just file a bug in Core :: Build Config and I'll make it happen when I
> have time. Or, if an interested party wants to champion getting it landed,
> I'll happily hand it off :)
>
> [1] http://hg.gregoryszorc.com/gecko-collab/rev/741f3074e313
> [2] https://people.mozilla.org/~gszorc/compiler_profiles.tar.bz2


Re: Is for...of on a live DOM node list supposed to do the right thing?

2013-11-25 Thread Anne van Kesteren
On Fri, Nov 22, 2013 at 6:00 PM, Boris Zbarsky  wrote:
>   for (var x of Array.from(list))
>
> to explicitly snapshot?  Cross-browser compat might not be great so far, but
> it's not for for..of either.

Right. And going forward we'll make sure to introduce fewer live lists
on the platform side. E.g. querySelector et al return static lists.
The proposed query and queryAll methods will return a subclass of
Array.
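The underlying hazard is the usual mutate-while-iterating one; a Python 
analogue (not DOM code) shows why snapshotting first — what `Array.from(list)` 
does for a live NodeList — helps:

```python
# Removing items while iterating the "live" list skips neighbours;
# iterating a snapshot (copy) visits every original item.
nodes = ["a", "b", "c", "d"]
visited_live = []
for n in nodes:
    visited_live.append(n)
    nodes.remove(n)        # mutation shifts later items under the iterator

nodes = ["a", "b", "c", "d"]
visited_snapshot = []
for n in list(nodes):      # snapshot first
    visited_snapshot.append(n)
    nodes.remove(n)

print(visited_live)        # ['a', 'c']
print(visited_snapshot)    # ['a', 'b', 'c', 'd']
```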


-- 
http://annevankesteren.nl/


Re: Running a backout branch with Mercurial at JAWS.

2013-11-25 Thread Gijs Kruitbosch

On 25/11/13, 10:25, Philip Chee wrote:

Original blog post at


Another problem with identifying a patch as Australis-specific is
patches with bits in Australis/browser and bits in Gecko (e.g. layout or
content). The bits in Gecko are necessary for Australis to work but are
otherwise not specific to Australis and indeed might contain some
welcome optimizations that would be useful for other Gecko based software.

Phil


I believe this problem is too tiny to worry about. By definition, 
Australis changes are almost entirely in browser/ (with sprinklings of 
toolkit/ and teensy bits of widget/cocoa and layout|gfx for mac titlebar 
stuff, IIRC). All of the general perf improvement stuff that was 
required for some of the Australis work (e.g. SVG cache improvements, 
windows titlebar hittest improvements) has landed on m-c pre-Australis 
and is therefore also on holly. So, so far, there is almost nothing that 
fits the bill of the issue you described.


The backout branch isn't permanent, so even for whatever fraction of 
fractions that could potentially be adapted to work on the backout 
branch with the "old" layout of the browser, the most that would happen 
is that such changes are delayed for a cycle or two before riding the 
trains. We're not near a new ESR so I don't see any issues there either.


Much more importantly, I suspect that unless there is compelling 
evidence that such changes should be uplifted (e.g. security, large perf 
improvements with low risk, etc.), we'd keep them out of holly because 
we want a stable base for Aurora, and holly will see a lot less testing 
than mainline m-c. We're already experiencing this now as a seemingly 
innocent and well-tested patch for gfx regions is causing perma-orange 
on holly but not on m-c (https://bugzilla.mozilla.org/show_bug.cgi?id=942250).


~ Gijs




Re: Support for non-UTF-8 platform charset

2013-11-25 Thread Robert O'Callahan
On Tue, Nov 26, 2013 at 12:46 AM, Henri Sivonen wrote:

> We have a concept of platform charset that goes back to pre-NT
> Windows, Mac OS Classic, OS/2 and pre-UTF-8 *nix platforms. This
> concept gets in the way of doCOMtaminating old code to use only new
> facilities in mozilla::dom::EncodingUtils.
>
> These days, on Mac and Android, we say the platform charset is always
> UTF-8.
>
> On Windows, it seems that we access various system resources through
> post-NT Unicode APIs, but we still try to use a locale-affiliated
> legacy encoding for Save As *content*, for the old times' sake.
>
> On *nix platforms, it's not clear to me what exactly the platform
> charset is used for these days. An MXR search turns up surprisingly
> little.
>
> On OS/2, it seems the platform charset is always one of several
> non-UTF-8 encodings.
>

We shouldn't keep anything around just for OS/2 or non-UTF8 *nix, IMHO. So
the only question is what's required on Windows.

Rob
-- 
Jtehsauts  tshaei dS,o n" Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
le atrhtohu gthot sf oirng iyvoeu rs ihnesa.r"t sS?o  Whhei csha iids  teoa
stiheer :p atroa lsyazye,d  'mYaonu,r  "sGients  uapr,e  tfaokreg iyvoeunr,
'm aotr  atnod  sgaoy ,h o'mGee.t"  uTph eann dt hwea lmka'n?  gBoutt  uIp
waanndt  wyeonut  thoo mken.o w


Re: Platform-specific nsICollation implementations

2013-11-25 Thread Jonathan Kew

On 25/11/13 11:16, Henri Sivonen wrote:

Now that we have ICU in the tree, do we still need platform-specific
nsICollation implementations? In particular, the Windows and Unix back
ends look legacy enough that it seems that making nsICollation
ICU-backed would be a win.



In particular, this would allow us to ensure consistent behavior across 
platforms, which I doubt the current implementations can provide. (And 
it'd be a good opportunity to document more carefully what the flags in 
nsICollation.idl really mean, or fix them if they're broken... 
kCollationCaseInsensitiveAscii, I'm looking at you!)




Support for non-UTF-8 platform charset

2013-11-25 Thread Henri Sivonen
We have a concept of platform charset that goes back to pre-NT
Windows, Mac OS Classic, OS/2 and pre-UTF-8 *nix platforms. This
concept gets in the way of doCOMtaminating old code to use only new
facilities in mozilla::dom::EncodingUtils.

These days, on Mac and Android, we say the platform charset is always UTF-8.

On Windows, it seems that we access various system resources through
post-NT Unicode APIs, but we still try to use a locale-affiliated
legacy encoding for Save As *content*, for the old times' sake.

On *nix platforms, it's not clear to me what exactly the platform
charset is used for these days. An MXR search turns up surprisingly
little.

On OS/2, it seems the platform charset is always one of several
non-UTF-8 encodings.

Questions:

 * On Windows, do we really need to pay homage to the pre-NT legacy
when doing Save As? How about we just use UTF-8 for "HTML Page,
complete" reserialization like on Mac?

 * Is the OS/2 port still alive and supported as an in-tree port? The
latest releases I've seen are 10.x ESR.

 * Do we (or gtk) really still support non-UTF-8 platform charset
values on *nix? (MXR turns up so little that it makes me wonder whether
non-UTF-8 support might have already gone away for practical
purposes.)

 * If we do, do we want to / need to support *nix variants with an
encoding other than UTF-8 as the system encoding?

 * If the previous question needs telemetry to answer, do we even get
telemetry from distro Linux builds, from Solaris builds or from *BSD
builds? (Are there other *nix ports alive?)

 * If we do need to support non-UTF-8 system encodings, do we need to
support EUC-TW? For Solaris maybe? Out of the encodings that have
appeared as past *nix system encodings, it's the one for which we have
no Web-motivated reason to keep code around.

-- 
Henri Sivonen
hsivo...@hsivonen.fi
http://hsivonen.fi/


Re: Platform-specific nsICollation implementations

2013-11-25 Thread Axel Hecht

On 11/25/13 12:16 PM, Henri Sivonen wrote:

Now that we have ICU in the tree, do we still need platform-specific
nsICollation implementations? In particular, the Windows and Unix back
ends look legacy enough that it seems that making nsICollation
ICU-backed would be a win.



We don't build ICU yet on many of our platforms, with 
https://bugzilla.mozilla.org/show_bug.cgi?id=912371 being the biggest 
blocker on the way there, I guess (cross-compiling is broken).


Axel


Platform-specific nsICollation implementations

2013-11-25 Thread Henri Sivonen
Now that we have ICU in the tree, do we still need platform-specific
nsICollation implementations? In particular, the Windows and Unix back
ends look legacy enough that it seems that making nsICollation
ICU-backed would be a win.

-- 
Henri Sivonen
hsivo...@hsivonen.fi
http://hsivonen.fi/


Re: nsIPluginTagInfo::getDocumentEncoding()

2013-11-25 Thread Henri Sivonen
On Sun, Nov 24, 2013 at 2:09 PM, Henri Sivonen  wrote:
> What's this API for?

Looks like it's for this:
http://www-archive.mozilla.org/projects/blackwood/java-plugins/api/org/mozilla/pluglet/mozilla/PlugletTagInfo2.html

-- 
Henri Sivonen
hsivo...@hsivonen.fi
http://hsivonen.fi/


Running a backout branch with Mercurial at JAWS.

2013-11-25 Thread Philip Chee
Original blog post at


Another problem with identifying a patch as Australis-specific is
patches with bits in Australis/browser and bits in Gecko (e.g. layout or
content). The bits in Gecko are necessary for Australis to work but are
otherwise not specific to Australis and indeed might contain some
welcome optimizations that would be useful for other Gecko based software.

Phil

-- 
Philip Chee , 
http://flashblock.mozdev.org/ http://xsidebar.mozdev.org
Guard us from the she-wolf and the wolf, and guard us from the thief,
oh Night, and so be good for us to pass.