Re: Use of MOZ_ARRAY_LENGTH for static constants?

2014-02-26 Thread Neil

Ehsan Akhgari wrote:


On 2/23/14, 4:05 PM, Neil wrote:

Both ArrayLength and MOZ_ARRAY_LENGTH are typesafe when compiled as 
C++, however ArrayLength has the disadvantage that it's not a 
constant expression in MSVC. In unoptimised builds this is direly 
slow as the templated function does not even get inlined, but even in 
optimised builds the MSVC compiler is unable to completely optimise 
away static variables. In particular, the variable is treated as if 
it is forward-declared: the value is fetched from memory in each 
function that uses it. (In my simple test there were enough registers 
to avoid fetching the value more than once, but I don't know what 
happens if this is not the case. And at least the optimiser was able 
to avoid creating any static constructors.) Would it therefore be 
preferable to use MOZ_ARRAY_LENGTH in such cases?


Which cases are those exactly? 


The one that I spotted is that MSVC is unable to optimise static 
variables, e.g. when you write static const size_t length = ArrayLength(array); 
If you write this as a local then the compiler is able to optimise it 
away in release builds.
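
For concreteness, here is a minimal sketch of the difference, using 
simplified stand-ins for the real definitions in mfbt/ArrayUtils.h:

#include <stddef.h>

// ArrayLength is an ordinary inline template function, so pre-C++11
// MSVC does not treat a call to it as a constant expression.
template <typename T, size_t N>
size_t ArrayLength(T (&aArr)[N]) { return N; }

// MOZ_ARRAY_LENGTH boils down to a sizeof over a declared-but-never-
// defined helper, which *is* an integral constant expression.
template <typename T, size_t N>
char (&ArrayLengthHelper(T (&aArr)[N]))[N];
#define MOZ_ARRAY_LENGTH(arr) sizeof(ArrayLengthHelper(arr))

static int gTable[16];

// MSVC initialises this at runtime and re-reads it from memory in
// each function that uses it:
static const size_t kLenFromFunction = ArrayLength(gTable);

// This folds to 16 at compile time, even in debug builds:
static const size_t kLenFromMacro = MOZ_ARRAY_LENGTH(gTable);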


Note that I don't think that we need to care about the performance of 
ArrayLength() in non-optimized builds.


Debug builds are now sufficiently slow that I no longer dogfood them, 
but as it happens MSVC is able to optimise MOZ_ARRAY_LENGTH in both opt 
and debug builds.



On 2014-02-24, 1:25 PM, Chris Peterson wrote:

To keep developers and reviewers from having to remember special 
cases, maybe MOZ_ARRAY_LENGTH should just be the standard everywhere.


They're not equivalent.


Would you mind expanding on that?

--
Warning: May contain traces of nuts.


Re: support for Visual Studio 2010

2014-02-26 Thread Ted Mielczarek
On 2/26/2014 2:15 AM, Cameron McCormack wrote:
 When do we plan to drop support for Visual Studio 2010?  I remember at
 one point that it was not possible to generate builds that ran on
 Windows XP with VS2010, but Update 1 (released in November) added
 support for that.

There are no immediate plans for this AFAIK. We're still using VS2010 as
the compiler for our official builds. We were investigating VS2013 to
replace it since VS2013 ships with a 64-bit to 32-bit cross toolchain,
which would alleviate our PGO memory usage woes permanently, but in the
meantime, between the include-what-you-use and unified build work, the
PGO linker memory usage has gone way down anyway, so it's not pressing.

Historically we haven't updated toolchains without a pressing reason,
since there's a lot of hassle involved. Is there a specific reason
you're asking?

-Ted




Re: We live in a memory-constrained world

2014-02-26 Thread Nicholas Nethercote
On Tue, Feb 25, 2014 at 8:18 PM, Mike Hommey m...@glandium.org wrote:
 
  I never understood why we need those jobs to be builds. Why not turn
  --enable-valgrind on m-c builds, and run valgrind as a test job?

 --disable-jemalloc is needed as well.

 That shouldn't be needed anymore with --soname-synonyms=somalloc=NONE
 on the valgrind command line.

I did not know about that! Thank you. I filed
https://bugzilla.mozilla.org/show_bug.cgi?id=977067 to update |mach
valgrind-test| accordingly, and I will also update the docs.

That will make it easier to use standard builds... turning on
--enable-valgrind for m-c builds is pretty reasonable, because it
makes only tiny differences (insertion of a few nops in
non-perf-critical places) to the resulting code.
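
For concreteness, the invocation Mike suggests would look something like 
this (the binary path here is hypothetical):

$ valgrind --soname-synonyms=somalloc=NONE ./dist/bin/firefox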

Nick


Re: support for Visual Studio 2010

2014-02-26 Thread Ehsan Akhgari
On Wed, Feb 26, 2014 at 6:58 AM, Ted Mielczarek t...@mielczarek.org wrote:

 On 2/26/2014 2:15 AM, Cameron McCormack wrote:
  When do we plan to drop support for Visual Studio 2010?  I remember at
  one point that it was not possible to generate builds that ran on
  Windows XP with VS2010, but Update 1 (released in November) added
  support for that.
 
 There are no immediate plans for this AFAIK. We're still using VS2010 as
 the compiler for our official builds. We were investigating VS2013 to
 replace it since VS2013 ships with a 64-bit to 32-bit cross toolchain,
 which would alleviate our PGO memory usage woes permanently, but in the
 meantime, between the include-what-you-use and unified build work, the
 PGO linker memory usage has gone way down anyway, so it's not pressing.


FWIW that work is stalled on a compiler bug, and we won't be able to switch
to VS2013 until that is resolved by Microsoft.

Cheers,
Ehsan


Web APIs documentation meeting on Friday at 9 AM PST

2014-02-26 Thread Eric Shepherd
The Web API documentation meeting is Friday at 9 AM Pacific Time. 
Everyone's welcome to attend; if you're interested in ensuring that 
these APIs are properly documented, we'd love your input.


We have an agenda, as well as details on how to join, here:

https://etherpad.mozilla.org/WebAPI-docs-2014-02-28

We look forward to seeing you there!

--
Eric Shepherd
Developer Documentation Lead
Mozilla
Blog: http://www.bitstampede.com/
Twitter: @sheppy



Re: We live in a memory-constrained world

2014-02-26 Thread Ehsan Akhgari

On 2014-02-26, 8:44 AM, Nicholas Nethercote wrote:


I did not know about that! Thank you. I filed
https://bugzilla.mozilla.org/show_bug.cgi?id=977067 to update |mach
valgrind-test| accordingly, and I will also update the docs.

That will make it easier to use standard builds... turning on
--enable-valgrind for m-c builds is pretty reasonable, because it
makes only tiny differences (insertion of a few nops in
non-perf-critical places) to the resulting code.


Sorry if this is driving the thread off-topic, but can you please 
provide a list of the things that --enable-valgrind changes?  I am very 
curious to know the implications of turning this on.


Thanks!
Ehsan



Re: We live in a memory-constrained world

2014-02-26 Thread Jonathan Griffin
Splitting the valgrind tests up and running them separately as test jobs 
in TBPL is definitely something the A*Team can help with.  I've filed 
bug 977240 for this.


Jonathan

On 2/25/14 7:25 PM, Nicholas Nethercote wrote:

On Tue, Feb 25, 2014 at 2:32 PM, Mike Hommey m...@glandium.org wrote:

I never understood why we need those jobs to be builds. Why not turn
--enable-valgrind on m-c builds, and run valgrind as a test job?

--disable-jemalloc is needed as well.

As for the structure... I just took what already existed and got it
into good enough shape to make it visible on TBPL. (That was more than
enough for a non-TBPL/buildbot expert like me to take on.) I'm fine
with the idea of the Valgrind test job being split up into suites and
using the test machines for the testing part, but I'm not jumping up
and down to volunteer to do it.

Nick


Re: Including Adobe CMaps

2014-02-26 Thread Brendan Dahl
Yury Delendik worked on reformatting the files a bit and was able to get them 
down to a 1.1MB binary, which gzips to 990KB. This seems like a reasonable size to 
me and involves a lot less work than setting up a process for distributing 
these files via CDN.

Brendan

On Feb 24, 2014, at 10:14 PM, Rik Cabanier caban...@gmail.com wrote:

 
 
 
 On Mon, Feb 24, 2014 at 5:01 PM, Andreas Gal andreas@gmail.com wrote:
 
 My assumption is that certain users only need certain CMaps because they tend 
 to read only documents in certain languages. This seems like something we can 
 really optimize and avoid ahead-of-time download cost for.
 
 So, you'd only install the Korean CMaps if the language is Korean?
 The problem with that is that a user might install an English version of 
 Firefox but still open Korean PDFs (which would then display as junk)
  
 
 The fact that we don’t do this yet doesn’t seem like a good criterion. There 
 are a lot of good things we aren’t doing yet. You can be the first to change 
 that on this particular topic, if it technically makes sense.
 
 Load-on-demand (with an option to download all of them) seems like a nice 
 solution. A large majority of users will never need CMaps or only a very 
 small subset.
  
 On Feb 25, 2014, at 1:27 AM, Brendan Dahl bd...@mozilla.com wrote:
 
  It’s certainly possible to load dynamically. Do we currently do this for 
  any other Firefox resources?
 
  From what I’ve seen, many PDFs use CMaps even if they don’t necessarily 
  have CJK characters, so it may just be better to include them. FWIW both 
  Poppler and MuPDF embed the CMaps.
 
  Brendan
 
  On Feb 24, 2014, at 3:01 PM, Andreas Gal andreas@gmail.com wrote:
 
  Is this something we could load dynamically and offline cache?
 
  Andreas
 
  Sent from Mobile.
 
  On Feb 24, 2014, at 23:41, Brendan Dahl bd...@mozilla.com wrote:
 
  PDF.js plans to soon start including and using Adobe CMap files for 
  converting character codes to character IDs (CIDs) and mapping character 
  codes to Unicode values. This will fix a number of bugs in PDF.js and 
  will improve our support for Chinese, Korean, and Japanese (CJK) documents.
 
  I wanted to inform dev-platform because there are quite a few files and 
  they are large. The files are loaded lazily as needed so they shouldn't 
  affect the size of Firefox when running, but they will affect the 
  installation size. There are 168 files with an average size of ~40KB, and 
  all of the files together are roughly:
  6.9M
  2.2M when gzipped
 
  http://sourceforge.net/adobe/cmap/wiki/Home/
 


Re: Including Adobe CMaps

2014-02-26 Thread Bobby Holley
That's still a ton for something that most of our users will not (or will
rarely) use. I think we absolutely need to get an on-demand story for this
kind of stuff. It isn't the first time it has come up.

bholley


On Wed, Feb 26, 2014 at 11:38 AM, Brendan Dahl bd...@mozilla.com wrote:

 Yury Delendik worked on reformatting the files a bit and was able to get
 them down to a 1.1MB binary, which gzips to 990KB. This seems like a
 reasonable size to me and involves a lot less work than setting up a
 process for distributing these files via CDN.

 Brendan



Re: Including Adobe CMaps

2014-02-26 Thread Andreas Gal

This randomly reminds me that it might be time to review zip as our compression 
format for omni.ja.

ls -l omni.ja 

7862939

ls -l omni.tar.xz (tar and then xz -z)

4814416

LZMA2 is available as a public domain implementation. It uses a bit more memory 
than zip, but it's still within reason (the default level 6 is around 1MB to 
decode I believe). A fairly easy way to use it would be to add support for a 
custom compression format for our version of libjar.
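
A rough way to reproduce the comparison above from a shell (paths 
hypothetical; assumes the unpacked omni.ja contents are in omni/):

$ tar -cf omni.tar omni/   # bundle the unpacked files into a single stream
$ xz -6 omni.tar           # LZMA2 at the default preset 6 mentioned above
$ ls -l omni.tar.xz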

Andreas

On Feb 26, 2014, at 8:38 PM, Brendan Dahl bd...@mozilla.com wrote:

 Yury Delendik worked on reformatting the files a bit and was able to get them 
 down to a 1.1MB binary, which gzips to 990KB. This seems like a reasonable size 
 to me and involves a lot less work than setting up a process for distributing 
 these files via CDN.
 
 Brendan
 


Re: Including Adobe CMaps

2014-02-26 Thread Andreas Gal

Let's turn this question around. If we had an on-demand way to load stuff like 
this, what else would we want to load on demand?

Andreas

On Feb 26, 2014, at 8:53 PM, Bobby Holley bobbyhol...@gmail.com wrote:

 That's still a ton for something that most of our users will not (or will
 rarely) use. I think we absolutely need to get an on-demand story for this
 kind of stuff. It isn't the first time it has come up.
 
 bholley
 
 


Re: Including Adobe CMaps

2014-02-26 Thread Jonathan Kew

On 26/2/14 19:57, Andreas Gal wrote:


Let's turn this question around. If we had an on-demand way to load stuff like 
this, what else would we want to load on demand?


A few examples:

Spell-checking dictionaries
Hyphenation tables
Fonts for additional scripts

JK






Re: Including Adobe CMaps

2014-02-26 Thread Benjamin Smedberg

On 2/26/2014 3:21 PM, Jonathan Kew wrote:

On 26/2/14 19:57, Andreas Gal wrote:


Let's turn this question around. If we had an on-demand way to load 
stuff like this, what else would we want to load on demand?


A few examples:

Spell-checking dictionaries
Hyphenation tables
Fonts for additional scripts

Yes!

Also maybe ICU data tables, although the current web-facing APIs don't 
support asynchronous download very well.


--BDS



Re: Including Adobe CMaps

2014-02-26 Thread Gregory Szorc
https://bugzilla.mozilla.org/show_bug.cgi?id=977292

Assigned to nobody.

On 2/26/2014 12:49 PM, Andreas Gal wrote:
 
 This sounds like quite an opportunity to shorten download times and reduce 
 CDN load. Who wants to file the bug? :)
 
 Andreas
 


Re: support for Visual Studio 2010

2014-02-26 Thread Cameron McCormack

Ted Mielczarek wrote:

Historically we haven't updated toolchains without a pressing reason,
since there's a lot of hassle involved. Is there a specific reason
you're asking?


The two recent things I have had to work around in VS2010 were bugs in 
handling sized enums > 32 bits (enum Blah : uint64_t) and forward 
declaration of sized enums.
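
For reference, minimal code hitting both trouble spots (the names here 
are hypothetical):

#include <stdint.h>

// A sized enum wider than 32 bits -- the first VS2010 bug.
enum Blah : uint64_t { kBig = uint64_t(1) << 40 };

// A forward declaration of a sized enum -- the second one.
enum Foo : uint8_t;
void TakesFoo(Foo aFoo);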



Re: Including Adobe CMaps

2014-02-26 Thread Nick Alexander

On 2/26/2014, 11:56 AM, Andreas Gal wrote:


This randomly reminds me that it might be time to review zip as our compression 
format for omni.ja.


Is there a meta ticket for this?  I'm interested in evaluating how much 
this would trim the mobile/android APK size.


Nick


Re: Including Adobe CMaps

2014-02-26 Thread Boris Zbarsky

On 2/26/14 3:58 PM, Wesley Hardman wrote:

Personally, I would prefer to have it already available.


We have several deployment targets with different tradeoffs.  Broadly 
speaking:


Phones: expensive data, limited storage.  Want to not use up the 
storage, so download lazily.


Consumer laptops/desktops: cheap data, plentiful storage.  Probably ok 
to download opportunistically after initial install even if not 
immediately needed.


Locked-down corporate laptops/desktops: Need a way to push out an 
install with everything already included.


Limited-connectivity kiosks and whatnot: Need a way to push out an 
install with whatever components are desired already included.



I tend to live by "It's better to have it and not need it than to not have it and 
need it."


If you have unlimited storage, sure.  We don't, on phones.

-Boris


Re: support for Visual Studio 2010

2014-02-26 Thread Ehsan Akhgari

On 2014-02-26, 4:11 PM, Cameron McCormack wrote:

Ted Mielczarek wrote:

Historically we haven't updated toolchains without a pressing reason,
since there's a lot of hassle involved. Is there a specific reason
you're asking?


The two recent things I have had to work around in VS2010 were bugs in
handling sized enums > 32 bits (enum Blah : uint64_t) and forward
declaration of sized enums.


I learned today that the bug I was talking about is fixed in VS2013 
Update 2.  dmajor is investigating things, but if we can successfully 
use the 64-bit toolchain to produce x86 binaries, that would be a great 
reason to upgrade to VS2013.  But note that dropping support for VS2010 
would be a different conversation to be had after the compiler upgrade 
is complete.


Cheers,
Ehsan


Re: Including Adobe CMaps

2014-02-26 Thread Mike Hommey
On Wed, Feb 26, 2014 at 08:56:37PM +0100, Andreas Gal wrote:
 
 This randomly reminds me that it might be time to review zip as our
 compression format for omni.ja.
 
 ls -l omni.ja 
 
 7862939
 
 ls -l omni.tar.xz (tar and then xz -z)
 
 4814416
 
 LZMA2 is available as a public domain implementation. It uses a bit
 more memory than zip, but it's still within reason (the default level 6
 is around 1MB to decode I believe). A fairly easy way to use it would
 be to add support for a custom compression format for our version of
 libjar.

IIRC, it's also slower both to compress and decompress. Note you're
comparing oranges with apples, too.
Jars are per-file compression. tar.xz is per-archive compression.
This is what I get:

$ stat -c %s ../omni.ja
8609399

$ unzip -q ../omni.ja
$ find -type f -not -name '*.xz' | while read f; do a=$(stat -c %s "$f"); xz --keep 
-z "$f"; b=$(stat -c %s "$f.xz"); if [ $a -lt $b ]; then rm "$f.xz"; else rm "$f"; 
fi; done
# The above compresses each file individually, and keeps either the
# decompressed file or the compressed file, depending on which is smaller,
# which is essentially what we do when creating omni.ja

$ find -type f | while read f; do stat -c %s "$f"; done | awk '{t+=$1} END {print t}'
# Sum all file sizes, excluding directories that du would add.
7535827

That is, obviously, without jar headers.
$ unzip -lv ../omni.ja 2>/dev/null | tail -1
27696753  8260243  70%  2068 files
$ echo $((8609399 - 8260243))
349156

Thus, that same omni.ja that is 8609399, with xz compression would be
7884983. Not much of a win, and I doubt it's worth it considering the
runtime implications.

However, there is probably room for improvement on the installer side.

Mike


Re: Including Adobe CMaps

2014-02-26 Thread Mike Hommey
On Thu, Feb 27, 2014 at 08:25:00AM +0900, Mike Hommey wrote:
 That is, obviously, without jar headers.
 $ unzip -lv ../omni.ja 2>/dev/null | tail -1
 27696753  8260243  70%  2068 files
 $ echo $((8609399 - 8260243))
 349156

Well, the overhead would be different because of different alignments,
but the order of magnitude should be the same.



Re: Including Adobe CMaps

2014-02-26 Thread Andreas Gal

Could we compress major parts of omni.ja en bloc? We could, for example, stick 
all JS we load at startup into a zip with zero compression and then compress 
that into an outer zip. I think we already support nested containers like that. 
Assuming your math is correct, even without adding LZMA2, just sticking with zip 
we should get better compression and likely better load times. Wdyt?
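
A sketch of the nested-container idea with stock zip (paths and names 
hypothetical):

$ zip -0 -r startup.zip js/          # inner archive: stored, no compression
$ zip -9 omni-outer.ja startup.zip   # outer archive deflates it as one stream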

Andreas

On Feb 27, 2014, at 12:25 AM, Mike Hommey m...@glandium.org wrote:

 Thus, that same omni.ja that is 8609399, with xz compression would be
  7884983. Not much of a win, and I doubt it's worth it considering the
  runtime implications.
 
 However, there is probably room for improvement on the installer side.
 
 Mike



Re: Architecture for viewing web archives (MHTML and MAFF)

2014-02-26 Thread parzivalrm
From what I have read on the web, the .maff format seems a far better way to 
save webpages than the .mht format.  First, it is just a .zip file, so if 
Firefox crashes, the files will still be readable.  Secondly, they are only 
about half the size.  Thirdly, they can more easily contain media files.  
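
Since a .maff file is an ordinary zip archive, any zip tool can open it 
(the filename here is hypothetical):

$ unzip -l saved_page.maff          # list the archived page files
$ unzip saved_page.maff -d page/    # extract for viewing outside Firefox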

My problem is that I cannot preview them in Directory Opus (the gee-whiz 
Windows Explorer replacement).  Would it be possible for the developers to 
write a plugin for viewing .maff files in DOpus?  I'm not a techie, so it's too 
much for me.

Regards, Bill


Re: Use of MOZ_ARRAY_LENGTH for static constants?

2014-02-26 Thread Ehsan Akhgari

On 2014-02-26, 4:52 AM, Neil wrote:

Ehsan Akhgari wrote:


On 2/23/14, 4:05 PM, Neil wrote:


Both ArrayLength and MOZ_ARRAY_LENGTH are typesafe when compiled as
C++, however ArrayLength has the disadvantage that it's not a
constant expression in MSVC. In unoptimised builds this is direly
slow as the templated function does not even get inlined, but even in
optimised builds the MSVC compiler is unable to completely optimise
away static variables. In particular, the variable is treated as if
it is forward-declared: the value is fetched from memory in each
function that uses it. (In my simple test there were enough registers
to avoid fetching the value more than once, but I don't know what
happens if this is not the case. And at least the optimiser was able
to avoid creating any static constructors.) Would it therefore be
preferable to use MOZ_ARRAY_LENGTH in such cases?


Which cases are those exactly?


The one that I spotted is that MSVC is unable to optimise static
variables, e.g. when you write static const size_t length = ArrayLength(array);
If you write this as a local then the compiler is able to optimise it
away in release builds.


So you mean the problematic cases happen when that variable is at 
global scope or something?



Note that I don't think that we need to care about the performance of
ArrayLength() in non-optimized builds.


Debug builds are now sufficiently slow that I no longer dogfood them,
but as it happens MSVC is able to optimise MOZ_ARRAY_LENGTH in both opt
and debug builds.


Wanting faster debug builds is a good thing.  I doubt that the subject 
of this thread will give you those though.  :-)


My suggestions: first, build with --enable-debug --enable-optimize.  If 
that is still too slow, try profiling the build and see what jumps out at 
you as the biggest costly things we do, and file bugs about them.



On 2014-02-24, 1:25 PM, Chris Peterson wrote:


To keep developers and reviewers from having to remember special
cases, maybe MOZ_ARRAY_LENGTH should just be the standard everywhere.


They're not equivalent.


Would you mind expanding on that?


Sure.  mozilla::ArrayLength has a special case for mozilla::Array 
http://mxr.mozilla.org/mozilla-central/source/mfbt/ArrayUtils.h#52.
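
In code terms, a simplified sketch of that special case (the real 
definitions are at the URL above):

#include <stddef.h>

// Stand-in for mozilla::Array, a class wrapping a C array.
template <typename T, size_t N>
struct Array { T mArr[N]; };

// ArrayLength has an overload for the wrapper in addition to the one for
// raw arrays, so it works on both; MOZ_ARRAY_LENGTH, being a sizeof trick
// over raw C arrays, only handles the latter.
template <typename T, size_t N>
size_t ArrayLength(T (&aArr)[N]) { return N; }

template <typename T, size_t N>
size_t ArrayLength(const Array<T, N>& aArr) { return N; }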


Cheers,
Ehsan



Re: [Proposal] Move global namespace classes under dom/events to mozilla or mozilla::dom

2014-02-26 Thread Masayuki Nakano

On 2014/02/12 23:22, Boris Zbarsky wrote:

In general, the names for things that are standardized should just match
the standard name, in the mozilla::dom namespace.  In some (rare) cases
the standard name starts with DOM; in those situations we should have
our name start with DOM as well.


I'd like to get some suggestions about our classes which do NOT 
represent DOM classes.


* nsAsyncDOMEvent (derived from nsRunnable)
* nsContentEventHandler
* nsDOMDataTransfer (?)  (derived from nsIDOMDataTransfer)
* nsDOMEventTargetHelper (derived from mozilla::dom::EventTarget)
* nsEventDispatcher
* nsEventListenerManager
* nsEventListenerService (derived from nsIEventListenerService)
* nsEventStateManager
* nsEventStates
* nsIMEStateManager
* nsJSEventListener  (derived from nsIJSEventListener)
* nsPaintRequest (derived from nsIDOMPaintRequest)
* mozilla::TextComposition

Approaches:

1. All of the classes whose names don't start with nsDOM go in 
mozilla; the others go in mozilla::dom.  However, this approach needs 
dom:: in some places in nsEventStateManager.h, nsEventListenerManager.h 
and nsEventDispatcher.h.

2. Classes which may be used in other modules and which are not 
specific to the DOM implementation, e.g., nsContentEventHandler, 
nsIMEStateManager and TextComposition, go in mozilla (for avoiding 
dom:: in header files in other modules); the rest go in mozilla::dom.


3. Or, all of them should be in mozilla::dom.

Any ideas?

# I like #1 because it's a clear rule, and non-nsDOM* classes use classes 
which are defined in other modules (i.e., not in the mozilla::dom namespace).
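
For illustration, approach #1 in code (class bodies elided; the renamed 
forms are only hypothetical here):

// Classes without the "nsDOM" prefix move to plain mozilla:
namespace mozilla {
class EventStateManager {};  // was nsEventStateManager
class EventDispatcher {};    // was nsEventDispatcher

// "nsDOM"-prefixed (DOM-facing) classes move to mozilla::dom:
namespace dom {
class DataTransfer {};       // was nsDOMDataTransfer
} // namespace dom
} // namespace mozilla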


--
Masayuki Nakano masay...@d-toybox.com
Manager, Internationalization, Mozilla Japan.


Re: [Proposal] Move global namespace classes under dom/events to mozilla or mozilla::dom

2014-02-26 Thread Boris Zbarsky

On 2/26/14 11:06 PM, Masayuki Nakano wrote:

I'd like to want some suggestions about our classes which do NOT
represent DOM classes.


I don't have terribly strong opinions on these, in general...


1. All of the classes whose names don't start with nsDOM go in mozilla.


This seems fine as a general rule of thumb.

-Boris