Re: Switching to Visual Studio 2013

2014-10-08 Thread David Major
Update: the switch will happen early in version 36.

The build machines needed a reinstall of VS2013 due to a deployment issue. As 
merge day is coming up, it's not a good time for major changes. We'll wait for 
36 to have more bake time. 

On the plus side, we picked up Update 3 in the process.

David

----- Original Message -----
> From: David Major dma...@mozilla.com
> To: dev-platform@lists.mozilla.org, firefox-...@mozilla.org
> Sent: Friday, August 22, 2014 6:27:58 PM
> Subject: Switching to Visual Studio 2013
>
> We plan to switch the Windows build machines to Visual Studio 2013 on the
> Firefox 35 train.
>
> Some benefits from this change:
> * No more linker OOM crashes. VS2013 includes a 64-bit toolchain for 32-bit
>   builds, so the linker will no longer be limited to 4GB address space.
> * The linker capacity opens the door for merging our binaries into libxul
>   (like we do on the other platforms)
> * More than 2x improvement in PGO build times
> * Better language support
>
> The tracking bug is 914596. The remaining blockers are here:
> https://bugzilla.mozilla.org/showdependencytree.cgi?id=914596&hide_resolved=1
>
> David
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


nsWrapperCache::SetIsDOMBinding is no more

2014-10-08 Thread Peter Van der Beken
When implementing a class that inherits from nsWrapperCache and uses DOM 
bindings, you used to have to call SetIsDOMBinding() from the 
constructor. This is no longer necessary, and 
nsWrapperCache::SetIsDOMBinding has been removed.


We've added a private nsWrapperCache::SetIsNotDOMBinding, which is still 
called by a couple of whitelisted classes. This function is going away 
in the near future, and no calls to it should be added. If you think you 
need to add a call you should talk to me first.


The relevant bug is https://bugzilla.mozilla.org/show_bug.cgi?id=1078744.

Peter


Re: Intent to implement: WOFF2 webfont format

2014-10-08 Thread Anne van Kesteren
On Wed, Oct 8, 2014 at 12:10 PM, Gervase Markham g...@mozilla.org wrote:
> (This situation is basically the Accept: problem.)

There's a bit more elaboration here for those new to it:

  https://wiki.whatwg.org/wiki/Why_not_conneg


-- 
https://annevankesteren.nl/


Re: Intent to implement: WOFF2 webfont format

2014-10-08 Thread Patrick McManus
On Wed, Oct 8, 2014 at 6:10 AM, Gervase Markham g...@mozilla.org wrote:

> On 07/10/14 14:53, Patrick McManus wrote:
>> content format negotiation is what accept is meant to do. Protocol level
>> negotiation also allows designated intermediaries to potentially transcode
>> between formats.
>
> Do you know of any software which transcodes font formats on the fly as
> they move across the network?


I'm not aware of font negotiation - but negotiation is most useful when
introducing new types (such as woff2). The google compression proxy already
does exactly that for images and people are successfully using the AWS
cloudfront proxy in environments where the same thing is done. Accept is
used to opt-in to webp on those services and that allows them to avoid
doing UA sniffing. They don't normally give firefox webp, but if you make
an add-on that changes the accept header to include webp they will serve
firefox that format. That's what we want to encourage instead of UA
sniffing.
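The opt-in pattern Patrick describes can be sketched roughly as follows. This is a minimal, illustrative sketch, not any service's actual logic: the function name and the jpeg fallback are assumptions made up for the example. The idea is just that the server looks only at what the Accept header explicitly advertises, never at the User-Agent string.

```python
def pick_image_format(accept_header):
    """Serve webp only to clients that explicitly list it in Accept."""
    # Split "image/webp,image/*;q=0.8" into bare media types,
    # dropping any ;q= parameters.
    advertised = {part.split(";")[0].strip().lower()
                  for part in accept_header.split(",")}
    if "image/webp" in advertised:
        return "webp"
    return "jpeg"  # safe legacy fallback for everyone else

print(pick_image_format("image/webp,image/*;q=0.8"))  # -> webp
print(pick_image_format("image/png,image/*;q=0.8"))   # -> jpeg
```

A browser (or add-on) that adds `image/webp` to its Accept header gets the new format automatically; everyone else keeps getting jpeg, with no UA sniffing anywhere.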



>> imo you should add woff2 to the accept header.


as with webp, this is particularly useful to opt-in to a new format. I
agree that as a list of legacy formats and q-values it's all rather useless,
but as a signal that you want something new that might not be widely
implemented it's a pretty good thing. In this case it's certainly better than
the text/html based header being used.


> Do you know of any software which pays attention to this header?


above.

http request header byte counts aren't something to be super concerned with
within reason (uris, cookies, and congestion control pretty much determine
your performance fate on the request side). And it sounds like wrt fonts
the accept header could be made more relevant and actually smaller as well.


Re: Intent to implement: WOFF2 webfont format

2014-10-08 Thread Jonathan Kew

On 8/10/14 15:44, Patrick McManus wrote:
> On Wed, Oct 8, 2014 at 6:10 AM, Gervase Markham g...@mozilla.org wrote:
>
>> On 07/10/14 14:53, Patrick McManus wrote:
>>> content format negotiation is what accept is meant to do. Protocol level
>>> negotiation also allows designated intermediaries to potentially transcode
>>> between formats.
>>
>> Do you know of any software which transcodes font formats on the fly as
>> they move across the network?
>
> I'm not aware of font negotiation - but negotiation is most useful when
> introducing new types (such as woff2). The google compression proxy
> already does exactly that for images and people are successfully using
> the AWS cloudfront proxy in environments where the same thing is done.
> Accept is used to opt-in to webp on those services and that allows them
> to avoid doing UA sniffing. They don't normally give firefox webp, but
> if you make an add-on that changes the accept header to include webp
> they will serve firefox that format. That's what we want to encourage
> instead of UA sniffing.



But the model for webfonts is explicitly *not* to have a single URL that 
may be delivered in any of several formats, but rather to offer several 
distinct resources with different URLs, and let the browser decide which 
of them to request.


So the negotiation is handled within the browser, on the basis of the 
information provided in the CSS stylesheet, *prior* to sending any 
request for an actual font resource.


Given that this is the established model, defined in the spec for 
@font-face and implemented all over the place, I don't see much value in 
adding things to the Accept header for the actual font resource request.


Even where a service (like Google fonts, AIUI) is currently sniffing UA 
versions and varying its behavior, it wouldn't help to advertise WOFF2 
support via the Accept header for the font request, because that won't 
result in them serving the appropriate WOFF2-supporting CSS to Firefox. 
We need to get them to serve the right CSS; and once they do that 
(either universally, or based on UA sniffing) the existing @font-face 
mechanism will let us choose the best of the available resource formats 
for our use.
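The client-side negotiation described above can be sketched as follows. This is an illustrative mock, not browser source: the URLs, format labels, and the SUPPORTED set are made up for the example. The point is that the browser walks the ordered @font-face src list and only ever requests the first source whose format it supports.

```python
SUPPORTED = {"woff2", "woff", "truetype"}  # what this hypothetical browser decodes

def pick_font_source(src_list):
    """src_list: ordered (url, format) pairs parsed from an @font-face rule."""
    for url, fmt in src_list:
        if fmt in SUPPORTED:
            return url  # only this URL is ever fetched
    return None  # no usable source: no font request is made at all

srcs = [("font.woff2", "woff2"), ("font.woff", "woff"), ("font.ttf", "truetype")]
print(pick_font_source(srcs))  # -> font.woff2
```

Because the choice is made before any request goes out, the server never sees which alternatives were rejected, which is Jonathan's argument that an Accept header on the eventual font request arrives too late to matter.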




>> imo you should add woff2 to the accept header.
>
> as with webp, this is particularly useful to opt-in to a new format. I
> agree that as a list of legacy formats and q-values it's all rather
> useless, but as a signal that you want something new that might not be
> widely implemented it's a pretty good thing. In this case it's certainly
> better than the text/html based header being used.
>
>> Do you know of any software which pays attention to this header?
>
> above.
>
> http request header byte counts aren't something to be super concerned
> with within reason (uris, cookies, and congestion control pretty much
> determine your performance fate on the request side). And it sounds like
> wrt fonts the accept header could be made more relevant and actually
> smaller as well.


FWIW, when DNT was being created HTTP request header byte count seemed 
to be a pretty strong concern, which (AIUI) was why we ended up with 
"DNT: 1" rather than something clearer like "DoNotTrack: true".


JK



Re: Intent to implement: WOFF2 webfont format

2014-10-08 Thread Patrick McManus
On Wed, Oct 8, 2014 at 11:18 AM, Jonathan Kew jfkth...@gmail.com wrote:


> So the negotiation is handled within the browser, on the basis of the
> information provided in the CSS stylesheet, *prior* to sending any request
> for an actual font resource.


I'm not advocating that we don't do the css bits too. That's all cool.
Jonas's suggestion was also adding an appropriate accept bit.


> Given that this is the established model, defined in the spec for
> @font-face and implemented all over the place, I don't see much value in
> adding things to the Accept header for the actual font resource request.


intermediaries, as I mentioned before, are a big reason. It provides an
opt-in opportunity for transcoding where appropriate (and I'm not claiming
I'm up to speed on the ins and outs of font coding).

y'all can do what you want - but using protocol negotiation in addition to
the css negotiation is imo a good thing for the web.


> FWIW, when DNT was being created HTTP request header byte count seemed to
> be a pretty strong concern, which (AIUI) was why we ended up with "DNT: 1"
> rather than something clearer like "DoNotTrack: true".


I know - but I disagree pretty strongly with the analysis there. The impact
is extremely marginal... and trust me, I'm very interested in HTTP
performance :)


Re: Intent to implement: WOFF2 webfont format

2014-10-08 Thread Anne van Kesteren
On Wed, Oct 8, 2014 at 5:34 PM, Patrick McManus mcma...@ducksong.com wrote:
> intermediaries, as I mentioned before, are a big reason. It provides an
> opt-in opportunity for transcoding where appropriate (and I'm not claiming
> I'm up to speed on the ins and outs of font coding).

If the format is negotiated client-side before a URL is fetched,
that's not going to help, is it?


-- 
https://annevankesteren.nl/


Re: Intent to implement: WOFF2 webfont format

2014-10-08 Thread Patrick McManus
On Wed, Oct 8, 2014 at 11:44 AM, Anne van Kesteren ann...@annevk.nl wrote:

> On Wed, Oct 8, 2014 at 5:34 PM, Patrick McManus mcma...@ducksong.com wrote:
>> intermediaries, as I mentioned before, are a big reason. It provides an
>> opt-in opportunity for transcoding where appropriate (and I'm not claiming
>> I'm up to speed on the ins and outs of font coding).
>
> If the format is negotiated client-side before a URL is fetched,
> that's not going to help, is it?


scenario - origin only enumerates ttf in the css, client requests ttf
(accept: woff2, */*), intermediary transcodes to woff2 assuming such a
transcoding is a meaningful operation.
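That scenario can be sketched roughly like this. Everything here is hypothetical: `transcode_to_woff2` is a stand-in (it just tags the bytes, it is not a real encoder), and the content types and token names are assumptions for illustration. The intermediary only substitutes the new format when the client's Accept header advertises it; otherwise the TTF passes through unchanged.

```python
def transcode_to_woff2(ttf_bytes):
    # Placeholder for a real (and expensive) WOFF2 encoder.
    return b"wOF2" + ttf_bytes

def intermediary_response(accept_header, ttf_bytes):
    """Decide what an opt-in transcoding intermediary would return."""
    advertised = {p.split(";")[0].strip().lower()
                  for p in accept_header.split(",")}
    if "woff2" in advertised or "font/woff2" in advertised:
        return "font/woff2", transcode_to_woff2(ttf_bytes)
    return "application/x-font-ttf", ttf_bytes  # pass through unchanged

ctype, body = intermediary_response("woff2, */*", b"\x00\x01\x00\x00")
print(ctype)  # -> font/woff2
```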


Re: Intent to implement: WOFF2 webfont format

2014-10-08 Thread Jonathan Kew

On 8/10/14 16:48, Patrick McManus wrote:
> On Wed, Oct 8, 2014 at 11:44 AM, Anne van Kesteren ann...@annevk.nl wrote:
>
>> On Wed, Oct 8, 2014 at 5:34 PM, Patrick McManus mcma...@ducksong.com wrote:
>>> intermediaries, as I mentioned before, are a big reason. It provides an
>>> opt-in opportunity for transcoding where appropriate (and I'm not claiming
>>> I'm up to speed on the ins and outs of font coding).
>>
>> If the format is negotiated client-side before a URL is fetched,
>> that's not going to help, is it?
>
> scenario - origin only enumerates ttf in the css, client requests ttf
> (accept: woff2, */*), intermediary transcodes to woff2 assuming such a
> transcoding is a meaningful operation.


Possible in theory, I guess; unlikely in practice. The compression 
algorithm used in WOFF2 is extremely asymmetrical, offering fast 
decoding but at the cost of slow encoding. The intent is that a large 
library like Google Fonts can pre-compress their fonts offline, and then 
benefit from serving smaller files; it's not expected to be suitable for 
on-the-fly compression.


JK



Re: Intent to implement: WOFF2 webfont format

2014-10-08 Thread Patrick McManus
On Wed, Oct 8, 2014 at 12:03 PM, Jonathan Kew jfkth...@gmail.com wrote:

> Possible in theory, I guess; unlikely in practice. The compression
> algorithm used in WOFF2 is extremely asymmetrical, offering fast decoding
> but at the cost of slow encoding. The intent is that a large library like
> Google Fonts can pre-compress their fonts offline, and then benefit from
> serving smaller files; it's not expected to be suitable for on-the-fly
> compression.



accelerators like cloudflare and mod_pagespeed/mod_proxy exist to do this
kind of general thing as reverse proxies for specific origins. They can
cache the transcoding locally. Obviously that's a lot harder for forward
proxies to do. Reverse proxies are often the termination of https:// as
well - so this transformation remains relevant in the https world we want.


Web APIs documentation meeting Friday

2014-10-08 Thread Eric Shepherd
The Web APIs documentation meeting is Friday at 10 AM Pacific Time (see 
http://bit.ly/APIdocsMDN for your time zone). Everyone's welcome to 
attend; if you're interested in ensuring that all Web APIs are properly 
documented, we'd love your input.


We have an agenda, as well as details on how to join, here:

https://etherpad.mozilla.org/WebAPI-docs-2014-10-10

If you have topics you wish to discuss, please feel free to add them to 
the agenda.


We look forward to seeing you there!

If you're unable to attend but have been working on documentation 
related to APIs, please add notes to the agenda about what you've been 
doing so we can share your progress.


--
Eric Shepherd
Developer Documentation Lead
Mozilla
Blog: http://www.bitstampede.com/
Twitter: @sheppy



Re: Intent to implement: WOFF2 webfont format

2014-10-08 Thread Jonas Sicking
On Wed, Oct 8, 2014 at 2:15 PM, Robert Kaiser ka...@kairo.at wrote:
> Jonathan Kew wrote:
>
>> But the model for webfonts is explicitly *not* to have a single URL that
>> may be delivered in any of several formats, but rather to offer several
>> distinct resources with different URLs, and let the browser decide which
>> of them to request.
>>
>> So the negotiation is handled within the browser
>
> Right. And if I remember correctly, we also just invented the picture
> element for HTML5 to do the same for images as it's actually *better* in
> many regards to the dilemma we have with all the Accept: negotiation. Or
> am I wrong there?

Sometimes client side negotiation is the better solution, sometimes
server side is.

It can be a pain in the ass to try to get your hosting provider to
install modules that handle content negotiation based on Accept. Or to
switch to a service provider that lets you run server-side scripts. Or
to learn your server infrastructure well enough to figure out how to
add negotiation. In those cases it's great that developers can handle
it on the client.

But there are also cases when it's a pain to figure out how to modify
the client-side code to pass through the right parameters to do
negotiation on the client. Or to ask all your customers to rewrite
their apps to handle content negotiation. Or to find all the places
where you're referring to a given resource and replace them with logic
to do client-side negotiation. In those cases it's good if the server
has access to all the information needed to serve the most appropriate
resource.

When we design the platform such that we require people to use a
particular solution we better be really sure that that solution will
work in all the (common) situations that people need the problem
solved. When we get it wrong, which so far happens a lot, people end
up with horrible workarounds, buggy apps, slower productivity and more
resource usage. In short, with a worse user experience.

We far too often close our eyes to the realities of web development in
an effort to keep the platform simple. However simple doesn't always
mean fewer features. When developers have to work around the lack of
features that doesn't make their jobs simpler.

That said, adding all the features isn't always the answer either of course.

/ Jonas


|mach debug| and |mach dmd| have been merged into |mach run|

2014-10-08 Thread Nicholas Nethercote
Hi all,

I just landed https://bugzilla.mozilla.org/show_bug.cgi?id=1074656 on
mozilla-inbound, which merged |mach debug| and |mach dmd| into |mach
run|. This avoids a lot of code duplication in mach and also lets you
run the browser under a debugger and DMD at the same time.

If you used to use |mach debug|, you now need to run |mach run
--debug|. And |mach run --dmd| will run DMD.

Also note that the +foo style options in these commands were recently
changed to -foo style options by
https://bugzilla.mozilla.org/show_bug.cgi?id=1076649. (And if you
dislike having both -foo and --foo options, see
https://bugzilla.mozilla.org/show_bug.cgi?id=1080302.)

|mach help run| has the full details. I've included its output below.

Nick


usage: mach [global arguments] run [command arguments]

Run the compiled program, possibly under a debugger or DMD.

Global Arguments:
  -v, --verbose Print verbose output.
  -l FILENAME, --log-file FILENAME
Filename to write log data to.
  --log-intervalPrefix log line with interval from last message rather
than relative time. Note that this is NOT execution
time if there are parallel operations.
  --log-no-timesDo not prefix log lines with times. By default, mach
will prefix each output line with the time since
command start.
  -h, --helpShow this help message.

Command Arguments for the compiled program:
  paramsCommand-line arguments to be passed through to the
program. Not specifying a -profile or -P option will
result in a temporary profile being used.
  -remote, -r   Do not pass the -no-remote argument by default.
  -background, -b   Do not pass the -foreground argument by default on
Mac.
  -noprofile, -nDo not pass the -profile argument by default.

Command Arguments for debugging:
  --debug   Enable the debugger. Not specifying a --debugger
option will result in the default debugger being used.
The following arguments have no effect without this.
  --debugger DEBUGGER   Name of debugger to use.
  --debugparams params  Command-line arguments to pass to the debugger itself;
split as the Bourne shell would.
  --slowscript  Do not set the JS_DISABLE_SLOW_SCRIPT_SIGNALS env
variable; when not set, recoverable but misleading
SIGSEGV instances may occur in Ion/Odin JIT code.

Command Arguments for DMD:
  --dmd Enable DMD. The following arguments have no effect
without this.
  --sample-below SAMPLE_BELOW
Sample blocks smaller than this. Use 1 for no
sampling. The default is 4093.
  --max-frames MAX_FRAMES
The maximum depth of stack traces. The default and
maximum is 24.
  --show-dump-stats Show stats when doing dumps.
  --mode {normal,test,stress}
Mode of operation. The default is normal.


Re: Enabling NSPR logging in release builds

2014-10-08 Thread Eric Rahm

On 10/3/14 1:12 PM, Eric Rahm wrote:

Hi all-

In bug 806819 we're planning on landing a change that allows us to turn
on NSPR logging in release builds [1]. To be clear, by default all
logging output is disabled, this will just allow you to turn on logging
via the same mechanisms [2] that have been available to debug builds and
modules that already force logging to be enabled.

Initial tests show no impact to performance, but of course it's possible
there will be unforeseen regressions. Testing included all Talos tests,
averaging of mochitest runtimes, and local ad-hoc performance measurements.

Areas to look out for:
   * Logging being done in hot areas. PR_LOG is no longer a no-op, so
     there is a slight amount of overhead with each logging call. If
     your output is truly debug-only, consider wrapping it in an
     #ifdef DEBUG block.
   * Creating a log message and then logging it. PR_LOG supports
     printf-style formatting, so you can save some overhead by using
     that rather than writing to your own buffer.

     For example, if you once had:

       #ifdef PR_LOGGING
         char* msg = expensiveFunction();
         PR_LOG(module, PR_LOG_DEBUG, (msg));
       #endif

     You'll want to move to:

       PR_LOG(module, PR_LOG_DEBUG, ("%s", expensiveFunction()));

If you're interested in making logging better, please take a look at our
meta bug [3] tracking various improvements. There's talk of making
improvements to NSPR logging, ditching NSPR logging all together, adding
streaming interfaces, switching log levels at runtime, and more. Feel
free to join in the conversation.

Please note: after this change you should never need to use FORCE_PR_LOG
(and you'll probably get build errors if you do).

A few side benefits:
   * All usage of FORCE_PR_LOG has been removed
   * Many more files are now eligible for unified building

-e

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=806819
[2]
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/Logging

[3] https://bugzilla.mozilla.org/show_bug.cgi?id=881389


This has landed. So far one issue has been reported, for VS2013 builds 
[1]; a fix for the underlying problem is pending. It is not related to 
enabling logging itself, but was exposed as part of the effort to get 
files added back into unified compilation.


Many files were added back into unified compilation that were explicitly 
excluded due to "NSPR logging" and "uses FORCE_PR_LOG" (those are the 
terms I searched for). If you know of others that may not have been as 
easily found, please follow up and add them back into UNIFIED_SOURCES.


Also of course, don't use FORCE_PR_LOG anymore. You'll most likely get 
-Werror failures due to redefined macros. Please check your pending patches!


-e

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1080297