Re: The sec-approval process makes users safer

2019-09-10 Thread Jeff Walden
On 9/10/19 7:55 AM, Dave Townsend wrote:
> How often do we go back and land those tests and comments after the fix has
> been in the release builds for a suitable amount of time?

I always land my tests...at some point.  I don't know if everyone else 
adequately remembers to do so.  We don't formally or systematically track this.

I've argued for years that we should have something like a private repository 
for security fixes to allow us to actively test security patches (including 
their tests) without publicly exposing them.  (The resulting builds would be 
undistributable, but they could be tested.)  But it'd take a ton of infra work 
to have private repository/treeherder/etc. only accessible behind LDAP or 
something, so it's easier said than done.

Personally, I just keep tests in bookmarks that I continually rebase.  
Eventually I'll conclude a bookmark is safely landable (where "landable" 
depends on how likely tests are to bitrot, how long the security issue was in 
place, the earliest release that received a backport, ...) and then land it.  
And because each test is staring at me every time I |hg wip|, I won't lose 
track of them forever.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


The sec-approval process makes users safer

2019-09-09 Thread Jeff Walden
Those of you longer in the tooth may remember Firefox was successfully 
exploited in Pwn2own 2012...and we didn't have to lift a finger to fix it.  We 
had already fixed it -- in the Firefox release shipping days later.

https://bugzilla.mozilla.org/show_bug.cgi?id=735104 (pwn2own bug)
https://bugzilla.mozilla.org/show_bug.cgi?id=720511 (cover bug, discussion only 
of a spec-compliance issue)
https://bugzilla.mozilla.org/show_bug.cgi?id=720079 (sec bug noting the sec 
issue)

We later discussed whether the exploit had been "achieved" by reading our 
public commits.  https://bugzilla.mozilla.org/show_bug.cgi?id=735104#c2  The 
fruit of this discussion was our security approval process, where security 
patches land only after approval, in relative lockstep close to release, with 
incriminating tests/comments removed.  This is also where sec-approval comment 
hoop-jumping began.

sec-approval is a pain.  Is it really worth it?

The recent Apple iOS WebKit/JSC exploits conclusively answer "yes".  The 
Project Zero post 
https://googleprojectzero.blogspot.com/2019/08/jsc-exploits.html discusses the 
possibility that some exploits were acquired from published, pre-release 
security fixes and testcases:

> For many of the exploits it is unclear whether they were originally exploited 
> as 0day or as 1day after a fix had already shipped. It is also unknown how 
> the attackers obtained knowledge of the vulnerabilities in the first place. 
> Generally they could have discovered the vulnerabilities themselves or used 
> public exploits released after a fix had shipped.
>
> Furthermore, at least for WebKit, it is often possible to extract details of 
> a vulnerability from the public source code repository before the fix has 
> been shipped to users.
>
> CVE-2019-8518 https://bugs.chromium.org/p/project-zero/issues/detail?id=1775 
> can be used to highlight this problem (as can many other recent 
> vulnerabilities). The vulnerability was publicly fixed in WebKit HEAD on Feb 
> 9 2019 with commit 
> .
>  This commit contains a testcase that triggers the issue and causes an 
> out-of-bounds access into a JSArray - a scenario that is usually easy to 
> exploit. However, the fix only shipped to users with the release of iOS 12.2 
> on March 25 2019, roughly one and a half months after details about the 
> vulnerability were public.
>
> An attacker in possession of a working exploit for an older WebKit 
> vulnerability would likely only need a few days to replace the underlying 
> vulnerability and thus gain the capability to exploit up-to-date devices 
> without the need to find new vulnerabilities themselves. It is likely that 
> this happened for at least some of the following exploits. 

(paragraph breaks added)  Incredibly, saying, "It is likely that [attackers 
used public commits/testcases to create exploits]" *soft-pedals* it.  The first 
exploit the post discusses includes this text and image:

> [T]he exploit trigger is almost exactly the same as in the bug report and the 
> regression test file in the WebKit repository. This can be seen in the 
> following two images, the left one showing the testcase published in the 
> WebKit code repository as part of the bugfix and the right showing the part 
> of the in-the-wild exploit code that triggered the bug.
> https://1.bp.blogspot.com/-PEZlVLEefs0/XWg4BdDSxkI/NUs/ELjHWgzHOZIRKSTV45E-moRivJKrAWIkACLcBGAs/s1600/JSC%2BDIFF.png
>  (alt text: "This image shows a line-by-line, side-by-side comparison between 
> the vulnerability test case from the webkit tracker on the left, and the 
> vulnerability trigger code used in the exploit on the right. They are very 
> similar, with matching variable names and code structure. The only difference 
> is that one value which had the fixed value 1.45 in the test case has been 
> parameterised out to a variable named ow in the exploit, and the trigger has 
> been wrapped inside a function definition which takes ow as an argument.")

The *only* difference between testcase and exploit is s/1.45/ow/.  There's no 
plausible way that similarity (including variable names) is coincidental.  
Attackers (the Chinese government, it seems) copied the testcase into a 
function, then changed the one bit of the test controlling how a crash happens 
to make it usefully push-button.

So if you ever questioned whether all that sec-approval nonsense is really 
necessary...well, it is.  Approvals, syncing up with release timelines, getting 
rejected because of too much risk -- it's all pretty obnoxious.  But it's better 
that we're careful, for the sake of the users who aren't being exploited, for 
one; and it's better than the week WebKit/JSC folks are probably having now, 
for two.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: accumulating most JS scripts' data as UTF-8, then directly parsing as UTF-8 (without inflating to UTF-16)

2019-08-06 Thread Jeff Walden
On 8/6/19 8:16 AM, Anne van Kesteren wrote:
> On Sat, Jul 20, 2019 at 2:05 AM Jeff Walden  wrote:
>> (*Only* valid UTF-8: any invalidity, including for WTF-8, is an immediate 
>> error, no replacement-character semantics applied.)
> 
> Wouldn't adding this allow you to largely bypass the text decoder if
> it identifies the content to be UTF-8? Meaning we'd have to scan and
> copy bytes even less?

In principle "yes"; in current reality "no".

First and foremost, as our script-loading/networking code is set up now there's 
an inevitable copy.  When we load an HTTP channel we process its data using a 
|NS_NewIncrementalStreamLoader| that ultimately invokes this signature:

NS_IMETHODIMP
ScriptLoadHandler::OnIncrementalData(nsIIncrementalStreamLoader* aLoader,
 nsISupports* aContext,
 uint32_t aDataLength, const uint8_t* aData,
 uint32_t* aConsumedLength) {

This function is *provided* a prefilled (probably by NSPR, ultimately by 
network driver/card?) buffer, then we use an |intl::Decoder| to 
decode-by-copying that buffer's bytes into the script loader's JS 
allocator-allocated buffer, reallocating and expanding it as necessary.  (The 
buffer must be JS-allocated because it's transferred to the JS engine for 
parsing/compilation/execution.  If you don't transfer a JS-owned buffer, 
SpiderMonkey makes a fresh copy anyway.)  To avoid a copy, you'd need to 
intersperse decoding (and buffer-expanding) code into the networking layer -- 
theoretically doable, practically tricky (especially if we assume the buffer is 
mindlessly filled by networking driver code).

Second -- alternatively -- if the JS engine literally processed raw code units 
of utterly unknown validity, so that networking code could directly fill in the 
buffer the JS engine would process, the JS engine would require additional 
changes to handle invalid code units.  UTF-16 already demands this because JS 
is natively WTF-16 (and all 16-bit sequences are WTF-16).  But all UTF-8 
processing code assumes validity now.  Extra effort would be required to handle 
invalidity.  We do a lot of keeping raw pointers/indexes into source units and 
then using them later -- think of things like atomizing identifiers -- and all 
those process-this-range-of-data operations would have to be modified.  
*Possible*?  Yes.  Tricky?  For sure.  (Particularly as many of those 
coordinates end up baked into the final compiled script representation -- so 
invalidity wouldn't be a transient concern but one that would persist 
indefinitely.)

Third and maybe most important if the practical considerations didn't exist: 
like every sane person, I'm leery of adding yet another place where arbitrary 
ostensibly-UTF-8 bytes are decoded with any sort of 
invalid-is-not-immediate-error semantics.  In a very distant past I fixed an 
Acid3 failure where we mis-implemented UTF-16 replacement-character semantics.  
https://bugzilla.mozilla.org/show_bug.cgi?id=421576  New implementations of 
replacement-character semantics are disproportionately risky.  In fact when I 
fixed that I failed to fix a separate UTF-16 decoding implementation that had 
to be consistent with it, introducing a security bug.  
https://bugzilla.mozilla.org/show_bug.cgi?id=489041#c9  *Some* of that problem 
was because replacement-character stuff was at the time under-defined, and now 
it's well-defined...but still.  Risk.  Once burned, twice shy.

In principle this is all solvable.  But it's all rather complicated and 
fraught.  We can probably get better and safer wins making improvements 
elsewhere.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: accumulating most JS scripts' data as UTF-8, then directly parsing as UTF-8 (without inflating to UTF-16)

2019-07-22 Thread Jeff Walden
On 7/20/19 4:41 PM, Nicholas Nethercote wrote:
> This is excellent news. Do you have any measurements showing perf effects?

Perf changes should have first become visible with the landing of bug 1554362.  
There was one quasi-automatically-reported perf improvement, on the Sheets 
raptor test:

https://bugzilla.mozilla.org/show_bug.cgi?id=1554362#c19

If our tests do enough script-heavy things for the long-run memory improvements 
to be clearly visible, or if they run enough scripts that improvements to 
parsing are visible in big-picture measurements, other improvements should be 
visible.  I spent a bunch of time a month ago trying to finagle perfherder into 
revealing more changes, but I didn't get anywhere and ended up dropping it.

The overall change is assuredly not Sheets-specific in any fashion, tho, so 
certainly more than just Sheets is affected by this.



Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to ship: accumulating most JS scripts' data as UTF-8, then directly parsing as UTF-8 (without inflating to UTF-16)

2019-07-19 Thread Jeff Walden
# Intent to ship: UTF-8 parsing of external scripts

Intent to ship: Hashbang syntax in JS (a "#!" line at the very start of a script is a single-line comment)

2019-02-28 Thread Jeff Walden
# Intent to ship: Hashbang syntax

## Introduction

As JS metastasizes from the web to Node and shell scripts and the like, it 
becomes useful to run scripts directly in the shell and let the shell decide 
how.  The traditional way to do this is with an initial hashbang (or shebang) 
line indicating a path to an interpreter:

```
#!/usr/bin/env node
print("Hello world");
```

Thus many JS shells manually grunge past an initial "#!" line as a convenience.  
SpiderMonkey's shell has done this since, well, before some people reading this 
were born.  :-)  https://dxr.mozilla.org/classic/source/js/src/js.c#143  But 
this means every implementation has micro-forked the language in a perhaps 
subtly different manner.  (It's easy to mess up line numbers doing this.)

So it makes sense to standardize this as a micro-tax implementations will all 
pay consistently.  Hashbang isn't very valuable on the web -- polyglot web 
scripts/shell scripts are likely rare now -- but it's a minor nicety.

## How it works

If eval code, an entire script, or a module (but *not* the code string passed 
to the Function constructor, nor the contents of a block in normal JS source 
text) begins -- at its *very start* -- with the two code points U+0023 NUMBER 
SIGN followed by U+0021 EXCLAMATION MARK, all code points up to a 
LineTerminator are treated as a single-line comment -- exactly as if "//" had 
been encountered instead of "#!".

Hashbang only begins a comment at the very start of a script; it's a syntax 
error elsewhere, including after any amount of whitespace.

A minor wrinkle: if you have an inline script element with the 

Re: Type-based alias analysis and Gecko C++

2019-02-21 Thread Jeff Walden
On 2/17/19 11:40 PM, Henri Sivonen wrote:
> Rust, which combines the
> perf benefits of -fstrict-aliasing with the understandability of
> -fno-strict-aliasing?

This is not really true of Rust.  Rust's memory model is not really defined yet 
https://doc.rust-lang.org/reference/memory-model.html but from what I've been 
able to read as to how you're "supposed" to and "should" use the language in 
unsafe code and through FFI, it *does* require the same sorts of things as C++ 
in "you can't dereference a pointer/reference unless it contains a well-formed 
value of the type of the pointer/reference".  Just, Rust has somewhat more 
tools that hide away this unsafety so you don't often manually bash on memory 
yourself in that manner.
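
As a concrete illustration of the kind of rule in question -- a minimal sketch 
of my own, not anything from the thread -- type-punning through an incompatible 
pointer type is exactly what -fstrict-aliasing lets compilers assume away, 
whereas copying the bytes is fine in both languages' models:

```cpp
#include <cstdint>
#include <cstring>

// Undefined behavior under strict aliasing: a float object is read through a
// pointer whose pointee type it does not contain a well-formed value of.
uint32_t BitsOfFloatBad(float f) {
  return *reinterpret_cast<uint32_t*>(&f);
}

// Well-defined alternative: copy the object representation instead.
uint32_t BitsOfFloatGood(float f) {
  uint32_t bits;
  std::memcpy(&bits, &f, sizeof(bits));
  return bits;
}
```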

As a practical matter, I don't honestly see how Rust can avoid having a memory 
model very similar to C++'s, including with respect to aliasing, even if 
they're not there *formally* yet.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: BigInt

2019-02-21 Thread Jeff Walden
On 2/21/19 11:56 AM, L. David Baron wrote:
> (What's the opinion of TC39 on shipping features that are at stage
> 3?  That doesn't seem obvious from
> https://tc39.github.io/process-document/ .)

Stage 3 is traditionally where it becomes okay to ship stuff, and it's the 
usual dividing line we have used for shippability.  Or at least that's what I 
remember to be the case...

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


JS components and modules, subscripts loaded via the subscript loader, mochitest-browser tests, and SJS scripts are now exclusively UTF-8

2018-12-20 Thread Jeff Walden
Hi all,

In the distant past, SpiderMonkey APIs consumed source text as two-byte UCS-2 
or one-byte |const char*|.  Was one-byte text ASCII?  UTF-8?  EBCDIC?  
Something else?  Who could say; no one thought about text encodings then.  *By 
happenstance* one-byte JS text was Latin-1: a byte is a code point.  And so 
lots of people used Latin-1 for JS purely because SpiderMonkey's carelessness 
made it easy.

SpiderMonkey's UTF-8 source support is far better and clearer now.  Most 
single-byte source users use UTF-8.  So I'm changing the remaining Gecko 
Latin-1 users to UTF-8.  The following scripts/script loaders now use 
exclusively UTF-8:

* JS components/modules (bug 1492932)
* subscripts via mozIJSSubScriptLoader.loadSubScript{,WithOptions} (bug 1492937)
* mochitest-browser scripts, because they're subscripts (bug 1492937)
* SJS scripts executed by httpd.js, because they're subscripts (bug 1513152, 
bug 1492937) [0]

Also, proxy autoconfig scripts may now be valid UTF-8 (bug 1492938).  (For 
compatibility reasons, invalid UTF-8 is treated as Latin-1, by inflating to 
UTF-16 and compiling that.)

Every affected script in the tree used UTF-8, so this just makes reality match 
expectation.  But it sometimes changes behavior and may affect patch backports:

* You may use non-ASCII code points directly in scripts (even outside comments) 
without needing escape sequences.
* If you *intend* to construct a string of the constituent UTF-8 code units of 
a non-ASCII code point, you must use hexadecimal escapes: "\xF0\x9F\x92\xA9".

Another step toward fewer text encodings.  \o/

Jeff

0. Note that until bug 1514075 lands, SJS scripts used in Android test runs 
will be interpreted as Latin-1 there (and only there).  Hopefully we can fix 
that quickly!
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


JS compilation/evaluation APIs are now in #include "js/CompilationAndEvaluation.h"

2018-08-28 Thread Jeff Walden
jsapi.h and jsfriendapi.h are mega-huge, monolithic headers.  A one-stop shop 
is maybe (maybe) convenient for users.  But it's terrible for Gecko and 
SpiderMonkey: touching jsapi.h rebuilds the world.  (The one-stop shop approach 
is also pretty terrible for users because js*api.h are unreadable, disorganized 
messes.)

SpiderMonkey hackers are gradually splitting jsapi.h and jsfriendapi.h into 
many separate headers containing sensible, readable subsets of documented API.  
We want to let you skim js/public/* to find the likely header, then read it to 
see what's in it.  (This doesn't substitute for real docs, but it's a big help.)

I just took a notable step in the direction of splitting up jsapi.h.  
Specifically:

Every function in jsapi.h that compiles/evaluates JavaScript on the main thread 
now requires #include "js/CompilationAndEvaluation.h".  You must #include *that 
specific header* to compile/execute scripts.

(Off-thread compilation isn't fully moved to its own header, so jsapi.h still 
provides all that stuff.)

And if you need to create a JS::SourceBufferHolder to call a compilation API, 
you must #include "js/SourceBufferHolder.h".  Headers only need a 
forward-declared JS::SourceBufferHolder; if you want to *create* a 
JS::SourceBufferHolder you must get the definition yourself.
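
For illustration, a rough sketch of what a caller looks like after this change 
(my example; exact signatures and option setters may differ slightly from the 
headers):

```cpp
#include "js/CompilationAndEvaluation.h"  // main-thread compile/evaluate APIs
#include "js/SourceBufferHolder.h"        // needed to *construct* a SourceBufferHolder

bool EvaluateSnippet(JSContext* cx, JS::MutableHandleValue rval) {
  static const char16_t code[] = u"6 * 7";
  size_t length = sizeof(code) / sizeof(char16_t) - 1;  // code units, minus the terminator

  JS::SourceBufferHolder srcBuf(code, length, JS::SourceBufferHolder::NoOwnership);

  JS::CompileOptions options(cx);
  options.setFileAndLine("snippet.js", 1);

  return JS::Evaluate(cx, options, srcBuf, rval);
}
```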

I fixed existing code: you just need to know that code bootlegging 
script-evaluation APIs through other headers will fail to compile now.

More changes to split up jsapi.h and jsfriendapi.h are coming, in no particular 
time frame.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


PSA: Stop using PodZero and PodArrayZero

2018-05-22 Thread Jeff Walden
Recent gcc adds a warning, -Wclass-memaccess, that warns if you use memset or 
similar on a non-trivial type -- in other words, a type that requires something 
be done to initialize instances of it (set fields, call a user-defined 
constructor, init a vptr, etc.).  The reason is that initialize-by-memset is 
fragile: if your class ever gains initialization, memset will either overwrite 
that initialization or falsely assure you that no further initialization is 
necessary.  Sad!

Mozilla code probably doesn't do this much *directly*, but unfortunately 
PodZero and PodArrayZero do.

Fortunately, C++11 lets you use class member-initializers (especially 
brace-initializers) to cleanly zero things.[0]  If that doesn't cut it, 
<algorithm>'s std::fill and std::fill_n are type-safe alternatives that 
optimizing compilers will eat for breakfast.
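
For instance, a small sketch of both replacements (names invented):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Member initializers zero each field, and keep doing so if fields are added.
struct CacheStats {
  uint32_t hits = 0;
  uint32_t misses = 0;
};

// For arrays of trivial types, std::fill_n is the type-safe memset stand-in.
void ClearSlots(uintptr_t* slots, size_t count) {
  std::fill_n(slots, count, uintptr_t(0));
}
```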

Please don't add new uses of Pod{,Array}Zero, and start removing existing uses 
where possible.  It's generally not too tricky; I fixed all issues of this sort 
building the JS engine last week, and the only serious quirks I hit were for 
fairly unusual SpiderMonkey needs.  I'll probably rename these to 
DeprecatedPod{,Array}Zero to particularly discourage their use, until they can 
be removed.

See  for more information.

Thanks!

Jeff

0. The only exception is bit-fields.  The best solution for bit-fields is to 
dump them in a class, brace-initialize the whole thing to zeroes, then (if 
needed) overwrite any customized fields in the enclosing class's constructor.  
See  for an example.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mozilla-central now compiles with C++14

2017-11-16 Thread Jeff Walden
On 11/16/2017 08:35 AM, Ben Kelly wrote:
> I would like to use initialized lambda capture as I think it will allow
> move-only objects to be used in lambdas like this:
> 
> UniquePtr uniqueThing = MakeUnique();
> nsCOMPtr r = NS_NewRunnableFunction([uniqueThing =
> Move(uniqueThing)] () {
>   uniqueThing->Stuff();
> });
> thread->Dispatch(r.forget());
> 
> Thanks.

Do it!  https://bugzilla.mozilla.org/show_bug.cgi?id=1322962 was WONTFIX'd for 
adding a dodgy workaround for this purpose, so it is expected that people will 
do/use this in this manner.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Introducing mozilla::Result for better error handling

2016-12-23 Thread Jeff Walden
On 12/22/2016 02:22 AM, Eric Rahm wrote:
> The key point for me is that we're hiding the return. I'm fine with the
> more verbose 
> explicitly-return-and-make-it-easier-for-the-reviewer-to-call-out-issues
> form. I understand this is going to have differing opinions -- I think
> there's merit to more concise code -- but for the things I look at I much
> prefer an explicit return.

It's a fair cop.  I'm still about ---> <--- this close to thinking the macro's 
hidden early-return is just bad (and very nearly that Result too is just bad, 
because the one without the other is pretty small-potatoes), and the whole idea 
should go.  

To me, the key is we're not designing anew.  We're implementing an established 
idiom from another language.  It's a fresh, new language not everyone is 
familiar with.  But it's also a language that's *our* language as near as 
anything can be, with lots of our own input on it, with lots of us working with 
it, with our having shipped Firefox depending on it, and with (at the time) its 
foreseeably becoming a Firefox build requirement (and now *is* one).

Against that backdrop, MOZ_TRY is quite similar to try!, and developers who 
have seen even small amounts of Rust will have seen try!.  The naming 
similarity will suggest its sense.  And the documentation in Result.h should 
fully clarify that meaning.  (Incidentally: questions/comments about 
functionality should probably funnel into MFBT bugs to improve the 
documentation.  Asking questions here, or on IRC, or wherever, does not scale.  
Clear documentation in headers does.)
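
To make the "hidden early-return" concrete, here's a deliberately hand-rolled 
analogue -- *not* MFBT's actual MOZ_TRY or Result definitions, just the shape 
of the idiom:

```cpp
// A toy result carrier and a TRY-style macro that bails out of the enclosing
// function on error, much like Rust's try!.
struct DemoResult {
  bool ok;
  int errorCode;  // meaningful only when !ok
};

#define DEMO_TRY(expr)                 \
  do {                                 \
    DemoResult demoResult_ = (expr);   \
    if (!demoResult_.ok) {             \
      return demoResult_;              \
    }                                  \
  } while (0)

DemoResult WriteLine(const char* aLine);

DemoResult WriteAll() {
  DEMO_TRY(WriteLine("line 1"));  // each of these can return early...
  DEMO_TRY(WriteLine("line 2"));  // ...without a visible |return| at the call site
  DEMO_TRY(WriteLine("line 3"));
  return DemoResult{true, 0};
}
```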

That said, these were my conclusions over the several months mozilla::Result 
was percolating.  Only just this week I learned Rust has a new postfix ? 
operator, that does much the same thing as try!, including "hiding" an early 
return:

  let mut file = std::fs::File::create("foobar.txt")?;
  file.write_all(format!("line 1").as_bytes())?;
  file.write_all(format!("line 2").as_bytes())?;
  file.write_all(format!("line 3").as_bytes())?;

Mozilla's Rust compiler requirement is sufficient to ensure support for this 
operator.  So I would expect most people writing Rust, and most people writing 
Rust in Gecko, will decreasingly see try! in real-world code.

Given that, will try! remain enough of a Rust idiom for it and MOZ_TRY to 
symbiotically reinforce each other?  I don't know.  If there's no real 
precedent, is MOZ_TRY justifiable?  I don't know.  Maybe people teaching Rust 
will travel through try! before reaching ?, and so try! remains a (diminished) 
precedent.  Or maybe they won't.  Suffice to say that a newly-discovered 
operator a few days later isn't enough to convince me to rethink this so 
drastically.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: ICU proposing to drop support for WinXP (and OS X 10.6)

2016-05-26 Thread Jeff Walden
On 05/14/2016 06:58 AM, Philip Chee wrote:
> Given the "two different implementations rule" is there any suitable
> alternative to ICU? I mean besides rolling our own.

No, or at least not something cross-platform.  It's probably possible to do 
something using Windows APIs, that would only be shippable there.  (And maybe 
only on particular versions of Windows, with a decent chance XP's not in that 
set.)

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: ICU proposing to drop support for WinXP (and OS X 10.6)

2016-05-02 Thread Jeff Walden
On 04/30/2016 01:26 PM, L. David Baron wrote:
> I still find it sad that ECMAScript Intl came (as I understand it)
> very close to just standardizing on a piece of software (ICU)

I'm fuzzy on the details as well, but I don't believe it was ever going to be 
the case that the spec would be "do what ICU does". Language *had* to be 
hand-wavy, because what is the custom in one language now will not always be 
the custom. So some flexibility must be granted to accommodate this, as well as 
to not (in effect) specify entire languages as they exist right now.

Now, in *practice* it might work out that behaviors would be what ICU does. But 
even there I'm not so sure about it. What ICU does, as I understand it, is 
approximately only what CLDR (all the language/locale-specific information 
about formatting, pluralization, word-breaking, etc.), plus standardized things 
like IETF BCP 47, tells it to do. And CLDR's purview is not really an area where 
technical debate/disagreement/dissent makes a whole lot of sense, because it's 
just setting down what the rest of the world does.

In any event, the "doomsday" scenario did not occur, so we're safe.

> and also find it disturbing that we're going to extend that
> standardization on a particular piece of software (possibly even
> more rigidly) into other areas.

Using a library to do certain things we do other ways right now, in sometimes 
inferior fashion, doesn't seem inherently objectionable to me. So long as the 
library's internal decisions don't bleed too far into the visible API, which I 
don't believe they did here.

> I think many of the arguments we
> made against standardizing on SQLite seem to apply to ICU as well,
> such as security risk and needing to reverse-engineer when writing
> future implementations of parts of the Web platform.

I believe this should be mitigated by its being CLDR that will be the true 
dependency, and CLDR not containing anything but the most highly constrained of 
"algorithms" (for things like pluralization). But I'm happy to be 
corrected/enlightened by someone who understands ICU/CLDR better.

> I think enough of our users are on Windows XP that decisions about
> dropping Windows XP are not a purely engineering decision.

Unfortunately I agree.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: ICU proposing to drop support for WinXP (and OS X 10.6)

2016-05-02 Thread Jeff Walden
On 04/29/2016 08:30 AM, sn...@snorp.net wrote:
> The engineers in Platform consistently want to dismiss mobile-specific 
> issues, and this is a great example. If you really want to get ICU into 
> Fennec, find a way to do it without bloating the APK size instead of bullying 
> the Fennec folks.

Your point is taken, modulo quibbling (that some might characterize more 
strongly).  Not all size questions should be/are viewed alike by Platform or 
even by individual engineers.  For example, I heard about a recent 
few-hundred-K hit purely for quality of service, which seemed a bit much to me 
and you both.  (Deliberately not saying whodunit.)  But okay.

DOM features rarely die, so there's a limit to how much code we can remove.  
And web developers (and by extension end users visiting their sites) expect 
browsers to do increasingly many things, so we must continue adding code.  Intl 
isn't special; it's just one new feature of many (albeit one with fundamentally 
irreducible complexity, human communication being beyond the control even of 
web standards bodies, let alone us).  This story will never change with Gecko.

But we can't fully serve developers with the stagnating system-based engine.  
It'll be difficult to impossible to offer new layout/rendering features, new 
DOM features, hypothetical browser-integration improvements into existing 
features, new JS features that aren't standard library additions, or new JS 
standard library additions *that are optimally fast in JITs*.  It'll be 
functionally impossible to affect the development of standards having 
particular (or perhaps sole) importance on mobile, using an uncontrollable 
rendering engine.  Yet another browser shell does not promote "choice and 
innovation on the Internet", especially so on mobile where increasingly more 
browsing occurs now.

Gecko, like any big codebase, has flab.  I can't estimate how much.  But 
there's practically zero possibility it has flab matching the numbers suggested 
in , implicitly 
desiring a 50%+ size reduction, no matter how far Platform bends over backward, 
or how cooperative Platform engineers are.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: ICU proposing to drop support for WinXP (and OS X 10.6)

2016-04-28 Thread Jeff Walden
On 04/28/2016 10:00 AM, Jonathan Kew wrote:
> Thoughts?

Another option is to ship a WinXP-specific Firefox build that doesn't provide 
ICU and ECMAScript's Intl functionality.

This would break anyone's expectation that any version of Firefox past the 
mid-30s somewhere has Intl available in it.  I don't know if anyone makes such 
expectations rather than feature-detecting.

Also this would make the story that http://caniuse.com/ and similar sites try 
to tell have an asterisk by it, or something rather ugly.  Given it's WinXP 
only (and Firefox for Android's recalcitrance ;-) may potentially be broken out 
into its own column by such sites), it seems to me that particular tradeoff is 
a bad one versus claiming full support everywhere.

But it's another option, and arguably the least work that lets us improve our 
internationalization support at the same rate ICU improves.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of 'auto'

2015-08-04 Thread Jeff Walden
On 08/02/2015 07:17 AM, smaug wrote:
> MakeAndAddRef would have the same problem as MakeUnique. Doesn't really tell 
> what type is returned.

For the MakeUnique uses I've added (doubtless many more have popped up since), 
I've pretty much always tried to use |auto|.  MakeUnique, and std::make_unique 
if STLport ever dies the final death, should IMO be considered idiomatic.  If 
you read them, you *know* that what's returned is a uniquely owned pointer.  
And by the very nature of these types, only one owner ever exists, so 
ownership/lifetime issues shouldn't exist.

For the exact same reasons the results of casts should be auto-able, I think 
the result of MakeUnique should also be auto-able.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to alter the coding style to not require the usage of 'virtual' where 'override' is used

2015-05-05 Thread Jeff Walden
Seeing this a touch late, commenting on things not noted yet.

On 04/27/2015 12:48 PM, Ehsan Akhgari wrote:
> I think we should change it to require the usage of exactly one of these
> keywords per *overridden* function: virtual, override, and final.  Here
> are the advantages:
>
> * It is more succinct, as |virtual void foo() override;| doesn't convey
> more information than |void foo() override;|.
> * It makes it easier to determine what kind of function you are looking at
> by just looking at its declaration.  |virtual void foo();| means a virtual
> function that is not overridden, |void foo() override;| means an overridden
> virtual function, and |void foo() final;| means an overridden virtual
> function that cannot be further overridden.

All else equal, shorter is better.  But this concision hurts readability, even 
past the non-obvious final/override = virtual implication others have noted.  
(And yes, C++ can/should permit final/override on non-virtuals.  JSString and 
subclasses would be immediate users.)

Requiring removal of virtual from the signature for final/override prevents 
simply examining a declaration's start to determine whether the function is 
virtual.  You must read the entire declaration to know: a problem because 
final/override can blend in.  For longer (especially multiline) declarations 
this matters.  Consider these SpiderMonkey declarations:

    /* Standard internal methods. */
    virtual bool getOwnPropertyDescriptor(JSContext* cx, HandleObject proxy, HandleId id,
                                          MutableHandle<JSPropertyDescriptor> desc) const override;
    virtual bool defineProperty(JSContext* cx, HandleObject proxy, HandleId id,
                                Handle<JSPropertyDescriptor> desc,
                                ObjectOpResult& result) const override;
    virtual bool ownPropertyKeys(JSContext* cx, HandleObject proxy,
                                 AutoIdVector& props) const override;
    virtual bool delete_(JSContext* cx, HandleObject proxy, HandleId id,
                         ObjectOpResult& result) const override;

|virtual| is extraordinarily clear in starting each declaration.  |override| 
and |final| alone would be obscured at the end of a long string of text, 
especially when skimming.  (I disagree with assuming syntax coloring, which 
penalizes non-IDE users.)

I think ideal style requires as many of virtual/override/final as apply.  
Virtual for easy reading.  Override to be clear when it occurs.  And final when 
that's what you want.  (Re dholbert's subtle point: |virtual void foo() 
final| loses strictness, but -Woverloaded-virtual gives it back.)
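
In other words, the style I'd consider ideal reads like this (a made-up 
hierarchy, purely illustrative):

```cpp
class Base {
 public:
  virtual void Update();            // virtual, overrides nothing
};

class Derived : public Base {
 public:
  virtual void Update() override;   // both keywords: skims as virtual, checked as an override
};

class MostDerived : public Derived {
 public:
  virtual void Update() final;      // overrides, and can't be overridden further
};
```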

> * It will allow us to remove NS_IMETHODIMP, and use NS_IMETHOD instead.

A better alternative, that exposes standard C++ and macro-hides non-standard 
gunk, would be to use NS_IMETHOD for everything and directly use |virtual| in 
declarations.  This point is no justification for changing style rules.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Usefulness of JS strict warnings? (Branched from: Is anyone still using JS strict warnings?)

2014-12-26 Thread Jeff Walden
On 12/26/2014 07:24 AM, Wesley Hardman wrote:
> s = "Some text";
> console.log(s);
>
> With javascript.options.strict set to true, it outputs "ReferenceError: 
> assignment to undeclared variable s".  Are there any advantages to actually 
> fixing it?  The code obviously works just fine either way.

It's trivial to put a var in front of this.  Declaring global variables may 
make code run faster.  In this super-limited example, maybe not.  But teaching 
when it's faster and when it's not is far less easy than a simple blanket rule 
to create global variables using declarations.  In general that's how most of 
the extra-warnings warnings work -- don't do it this way, because this way can 
be bad, although we're not going to conclusively say it's not bad in this exact 
instance because that requires expert human confirmation.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is anyone still using JS strict warnings?

2014-12-26 Thread Jeff Walden
On 12/22/2014 08:08 AM, Paolo Amadini wrote:
> Maybe we could make available
> an opt-in code linting solution - and these typos would become errors,
> not warnings,

One substantial problem from a JS engine point of view is that we're 
implementing a specification.  That specification says these behaviors are not 
errors (let alone early errors that prevent script execution overall).  
Introducing our own proprietary version of JS with these as errors creates a 
purely self-inflicted language, that we are not going to sufficiently document 
to actively support it -- if we even wanted to support it, which I don't think 
we do because this isn't The Open Web.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Getting rid of already_AddRefed?

2014-12-23 Thread Jeff Walden
On 12/23/2014 10:48 AM, L. David Baron wrote:
> Our convention has always been to pass raw pointers, generally with
> the assumption that the caller is expected to ensure the pointer
> lives across the function call.

Like Eric, I would like us to move away from this convention over time, or at 
least stay away from it in the places that are fresher and better-adjusted.

> Passing references to smart pointers doesn't work if you need
> implicit conversions from derived class to base class (e.g., if the
> function takes Base*, and the caller has nsRefPtr<Derived>), which
> is pretty important in many areas of our codebase.

Passing references to smart pointers is also mildly more expensive because you 
have an extra dereference operation, in the general case.  But that cost is 
almost certainly far outweighed by not addrefing/releasing through even a 
well-predicted vtable, as you go on to say.

Really, tho, I don't think we have to forego a smart pointer argument type.  We 
just need a separate smart pointer type ArgumentPtr<T> (name not seriously 
proposed, just to have a handle on the concept), implicitly constructible from 
the various smart pointer types of U, where U derives from T.
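
Roughly something like this -- a sketch of the concept only, with the name and 
the exact set of converting constructors as placeholders:

```cpp
#include "nsAutoPtr.h"  // nsRefPtr (where it lived at the time)
#include "nsCOMPtr.h"

template <typename T>
class ArgumentPtr {
 public:
  // Implicitly constructible from our smart pointers to T, or to classes
  // derived from T; the argument type itself never addrefs or releases.
  template <typename U>
  ArgumentPtr(const nsCOMPtr<U>& aPtr) : mRawPtr(aPtr.get()) {}
  template <typename U>
  ArgumentPtr(const nsRefPtr<U>& aPtr) : mRawPtr(aPtr.get()) {}

  T* get() const { return mRawPtr; }
  T* operator->() const { return mRawPtr; }

 private:
  T* mRawPtr;
};
```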

And concerning the binary compatibility argument -- which I don't think really 
matters any more, and if it does a11y could have its own MSCOM smart pointer 
class for its own special-snowflake uses -- it seems fairly likely to me that 
there's a smart pointer class that could preserve that binary compatibility.  
Although I haven't investigated deeply to conclude so for sure.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Support for Visual C++ 2010 has been dropped

2014-12-17 Thread Jeff Walden
On 12/17/2014 08:31 AM, Nathan Froyd wrote:
> It's worth noting that enums with explicit types don't work in all cases
> with our current B2G compiler, see bug 1058561.

Also gcc will wrongly warn (and you can't turn off the warning, even in -Werror 
builds) if you use |enum Foo : T| as the type of a bit-field.  See 
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=51242.  The 
JS_ENUM_HEADER/JS_ENUM_FOOTER uses in js/public/Value.h derive from this bug.
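
A tiny illustration of code that trips the bogus warning (type names made up; 
the exact diagnostic text varies by gcc version):

```cpp
#include <stdint.h>

enum MyFlagState : uint8_t { Off, On, Unknown };

struct PackedState {
  // gcc (per bug 51242) warns that this bit-field is too small to hold all
  // values of MyFlagState, even though two bits easily cover Off/On/Unknown.
  MyFlagState state : 2;
};
```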

Other than these two issues (one clear and easily observed, one that seems 
B2G-only), |enum Foo : T| is fair game to use.  We have at least one fully 
unguarded use of this in js/src/vm/Shape.h for the js::Shape::SlotInfo enum, 
and the sky hasn't fallen yet.  :-)

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using c++11 right angle bracket in template code?

2014-10-01 Thread Jeff Walden
On 10/01/2014 12:57 AM, Kan-Ru Chen (陳侃如) wrote:
> It seems all the compilers we use support the C++11 >> in templates,
> could we start using it in new code?

Go ahead.  It's already in pretty wide use, actually, but old habits (and 
caution) die hard with a lot of people.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using c++11 right angle bracket in template code?

2014-10-01 Thread Jeff Walden
Bah, stupid broken newsgroup threading meaning I thought nobody'd replied to 
the original message yet.  :-(

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: ES6 lexical temporal dead zone has landed on central

2014-09-19 Thread Jeff Walden
On 09/18/2014 04:18 PM, Kent James wrote:
> Substituting var for let does not result in code that behaves identically to 
> previous code, so I can't see why that would be proposed.

The proposal would be to perform the substitution only for let at body level 
of a function.  (And for global let -- which requires its own semantic changes, 
see bug 589199, that are in progress but not complete yet.)  So this:

let s = "valid";
{ let s = "invalid"; }
dump(s);

would be converted to this:

var s = "valid";
{ let s = "invalid"; }
dump(s);

with no semantic change.

This substitution is absolutely not a good idea in the long run.  In the short 
run, for a release or two...maybe.  But that's a very hesitant maybe.  I'm 
leery of introducing these deviations into the language, such that people write 
code expecting the standard semantics and are surprised to find the code 
doesn't work as expected in other contexts.  How should addons that inject code 
into pages behave, for that code?  Is this something that an addon author could 
easily predict?  I could well imagine it working either way.  I'd really rather 
not go down this path if we can help it.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Non-backward compatible changes to JS 'let' semantics

2014-08-15 Thread Jeff Walden
On 08/15/2014 12:03 AM, Gijs Kruitbosch wrote:
> Last I checked, AMO's review flagging tools were all regex-based. I doubt 
> even issue 1 is easily regex-able. :-(

Pretty sure they do syntax checks, at least, I think using Reflect.parse, or at 
least with an embedded SpiderMonkey, unsure which.  Either one would complain 
mightily about this, if a properly modified SpiderMonkey were used for the test.

Part 2 is also doable with Reflect, but it'd be much harder, of course.  I 
suspect some hacky approximations might at least lint a bunch of these.  Also 
worth noting, at least for the use-before-declaration/assignment case, where 
that's done now it evaluates to a mostly-unuseful |undefined|.  Code that 
relies on that is probably somewhat of an unusual case.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Non-backward compatible changes to JS 'let' semantics

2014-08-14 Thread Jeff Walden
On 08/14/2014 10:57 AM, Chris Peterson wrote:
> Does chrome and add-on JS code default to JSVERSION_1_8?

It defaults to the latest version, whatever that is, so warning doesn't end up 
being reasonable.

I do not think the SpiderMonkey team is smart enough to have implemented both 
the old semantics, and the new ones, side-by-side; supporting both; and 
understanding both sufficiently well to flag all the pitfalls and issues of 
having both modes, when reviewing patches.  Double the fuzzing to support both 
modes of operation is also entirely unappealing, from an 
infrastructure/fuzzer-people-hours point of view.  And I don't think developers 
(particularly ones straddling both sides) are going to have much fun at all 
dealing with these semantic differences as 'let' moves onto the web.

I think our best bet is probably to evangelize the change hard, update AMO 
linters to flag the issue, and (gulp) wait for, and assist wherever possible, 
addon authors to update their code.  Part #1, shouldn't be too bad because 
they're syntax errors easily observed (and linted by AMO scripts).  Part #2 is 
harder, but 1) it's probably somewhat uncommon to use/assign 'let' before 
declaration, and 2) the *potential* for problems is still syntactically 
recognizable by AMO linters, which can forward concerns to AMO reviewers and 
addon authors.

We've made quite breaking changes to chrome JS before, and addons have adapted. 
 We can make this one as well.  It'll just take some effort from everyone to 
make it happen.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Getting rid of already_AddRefed?

2014-08-13 Thread Jeff Walden
On 08/12/2014 10:06 AM, Ehsan Akhgari wrote:
> It could also be solved with making operator T*() explicit, but neither of 
> these options are something that we can use in our code base today.

So at risk of adding yet another flavor of thing: why not introduce an 
already_AddRefed<T> sort of struct that *does* own an addref, *will* release on 
destruction if not nulled out, and does *not* explicitly convert to T*?  Give 
nsCOMPtr move constructors and move assignment operators from that class, that 
null out this other pointer when called, to transfer ownership at no cost.  (At 
risk of overplaying my hand, I think I've just described what WebBlink calls 
PassRefPtr, except for that pertaining to concrete classes whereas nsCOMPtr<T> 
is for abstract T classes.)
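
Sketched very roughly -- names invented, details elided -- the struct would 
look something like:

```cpp
// A transfer type that owns one reference: it releases on destruction unless
// a consumer (e.g. nsCOMPtr's move constructor) takes the pointer first, and
// it never converts to T* on its own.
template <typename T>
class OwningTransfer {
 public:
  explicit OwningTransfer(T* aPtr) : mPtr(aPtr) {}
  OwningTransfer(OwningTransfer&& aOther) : mPtr(aOther.mPtr) { aOther.mPtr = nullptr; }
  ~OwningTransfer() {
    if (mPtr) {
      mPtr->Release();  // never consumed: no leak, just an extra release
    }
  }

  T* take() { T* p = mPtr; mPtr = nullptr; return p; }  // what nsCOMPtr's move ctor would call

 private:
  OwningTransfer(const OwningTransfer&) = delete;
  void operator=(const OwningTransfer&) = delete;

  T* mPtr;
};
```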

I recognize this introduces yet another way to do things.  But it enables 
incremental transition to a completely safe system that prohibits assignment of 
a method call to a raw pointer, and yet never has problems with leaks.  If we 
work hard at enforcing this on the way in, it can quickly become second nature, 
I think.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Getting rid of already_AddRefed?

2014-08-13 Thread Jeff Walden
On 08/13/2014 07:44 AM, Ehsan Akhgari wrote:
> If that is the goal, then I don't agree that is a useful outcome at all.  I
> *do* wish that there were better *and* safer ways of doing more things
> automatically but ownership transfers are inherently unsafe operations that
> are expressed using various different kinds of conventions in C++
> (already_AddRefed being one such convention.)  In my experience, when
> people are not aware of how they're transferring ownership, bugs creep in.

The very point of move semantics is that they're built into the fabric of the 
language, and you can't evade them except with casts (including through 
explicit use of Move(), which is sufficiently explicit as to even be 
acceptable).  Having ownership transfers take advantage of this makes them 
inherently safer.  If a mistake is made (by ignoring a return value or 
similar), the consequence is usually an extra addref/release, not a leak.

I lack sufficient knowledge/experience with Rust to speak too forcefully about 
it.  But it's worth emphasizing that, by my understanding, every value in Rust 
is passed using move semantics.  (And you need something like reference 
counting tacked on to evade that.)  That speaks volumes for the safety of move 
semantics, in my mind.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


MakeUnique<T>(..., nullptr, ...) will break *only* the b2g-ics build

2014-08-06 Thread Jeff Walden
An issue with MakeUnique that I forgot to mention: you can't pass literal 
nullptr to MakeUnique.

This problem happens because gcc < 4.6 doesn't support true nullptr, and that 
interacts poorly with perfect forwarding.  See the long comment in mfbt/Move.h 
for details.  *Only* b2g-ics will break if you screw this up.  Tryservering 
other platforms won't point out this footgun.

The only workaround is to pass a null pointer value that has the actual type of 
the argument: static_cast<T*>(nullptr), say.
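
For example (Widget and Listener are made-up types):

```cpp
#include "mozilla/UniquePtr.h"

struct Listener {};
struct Widget {
  explicit Widget(Listener* aListener) : mListener(aListener) {}
  Listener* mListener;
};

void Example() {
  // Breaks *only* on b2g-ics, where gcc's emulated nullptr defeats perfect
  // forwarding:
  //   auto widget = mozilla::MakeUnique<Widget>(nullptr);

  // Portable workaround: pass a null pointer of the parameter's actual type.
  auto widget = mozilla::MakeUnique<Widget>(static_cast<Listener*>(nullptr));
}
```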

In short: if you break on a seemingly innocuous MakeUnique only on b2g-ics, 
it's probably this, and you should just fix the problem rather than back out 
the entire thing for this one weird mistake.

Dropping support for gcc < 4.6 cannot come soon enough.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Introducing mozilla::UniquePtr and mozilla::MakeUnique for |new T| and |new T[]| resources

2014-08-05 Thread Jeff Walden
On 08/05/2014 07:02 AM, David Rajchenbach-Teller wrote:
> If this obsoletes some of mfbt/Scoped.h, can you remove/document this?

I already replaced the file-description comment with

/* DEPRECATED: Use UniquePtr.h instead. */

and repeated the same thing at the very start of class-level documentation.  
Not sure what else there is to do on this front.

I suppose I could remove *all* the documentation and leave *only* deprecation 
notices, but it seems slightly better to keep it until the remaining users have 
been removed.  (ScopedDeleteArray has already been removed, for starters.)

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Introducing mozilla::UniquePtr and mozilla::MakeUnique for |new T| and |new T[]| resources

2014-08-01 Thread Jeff Walden
On 08/01/2014 03:48 AM, Neil wrote:
> Only UniquePtr's own copy and assignment operators should take UniquePtr&&. 
> Other call sites should either take const UniquePtr& (if they will not take 
> ownership of the pointer), UniquePtr& (if they may or may not need to take 
> ownership of the pointer) or UniquePtr (if they will take ownership of the 
> pointer).

Oops, you're largely right.  There are perhaps rare cases where it might make 
sense to accept UniquePtr, but it's pretty uncommon.  I somewhat meant to 
elaborate on this in the blog post, then forgot.  Probably I won't bother at 
this point.

But, largely what I would have said is encapsulated in this Stack Overflow 
question/answer:

http://stackoverflow.com/questions/8114276/how-do-i-pass-a-unique-ptr-argument-to-a-constructor-or-a-function

This discussion is much less intricate and technical than the rvalue reference 
series I linked in the post as subtle, so it's worth a read if you're 
interested and won't take up that much time to digest.  At the same time, it's 
largely an expansion of what NeilAway said, so feel free not to read it if you 
read his post.  :-)

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Introducing mozilla::UniquePtr and mozilla::MakeUnique for |new T| and |new T[]| resources

2014-07-31 Thread Jeff Walden
Hey all,

mfbt recently picked up mozilla::UniquePtr<T or T[]>, a smart pointer for 
objects created using |new| or |new[]|.  UniquePtr will automatically |delete| 
or |delete[]| its pointer when it's destroyed.  (You can also define custom 
deletion policies if you want.)

Additionally, you can safely embed UniquePtr in properly-written container 
classes.  This is dangerous or impossible with the current alternatives, 
because they consider the act of copying to *move* the pointer out of the 
source object (mutating it), into the sink object.

mfbt also gained MakeUnique, which works like UniquePtr(new T(...)) or 
UniquePtr(new T[n]()).  With MakeUnique there's no point at which it's possible 
for the new object to be forgotten, leaked, double-deleted, etc. without 
mistakes elsewhere.  I recommend considering MakeUnique calls as idiomatic, 
such that the pointer type is obvious and isn't repeated twice.  If you do 
this, |auto| nicely simplifies such declarations:

  auto memoryPage = MakeUnique<uint8_t[]>(4096); // 4096-byte array
  auto i = MakeUnique<int8_t>(17); // new int8_t(17)

More details and examples of how to use UniquePtr and MakeUnique are here:

http://whereswalden.com/2014/07/31/mfbt-now-has-uniqueptr-and-makeunique-for-managing-singly-owned-resources/

As always, the mfbt header itself includes copious documentation and examples 
of how to use it:

http://mxr.mozilla.org/mozilla-central/source/mfbt/UniquePtr.h

One last note.  UniquePtr is patterned on std::unique_ptr, the C++11 standard 
version of this idiom.  The current mozilla::Scoped class is another recent 
attempt to address largely the same needs.  The older nsAutoPtr and 
nsAutoArrayPtr classes are even older attempts at a solution, that suffer from 
the (T) non-copy-constructor problem mentioned before.  Scoped, nsAutoPtr, and 
nsAutoArrayPtr shouldn't be used now that UniquePtr is available, and the 
former class is now deprecated (and is slowly being removed).  (I consider 
nsAutoPtr and nsAutoArrayPtr deprecated as well, but it's not within my power 
to authoritatively declare it so.  Nonetheless, you should treat them as such.)

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Introducing mozilla::UniquePtr and mozilla::MakeUnique for |new T| and |new T[]| resources

2014-07-31 Thread Jeff Walden
On 07/31/2014 04:52 PM, Robert O'Callahan wrote:
> Is there anything blocking us from mass-removing nsAutoPtr/nsAutoArrayPtr
> in favour of UniquePtr?

I'm not sure what the state of mass-rewriting tools is these days, to start.  
But assuming they're up to snuff, there might or might not be.  There are a 
variety of little differences that rewriting would have to carefully consider:

* nsAutoPtr<T> implicitly converts to T*, UniquePtr requires get()
* nsAutoPtr<T> has one of the bad (T&) constructors that's probably in 
ubiquitous use (and those can't safely be changed except manually, unless you 
want to pretend all existing code is correct)
* nsAutoPtr can be assigned a T*, UniquePtr can only be assigned nullptr or a 
temporary/moved UniquePtr

And probably more beyond that, I was just skimming.

Additionally, UniquePtr is best used by threading it through interfaces, beyond 
just immediate use sites.  Rather than returning a raw pointer, code should 
return UniquePtr instead.  Rather than extracting a raw pointer to pass to a 
method, code should pass Move(ptr) instead, and that argument should be a 
UniquePtr.  These changes can't be mechanized.
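
A small sketch of what that looks like in practice (Widget and WidgetOwner are 
invented for the example):

```cpp
#include "mozilla/Move.h"
#include "mozilla/UniquePtr.h"

struct Widget {};

class WidgetOwner {
 public:
  // Sink parameter: callers must hand over ownership explicitly.
  void Adopt(mozilla::UniquePtr<Widget> aWidget) { mWidget = mozilla::Move(aWidget); }

 private:
  mozilla::UniquePtr<Widget> mWidget;
};

// Return ownership directly rather than handing out a raw pointer.
mozilla::UniquePtr<Widget> CreateWidget() {
  return mozilla::MakeUnique<Widget>();
}

void Example(WidgetOwner& aOwner) {
  mozilla::UniquePtr<Widget> widget = CreateWidget();
  aOwner.Adopt(mozilla::Move(widget));  // Move() marks the ownership transfer
}
```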

> I think it's more important to remove old stuff than add new stuff, so the
> sooner we can mass-remove nsAutoPtr/nsAutoArrayPtr the better.

In principle, perhaps.  But JS wanted UniquePtr as much as gfx (among others, I 
believe) did, and nsAutoPtr wouldn't cut it there, so it had to happen anyway.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Policing dead/zombie code in m-c

2014-05-01 Thread Jeff Walden
On 04/28/2014 05:59 AM, Henri Sivonen wrote:
> Hopefully we didn't remove collation rules, since that's the part we
> are supposed to be using ICU for! :-)

I'm uncertain exactly what the terminology necessarily means for us, but 
intl/icu-patches/genrb-omitCollationRules.diff exists because a flag passed in 
the ICU build process to omit collation rules was broken in ICU 52, and we had 
to fix it ourselves.  (Well, technically I wrote a patch-fix, submitted it 
upstream, then they rewrote it to be sane and I backported it to our tree.  :-) 
)  So we do omit collation-related data -- claimed in the code and elsewhere to 
be collation rules -- that's not being used by us.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Policing dead/zombie code in m-c

2014-04-26 Thread Jeff Walden
On 04/24/2014 05:49 AM, Till Schneidereit wrote:
>> Questions:
>>  * Are we building and shipping dead code in ICU on B2G?
>
> I don't know the state of ICU on B2G, but if we have it enabled there, then 
> almost certainly, yes.

There's doubtless *some* dead code in the ICU we build, yes.  (I don't remember 
whether we build ICU on b2g yet, tho.)  Various ICU components were removed as 
part of building it into Mozilla (I'd have to check Makefile.in/moz.build to 
get a complete list).  It's possible there are more, but I thought Norbert 
picked off the low-hanging fruit at the very least.  Someone might be able to 
further split up ICU into subcomponents and pick off more, with enough 
investment of time in it.

>>  * The bug about building useless code in ICU has been open since
>> November. Whose responsibility is it to make sure we stop building
>> stuff that we don't use in ICU?
>
> I guess that this isn't entirely clear.

In practice it's largely no one's.  I've been the one to update ICU the one 
time it's happened so far (we're due for another at this point, I just need to 
find time to do it).  But that's just porting our patch set forward, plus 
debugging upstream issues.  I have no particular expertise (beyond that of any 
random Mozillian) in tackling this sort of thing in ICU.

>>  * Do we have any mechanisms in place for preventing stuff like the
>> ICU encoding converters becoming part of the build in the future? When
>> people propose to import third-party code, do reviewers typically ask
>> if we are importing more than we intend to use? Clearly, considering
>> that it is hard to get people to remove unused code from the build
>> after the code has landed, we shouldn't have allowed code like the ICU
>> encoding converters to become part of the build in the first place?

I agree in spirit, but this is kind of pedantic.  I really don't think 
"clearly" is so obviously true, given that by nature of how we use ICU, we're 
only feeding it UChar/jschar/char16_t UTF-16 strings, so it's totally 
non-obvious how ICU encoders would ever be triggered.  Not ideal, sure.  But as 
we use it, encoders simply don't come into things, at least not in any 
user-visible way.  (Conceivably they might be invoked in the back end when 
reading locale data from the file system, but that's presumably correctly 
encoded not as UTF-7 or so.  It only really matters if the encoders are visible 
on the web, which I don't believe they are.)

> I remember that back when Norbert first presented his Intl implementation 
> work at the JS work week in 2012, we discussed the impact ICU would have on 
> download size. One idea was to slim down the ICU build and get rid of some 
> things not needed for Intl. I might very well be mistaken, but I'm not aware 
> of this having happened.

It happened.  We removed a number of things like collation rules, some of the 
ICU components, and more beyond that.

> We looked at what other vendors were doing and saw that Google essentially 
> just bundle all of ICU and swallow the increase in download size.

Well, that was justification for shipping ICU.  Not so much justification for 
shipping more of it than needed.

> Also, properly integrating ICU into our build system was a major undertaking 
> in itself, so getting to a point where we could ship it at all might have 
> taken precedence.

That's reasonably accurate, combined with the basic steps to prune size and 
built code having already been taken.

If there's more of ICU that's easily pruned, I'm all for it.  If pruning harder 
requires splitting ICU up into more components, that's fine, too.  But someone 
has to do it, and my skills and knowledge lie elsewhere.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Policing dead/zombie code in m-c

2014-04-26 Thread Jeff Walden
On 04/25/2014 02:10 AM, Henri Sivonen wrote:
> "Different" means "old", right? Having an old version is a correctness
> problem, if the CLDR has changed since.

Depends what's considered correctness.  The ECMA Internationalization API 
doesn't specify behavior, so older just means not-as-good, not incorrect.  
(Technically you could expose the Intl API without *any* locale data.  It'd 
defeat the purpose of Intl, but it would not be incorrect.)  I don't know to 
what extent older data is a correctness issue for anything else using ICU.  I'd 
be kind of astonished if slightly-old data ever constituted a black-letter 
correctness issue.  Specs don't/can't move fast enough for the difference 
between ICU/CLDR 51/52 to matter, correctness-wise.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mozilla::NullptrT on B2G ICS vs. everything else (incl. B2G JB and Android ICS)

2014-04-01 Thread Jeff Walden
On 04/01/2014 09:44 AM, Trevor Saunders wrote:
> b2g jb using gcc 4.7 was what I guessed as well, but
> http://ftp.mozilla.org/pub/mozilla.org/b2g/tinderbox-builds/mozilla-central-hamachi/1396314004/b2g_mozilla-central_hamachi_dep-bm72-build1-build10.txt.gz
> seems to provide some evidence to the contrary. configure says it finds
> gcc 4.4, but b2g 'helpfully' runs make with -s so we can't know what
> actually happens...

Judging by the tinderbox behavior I've observed with nullptr stuff in the past 
-- including patches that dumped the compiler version in tbpl logs and then 
aborted the build -- I think the compiler on tinderbox for at least some kinds 
of b2g stuff is some weird amalgam that doesn't map to the gcc version 
numbering the world uses, with some backported patches for various things, or 
something.  Which seems crazy, but everything I hear says attempting to 
seriously complain and get us moved to a modern gcc there is useless, so I 
haven't bothered.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mozilla::Atomic considered harmful

2014-04-01 Thread Jeff Walden
On 04/01/2014 02:32 PM, Ehsan Akhgari wrote:
> What do people feel about my proposal?  Do you think it improves writing
> and reviewing thread safe code to be less error prone?

As I said in the bug, not particularly.  I don't think you can program with 
atomics in any sort of brain-off way, and I don't think the boilerplate 
difference of += versus fetch-and-add or whatever really affects that.  To the 
extent things should be done differently, it should be that *template* 
functions that deal with atomic/non-atomic versions of the same algorithm 
deserve extra review and special care, and perhaps even should be implemented 
twice, rather than sharing a single implementation.  And I think the cases in 
question here are flavors of approximately a single issue, and we do not have a 
fundamental problem here to be solved by making the API more obtuse in practice.
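
For reference, the boilerplate difference at issue is roughly the following; 
the fetchAndAdd spelling is a made-up stand-in for whatever explicit method 
names the proposal would actually use:

#include <stdint.h>

#include "mozilla/Atomics.h"

static mozilla::Atomic<uint32_t> sCounter(0);

void BumpCounter()
{
  sCounter += 1;               // the operator-based syntax we have today
  // sCounter.fetchAndAdd(1);  // hypothetical explicit spelling under the proposal
}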

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Warning about mutating the [[Prototype]] of an object ?

2014-03-28 Thread Jeff Walden
On 03/28/2014 02:01 PM, Andrew Sutherland wrote:
> It's my understanding that the existing idiom used throughout the Thunderbird 
> tree is still okay to do since the prototype chain is created at object 
> initialization time and so there's no actual mutation of the chain:
> 
> function Application() {
> }
> Application.prototype = {
>   __proto__: extApplication.prototype,
> };

At present we only warn for direct property-sets not part of object literals.  
This was because property-sets are more visibly, obviously, clearly wrong in 
this regard, while *in theory* the other form could be made not so bad.

But right now, that pattern above is just as bad in SpiderMonkey.  And, 
honestly, I suspect we'll have other performance work and features to implement 
for a long time to come.  So really you shouldn't do this pattern, either.  But 
we don't warn about it, just yet, to minimize warning fatigue.

The right fix -- judging by the spew/code earlier in this thread -- is probably 
to port the patch from https://bugzilla.mozilla.org/show_bug.cgi?id=985742.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla style guide issues, from a JS point of view

2014-01-07 Thread Jeff Walden
On 01/07/2014 02:20 AM, Ms2ger wrote:
> Based on the discussion in #jsapi yesterday, I'm not sure that "most JS 
> hackers" is necessarily accurate.

I think there's a rough consensus.  And I'd note only a few of us were really 
active in that conversation, and I'm going somewhat off memory of what others 
have said in the past.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A proposal to reduce the number of styles in Mozilla code

2014-01-06 Thread Jeff Walden
On 01/06/2014 04:27 PM, Robert O'Callahan wrote:
> That's just not true, sorry. If some module owner decides to keep using
> NULL or PRUnichar, or invent their own string class, they will be corrected.

At least the former isn't true, as far as I remember.  It took convincing to 
get JS to move over from NULL to nullptr; it was not something that could just 
be done, without any asking at all.

I'm not as sure about PRUnichar, just because there's so little strongly-owned 
code that everyone more or less always used PRUnichar.

As for the third, I would be astounded if we don't have minor string classes 
floating around somewhere in the tree.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A proposal to reduce the number of styles in Mozilla code

2014-01-06 Thread Jeff Walden
On 01/06/2014 04:46 PM, Jason Orendorff wrote:
> I'm fine with changing SpiderMonkey in whatever way is necessary to stop
> these threads forever.

Pretty sure I've said the same thing in other places, that we should have 
something everyone hates and use it everywhere.  Definitely I've said it to 
people in real life.  I am fine with everyone on the same page, here and in 
mfbt and everywhere else.

Which is not to say I like the current Mozilla style.  It's largely tolerable.  
There are a few things that I find pretty irksome, tho.  I'll start a new thread 
for those things.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Mozilla style guide issues, from a JS point of view

2014-01-06 Thread Jeff Walden
I'm writing this list, so obviously I'm choosing what I think is on it.  But I 
think there's rough consensus on most of these among JS hackers.

JS widely uses 99ch line lengths (allows a line-wrap character in 100ch 
terminals).  Given that C++ symbol names, especially with templates, get pretty 
long, it's a huge loss to revert to 80ch because of how much has to wrap.  Is 
there a reason Mozilla couldn't increase to 99 or 100?  The on-screen viewability 
argument seems pretty weak in this era of generally large screens.  Printability's 
a better argument, but it's unclear to me that files are printed often enough for this 
to matter.  I do it one or two times a year, myself, these days.

I don't think most JS hackers care for abuse of Hungarian notation for 
scope-based (or const) naming.  Every member/argument having a capital letter 
in it surely makes typing slower.  And extra noise in every name but locals 
seems worse for new-contributor readability.  Personally this doesn't bother me 
much (although aCx will always be painful compared to cx as two no-cap 
letters, I'm sure), but others are much more bothered.

JS people have long worked without bracing single-liners.  With any style 
guide's indentation requirements, they're a visually redundant waste of space.  
Any style checker that checks both indentation and bracing (of course we'll 
have one, right?) will warn twice for the error single-line bracing prevents.  
I think most of us would discount the value of being able to add more to a 
single-line block without changing the condition line.  So I'm pretty sure 
we're all dim on this one.

Skimming the rest of the current list, I don't see anything that would 
obviously, definitely, be on the short list of complaints for SpiderMonkey 
hackers.  Other SpiderMonkey hackers should feel free to point out anything 
else they see, that I might have missed.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla style guide issues, from a JS point of view

2014-01-06 Thread Jeff Walden
On 01/06/2014 08:35 PM, Joshua Cranmer wrote:
> And a '_' at the end of member names requires less typing than 'm' + capital 
> letter?

This started, and still largely is, an Ion convention.  Lots of existing code 
doesn't use it, and I haven't much worked on code that does.  But as I said, 
I'm not one of the SpiderMonkey hackers who strongly cares about prefixes or 
suffixes (although I think -Wshadow is worth using, somehow).  So mFoo versus 
foo_ is not significantly different to me.

But it does seem to me that mFoo *is* slightly more trouble than _ at the end.  
_ at the end doesn't affect any of the rest of the name.  mFoo in contrast forces 
the first real letter to be capitalized, and often that's a floating capital in 
the middle of lowercase letters.  When typing mFoo, the F is a precision-strike 
capital letter: hit shift too early or too late and you mistype.  With _ you can 
sometimes get away with holding it slightly long because it doesn't bleed into 
the rest of the identifier.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Improvements to the Sphinx in-tree documentation generation

2013-12-17 Thread Jeff Walden
On 12/16/2013 03:09 PM, Gregory Szorc wrote:
> Perhaps Reflect.parse() could grow a new option to expose comment nodes or 
> could attach comment metadata to specific node types?

It's really not possible to do the latter.  Comments don't appertain to 
specific nodes at all.  They're just random non-token things interspersed in 
the token stream.  Note that the Esprima stuff mentioned in the other email 
doesn't attempt to put comments in the parse tree -- comment text and locations 
are stored in a side-along array.  The user has to re-associate the comments 
with particular nodes after the fact.  Fundamentally, comments are a 
token-level feature, not a parse tree-level feature.

We sort of can do whatever we want with the parser API.  Except for Esprima 
compatibility for one.  And also except that our parser doesn't track comments 
at all, so we'd have to spend extra time and memory to record locations and 
such, beyond what we do now.  The side-along Esprima approach is probably the 
only way this can work sensibly, given just how many places comments can be 
put; I don't think any API we might have should attempt associations itself.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Improvements to the Sphinx in-tree documentation generation

2013-12-16 Thread Jeff Walden
On 12/16/2013 01:17 PM, Gregory Szorc wrote:
> Does SpiderMonkey expose documentation blocks to the AST? If not, should it?

No, and probably not.  Comments are not tokens, so they're not in the AST.  
Right now SpiderMonkey pretty much just throws them away (except to the extent 
the comment includes a line break, in which case /*\n*/ and similar represent a 
line break for semicolon-insertion rules).  And it'd be really unfortunate to 
have to include them -- then things like a DoWhileNode would have to be gunked 
up a bunch to store information like in |do { } /* fnord */ while /* quoi */ 
(true);|.

Maybe there's some sort of intelligent exposure that could nonetheless be done. 
 But I doubt it's a good idea to build it atop a parser designed for executing 
code, and not for exactly faithfully representing it.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: build fail @ xulrunner-sdk/include/mozilla/Atomics.h when using xulrunner-sdk >= v25.0, on linux/64 with gcc 4.7x/4.8x

2013-11-15 Thread Jeff Walden
On 11/15/2013 03:01 PM, Jeff Walden wrote:
> You need to compile as C++11 to use sufficiently Mozilla headers.

...sufficiently-recent, that is.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: build fail @ xulrunner-sdk/include/mozilla/Atomics.h when using xulrunner-sdk >= v25.0, on linux/64 with gcc 4.7x/4.8x

2013-11-15 Thread Jeff Walden
You need to compile as C++11 to use sufficiently Mozilla headers.  If you look 
at the error you're getting, it's clear your compiler isn't treating 
|static_assert| as referring to the built-in construct in C++11.  Adding 
-std=c++0x or -std=gnu++0x to your commandline should get you past this error, 
at the very least.
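
For what it's worth, a minimal standalone file that fails in the same way, 
assuming GCC 4.7/4.8 defaulting to a pre-C++11 mode:

// mozilla/Atomics.h (and other headers) use the C++11 static_assert keyword,
// so a pre-C++11 compile mode fails in much the same way as the reported error.
static_assert(sizeof(void*) >= 4, "expecting at least 32-bit pointers");

int main()
{
  return 0;
}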

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla development bootcamp

2013-11-05 Thread Jeff Walden
On 11/02/2013 05:24 PM, bbo...@gmail.com wrote:
>   2.0 Creating a .mozconfig file

Make that "Creating a mozconfig file".  Both names are supported, but mozconfig 
is preferable: it's visible in directory listings and such, and it's easier to 
create on Windows.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cost of ICU data

2013-10-16 Thread Jeff Walden
On 10/16/2013 12:45 AM, Karl Tomlinson wrote:
> When sync I/O is performed to read in-binary-object data, how is
> that better?
> 
> Just readahead?

Readahead, it being part of the binary/libxul/whatever so already one coherent 
file to load, etc.  I'm not aware that you can reasonably get adjacency 
predictions from the OS if you use separate files.  But I could be mistaken 
about that.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cost of ICU data

2013-10-15 Thread Jeff Walden
On 10/15/2013 06:06 PM, Benjamin Smedberg wrote:
> Do we need this data for any language other than the language Firefox ships 
> in? Can we just include the relevant language data in each localized build of 
> Firefox, and allow users to get other language data via downloadable language 
> packs, similarly to how dictionaries are handled?
> 
> Is it possible that some of this data (the collation tables?) should be in 
> all Firefox locales, but other data (currency and timezone names) is not as 
> important and we can ship it only in one language?

It seems a fairly bad thing to me for us to get into the habit of prioritizing 
certain languages above others.

Technically, if the data is compiled into the code, this would mean language 
repacks would...not be repacks any more.  If you had sidealong data files 
everywhere, then you could perhaps still repack, though that might require some 
repacking adjustments.  ICU provides a udata_setCommonData function 
that lets you load data from anywhere, so there's some flexibility here.

It's worth noting we currently have no central hook to insert this call 
before ICU's ever used.  We init ICU at startup, but that init-call is fast.  
Presumably this new call can't be so fast, because you have to page in all the 
ICU data.  And if you can't delay that til ICU is used, there's really no 
difference between the current setup and a setup that calls udata_setCommonData 
at startup.  Of course, this is all just software.  :-)
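
A rough sketch of what such a hook could look like, assuming the data blob has 
already been read or mapped from wherever a language pack put it (the function 
and argument names here are hypothetical):

#include "unicode/udata.h"
#include "unicode/utypes.h"

// Hypothetical startup hook: point ICU at an externally provided data blob
// before anything else in ICU runs.
bool
SetICUDataBlob(const void* aData)
{
  UErrorCode err = U_ZERO_ERROR;
  udata_setCommonData(aData, &err);
  return U_SUCCESS(err);
}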

> As far as I can tell, the spec allows user agents to ship whatever languages 
> they need; the real question is what users and site authors actually need and 
> expect out of the API. (I'm reading the spec out of 
> http://wiki.ecmascript.org/doku.php?id=globalization:specification_drafts)

Grunging through v8's code, I...think...they cull locale lists for stuff to 
some degree.  Maybe to the language set they ship.  I'm looking at 
https://code.google.com/p/chromium/codesearch#chromium/src/third_party/icu/README.chromium
 and honestly don't understand enough about ICU to fully grok the substantial 
set of changes they've made.

> Also, we are currently duplicating the data tables on mac universal builds, 
> because they are compiled-in symbols.

That means sync I/O on the main thread, and not well-optimized because it won't 
be part of the binary.  Just to note.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What platform features can we kill?

2013-10-10 Thread Jeff Walden
On 10/10/2013 02:27 PM, Axel Hecht wrote:
> I agree with the sentiment, but not on the example.
> 
> Having been a peer of the XSLT module back in the days, we started with a 
> rather js DOM like implementation, and moved over to a pure nsIContent etc 
> impl, and each step there won us an order of magnitude in perf.

But do we actually care about the perf of sites that use XSLT now, as long as 
perf isn't completely abysmal?  A utility company showing billing statements, I 
think we can slow down without feeling guilty.  But if, say, Google Maps or 
whichever used XSLT (I seem to remember *something* Google used it, forcing 
Presto to implement XSLT, back in the day -- maybe they've switched now, blink 
thread might say if I checked it), we might care.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ Standards Committee meeting next week

2013-10-09 Thread Jeff Walden
On 10/02/2013 06:06 PM, Botond Ballo wrote:
> Having to repeat 'expr' is rather unfortunate, and C++14
> fixes that. You can now write:
> 
>   auto function(A a, B b)
>   {
>     return expr;
>   }
> 
> The only restriction is 
> that if there are multiple return expressions, they
> must all have the same type.

Same type as in the std::is_same sense?  Or same type in the a?b:c sense, where 
b/c can be different types, and one expression must convert to the other's 
type?  I assume you mean the former, but I can imagine cases (think pointers 
and nullptr) where the latter would definitely be nice.
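
For concreteness, a sketch of the pointers-and-nullptr case under strict 
same-type deduction (made-up function, C++14 deduced return type):

// Under "same type in the std::is_same sense", both returns below must deduce
// to exactly int*; a bare |return nullptr;| would deduce std::nullptr_t and
// fail, even though nullptr converts to int* just fine.
auto find(bool b, int* p)
{
  if (b)
    return p;
  return static_cast<int*>(nullptr);
}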

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ style question: virtual annotations on methods

2013-09-04 Thread Jeff Walden
On 09/04/2013 05:24 AM, Benjamin Smedberg wrote:
>> MOZ_OVERRIDE implies virtual, you get a compile error when you put 
>> MOZ_OVERRIDE on a non virtual
> It does? That surprises me (it certainly wasn't the original intent of 
> NS_OVERRIDE). There are certainly cases where we want to override non-virtual 
> methods with an identical signature.

C++11 override functionality only works on virtual methods.  Agreed it's 
somewhat unfortunate it's only on virtual methods -- else I'd have used it on 
all the deleted methods in js/src/vm/String.h -- but that's how the language 
works.  We could of course add a new macro, plugging into static analysis, if 
we wanted inherited-signature-matching functionality that worked on 
non-virtuals.

Making virtual mandatory, and MOZ_OVERRIDE mandatory when applicable (to 
slightly expand this thread's scope), sounds good to me.
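
In sketch form, the proposed rule would look something like this, assuming 
MOZ_OVERRIDE from mozilla/Attributes.h as it exists today:

#include "mozilla/Attributes.h"

struct Base
{
  virtual void Frob();
};

struct Derived : public Base
{
  // 'virtual' stays explicit, and MOZ_OVERRIDE makes the compiler verify that
  // a virtual Base::Frob really exists to be overridden.
  virtual void Frob() MOZ_OVERRIDE;
};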

The only edge case is if you have a template class inheriting from a 
template-selected base, and the template class has a method that's virtual 
depending on the selected base class:

template<class Base>
struct Foo : public Base
{
  void fun() {}
};

struct B1 { virtual void fun(); };
struct B2 { void fun(); };

Foo<B1>().fun(); // virtual
Foo<B2>().fun(); // non-virtual

There's probably a use case for this.  SecurityWrapper is probably close but no 
cigar; nothing closer springs immediately to mind.  This case would preclude 
adding virtual or MOZ_OVERRIDE to Foo::fun's declaration -- probably a bridge 
to cross if we get to it, of course.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Stop #including jsapi.h everywhere!

2013-08-20 Thread Jeff Walden
On 08/20/2013 09:01 AM, Boris Zbarsky wrote:
> DOMJSClass.h only needs various forward-declarations, mostly.  The exceptions 
> are:
> 
> 2)  It needs the definition of JSClass, for a member of the DOMJSClass struct 
> and the DOMIfaceAndProtoJSClass struct.  Unfortunately, this is defined in 
> jsapi.h and defining the DOMJSClass struct is the main point of this header 
> file.

We can/should add js/public/Class.h and move all the class gunk into that.  I 
think the only dependencies of JSClass are various JSCLASS_* macros and the 
function typedefs, and those only depend on the rooting types and JSAPI structs 
that we can forward-declare.
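
Roughly, a header like DOMJSClass.h could then get away with something like the 
following sketch; the js/Class.h name and the struct member shown are 
hypothetical until someone actually does the work:

#include "js/Class.h"   // the proposed header: JSClass plus the JSCLASS_* macros

struct JSContext;       // forward declaration is enough for pointer/reference uses

struct DOMJSClass
{
  JSClass mBase;                          // needs the full JSClass definition
  bool (*mIsDOMObject)(JSContext* aCx);   // made-up member; only needs the fwd decl
};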

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-08-05 Thread Jeff Walden
On 08/02/2013 08:09 PM, Brian Smith wrote:
> 2. It is reasonable to expect that std::whatever works as the C++ standard
> says it should. It isn't reasonable to expect mozilla::Whatever to work
> exactly like std::whatever. And, often, mozilla::Whatever isn't actually
> the same as std::whatever.

Why is this a problem?  People should never assume mozilla::* is the same as 
std::*, unless they verify in the documentation that it is.  Sometimes our 
documentation makes promises of compatibility (or approximate compatibility, or 
common-case compatibility).  Usually it doesn't.  Caveat lector.

Reading documentation, it seems to me, has to be a baseline requirement for all 
Mozilla contributors.  If a contributor isn't reading documentation about the 
classes/structures/algorithms he uses in the patches he writes, I'm scared.

(Which is not to say that doc-reading is always the solution, as sometimes our 
docs are incoherent, scattered, or non-existent.  But for this new stuff, the 
docs come with the code, and are a prerequisite to landing.  I think we've held 
that line pretty well.  So that's a legacy problem not relevant here.)

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-08-01 Thread Jeff Walden
On 07/31/2013 01:25 AM, Brian Smith wrote:
> Anyway, it would be easier to swallow the dependency on MFBT if it wasn't
> so large (over 100 files now), if it tried to be (just) a polyfill for
> missing standard library features, and if it could easily be used
> independently of the Gecko build system. But, none of those constraints is
> reasonable to place on MFBT, so that means MFBT isn't a good choice for
> most things that need to also be able to be built independently of Gecko.

I disagree about independently-usable being an unreasonable constraint -- it 
seems totally reasonable for it to be a mini-embeddable thing, or something.  
But making it exactly that, without some of the hacks we have now for things 
like header-installation, etc., requires 1) time, 2) build-fu, and 3) 
understanding of the requirements of small little embeddable things, and I lack 
all these.

On 07/31/2013 03:34 AM, Mike Hommey wrote:
> I am of the opinion that anything that is not a header file under MFBT
> should be moved into mozglue. The end result would be the same (MFBT is
> actually built into mozglue, except for js standalone builds, for which
> this would require some changes), but it would allow MFBT to just be
> used independently.

Truly I don't care about the naming, but I've always envisioned mfbt as being 
headers and some compiled-into-objects files both, i.e. as the union of what 
you consider mfbt, and the compiled-in bits of it.  Is there a good reason to 
have mozglue and what you consider mfbt to be two different things?  Why not 
have both as a single thing, and make the whole thing more easily embeddable if 
necessary?

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using C++0x auto

2013-07-21 Thread Jeff Walden
On 07/21/2013 11:44 AM, Justin Lebar wrote:
> That's a fair point.  Maybe we should call ours mozilla::Move and
> mozilla::Forward instead.  That way we can still easily change them to
> std::move and std::forward, but there's no possibility of a collision.

This is pretty much always what we've done with the type traits stuff, so this 
is what we'd do here, I think.
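
Sketching what the qualified spelling looks like in use, assuming an 
rvalue-reference-based mozilla::Move living in mozilla/Move.h (the SwapValues 
helper is just for illustration):

#include "mozilla/Move.h"

// A toy swap, spelled with the namespace-qualified helper so there's no way to
// collide with -- or silently pick up -- std::move.
template<typename T>
void
SwapValues(T& aLhs, T& aRhs)
{
  T tmp(mozilla::Move(aLhs));
  aLhs = mozilla::Move(aRhs);
  aRhs = mozilla::Move(tmp);
}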

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: review stop-energy (was 24hour review)

2013-07-11 Thread Jeff Walden
On 07/11/2013 03:09 AM, Nicholas Nethercote wrote:
> On Thu, Jul 11, 2013 at 7:48 AM, Jeff Walden jwalden+...@mit.edu wrote:
>
>> Establishing one-day turnaround time on reviews, or on requests, would 
>> require a lot more polling on my review queue.
>
> You poll your review queue?  Like, by visiting your Bugzilla
> dashboard, or something like that?  That's *awful*.
>
> I personally use a push notification system called "email with
> filters".  Well, strictly speaking it's poll-like because I have to
> check my high priority bugs folder, but I do that anyway multiple
> times per day so I'm unlikely to take more than an hour or two (while
> working) to notice a review request.

I have 
https://bugzilla.mozilla.org/request.cgi?requestee=jwalden%2Bbmo%40mit.edu&do_union=1&group=type&action=queue
 open in a browser tab and check it from time to time.  I don't see how that's 
any different from a mail-filtering folder except in terms of the UI.  They're 
both polling, as you note.  :-)

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: review stop-energy (was 24hour review)

2013-07-10 Thread Jeff Walden
On 07/09/2013 07:17 PM, Joshua Cranmer wrote:
>> I've said this before, not sure it's written in wiki-stone, maybe it should 
>> be: if you get a review request, respond same-day either with the actual 
>> review, or an ETA or promise to review by a certain date.
>
> For reviewers who are not Mozilla employees but rather do this only in spare 
> time, responding within 24 hours can be difficult. I typically end up doing 
> most reviews in a weekly-ish batch on the weekends, for example.

I find myself wondering: why 24h, exactly?  Why is a day the limit on the time 
to wait?

Reviewer-side: I've lately tried to minimize my review turnaround time such 
that I get things done, roughly FIFO, before I get a review-nag mail.  I can 
approximately do that, while keeping up with most of my coding.  Establishing 
one-day turnaround time on reviews, or on requests, would require a lot more 
polling on my review queue.  And I don't think that's a good thing, 
context-switching from coding tasks being pretty expensive for any developer.

Reviewee-side: maybe I'm just weird, or maybe JS is just different, but I 
always have half a dozen to a dozen different things I'm doing, or could be 
doing, at any time.  If I find myself waiting for forward progress on one -- 
waiting for try results, waiting for a review, waiting for my clock to overlap 
with that of someone in Europe, or anything else -- I can work on something 
else, and not feel obligated to interrupt whoever/whatever I'm waiting on to 
ask for prioritization.  Outside of must-fix-for-security-uplift-this-week 
situations, I can't think of a case where 24h turnaround (even to provide 
expectations) is something I've ever wanted so strongly that I'd force every 
reviewer to drop enough things to provide it.

Or maybe I'm just weird.  But 24h, for much other than new-reviewer requests, 
seems way shorter than I think the maximum turnaround time should be.  If it's 
a new contributor, okay, 24h or so (modulo weekends, vacations, etc.) is a bit 
short, but perhaps doable.  But for regular contributors, with many things to 
do while waiting for any particular review, I don't see why the delay shouldn't 
be a few days, to a week, to maybe slightly over a week.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


New mfbt header: mozilla/PodOperations.h

2013-04-26 Thread Jeff Walden
For anyone who's not reading planet (or hasn't read it in the last fifteen 
minutes ;-) ), I recently landed a new mfbt header that exposes slightly safer 
versions of memset(.., 0, ...), memcpy, and memcmp for use on C++ types, 
particularly ones where sizeof(T) > 1.

http://mxr.mozilla.org/mozilla-central/source/mfbt/PodOperations.h

The problem is that it's easy to forget to multiply the appropriate parameter 
in these methods by sizeof(T) when necessary.  Forgetting can lead to various 
issues like incompletely-initialized values -- issues that have caused security 
vulnerabilities in the past.  PodOperations.h throws some C++ template methods 
at the problem, to eliminate the need to remember to add sizeof(T).  This 
stuff's been used by the JS engine for awhile -- I just moved it out of there 
and to a centralized place for everyone to use.
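
A small before-and-after sketch of the hazard and the fix; the Point struct is 
made up, while PodZero and PodCopy are the actual helpers from the header:

#include <stddef.h>
#include <stdint.h>

#include "mozilla/PodOperations.h"

struct Point { int32_t x, y; };

void
ResetAndCopy(Point* aDst, const Point* aSrc, size_t aCount)
{
  // The error-prone spellings: forget the sizeof(Point) factor and you touch
  // only aCount *bytes*, not aCount Points.
  //   memset(aDst, 0, aCount);                  // bug
  //   memset(aDst, 0, aCount * sizeof(Point));  // right, but easy to botch
  mozilla::PodZero(aDst, aCount);        // zeroes aCount Points
  mozilla::PodCopy(aDst, aSrc, aCount);  // copies aCount Points
}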

More details here if you want them:

http://whereswalden.com/2013/04/26/mozillapodoperations-h-functions-for-zeroing-assigning-to-copying-and-comparing-plain-old-data-objects/

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Removing writable [Replaceable] properties

2012-12-20 Thread Jeff Walden
Gecko (not specs) has a concept of writable [Replaceable] attributes.  It uses 
them for these properties:

  window.innerWidth
  window.innerHeight
  window.outerWidth
  window.outerHeight
  window.screenX
  window.screenY
  window.opener
  window.status
  window.name

In our implementation, a writable [Replaceable] property lives on the 
[[Prototype]] and has getter/setter-ish semantics associated with it, although 
(bug) you can't use Object.getOwnPropertyDescriptor to tell, as they use the 
old bindings code.  Getting such a property gets a value.  Setting it creates a 
shadowing property on window itself, using the exact value passed in.

  // Current Gecko
  alert(typeof window.screenX); // number
  window.screenX = 123;
  alert(typeof window.screenX); // string, not number

I'm making some changes near the code that implements this.  These are very 
desirable changes, but they make it complicated to preserve these bizarre 
semantics.

The easiest change is to get rid of writable [Replaceable] entirely, and make 
all these accessor-y properties: getting them gets whatever, setting them 
invokes the setter, getting them again gets whatever value the setter coerced 
it to.  

  // Proposed Gecko
  alert(typeof window.screenX); // number
  window.screenX = 123;
  alert(typeof window.screenX); // number (setter coerces string->number, or 
the setter silently did nothing)

This is compatible with IE9 (standards or quirks mode -- and not in IE9 compat 
mode, and it seems not in older IE modes; I don't have IE10 to test) and Opera 
behaviors.  IE9 puts the accessor on Window.prototype, and Opera puts it on 
window, but there's no visible difference in behavior for getting or setting.

But there's still a supposed compatibility bugaboo to changing, because sites 
might have relied on the old behavior (notwithstanding that browsers are all 
over the map here), and other browsers aren't quite the same.

WebKit gives the old behavior, but only for some of the listed properties -- it 
doesn't for status or name.  (And it provides writable-[Replaceable] behavior 
for other properties beyond these: 
window.locationbar/menubar/personalbar/scrollbars/statusbar/toolbar/navigator/clientInformation/screenLeft/screenTop/scrollX/scrollY/devicePixelRatio/console/performance
 definitely.  It might also for 
window.event/length/self/window/frames/opener/parent/top, but there's some 
extra complexity to these, so I'm unsure how they look.)

Anyway.  I think the compatibility bugaboo is just that -- we have no hard 
evidence of dependence on our current bizarre semantics, and we have 
other-implementation evidence that the proposed behavior is web-shippable.  
Thus I think we clearly should switch as proposed, aligning us with IE going 
forward and with Opera, and with the specs as well.  Thoughts?

Jeff

P.S. -- The SunSpider test that used an undeclared |name| variable no longer 
does so in SunSpider 1.0, so concerns about window.name particularly are 
unfounded.  (Not to mention SunSpider is increasingly obsolete as a benchmark, 
and all the JS engines are doing their best to kill/ignore it.)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Removing writable [Replaceable] properties

2012-12-20 Thread Jeff Walden
On 12/20/2012 03:38 PM, Boris Zbarsky wrote:
> On 12/20/12 12:14 PM, Jeff Walden wrote:
>>  Setting it creates a shadowing property on window itself, using the exact 
>> value passed in.
>>
>>   // Current Gecko
>>   alert(typeof window.screenX); // number
>>   window.screenX = 123;
>>   alert(typeof window.screenX); // string, not number
>
> Er...  no.  If you do that, you get number for the second alert, because you 
> did a qualified set.  That's the whole complication with these properties: 
> they behave differently for qualified and unqualified sets.

Oops, sorry, you're right.

I suppose I should mention for clarity that what I'm doing is trying to remove 
this difference.  There is no spec way to distinguish between a qualified and 
an unqualified access to a variable, to get or set it.  The [[GetP]] and 
[[SetP]] hooks that are part of ES6 (which is slightly reformulating this), and 
the [[GetOwnProperty]] and [[DefineOwnProperty]] hooks that are part of ES5, do 
not allow passing along qualification.  In the case of getting a property, all 
you get is the object, and the property name being looked up.  In the case of 
setting, you also get the value that goes along.  And if you're defining, you 
get a descriptor with a value and some set of property attributes.  The other 
operations, like [[HasProperty]] and [[HasOwnProperty]], also all take just the 
object and a property name/key.

> And the proposal here is to align with new IE and Opera?

Yes.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed style guide modification: using declarations and nested namespaces

2012-11-12 Thread Jeff Walden
On 11/12/2012 10:44 AM, Robert O'Callahan wrote:
> On Mon, Nov 12, 2012 at 9:37 AM, Zack Weinberg za...@panix.com wrote:
>
>> The scenario I'm concerned with is when a .cpp file does 'using namespace
>> A;' and then goes on to define a bunch of its *own* symbols; later someone
>> adds a symbol to namespace A, and gets an unexpected break possibly miles
>> from their own code.  I see *avoiding* this phenomenon as the primary
>> *benefit* of namespaces, and it's totally lost if you do 'using namespace'
>> even in .cpp files.
>
> I've never ever seen that happen, though.

It's happening.  We're trying to add a mozilla::Range struct to mfbt that would 
lightly encapsulate a pointer and a length, providing assertions whenever you 
index into it, extract start or end pointers, and so on.  It would have been 
simple to add, but SpiderMonkey has a js::ion::Range struct already.  And as a 
semi-convenience we had this in one place:

namespace js { using namespace mozilla; }

SpiderMonkey currently does |using namespace js;| all over the place, though, 
so in Ion code there was suddenly an ambiguity between mozilla::Range and 
js::ion::Range -- even for code written inside the ion namespace, I believe.  
Ordinarily the innermost Range would win, but once you start opening 
namespaces, that rule goes out the door.
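
A reduced illustration of how the ambiguity arises (everything here is 
simplified, obviously; the real structs aren't empty):

namespace mozilla { struct Range {}; }

namespace js {
using namespace mozilla;               // the nested |using| we had
namespace ion { struct Range {}; }
}

using namespace js;                    // what lots of SpiderMonkey .cpp files do
using namespace js::ion;

// Range r;  // doesn't compile: 'Range' is ambiguous between mozilla::Range
//           // and js::ion::Range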

We ended up removing the nested |using| above and making all SpiderMonkey 
headers qualify everything with mozilla::.  We use few enough things from 
mozilla:: so far that we switched to |using mozilla::RangedPtr| and so on in 
.cpp files and didn't think twice.  The patch itself was pretty big and 
annoying, but at least we won't have these issues in the future.

Concerning the DOM nsDocument case, one possible solution might be to refer to 
those symbols with a still-qualified but not fully-qualified name.  Type 
inference code uses types:: without much pain; Ion code uses ion:: (inside the 
js namespace) without much pain, either.

For another data point, WebKit's WTF headers include |using WTF::Vector| and 
such at the bottom of them, and I have seen grumbling and/or regrets about this 
decision, as it's caused issues with name conflicts with some of Google's gtest 
framework.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Type of nullptr on Mac OS X 10.7 debug

2012-09-23 Thread Jeff Walden
On 09/21/2012 05:07 AM, Henri Sivonen wrote:
> What's the type of nullptr on Mac OS X 10.7 debug build? On tryserver,
> the compiler complains that there’s no nullptr_t in namespace std when
> using std::nullptr_t. Including <cstddef> does not help.
> MOZ_HAVE_CXX11_NULLPTR is defined, however.

The nullptr macro we define (or not, as the case may be) doesn't include 
std::nullptr_t emulation right now.

Jeff
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform