Re: New Bugzilla component for keeping comm-central in sync with mozilla-central

2020-07-07 Thread Joshua Cranmer

On 7/7/2020 10:11 AM, Tom Ritter wrote:

Hey Geoff - what sorts of things would be appropriate to file there?
Or perhaps as a more basic question - what *is* comm-central? Is it
'mozilla-central with constantly-rebased Thunderbird patches on top?'
Is it an old fork of mozilla-central where a lot (or very few) patches
are copied across? Some hybrid? I like Thunderbird, and I'd like to
make things easier on you, but truthfully I know very little about how
Thunderbird is made relative to my work on Firefox (and I couldn't
find a document online).


Way back in the days of CVS, all of the source code for all of the
Mozilla products was in one repository, happily living side-by-side.
When mozilla-central was created, only the directories related to
building Firefox were brought over. The directories needed for
non-Firefox, Gecko-based projects (i.e., Thunderbird, SeaMonkey [1], and
Sunbird [2]) were gathered into a separate repository called
comm-central. You need both the mozilla-central and comm-central
repositories to build Thunderbird.


Structurally speaking, building Thunderbird is the same as building all 
of mozilla-central, without the browser/ directory (and maybe a few 
other directories), and adding in the comm-central mail/ and mailnews/ 
directories.


[1] If you're not familiar, SeaMonkey is basically the continuation of
the old Netscape suite, containing both web browsing and email
capabilities in a single product.


[2] This is probably extremely obscure at this point, but Sunbird is the 
old stand-alone calendar application. It does not exist as a separate 
product anymore.




-tom


On Tue, Jul 7, 2020 at 3:45 AM Geoff Lankow  wrote:

Hi everybody

Many changes to mozilla-central code require complementary changes to
comm-central code. Communication about this hasn't always been
effective, which leads to breakages that could have been prevented, and
wasted developer time.

We now have a dedicated Bugzilla component for alerting Thunderbird's
developers about such things: Thunderbird - Upstream Synchronization.
Please use it to keep us informed of impending changes that we'll need
to deal with.

Thank you to those who do keep us informed. I hope having a dedicated
component makes your life easier as well as ours.

GL
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: How best to do async functions and XPCOM?

2019-12-09 Thread Joshua Cranmer

On 12/5/2019 6:33 PM, Gerald Squelart wrote:

On Friday, December 6, 2019 at 9:20:21 AM UTC+11, Geoff Lankow wrote:

Hi all

I'm redesigning a bunch of Thunderbird things to be asynchronous. I'd
like to use Promises but a lot of the time I'll be far from a JS context
so that doesn't really seem like an option. The best alternative I've
come up with is to create some sort of listener object and pass it to
the async function:

interface nsIFooOperationListener : nsISupports {
  void onOperationComplete(
    in nsresult status,
    [optional] in string errorMessage
  );
};

...

void fooFunction(..., in nsIFooOperationListener listener);

This works fine but I wonder if there's a better way, or if there's some
established prior art I can use/borrow rather than find out the pitfalls
myself.

TIA,
GL

We have mozilla::MozPromise [0], similar to mozilla::dom::Promise but
with no reliance on JS at all.

It can be a bit tricky to use; the simplest way (to start) is probably to do
something like InvokeAsync(work thread, code that resolves or rejects the
promise)->Then(target thread, on-success follow-up, on-failure follow-up)
(e.g., [1]).
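
A minimal sketch of that pattern (assuming current MozPromise
spellings; the queue and worker function here are placeholders, not
real code from the tree):

using BoolPromise = mozilla::MozPromise<bool, nsresult,
                                        /* IsExclusive */ true>;

RefPtr<BoolPromise> StartOperation() {
  // sWorkQueue stands in for whatever nsISerialEventTarget the work
  // should run on.
  return mozilla::InvokeAsync(sWorkQueue, __func__, []() {
    bool ok = DoTheWork();  // hypothetical synchronous worker
    return ok ? BoolPromise::CreateAndResolve(true, __func__)
              : BoolPromise::CreateAndReject(NS_ERROR_FAILURE, __func__);
  });
}

// Caller, e.g. on the main thread:
StartOperation()->Then(
    mozilla::GetMainThreadSerialEventTarget(), __func__,
    [](bool) { /* on-success follow-up */ },
    [](nsresult) { /* on-failure follow-up */ });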


The problem with MozPromise is that it doesn't integrate well with
XPIDL interfaces, so if you want XPIDL integration, you're stuck with
mozilla::dom::Promise, which is awkward to use from C++. A third
wrinkle, especially now that async functions have landed in Rust, is if
you want to try to use std::future::Future in Rust, which isn't going
to convert terribly well to either form.


It may be worth spending some time building some wrappers to integrate 
between all of our various async function frameworks...
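
For instance, a rough sketch of one such wrapper, forwarding a
MozPromise result into a dom::Promise so it becomes visible to JS
(reusing the BoolPromise alias from the sketch above; error handling
omitted):

void ForwardToJS(RefPtr<BoolPromise> aPromise,
                 RefPtr<mozilla::dom::Promise> aJSPromise) {
  aPromise->Then(
      // DOM promises must be settled on the main thread.
      mozilla::GetMainThreadSerialEventTarget(), __func__,
      [aJSPromise](bool aValue) { aJSPromise->MaybeResolve(aValue); },
      [aJSPromise](nsresult aRv) { aJSPromise->MaybeReject(aRv); });
}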


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Upcoming C++ standards meeting in Cologne

2019-07-30 Thread Joshua Cranmer

On 7/30/2019 4:40 PM, Mike Hommey wrote:

On Tue, Jul 30, 2019 at 01:04:56PM -0400, Nathan Froyd wrote:

On Sat, Jul 27, 2019 at 1:42 PM Botond Ballo  wrote:

If you're interested in some more details about what happened at last
week's meeting, my blog post about it is now available (also on
Planet):

https://botondballo.wordpress.com/2019/07/26/trip-report-c-standards-meeting-in-cologne-july-2019/

Thanks for writing this up.  I always enjoy reading these reports.

One grotty low-level question about the new exception proposal.  Your
post states:

"it was observed that since we need to revise the calling convention
as part of this proposal anyways, perhaps we could take the
opportunity to make other improvements to it as well, such as allowing
small objects to be passed in registers, the lack of which is a pretty
unfortunate performance problem today (certainly one we’ve run into at
Mozilla multiple times). That seems intriguing."

How is revising the calling convention a C++ standards committee
issue?  Doesn't that properly belong to the underlying platform (CPU
and/or OS)?

... and aren't small objects already passed via registers?


I wasn't at the meeting, so I can't say for sure, but I imagine the 
issue being talked about is the fact that structs/classes need to have a 
*this parameter (most notably for non-trivial constructors and 
destructors), which precludes being able to stick them in registers when 
those kick in. Watch what happens if you return a std::tuple, for
example: https://gcc.godbolt.org/z/CfbGvq (I would love to have real
multiple return values in C++, but std::tuple still causes stack
allocation for the return value).
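
The shape of that example is roughly the following (a sketch; the
godbolt link may differ in detail):

#include <tuple>

// Per the observation above, returning std::tuple here still goes
// through a hidden stack slot rather than a register pair.
std::tuple<int, int> divmod(int a, int b) {
  return {a / b, a % b};
}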


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: [ANN] In-tree helper crates for Rust XPCOM components

2019-03-27 Thread Joshua Cranmer

On 3/26/2019 10:27 AM, Lina Cambridge wrote:

Hi all,

Last year, Nika Layzell landed support for implementing XPCOM components in
Rust [1]. Since then, folks on different teams have been adding components,
and working out some common patterns. There are now several in-tree helper
crates that provide more idiomatic wrappers for these patterns, and I
thought I'd take the time to summarize them here.

Are there any plans to add better support for Rust<->JS integration, or
for mapping Rust futures back into the XPCOM world?


Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: CPU core count game!

2018-03-31 Thread Joshua Cranmer

On 3/27/2018 5:02 PM, Mike Conley wrote:

Thanks for drawing attention to this, sfink.

This is likely to become more important as we continue to scale up our
parallelization with content processes and threads.


How do these counts classify SMT systems (aka Hyperthreading)? Would a
4-core, 2-way SMT system show up as 4 cores or 8 cores?
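
For comparison, the standard C++ facility reports logical processors,
so a 4-core, 2-way-SMT box typically reads as 8 there; a tiny sketch:

#include <iostream>
#include <thread>

int main() {
  // hardware_concurrency() counts logical processors (and may return
  // 0 if the count can't be determined).
  std::cout << std::thread::hardware_concurrency() << " logical cores\n";
}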


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to Unship: stream decoder for BinHex format

2017-10-18 Thread Joshua Cranmer

On 10/17/2017 10:45 AM, Boris Zbarsky wrote:

On 10/17/17 5:47 AM, Shih-Chiang Chien wrote:

I intend to remove all the code handling BinHex decoding, i.e.
nsBinHex.cpp, from mozilla-central if no other project is using it.


The code was originally added for mailnews.  See 
https://bugzilla.mozilla.org/show_bug.cgi?id=81352


Please double-check that this use case does not get broken, or 
condition the code to be mailnews-only or something.


FWIW, I've considered ripping out the binhex decoding from mailnews code 
anyways.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Firefox and clang-cl

2017-08-13 Thread Joshua Cranmer

On 8/13/2017 8:32 AM, cosinusoida...@gmail.com wrote:
Haven't you been able to do that with MinGW on Linux since about 1998? 


MinGW doesn't follow the MSVC ABI, as I recall, which makes any MS 
interface that uses C++ unusable. I believe this causes issues in places 
like accessibility or graphics.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: refcounting [WAS: More Rust code]

2017-08-02 Thread Joshua Cranmer

On 8/2/2017 6:37 AM, Enrico Weigelt, metux IT consult wrote:

On 31.07.2017 13:53, smaug wrote:




Reference counting is needed always if both JS and C++ can have a
pointer to the object.


By the way - just curious:

Anybody already thought about garbage collection ?


Yes. About a decade ago, Mozilla invested some resources in being able 
to automatically rewrite the codebase to use GC instead of reference 
counting: <https://wiki.mozilla.org/XPCOMGC>.


Some conclusions:
1. Automated rewriting of C++ code is possible (this was when the only
significant open-source C++ compiler relied on horribly inaccurate
position tracking, so it actually was a big deal).
2. Converting from reference counting to conservative GC is barely
possible. (I recall bsmedberg saying that the resulting build could go
for a few minutes before crashing.)
3. It's not the performance win you think it is. The main performance
win comes from a compacting GC, which minimizes memory use over time and
makes heap allocation basically a pointer bump. On the other hand, you
now have to have threadsafe reference counting on every object.



That wouldn't have the problem w/ circular graphs, and should make the
whole code smaller and faster.


See the results on the linked page. It was neither smaller nor faster.

We already have a cycle-collector, which is basically an opt-in garbage 
collector (it only looks at a subset of the total ownership graph). The 
main difficulty in using it is having to annotate classes in cycles, but 
this is exactly the sort of thing that's easy to write in a #[derive()] 
attr in Rust.
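
For reference, a minimal sketch of that annotation burden on the C++
side (class and member names here are hypothetical, and the usual
AddRef/Release/QueryInterface boilerplate is omitted):

class MyThing final : public nsISupports {
 public:
  NS_DECL_CYCLE_COLLECTING_ISUPPORTS
  NS_DECL_CYCLE_COLLECTION_CLASS(MyThing)
 private:
  ~MyThing() {}
  RefPtr<MyThing> mOther;  // can participate in a cycle; must be
                           // traversed and unlinked
};

// Generates the Traverse/Unlink glue for the listed members.
NS_IMPL_CYCLE_COLLECTION(MyThing, mOther)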


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Announcing MozillaBuild 3.0 Release

2017-07-24 Thread Joshua Cranmer

On 7/24/2017 7:20 PM, Enrico Weigelt, metux IT consult wrote:

On 24.07.2017 23:04, Mike Hommey wrote:


It looks like you're doing a lot of work that is completely out of scope
for creating packages for Debian/Devuan,


Not quite. Of course, I don't wanna compile in things that aren't
necessary here (e.g. the media stuff). But that led to lots of problems,
so I'm now getting my hands dirty and trying to fix things at the root.


Trying to build by disabling lots of flags in general leads to lots of 
frustration with broken builds. A decade of experience at Mozilla has 
shown that configurations not built on standard automation tend to be 
quickly broken. The onus of maintenance will be entirely on you, even if 
the changes are upstreamed--and they are unlikely to be accepted 
upstream without justification (which "I don't wanna have these" is not).


FWIW, building in odd configurations generally disqualifies you from 
being able to use Mozilla trademarks on the resulting product.





and that is work that sounds
like should be discussed with the thunderbird crowd.


Nope, they directed me to this list, as these things aren't in tbird's
own tree, but generic mozilla.


I directed you to this list because you were asking "how do I modify 
media/ to stop doing this stuff?", which is plainly out of scope for mdat.



When I'm done w/ that, I'll start w/ things I've been planning for
quite some time, eg. moving mailbox handling to external upas service,
all credential related stuff to factotum, move contact handling to
external programs, etc, etc.

But before I can start with that, I first need a clean working base.


If you believe that maintaining your own custom pared-down ersatz build 
is a necessary precondition for adding new functionality, you will have 
rather little time to implement other functionality.


As Mike Hommey says, you are looking to build a fork of Thunderbird at 
this point. It's not entirely clear what you're proposing, but the vague 
language suggests very heavily that you're intending to delete our 
present code for unknown external libraries, which is likely not in the 
vision of Thunderbird's future and therefore is unlikely to be accepted 
upstream.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Announcing MozillaBuild 3.0 Release

2017-07-24 Thread Joshua Cranmer

On 7/24/2017 3:25 PM, Enrico Weigelt, metux IT consult wrote:

> That's been true for some time now; while we still support 32-bit
> systems, for example, you can't build Firefox on 32-bit systems at all
> due to memory constraints,

This raises the question: why does it take up so much memory?


Release builds on Windows use LTO, which requires essentially keeping 
both the final object file and the entire internal IR in memory at the 
same time.

Not sure whether a 4-core i7 w/ 8GB RAM already counts as "old", but
it's really slow on my box. I've got the impression there's still a lot
of room for optimization. For example, I wonder why lots of .cpp files
are #include'd into one.

In that example, undoing that slows down your build. (Parsing C++
header files takes a lot of time in the aggregate, and you spend less
time linking when there are no duplicates of inlined methods.)
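
To illustrate the mechanism (file names here are hypothetical), the
build system generates "unified" sources along these lines, so the
common headers are parsed once per group rather than once per file:

// Unified_cpp_example0.cpp -- generated by the build system
#include "ImapCore.cpp"
#include "ImapUrl.cpp"
#include "ImapUtils.cpp"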


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: More Rust code

2017-07-11 Thread Joshua Cranmer

On 7/10/17 5:29 AM, Nicholas Nethercote wrote:

- Interop with existing components can be difficult. IPDL codegen rust
bindings could be a big help.


Rust's C++ interop story is absolutely atrocious. The FFI basically runs
on the C ABI, even though Rust and C++ have some similar concepts that
could be exposed more cleanly (e.g., this parameters, or mapping &[T] to
appropriate semantics) without forcing such a step. Bindgen works a
little, but really only for calling C++ from Rust, and only if the C++
code is simple enough--if the code includes headers of complexity doom
(hi, std::string!), it really ends up being a game of "how can I force
bindgen to ignore enough that I get access to what I need?"


XPIDL and WebIDL express only a subset of functionality, so they're
theoretically easier to support than "generic C++17." However, XPIDL is
profoundly unnatural in C++ code (the array syntax is horrendous,
particularly if you want to start passing string arrays), as well as
being limited in some of its vocabulary. WebIDL is more generic, but the
bindings are generated source code, which means that the ABI isn't
exactly standardized for easy cross-language calling.
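
As a hedged illustration of the array pain (a made-up method, and the
generated shape may vary):

// XPIDL (hypothetical method):
//   void getNames(out unsigned long count,
//                 [array, size_is(count), retval] out string names);
//
// generates roughly this C++ method on the interface:
NS_IMETHOD GetNames(uint32_t* aCount, char*** aNames) = 0;
// The callee allocates the array and every element in it, and the
// caller is responsible for freeing all of them.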


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Improving visibility of compiler warnings

2017-05-25 Thread Joshua Cranmer

On 5/25/17 6:11 PM, Eric Rahm wrote:

I think we disable it for local builds because we don't control what
versions of tools folks use. So clang vFOO might spit out errors we don't
see in clang vBAR and it would be a huge pain if those failed locally even
though they'd be fine in automation.


It should be possible to check the compiler and version and enable it by 
default if it's the same version as the ones on our check-in infrastructure.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla Charset Detectors

2017-05-23 Thread Joshua Cranmer

On 5/23/17 2:58 AM, Gabriel Sandor wrote:

Hello Henri,

I was afraid this might be the case, so the library really is deprecated.

The project I'm working on implies a multi-lingual environment, users, and
files, so yes, having a good encoding detector is important. Thanks for the
alternate recommendations; I see that they are C/C++ libraries, but in
theory they can be wrapped into a managed C++ .NET assembly and consumed by
a C# project. I haven't yet seen any existing C# ports that also handle
charset detection.


You only need charset detection if you can't get reliable charsets 
passed around. Most word processing formats embed the charset they use 
in the document (or just use UTF-8 unconditionally), so you only need 
charset detection if you're getting lots of multilingual plain text (or 
plain text-ish formats like markdown or HTML), and even then, only if 
you expect the charset information to be unreliable. It's also worth 
pointing out that letting users override the charset information on a 
per-file basis goes a very long way to avoiding the need for charset 
detection.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Switching to async/await from Task.jsm/yield

2017-04-03 Thread Joshua Cranmer

On 3/16/2017 5:29 PM, Dave Townsend wrote:

For a long time now we've been writing JS code that waits for promises
using Task.jsm and generator functions. Recently though the JS team added
support for the JS standard way of doing this, async/await:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function

Writing code in standard JS is always better for the web, makes it easier
to onboard new engineers and allows for better support in developer tools.
So I'd like to propose that we switch to the standard way of writing these
functions immediately. New code should use async/await instead of Task.jsm
going forwards.

Florian has some rough plans to automatically rewrite existing usages of
Task.jsm to the standard JS forms so for now don't worry much about going
and submitting patches to fix up existing code. Once that is done we can
remove Task.jsm from the tree.

Does anyone object to any of this?


Is it possible to make those scripts public so as to be able to run them 
on comm-central?


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecating XUL in new UI

2017-01-16 Thread Joshua Cranmer

On 1/16/2017 2:43 PM, Dave Townsend wrote:

One of the things I've been investigating since moving back to the desktop
team is how we can remove XUL from the application as much as possible. The
benefits for doing this are varied, some obvious examples:

* XUL is a proprietary standard and we barely maintain it.
* Shallower learning curve for new contributors who may already know and
use HTML.
* HTML rendering is more optimized in the platform than XUL.
* Further integration of Servo code may require dropping XUL altogether.

The necessary first step of reducing XUL use is to stop adding any more UI
that uses XUL and here I'm talking about wholly new UI, additions to
browser.xul and other existing UI are a separate concern. What do folks
think about making this a rule for new projects in the near future?

Of course there are some features missing from HTML that make this
impossible in some cases right now. Some that I've already found:

* HTML has no support for popup panels like XUL does. The devtools team
have been working around this but they are still dependent on XUL right now.
* iframe elements don't have the same capabilities that the XUL browser
element does and we use that in some UI.
* Top level menus on OSX are currently only able to be defined with XUL
elements. This only affects UI that uses top-level windows and most of our
new UI is in-content so may be less of an issue.

What other features do we depend on in XUL that I haven't listed?


XUL trees are probably the most complex feature that HTML doesn't have.
Some of the features I consider important: the use of an
nsITreeView-like generative interface, advanced styling capabilities
(you can style rows/cells based on content, effectively), and lazy
loading (in particular, data for child elements isn't asked for until
their parents are expanded). There are also probably performance aspects
and accessibility factors that don't normally feature in the minds of
developers.


XUL overlays and XBL widgets are also things that are likely to be 
missed, although Web Components probably largely covers the same feature 
space (I don't know enough to know what is missing).


The final point I would make is that we probably need to pick a standard
widget toolkit. I believe in the past we were shipping 4 different
versions of jQuery because every little frontend silo was importing it
locally for its own needs. Particularly if we need to reimplement
major widgets like , it makes much more sense to have one
shared implementation that can be collaboratively improved. And put it
in toolkit/, please, not browser/. :-)



--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: So, what's the point of Cu.import, these days?

2016-09-26 Thread Joshua Cranmer

On 9/24/2016 5:13 PM, David Teller wrote:

Which begs the question: what's the point of `Cu.import` these days?


One major difference between Cu.import and ES6/require-style modules is
that only one copy of the script is created with Cu.import. This lets
you use a Cu.import-ed file as a shared database: every piece of code
that imports that file, whether a chrome JS file or an XPCOM component
implementation, is guaranteed to see the same objects once the call is
made. There are definitely modules that rely on this.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Moving FirefoxOS into Tier 3 support

2016-01-25 Thread Joshua Cranmer

On 1/25/2016 11:30 AM, Ehsan Akhgari wrote:
For example, for a long time b2g partners held back our minimum 
supported gcc.  Now that there are no such partner requirements, 
perhaps we can consider bumping up the minimum to gcc 4.8?  (bug 1175546)


Strictly speaking, I would advocate for 4.8.1, since that gets us ref 
qualifiers on methods (or will, once we get VS 2015 as the minimum 
requirement on Windows).
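
For reference, ref qualifiers let a member function overload on whether
*this is an lvalue or an rvalue; a minimal sketch of the C++11 feature
(first shipped in gcc 4.8.1):

#include <string>
#include <utility>

class Holder {
 public:
  // Called on lvalue objects: hand out a reference.
  const std::string& Value() const& { return mValue; }
  // Called on rvalue (temporary) objects: safe to steal the contents.
  std::string Value() && { return std::move(mValue); }
 private:
  std::string mValue;
};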


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to stop revving UUIDs when changing XPIDL interfaces

2016-01-15 Thread Joshua Cranmer

On 1/15/2016 1:21 PM, Bobby Holley wrote:

Has anyone measured recently whether there's still a significant perf win
to making IIDs 32-bit? If we stop using them as a versioning tool, we could
potentially relax our uniqueness requirements, and save a lot of
comparisons on each QI. Addon-compat would be tricky, but is potentially
solvable.


Are we still using nsISupports in a way that we expect it to be 
ABI-compatible with IUnknown?


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-04 Thread Joshua Cranmer

On 1/4/2016 9:24 AM, 罗勇刚(Yonggang Luo) wrote:

1. I was not trying to implement new things in XPCOM; our company
(Kingsoft) maintains a fork of Thunderbird, and at the current time we
have to re-use existing XPCOM components from the Thunderbird/Gecko
world. Beyond pure HTML things, there is too much we have to re-use
(XPCOM things), and we are facing performance problems: the mork-db and
the mime-parser both work synchronously, so I have to figure out a way
to call these components in a worker directly, so that they do not cause
UI lag on the main thread. That is the reason I was trying to
re-implement XPConnect with js-ctypes: so that I can call the existing
components in a worker and free the main thread.


Mork, by design, can't be used off the main thread. So even if you try
to subvert that with JS-ctypes etc., it's not going to work very well,
to say nothing of the problems you'll have trying to maintain a
pseudo-reimplementation of XPConnect.

3. There is an advantage to XPCOM: WebIDL seems to be merely for
JavaScript, but XPCOM seems more language-neutral; we could use XPCOM
from Java/Python and other languages. That looks like an advantage of
XPCOM.

XPIDL is effectively a fork of an old version of IDL. Its interfaces
can't cleanly represent union types or array types, something that
WebIDL does far better, as WebIDL is partly a fork of a newer version of
IDL. I believe there already exist WebIDL bindings for C++, JS, and
Rust, and extending them to Java or Python would not be a challenging
task. The only complexity is that the WebIDL bindings do not use a
grand-central dispatch mechanism like XPConnect, but that merely means
that adding new bindings requires writing a code generator and feeding
all the interfaces through it instead of implementing several customized
dispatch mechanisms. Not that PyXPCOM or JavaXPCOM have worked for the
past several years.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: I was trying to implement the XPConnect with JS-Ctypes, is that would be a good idea?

2016-01-03 Thread Joshua Cranmer

On 1/3/2016 10:24 AM, 罗勇刚(Yonggang Luo) wrote:

So that we could access XPCOM in a worker, and so that we could
implement Thunderbird's new message protocols in pure JavaScript.


I will point out that Thunderbird developers are already looking into
replacing the use of XPCOM in the message protocols, so if that is the
primary goal, then you are wasting your time, I am afraid.


I will also point out that both JavaScript and C++ have moved on from
the time XPConnect was developed, to the point that using XPConnect
requires designing APIs that are uncomfortable to use from C++ or
JavaScript (or even both!), so it is a much better investment of time to
move APIs to newer paradigms than to try to develop a system that almost
no one really understands.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Improving blame in Mercurial

2015-12-11 Thread Joshua Cranmer

On 12/11/2015 5:17 PM, Gregory Szorc wrote:

If you have ideas for making the blame/annotate functionality better,
please capture them at https://www.mercurial-scm.org/wiki/BlamePlan or let
me know by replying to this message. Your feedback will be used to drive
what improvements Mercurial makes.


A "reverse blame" feature that shows when a line in an old revision was 
deleted or changed in a newer revision is something I've desperately wanted.

(Relatedly, I know a lot of you want a Mercurial repo with CVS history to
facilitate archeology. I hope to have that formally established in Q1. Stay
tuned.)


Are you planning on letting comm-central attach to the CVS history as well?

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Move mailnews/intl back into gecko-dev and use options to choose disable/enable it.

2015-12-07 Thread Joshua Cranmer

On 12/7/2015 11:38 AM, 罗勇刚(Yonggang Luo) wrote:

Maintaining intl in the comm source tree is a big burden, and it causes
the comm source to be tightly coupled with the gecko-dev source tree.

By providing the necessary functionality for comm, we can step forward
to removing the build dependencies of comm on the gecko-dev source tree.

The encoding part and the crypto part are the most complicated, and
cannot be moved into the comm source tree completely, so we may choose
an alternative way to do that: use prefs to disable or enable them.


The code was removed from mozilla-central because mozilla-central
explicitly does not want to support UTF-7. I rather suspect that Henri
Sivonen would outright reject (and I would agree with said rejection!)
any patch attempting to move the code back to mozilla-central.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to unship: ISO-2022-JP-2 support in the ISO-2022-JP decoder

2015-11-30 Thread Joshua Cranmer

On 11/30/2015 1:02 PM, Andrew Sutherland wrote:

On Mon, Nov 30, 2015, at 01:24 PM, Adam Roach wrote:

Does this mean it might interact with webmail services as well? Or do
they tend to do server-side transcoding from the received encoding to
something like UTF8?

They do server-side decoding.  It would take a tremendous amount of
effort to try and expose the underlying character set directly to the
browser given that the MIME part also has transport-encoding occurring
(base64 or quoted-printable), may have higher level things like
format=flowed going on, and may need multipart/related cid-protocol
transforms going on.


Additionally, declared mail charsets are sufficiently often a lie that 
it is much easier to control the decoding process by converting to UTF-8 
server-side, which also evades inconsistencies in browser decoding of 
charsets.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Merging comm-central into mozilla-central

2015-11-06 Thread Joshua Cranmer

On 11/6/2015 12:38 PM, Doug Turner wrote:

I would have rather done this in a private email, but some people
replied and said I wasn’t clear.


-> Do not merge comm-central into mozilla-central <-


1) I think merging comm-central is a bad idea as it will basically tax all 
gecko + firefox developers forever.

2) It isn’t clear that Thunderbird is a supported product anymore.  MoCo 
certainly isn’t responsible for it.  I don’t think MoFo does anything for it.


I know that Thunderbird has been in talks with the Mozilla Foundation
about being officially supported by them, and I believe the only thing
left is to put ink on some papers. There are others who were involved in
those talks who could give more specific details.



3) We’re spending $ and time in Release on this project.  I would rather not 
have to do that given (2).

4) This sets a bad precedent.  I don’t think we want every application built on 
top of gecko to be in mozilla-central.


I've explained why this isn't really a precedent several times.



We don’t have all of the time and resources in the world.  We have to be very 
deliberate about what we work on.  And Thunderbird — as it is now — isn’t 
something MoCo is focusing on.  Because of this, I really doubt anyone in moco 
Release is going to futz with it.


Except the release engineers in moco are already spending a good deal of 
time on managing the Thunderbird release engineering--exactly as Mozilla 
promised they would back in 2012.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Merging comm-central into mozilla-central

2015-11-05 Thread Joshua Cranmer
This thread has quieted down for a while, but I don't want to let it die 
out without a clear consensus being reached.


What I want to know is whether or not there is sufficient consensus for 
the merge to happen that I can start planning with release engineering 
and automation on getting merged comm-central builds working, with an 
eye to actually committing the merge in Q4 or Q1 (the master bug for 
this work will be bug 787208).


On 10/23/2015 2:57 AM, Mike Hommey wrote:

Hi,

This has been discussed in the past, to no avail. I would like to reopen
the discussion.

Acknowledgment: this is heavily inspired from a list compiled by Joshua
Cranmer, but please consider this *also* coming from me, with my build
system peer hat on.

What:

Let's first summarize what this is about. This is about moving the
development of Seamonkey, Thunderbird, and Lightning in the same
repository as Firefox, by merging all their code and history from
comm-central into mozilla-central.

Seamonkey and Thunderbird share a lot, so comm-central without
Seamonkey wouldn't make a significant difference. Lightning is, AIUI, both
standalone and an addon shipped with Thunderbird, so in practice, it
can be considered as being part of Thunderbird.

Why:

- The interaction between the build system in mozilla-central and the
   build system in comm-central is complex, and has been a never stopping
   cause of all kinds of problems sometimes blocking changes in the
   mozilla-central build system, sometimes making them unnecessarily more
   complex.

- The interaction between both build systems and automation is complex,
   and got even more twisted with mozharness now being in
   mozilla-central, because of the chicken-and-egg problem it introduces,
   making integration with mozharness hard.

- Likewise with taskcluster.

- Subsequently, with mozilla-central now relying on mozharness and soon
   switching away from buildbot, the differences in setup with
   comm-central keep increasing, and makes maintaining those builds a
   large hurdle.

- Historically, the contents of comm-central used to be in the same
   repository as everything else, and the build system has never really
   coped with the separation. Arguably, this was in the CVS days.
   It's a testament to our build and release engineers that the cobbled
   together result has held up for as long as it has, but it's really
   not sustainable anymore.

- mozilla-central and comm-central are as tied as top-level
   mozilla-central and js/ are. Imagine what development would look like
   if js/ was in a separate repository.

- Relatedly, many codebase-wide changes (e.g. refactorings) or core API
   changes tend to break comm-central. While it can be argued that it
   shouldn't be platform engineers' burden to care about those, the fact
   is that even if they do care, the complexity of testing those changes
   on try or locally doesn't even slightly encourage them to actually do
   the work.

- TTBOMK, Thunderbird is Mozilla's second largest project in terms of
   number of users, behind Firefox and ahead of Firefox for Android and
   Firefox OS. Many of those users may legitimately want to contribute
   to Thunderbird, and the bar to entry is made much higher by the
   multi-repository setup and the extra complexity it entails. Mozilla is
   actively making the bar to entry for Firefox/Firefox for
   Android/Firefox OS contributions lower, at the expense of Thunderbird
   contributors. This is a sad state of affairs.

Why not, and counter-counter-arguments:

- It would increase mozilla-central significantly.
   Well, first, it's true, for some value of "significant".
   comm-central is about 131M of .hg/ data, while mozilla-central's is
   about 2309M as of writing. That's a 5.7% increase in the size of the
   repository. On the other hand, 131M is less than the size
   mozilla-central grew in the last 3 months.

- It sets a bad precedent, other Gecko-based projects might want to
   merge.
   - mobile/ set the precedent half a decade ago.
   - as mentioned above, historically, everything was in the same
 repository, and the split can be argued to be the oddity here
   - there are barely any Gecko-based projects left that are not in
 comm-central.

- It adds burden to developers, needing to support those projects
   merged from comm-central.
 Just look around in mozilla-central at all the optional things
 that are not visible on treeherder and break regularly. The
 situation wouldn't be different in that sense. But the people
 that do care about those projects will have a better experience
 supporting them.

Considering all the above, are there still objections to this
happening, and can we finally allow Joshua to go ahead with his merge
plan? (CCing bsmedberg, who, iirc, had Brendan's delegation to have the
final word on this)

Cheers,

Mike



--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

Re: Merging comm-central into mozilla-central

2015-10-27 Thread Joshua Cranmer

On 10/27/2015 2:50 PM, Boris Zbarsky wrote:

On 10/27/15 3:17 PM, Joshua Cranmer  wrote:

[1] An example from just this morning is the emasculation of
nsIDOMWindow. It's clear at this point that all of our binary code has
to be linked into libxul


Why can you not use nsPIDOMWindow?  If there are particular APIs it's 
missing that you need, please file bugs and we can put them there, 
just like we did for APIs that various parts of Gecko needed.


We did replace our uses with nsPIDOMWindow, but it's an example of an 
API that can be used external to libxul being replaced with one that 
can't be.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Spare Checkouts (Was: Merging comm-central into mozilla-central)

2015-10-26 Thread Joshua Cranmer

On 10/26/2015 4:16 PM, Bobby Holley wrote:

Question: Would we actually need sparse checkouts? What if c-c was 
just a branch in the repo with extra stuff, which periodically took 
merges from m-c? 


That makes bisecting to find m-c-induced failures harder, and it also
makes atomic commits (even for c-c contributors who want to make changes
to m-c that affect both, such as myself) still impossible.


Obviously, I'm biased, but I still think that even that change would not 
ease up the difficulty of attracting new contributors, nor would it 
really solve the apparent goal of making c-c code invisible to m-c 
developers, since you'd see it if you accidentally checked out the 
default branch when tip was c-c and not m-c.



--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Merging comm-central into mozilla-central

2015-10-26 Thread Joshua Cranmer

On 10/23/2015 8:25 PM, Mitchell Baker wrote:

Yes, this is a good topic, and I agree I'm a necessary party here.
Is there some way of getting a good sense of the work that we're 
talking about?


I'm not sure which work you're referring to here, but I will try to 
answer to the best of my abilities.


The work required to actually do the merge is nothing more than a script
I've already written and tested, plus a few days with a release engineer
to set up new configs. Ongoing maintenance would hopefully be minimal,
as the difference would just be using one config file for Thunderbird
and one for Firefox, differing only in things like which mozconfig to
use or which directory to upload to, but I don't have good enough
insight into our build configuration to know for sure. Ongoing build
system maintenance would probably be nil, excepting to-be-deprecated
constructs which are used only in c-c (which I do try to keep on top of
anyway, so that shouldn't be an issue).


As for the work required in the build system to maintain the current
state... There are a lot of hacks in the m-c build system to support
Thunderbird, particularly the --external-top-srcdir configure option,
the existence of multiple topsrcdirs, and checks for mozilla/ in several
places (yeah, don't add a new mozilla/ source directory to the build
system; things will break). The situation in release engineering is
worse, since at this point we're using completely different build
techniques, and it's hard or impossible for us to migrate to the
mozharness-based builds, since mozharness is in mozilla-central and
comm-central needs to do some build logic to figure out which version of
mozilla-central (particularly on the Try server) to build against. The
build system support for the latter case also requires retaining partial
duplication of some functionality in comm-central, resulting in a
veritable Frankenbuild scenario.



On 10/23/15 6:15 PM, Doug Turner wrote:
Thunderbird is under supported and potentially harmful (as Brian 
Smith pointed out on the mozilla-dev-security-policy back in Sept).  
Before merging c-c into m-c, I think we should have agreement on what 
kind of support the mozilla project and foundation is going to give 
to Thunderbird.


FWIW, when Brian Smith made his comments on mozilla.dev.security.policy,
I did try to find a bug detailing what he was talking about... and I
couldn't, which means that our security team is finding problems in
Thunderbird and not properly notifying any Thunderbird developers of
them.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Merging comm-central into mozilla-central

2015-10-23 Thread Joshua Cranmer

On 10/23/2015 12:22 PM, Benjamin Smedberg wrote:
I support going back to a giant monolithic repository if we can 
cleanly delineate the code for various projects.


We know that the searchability and readability of our code is a major 
barrier to some kinds of participation. We should continue to optimize 
ourselves around that workflow.


Does this proposal come with a plan to check out subsets of the code? 
In particular, I want to express the following as something inbetween 
"serious concerns" and "requirements":


 * The default view of dxr.mozilla.org should not include non-Firefox 
code

 * The default checkout should not include non-Firefox code. (Note:
   this is about the working tree: I don't think the space in the .hg
   directory matters enough to worry about).


It's a relatively easy matter to fix the first; the second is harder to 
do for all contributors. I've been told it's a coming feature, but I've 
been told this for a while.


I also wonder why you have a peculiar insistence that comm-central code
must not appear to any contributor, given the continued existence of
"stuff that Firefox doesn't care about" in mozilla-central, such as
support for tier-3 platforms (do we still have Qt code in the tree?) or
xulrunner. The mere presence of code in a codebase has proven to be
horribly insufficient to guarantee that people care about maintaining
it--history has time and time again shown that any code that doesn't
impact Treeherder results *WILL* get broken. (Easiest case in point: try
building without unified files.)


I'm sorry that it makes you sad, but Mozilla has explicitly decided to 
prioritize the bar to entry for Firefox development, and the speed of 
development of Firefox, at the expense of Thunderbird (and seamonkey). 
And as Firefox development moves faster toward things such as stopping 
supporting XUL addons, removing support for heavyweight themes, and 
even cutting XUL altogether, we should all expect the impedance 
mismatch to become worse. We are not only saying that you don't have 
to fix comm-central apps: we're also saying that we don't *want* core 
contributors to spend time on comm-central.


Except that to demand contributors don't care about comm-central would 
be to demand of your employees that they should be jerks to the wider 
open-source community. Merging comm-central into mozilla-central, with 
the exception of the time spent doing the actual merge work, would 
reduce the amount of time that core contributors would have to spend 
worrying about comm-central in the short and medium-terms for sure.


From my perspective, your insistence on the bar to entry for Firefox 
development (or, rather, explicitly deprioritizing Thunderbird and 
Seamonkey) seems like a weak ad-hoc justification. How can you justify 
letting core contributors take the time to review patches for systems 
that Mozilla will never officially support--mingw, OpenBSD, iOS--while 
saying that they shouldn't be taking the time to review patches for 
systems that Mozilla officially supports?


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Merging comm-central into mozilla-central

2015-10-23 Thread Joshua Cranmer

On 10/23/2015 3:43 AM, Gregory Szorc wrote:

IMO this is one of the two serious concerns. However, I /think/ it will
only add an incremental burden. If nothing else, perhaps this will force us
to better invest in tools that automatically handle refactorings.

The other serious concern is impact to automation. Are we going to close
trees for Thunderbird failures? Are we going to run Thunderbird jobs on
every push? (Do we do this now?)


The automation aspects are open to some debate as to the most reasonable 
way to implement them (caveated on what our infrastructure can support). 
Our buildbot infrastructure currently supports triggering builds if only 
certain files are changed--so changing SeaMonkey-only code, for example, 
doesn't trigger a Thunderbird build and vice versa.


I think it's reasonable to expect that Thunderbird failures do not close 
trees (and I've heard that one of the design goals of Treeherder is/was 
to make project-specific "tier 1" views that would make this easier). I 
also think it's reasonable to not build Thunderbird on every m-i checkin 
or every try push. The main model I've envisioned is to build on every 
m-{c,a,b,esr} checkin and only build on m-i if a comm-central file is 
changed (correspondingly not building FF if only TB-specific code is 
touched), with try handled via an extra -a "app" option, although other 
models (e.g., retaining c-c as a project branch like build-system or 
fx-team) are plausible. I'd like to invite release engineers and 
sheriffs to suggest easier models if they can, since they have much more 
experience here.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Merging comm-central into mozilla-central

2015-10-23 Thread Joshua Cranmer

On 10/23/2015 11:56 AM, Fabrice Desré wrote:

On Fri, 23 Oct 2015 11:18:32 +0200, Ms2ger wrote:
  

On the plus side, it could make it easier to share code between
thunderbird and the b2g email code.

Not really since the b2g email app is on github and doesn't share code
with thunderbird for now.


Actually, the b2g email app does reuse JSMime (or at least will be shortly).

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Merging comm-central into mozilla-central

2015-10-23 Thread Joshua Cranmer

On 10/23/2015 4:42 PM, Ehsan Akhgari wrote:
Let me rephrase.  Are Thunderbird and SeaMonkey committed towards long 
term maintenance of their code, should it be moved into 
mozilla-central?  That is the bare minimum necessary (but not 
sufficient) condition for having this conversation.


From what I have seen in the aforementioned forum, it seems like at 
least on the Thunderbird side things are pretty much unclear at this 
point, so the answer to the above question cannot be yes.


I don't know why the hell you think the answer to that question is
anything other than yes. Kent is a pessimist, and he sees the current
signs as Mozilla basically trying to throw us under the bus (and this
current thread suggests that there are a few here paying the bus driver
to run over the corpse a few times), so he is highly motivated to find
alternative long-term plans to *continue* the maintenance of
Thunderbird. Indeed, the very fact that we're angling for this change to
happen, despite the rather intense political fight that is ensuing, is
itself a loud voice of commitment to maintaining the code.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Merging comm-central into mozilla-central

2015-10-23 Thread Joshua Cranmer

On 10/23/2015 3:21 PM, Ehsan Akhgari wrote:

Except that to demand contributors don't care about comm-central would
be to demand of your employees that they should be jerks to the wider
open-source community.


As pointed out by others, this is completely untrue, and I personally 
think that framing the problem like this isn't the most helpful.

Even if the module owners didn't actively write the patches themselves,
they'd still have to deal with the inquiries and reviews from
comm-central authors, which, given the asymmetry of knowledge, is likely
to take as much if not more of their time in total. If they aren't to
spend time on comm-central, then that's saying nothing less than that
they shouldn't even talk to us and should ignore any patches we request
review on--in other words, saying that they should be jerks.


Please note that even if we move the code into m-c, we will continue 
to break it (unintentionally) so Thunderbird will still see 
regressions caused by "upstream" changes that they need to deal with.


While I'd like that this not be the case, I've long ago accepted that 
this will continue to be the case. Nothing about this proposal was ever 
intended to change this.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Merging comm-central into mozilla-central

2015-10-23 Thread Joshua Cranmer

On 10/23/2015 5:39 PM, Jonas Sicking wrote:

Can this be solved without migrating c-c into m-c?

Would it be possible to create a Thunderbird build system which simply
takes the output of a Firefox build, grabs the files that it needs, and
builds the additions that Thunderbird needs?

Not without reversing major decisions made in the past 7 years. Back in
2008, there was an idea of making a common platform runtime that Firefox
and TB could both share--you may know this runtime better as XULRunner,
and its fate has been to die a miserable death. The split-tree aspect
was annoying but tolerable until about the time certain XPCOM changes
required us to link all binary code into libxul (5 years ago?). The
moz.build rewrites have required increasingly major contortions to keep
working. Patches to make mozilla-central work better as a platform in
this regard have been rejected in the past.

The goal of putting seamonkey and thunderbird in separate trees has
always been to make firefox development easier, not harder. That
should include the build system.


And the point of this thread is that it hasn't, and I can't emphasize
this point enough. The current split is causing extreme pain on both
sides of the divide, and I fear that the people who object to this have
no conception of just how bad a problem this has been. It's a case of
grievously wounding those who make silent, heroic efforts against the
theoretical pricks of someone who may not even exist.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Alternative to Bonsai?

2015-09-15 Thread Joshua Cranmer

On 9/15/2015 10:53 AM, Boris Zbarsky wrote:

On 9/15/15 11:11 AM, Ben Hearsum wrote:

I'm pretty sure https://github.com/mozilla/gecko-dev has full history.


Though note that it doesn't have working blame for a lot of files in 
our source tree (and especially the ones you'd _want_ to get blame 
for, in my experience), so it's of pretty limited use if you're trying 
to do the sorts of things you used to be able to do with bonsai.


I believe gps is working on standing up a web front end for the CVS 
repo blame to replace bonsai...


FWIW, I did try to import something using what appears to be the
best-quality CVS exporter (cvs-fast-export), only to run into the
problem that we apparently landed some files on CVS branches that got
merged into mainline, which causes the generated revision graph to be
cyclic (which, as the author of said project confessed, was both known
and extremely hard to fix). Well, and I had to bump several "maximum
repository complexity" defines :-).


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Alternative to Bonsai?

2015-09-15 Thread Joshua Cranmer

On 9/15/2015 10:11 AM, Ben Hearsum wrote:

On 2015-09-15 11:08 AM, Philip Chee wrote:

The Bonsai server +infra is gone. Is there an alternative?

Is there a mercurial repository that has a unified history of
mozilla-central plus cvs history? Bonus if it also includes comm-central.

Phil


I'm pretty sure https://github.com/mozilla/gecko-dev has full history.
Eg: I see that https://github.com/mozilla/gecko-dev/blob/master/LICENSE
has an initial commit in 1998.


There is no git or mercurial repository that contains the full history 
of mozilla CVS. Slightly unsurprising, since the full history of mozilla 
CVS actually breaks most conversion tools.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of 'auto'

2015-08-02 Thread Joshua Cranmer

On 8/2/2015 10:21 AM, Boris Zbarsky wrote:

On 8/2/15 7:34 AM, Hubert Figuière wrote:

This is also part of why I'd suggest having an construction method that
will return a smart pointer - preventing the use of raw pointers.


Returning an already_AddRefed would prevent the use of raw pointers, 
but would leak if the caller used auto, right?


Returning an nsRefPtr would not prevent the use of raw pointers, 
allowing a caller to write:


I've discussed this several times, but if we added operator T*() && =
delete;, that would prevent the scenario you're talking about here.
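
A minimal sketch of the idea, using a hypothetical RefPtr-like class:
deleting the conversion on rvalues rejects taking a raw pointer from a
temporary while still allowing it from a named smart pointer.

template <typename T>
class PtrSketch {
 public:
  explicit PtrSketch(T* aPtr) : mPtr(aPtr) {}
  operator T*() const& { return mPtr; }  // OK from a named (lvalue) pointer
  operator T*() const&& = delete;        // no raw pointer from a temporary
 private:
  T* mPtr;
};

// PtrSketch<Foo> p = MakeFoo();  // fine
// Foo* raw = MakeFoo();          // fails to compile: deleted conversion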


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-14 Thread Joshua Cranmer

On 7/14/2015 1:39 AM, Thomas Zimmermann wrote:
When writing code, I consider it good style to not write into anything 
that starts with an 'a' prefix, except result arguments.


"You should never write into something with an 'a' prefix, except when
you should" -- that's the rule, if you simplify it. I've actually
avoided using the 'a' prefix for outparams precisely because it feels
more consistent to never assign to a variable with an 'a' prefix (and
also because it distinguishes between Foo *aInArray and Foo *outparam),
yet I did see someone upthread praising that it helped them see which
values were outparams.


Makes the code cleaner, more readable, and often gives it a clear 
structure. When reading the code later on, it's easy to spot the parts 
of a the code that directly depend on external parameters by looking 
for 'a' and 'm' prefixes.


This, I feel, is an aspiration which is not supported by any of the code 
I work on (which admittedly is heavily COMtaminated). Any intuition 
about a difference between aFoo and foo in terms of reliance on 
arguments is bound to be wrong.



Given that the aFoo rule is one of the least adhered-to portions of our 
style guide, and has been for as long as I've worked on Mozilla code; 
that the ancillary rule of don't assign to an argument has also been 
ignored on quite a few occasions; and that there hasn't been any real 
history of people complaining about the lack of adherence to this style 
guide point, I rather suspect that whatever people might say in how 
useful the 'a' prefix is, they get along quite fine without it.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Switch to Google C++ Style Wholesale (was Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++)

2015-07-14 Thread Joshua Cranmer

On 7/14/2015 10:23 AM, Benjamin Smedberg wrote:
Given that premise, we shouldn't just change aArgument; we should 
adopt the Google C++ style guide wholesale:


* names_with_underscores


The biggest problem here is that WebIDL and XPIDL codegen are heavily 
geared towards camelCase names, as the IDL convention is camelCase.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Largest chunks of code that are likely to be removable?

2015-06-30 Thread Joshua Cranmer

On 6/30/2015 6:01 AM, Axel Hecht wrote:

On 6/30/15 9:13 AM, Mike Hommey wrote:

On Mon, Jun 29, 2015 at 11:19:08PM -0700, Nicholas Nethercote wrote:

Hi,

I'm wondering what the largest chunks of code there are in the
codebase that are candidates for removal, i.e. probably with a bit of
work but not too much.

One that comes to mind is rdf/ (see
https://bugzilla.mozilla.org/show_bug.cgi?id=1176160#c5) though I
don't have a good understanding of how much stuff depends on it, even
having seen https://bugzilla.mozilla.org/show_bug.cgi?id=420506.


See the dependencies of bug 833098.

Mike



Note, that bug has the dependencies to move rdf/ from mozilla-central 
into comm-central. mail has many more dependencies on RDF, I think.


The mailnews catch-all bug for removing RDF is 
https://bugzilla.mozilla.org/show_bug.cgi?id=mail-killrdf. Off the top 
of my head, the biggest use is as a hashtable for folder URIs to folder 
objects (which could easily be replaced in a pinch), but it's also used 
in a minor way in the account creation dialog, as well as the backing 
store for RSS. The RDF templating widget feature, though, is used in 
about four places in Thunderbird and much, much more in SeaMonkey.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Voting in BMO

2015-06-11 Thread Joshua Cranmer

On 6/11/2015 3:57 PM, L. David Baron wrote:

For what it's worth, I'd pay more attention to votes if I could see
the graph of how vote counts changed over time.


I explicitly want to call attention to this. In my experience, it's 
not the absolute vote count that matters but rather the vote velocity.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Voting in BMO

2015-06-09 Thread Joshua Cranmer

On 6/9/2015 4:09 PM, Mark Côté wrote:

To that end, I'd like to consider the voting feature.  While it is
enabled on quite a few products, anecdotally I have heard
many times that it isn't actually useful, that is, votes aren't really
being used to prioritize features & fixes.  If your team uses voting,
I'd like to talk about your use case and see if, in general, it makes
sense to continue to support this feature.


I weakly object to removing the feature. I've used voting in the past to 
avoid CC spam and more recently to get different email notification 
levels. Actually, my biggest problem with using votes in queries is that 
I don't care about the actual number of votes so much as I care about 
the vote rate: A bug filed last month with 5 votes is something that 
requires prompt attention while a bug filed 15 years ago with 20 votes 
typically means this is a hard-to-implement feature for a rare case or 
some other similar rationale that makes it not worth including in the 
list of priorities.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of 'auto'

2015-06-02 Thread Joshua Cranmer

On 6/2/2015 2:58 PM, smaug wrote:

Hi all,


there was some discussion in #developers about use of 'auto' earlier 
today.

Some people seem to like it, and some, like I, don't.

The reasons why I try to avoid using it and usually ask to replace it 
with the actual type when I'm

reviewing a patch using it are:
- It makes the code harder to read
  * one needs to explicitly check what kind of type is assigned to the 
variable
to see how the variable is supposed to be used. Very important for 
example
when dealing with refcounted objects, and even more important when 
dealing with raw pointers.

- It makes the code possibly error prone if the type is later changed.
  * Say, you have a method nsRefPtr<Foo> Foo(); (I know, silly 
example, but you get the point)

Now auto foo = Foo(); makes sure foo is kept alive.
But then someone decides to change the return value to Foo*.
Everything still compiles just fine, but use of foo becomes risky
and may lead to UAF.


There are a few use cases for auto, in rough order that their utility is 
most evident:

* auto x = []() {}; Lambdas don't even have a name that you can write down.
* STL iterator names, e.g., auto it = map.find(stuff); The type name is 
simultaneously obvious and obnoxious.
* auto x = static_cast<Foo*>(bar); You typed the name once, why should 
you have to type it again.

* for (auto& x : vec).

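For concreteness, a minimal sketch of those four cases (std::map and the 
hypothetical Foo stand in for whatever types you'd meet in real code):

    #include <map>
    #include <vector>

    struct Foo { int value; };

    void Demo(std::map<int, Foo*>& map, std::vector<Foo>& vec, void* bar) {
      auto printIt = [](const Foo& f) { (void)f.value; }; // lambda: unnameable type
      auto it = map.find(42);          // iterator: name is obvious and obnoxious
      auto x = static_cast<Foo*>(bar); // the type was already spelled out once
      for (auto& f : vec) {            // range-based for
        printIt(f);
      }
      (void)it; (void)x;
    }
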
The case which I am personally very much on the fence is integral types. 
On the one hand, sometimes the type just doesn't matter and you want to 
make sure that you have the same type. On the other hand, I have been 
very burned before by getting the signedness wrong and having code blow up.


I think that the first three cases are cases where auto not only should 
be permitted but be required by the style guide; otherwise, I think auto 
should be permitted (to reviewer/module owner's taste) but generally 
strongly advised against. In particular, use of auto in lieu of things 
like T* or nsRefPtr<T> should be forbidden except in special 
circumstances (automatically-generated code being an example), where its 
use would be carefully checked for correctness.


Just my opinion :-)

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Updated mozilla-central code coverage

2015-05-26 Thread Joshua Cranmer

On 5/26/2015 10:20 PM, Shih-Chiang Chien wrote:

Thanks for the explanation. IIRC content process is closed by SIGKILL in
Gecko. Looks like we'll have to tweak the timing.


A SIGKILL would definitely not trigger the information to be dumped.

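(If you want the data anyway, the usual workaround is to dump the 
counters from a handler for a catchable signal and send that signal 
instead of SIGKILL. A hedged sketch, assuming the GCC-era gcov runtime 
that exports __gcov_flush():)

    #include <csignal>
    #include <cstdlib>

    extern "C" void __gcov_flush(); // provided by the gcov runtime (pre-GCC-11 name)

    static void DumpCoverageAndExit(int) {
      __gcov_flush();   // write out the .gcda counters before dying
      std::_Exit(0);
    }

    int main() {
      std::signal(SIGTERM, DumpCoverageAndExit); // SIGKILL cannot be caught
      // ... run the content process main loop ...
    }
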
--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Updated mozilla-central code coverage

2015-05-26 Thread Joshua Cranmer
I've posted updated code coverage information for mozilla-central to 
https://www.tjhsst.edu/~jcranmer/m-ccov/. This data is accurate as of 
yesterday. For those of you who are unaware, I periodically run these 
code coverage statistics by use of the try server and instrumented runs. 
This has been made easier over the years by standardization of some of 
the hacks, such that you can now push to linux64-cc and get most of the 
same information.


Notable changes I've made since the last upload:
1. I dropped debug builds, so all the information comes from Linux opt, 
both 32 and 64-bit.
2. Test names now derive from builder names directly, removing the need 
for a very long hardcoded list of "M-bc means mochitest-browser-chrome".
2a. This means that what was once mochitest-1, mochitest-2, etc. is now 
condensed into mochitest. Mochitest-e10s-browser-chrome, etc., remain 
split out.
3. Minor changes in the UI frontend to help deal with the fact that my 
hosting webserver changed to forcing https.
4. I can now generate the ultimate combined .info file without needing 
manual post-processing, for the first time ever.


The marionette and web-platform tests remain unaccounted for in coverage 
(Mn, Mn-e10s, Wr, W-* in treeherder lingo), and the new Ld 
(luciddream?) appears to be broken as well.


On the possibility of expanding code coverage information to different 
platforms, languages, and tests:
1. OS X still has a link error and/or fail-to-run issue. I suspect a 
newer clang would help, but I lack a local OS X instance with which to 
do any detailed tests. I've never tested the ability of my scripts to 
adequately collect clang code coverage data, and I suspect they would 
themselves need some modification to do so.
2. Android builds work and can report back code coverage data, but so 
intermittently that I didn't bother to try including them. In my try run 
that I used to generate these results, mochitest-2 reported back data 
but mochitest-6 did not, yet both testsuites reported back success. The 
reason for this is not clear, so any help people could give in debugging 
issues would be most appreciated.
3. B2G desktop builds and Mulet builds on Linux appeared to work. 
However, the builds didn't appear to upload the gcno package for unknown 
reasons, and taskcluster uses such different mechanisms to upload the 
files that my scripts are of no use in collecting the gcda packages.
4. Windows is... special when it comes to code coverage. This is the 
last platform I would look at tackling.
5. JS code coverage is of course a hole I'd like to see rectified, but I 
don't have the time to invest in solving it myself.

6. Are we actually using Rust yet on our tryserver builds?
7. Android Java coverage is deferred until after I can get reliable 
Android code coverage in the first place.
8. I'd have to look into modifying mozharness to run code coverage on 
marionette et al builds. It shouldn't be hard, but it is annoying to 
have to hook into so many places to insert code coverage.

9. Ditto for Android xpcshell and cppunit tests.

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Updated mozilla-central code coverage

2015-05-26 Thread Joshua Cranmer

On 5/26/2015 3:21 PM, kgu...@mozilla.com wrote:

Does this coverage info also include gtests? From a quick glance it looks like 
not.


The code coverage includes all tests run on Linux opt or Linux-64 opt 
excluding those run under check, marionette, web-platform tests, or 
luciddream. If gtests are being run under Linux opt cppunittests, then 
they should be included.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Generate Linux64 gcov C++ code coverage data on try

2015-05-21 Thread Joshua Cranmer

On 5/21/2015 11:38 AM, Andrew Halberstadt wrote:

If someone figures out how to generate code coverage data on
Windows/OSX, I'd be happy to take care of the scheduling pieces. Beyond
that, I personally don't have any plans to work further on code
coverage. If you want to take a shot at generating reports yourself, one
possibility is lcov, a graphical frontend to gcov. I believe sledru
and/or jcranmer may also have some scripts for doing it.


We run tests on a dozen platforms and you consider Windows/OS X/Linux 64 
sufficient? :-)


Actually, as far as I know:
1. Linux-32 works exactly the same as Linux-64 (export 
CFLAGS/CXXFLAGS/LDFLAGS/HOST_LDFLAGS='--coverage' in the mozconfig)
2. Android should work the same, but it needs modification to harnesses 
to extract the code-coverage to/from the device.
3. OS X builds use clang, which should interoperate with gcc's method, 
but it does cause either a link error (in opt) or a runtime loader issue 
(in debug). I think a newer version of clang would fix the issues, but 
I've been unable to figure out how to test a newer version of clang on 
the try servers.


I'm the one who has the scripts to generate the data, and I will 
probably look into modifying my setup to accommodate linux64-cc.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: New C++11 features made available by dropping support for gcc-4.6 in Gecko 38

2015-05-11 Thread Joshua Cranmer

On 5/11/2015 2:29 PM, Ehsan Akhgari wrote:

On 2015-04-30 7:57 AM, Xidorn Quan wrote:
On Thu, Apr 30, 2015 at 10:14 PM, Robert O'Callahan 
rob...@ocallahan.org

wrote:


On Sat, Mar 21, 2015 at 4:14 AM, bo...@mozilla.com wrote:


* member initializers



Should we have any rules around these, or should we use them
indiscriminately? I wonder particularly about initializers which are
complicated expressions.



I guess we probably should forbid using any expression with side 
effect for

member initializers.


Hmm, why should we restrict them more than what can appear in the 
constructor initializer list?


I believe MSVC does have some problems with complex member initializers, 
but I don't recall details offhand.

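(For reference, a small sketch of the feature under discussion; Widget 
and SomeGlobalCounter are made-up names:)

    #include <vector>

    int SomeGlobalCounter(); // hypothetical function with side effects

    struct Widget {
      int mCount = 0;                   // plain member initializer
      std::vector<int> mData{1, 2, 3};  // initializer-list form also works
      // int mId = SomeGlobalCounter(); // the side-effecting kind questioned above
      Widget() = default;               // initializers apply to every constructor
      explicit Widget(int c) : mCount(c) {} // ...unless the init list overrides them
    };
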

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A question about do_QueryInterface()

2015-04-30 Thread Joshua Cranmer

On 4/30/2015 1:25 PM, ISHIKAWA, Chiaki wrote:

*   787   nsCOMPtr<nsIInputStream> inboxInputStream =
do_QueryInterface(m_outFileStream);
 788   rv = MsgReopenFileStream(m_tmpDownloadFile, inboxInputStream);

Before, as in the current release, m_outFileStream is not buffered.
And the code on line 787 produces non-null inboxInputStream.

However, once m_outFileStream is turned into a buffered output stream
using, say,

   m_outFileStream = NS_BufferOutputStream(m_outFileStream, 64 * 1024 );

the code on line 787 produces nullptr.

Is this to be expected?


In short, yes. What happens is that the original m_outFileStream happens 
to be of type nsFileStreams (or something like that), which inherits 
from both nsIInputStream and nsIOutputStream. When you wrap that in a 
buffered output stream, the resulting type of m_outFileStream is 
nsBufferedOutputStream, which does not inherit nsIInputStream; therefore 
the cast to nsIInputStream fails.

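In code, the situation looks roughly like this (a sketch reusing the 
stream types named above; error handling omitted):

    // m_outFileStream starts out backed by nsFileStreams, which implements
    // both nsIInputStream and nsIOutputStream:
    nsCOMPtr<nsIInputStream> in = do_QueryInterface(m_outFileStream); // non-null

    // The buffered wrapper implements only nsIOutputStream:
    m_outFileStream = NS_BufferOutputStream(m_outFileStream, 64 * 1024);
    nsCOMPtr<nsIInputStream> in2 = do_QueryInterface(m_outFileStream); // nullptr
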

Up until now, I thought of do_QueryInterface() as mere sugar-coating for
certain type-mutation or something. But I now know I am wrong.


do_QueryInterface is the equivalent of a type-checked downcast, e.g. 
(ClassName)foo in Java.  (Regular C++ downcasts are not dynamically 
type-checked).


I read a page about do_QueryInterface() but it does not
explain the principle very much.

Is the reason of failure something like as follows.
I am using a very general class hierarchy.


      A               base class
      |
  +---+---+
  |       |
  B       C           B and C are derived from base class A
          |
          D           D is further derived from Class C.

Let's say Class B and C are derived from Class A.
Class D is further derived from Class C.
Let us assume there are corresponding XPCOM class/object A', B', C', D'.

By using do_QueryInterface() on objects,
 we can follow the path of the direct derivation relation
  B' = do_QueryInterface (A') (or is it the other way round?)

 and maybe between B' and C' (? Not sure about this.)

 but we can NOT follow the direction of
   B' = do_QueryInterface (D')
 That is
X = do_QueryInterface(Y) is possible only when X is the direct or
indirect  descendant of Y?


No, you are incorrect. The issue is the dynamic type of the object (if 
you have A *x = new B, the static type of x is A whereas the dynamic 
type is B). In the pre-modified code, the dynamic type of 
m_outFileStream supported the interface in question, but your 
modification changed the dynamic type to one that did not support the 
interface.

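The plain C++ analogue of that checked downcast, for illustration (A, B, 
and C are just the hypothetical classes from the diagram above):

    struct A { virtual ~A() {} };
    struct B : A {};
    struct C : A {};

    A* x = new B;               // static type: A*; dynamic type: B
    B* b = dynamic_cast<B*>(x); // non-null: the dynamic type really is a B
    C* c = dynamic_cast<C*>(x); // null: the checked downcast fails, like a failed QI
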

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Joshua Cranmer

On 4/14/2015 4:59 PM, northrupthebandg...@gmail.com wrote:
The article assumes that when folks connect to something via SSH and 
something changes - causing MITM-attack warnings and a refusal to 
connect - folks default to just removing the existing entry in 
~/.ssh/known_hosts without actually questioning anything.  This 
conveniently ignores the fact that - when people do this - it's 
because they already know there's been a change (usually due to a 
server replacement); most folks (that I've encountered at least) 
*will* stop and think before editing their known_hosts if it's an 
unexpected change.
I've had an offending key at least 5 times. Only once did I seriously 
think to consider what specifically had changed to cause the ssh key to 
change. The other times, I assumed there was a good reason and deleted it.


This illustrates a very, very, very important fact about UX: the more 
often people see a dialog, the more routine it becomes to deal with 
it--you stop considering whether or not it applies, because it's always 
applied and it's just yet another step you have to go through to do it.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-13 Thread Joshua Cranmer

On 4/13/2015 3:29 PM, stu...@testtrack4.com wrote:

HTTP should remain optional and fully-functional, for the purposes of 
prototyping and diagnostics. I shouldn't need to set up a TLS layer to access a 
development server running on my local machine, or to debug which point before 
hitting the TLS layer is corrupting requests.


If you actually go to read the details of the proposal rather than 
relying only on the headline, you'd find that there is an intent to 
actually let you continue to use http for, e.g., localhost. The exact 
boundary between secure HTTP and insecure HTTP is being actively 
discussed in other forums.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: mozilla::Pair is now a little more flexible

2015-03-15 Thread Joshua Cranmer

On 3/15/2015 2:33 PM, Seth Fowler wrote:
I don’t really care what we do - keep in mind, I had nothing to do with 
introducing mozilla::Pair - but I think that we should recommend the use 
of one thing, either std::pair or mozilla::Pair. If we choose to prefer 
std::pair, we should probably remove mozilla::Pair.
The reason why we have mozilla::Pair is that we needed a pair type that 
was sizeof(T1) if T2 was empty (for mozilla::UniquePtr). I suggested 
that such a utility might be more widely valuable and thus that it 
should be split out as a separate mozilla:: type rather than a 
mozillla::detail:: type. std::pair is required to have the two elements 
be listed as members by the specification, although I think std::tuple 
may similarly have the empty-types-take-no-space optimization 
(mozilla::Pair was added before MSVC 2013 requirement and thus before 
variadic templates).


In general, std::pair should be preferred over mozilla::Pair unless you 
need the empty type optimization.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to not fix: Building with gcc-4.6 for Fx38+

2015-03-11 Thread Joshua Cranmer

On 3/11/2015 1:13 PM, Gregory Szorc wrote:

So I guess we trend towards supporting 2 build modes: Mozilla's official
build environment via containers/chroots (preferred) or host native (for
the people who insist on using it). Only host native exists today and it
is a PITA.


Using docker containers for most new contributor builds is likely to be 
a poorer experience for contributors than not doing so--most 
contributors are likely to want to run the built product on their local, 
host side of things rather than within the container, and if the runtime 
dependencies mismatch, the end result will be very painful.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla Engineering Update 38.3

2015-03-03 Thread Joshua Cranmer

On 3/3/2015 2:46 AM, Chris Peterson wrote:
IndexedDB performance work will also land soon: bug 866846 will enable 
SQLite’s WAL journal and bug 1112702 will change transactions to be 
non-durable. These SQLite options favor performance over durability 
like Chrome and IE do. They do not increase the risk of database 
corruption.


Is/will there be options to add the durability back in?

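(For reference, a hedged sketch of the SQLite knobs in question, written 
against the raw sqlite3 C API rather than Mozilla's storage wrapper; 
whether Gecko will expose such a choice is exactly what I'm asking:)

    #include <sqlite3.h>

    int Configure(sqlite3* db, bool durable) {
      // WAL journaling (bug 866846's territory):
      int rc = sqlite3_exec(db, "PRAGMA journal_mode=WAL;",
                            nullptr, nullptr, nullptr);
      if (rc != SQLITE_OK) return rc;
      // synchronous=FULL keeps durability; NORMAL trades it away for speed:
      return sqlite3_exec(db, durable ? "PRAGMA synchronous=FULL;"
                                      : "PRAGMA synchronous=NORMAL;",
                          nullptr, nullptr, nullptr);
    }
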
--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: JavaScript code coverage

2015-01-20 Thread Joshua Cranmer

On 1/20/2015 4:37 AM, Nicolas B. Pierron wrote:
This general design is a pragmatic approach to help people implement 
different variant of taint-analysis without having to implement taint 
analysis in SpiderMonkey.  Identically for code-coverage, how much 
time do you want to spend at doing code-coverage vs. running code?  
This is part of the implementation design of the analysis.


Seeing that the code coverage runs on try already risk timing out (in 
--coverage -g -Owhateverweuse builds), the instrumentation costs need to 
be pretty low. Post-processing is already necessary to capture scripts 
never run, so as long as stuff is output in a recoverable manner, that's 
sufficient.

Is there any prospect for this sort of stuff getting done this year?



AFAIK, no.

Maybe some potential users will show up and mention that they are 
willing to get their hand dirty if we were to implement an Analysis 
API as discussed back in June.  In which case we might be able to 
raise again the question about scheduling this work.


That's a real shame. I've been without JS code coverage since 2012 or 
2013 when the PC counts was removed, and it's disappointing that Mozilla 
is encouraging browser development in JS but failing to provide 
effective tooling to support that development.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


JavaScript code coverage

2015-01-19 Thread Joshua Cranmer
Hi, you may know me as the guy who produces the code coverage results 
occasionally: http://www.tjhsst.edu/~jcranmer/m-ccov/.


One persistent failure of producing code coverage is the inability to 
record code coverage in half of our codebase, the half that is written 
in JavaScript. There are several existing JS code coverage solutions, 
but all of them would fail to work for a few broad reasons:
1. Mozilla aggressively uses ES6 (and some non-standard) features in its 
codebase, and code coverage infrastructure at best lags. Of the half 
dozen tools I looked at, only 2 mentioned any support for ES6 (and one 
using an ES6-to-ES5 translator, which is unsettling).
2. Importing the coverage database, updating it, and reporting it is 
fraught with peril, because we have at least 6 different scopes which 
require at least 3 different mechanisms: JS modules, chrome windows, 
content windows, workers, JS shell, xpcshell. Some of our code will be 
run in multiple scopes 
(https://dxr.mozilla.org/comm-central/source/mozilla/toolkit/components/osfile/osfile.jsm 
can run in both JS modules and chrome workers).
3. Getting a static list of all of our JS code is extremely non-trivial, 
since JS can also crop up in non-JS files like XBL files or HTML files.
4. Our scripts sometimes share the same global, sometimes they don't. In 
contrast, the target environments of web browsers and node.js always use 
one or the other.


If you can't tell from my list of points, I'm highly skeptical that any 
instrumentation-based approach will work. However, since we use the same 
JS engine for all of our code, if code coverage support is added to 
SpiderMonkey, than it is a relatively easy step to add support for JS 
code coverage to my periodic code coverage runs. Getting good code 
coverage (line and branch coverage) ultimately requires fine-grained 
instrumentation (ideally bytecode-level) not presented by the current 
Debugger.


I've seen people bring up supporting this sort of stuff in the past, 
which usually tends to generate a flurry of "+1, this would be 
wonderful!" but ultimately everything peters out before anything gets 
done. Some of this may be due to trying to create an overly-general 
design that "solves all the problems™".


Is there any prospect for this sort of stuff getting done this year?

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Jemalloc 3 is now on by default

2015-01-12 Thread Joshua Cranmer

On 1/12/2015 9:44 PM, Mike Hommey wrote:

Aaand as usual with such changes, it didn't stick.


Does that mean I should assume that whenever someone makes this sort of 
announcement, they really mean "this will take effect tomorrow or the 
day after, when I've figured out what went wrong"? :-)


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Support for building with Windows SDK 8.1 removed from Gecko 37

2015-01-09 Thread Joshua Cranmer

On 1/9/2015 1:17 PM, Mike Hoye wrote:
I'm curious what the real disk space minimums are for Linux? Those 
numbers haven't been updated in a while, looks like.




The most disk-heavy build configuration uses 7-10GB of disk space for 
srcdir + objdir; the least probably 2-3GB.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


PSA: Support for building with gcc 4.6 has been removed

2015-01-08 Thread Joshua Cranmer

On 1/6/2015 3:33 PM, Ehsan Akhgari wrote:

I just landed bug  to remove support for building with Visual C++ 2012 as
per the previous dev-platform thread.


Trevor Saunders has just landed the patch to de-support gcc 4.4 and 4.5 
on mozilla-inbound, and it should move to mozilla-central shortly. In 
addition to the C++11 features enabled by our very recent landing of 
min-MSVC 2013 (reproduced afterwards), this admits the possibility to 
use the following features:

* Lambdas
* nullptr/std::nullptr_t
* Forward declaration of enums
* Explicit operator conversion (most notably, explicit operator bool())
* Raw string literals
* Range-based for loops
* Local structs as template parameters

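(A quick hedged sketch exercising several of the newly available 
features; everything here is generic C++11, nothing Mozilla-specific:)

    #include <cstdio>
    #include <vector>

    enum class Color : char;                // forward-declared enum
    enum class Color : char { Red, Green };

    struct Handle {
      void* mPtr;
      Handle() : mPtr(nullptr) {}           // nullptr
      explicit operator bool() const { return mPtr != nullptr; }
    };

    int main() {
      std::vector<int> vec{1, 2, 3};
      int sum = 0;
      for (auto& v : vec) sum += v;              // range-based for
      auto twice = [](int x) { return 2 * x; };  // lambda
      const char* s = R"(raw "string" literal)"; // raw string literal
      std::printf("%d %d %s\n", sum, twice(sum), s);
      if (!Handle()) std::printf("empty handle\n"); // explicit operator bool
      return 0;
    }
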
I have not yet updated the "Using C++ in Mozilla code" page, since I 
need to ascertain what the bugs are in gcc 4.6 and gcc 4.7 that may 
preclude certain uses of these features.


Or, if it tickles your fancy, it may be simpler at this point to list 
the C++11 features not yet usable:

- ref qualifiers on methods
- member initializers
- templated aliasing
- C++11 attributes (although many of the important ones are already in 
some macro somewhere)

- constexpr
- alignof/alignas
- delegated constructors
- inherited constructors
- char16_t [but we polyfill this anyways]
- user-defined literals
- unrestricted unions
- override/final (already polyfilled)
- thread-local storage (ditto)

As a reminder, the ability to use C++11 features doesn't necessarily 
mean that doing so is kosher per our style guidelines.

This will make the following C++11 features available to use in Mozilla
code:

* variadic templates
* initializer lists
* =delete (we can probably remove MOZ_DELETE now)
* =default
* function template default arguments


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Support for building with Windows SDK 8.1 removed from Gecko 37

2015-01-08 Thread Joshua Cranmer

On 1/8/2015 10:05 AM, Mike Hoye wrote:
I'm revisiting our docs in light of this to figure out what our real 
minimum hardware/ram/disk requirements are. The temptation to start 
adding If you try to build Firefox with 2 gigs of RAM, you're gonna 
have a bad time memes to the docs is severe.


One of my machines is a 2-core, 2GB machine. I've built Firefox on it 
somewhat recently (~months), but that is without optimization or debug 
symbols. IIRC, it failed to build in debug + debug symbols, but that was 
last tried before we supported debug symbol fission or unified file 
compilation. Note that "buildable" doesn't mean "convenient"--if you do 
something else while building, such as have a web browser open, you will 
meet the OOM killer. This suggests that 2GB of RAM is the absolute 
minimum requirement for building.


In practice, if you're building infrequently (say, provisioning a VM for 
Windows or Linux for occasional builds, not primary development), then 4 
cores and 4GB of RAM appear to suffice (I've used 4GB for a Linux VM on 
my laptop and 8GB for a Windows VM on my desktop). For primary 
development that involves frequent invocation of mach build binaries, 
8GB of RAM or more would be recommended.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Support for Visual C++ 2010 has been dropped

2014-12-17 Thread Joshua Cranmer

On 12/17/2014 10:10 AM, Ehsan Akhgari wrote:

Note that this is not a change in our in-production compiler for Windows
(MSVC 2013), it just disables building with MSVC 2010 locally.  MSVC 2012
and 2013 can still be used to build Firefox on Windows.


Is the plan/intent to keep MSVC 2012 working or not? With 2013, we do 
get a few cool features (variadic templates, initializer lists among them).


The context is curating this page: 
https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The worst piece of Mozilla code

2014-10-16 Thread Joshua Cranmer

On 10/16/2014 7:32 AM, Nicholas Nethercote wrote:

Hi,

I was wondering what people think is the worst piece of code in the
entire Mozilla codebase. I'll leave the exact meanings of "worst" and
"piece of code" unspecified...


http://dxr.mozilla.org/comm-central/source/mailnews/mime/src/mimedrft.cpp. 
C code masquerading as C++ that uses XPCOM classes directly. Manual 
memory allocation up the wazoo. Cleans temporary files on error but not 
success.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Moratorium on new XUL features

2014-10-14 Thread Joshua Cranmer

On 10/14/2014 5:12 PM, Robert O'Callahan wrote:

On Tue, Oct 14, 2014 at 4:56 PM, Joshua Cranmer  pidgeo...@gmail.com
wrote:


 From another point of view: Mozilla, for over a decade, provided a
relatively featureful toolkit for building UIs known as XUL. If the
argument is that we should be using HTML instead of XUL, then wouldn't it
make sense to provide an at-least-as-featureful HTML toolkit to make
migration easy and relatively painless?


I already said I'm not proposing a wholesale migration here. Please stop
misconstruing me.


Nor am I proposing a wholesale migration.

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Moratorium on new XUL features

2014-10-13 Thread Joshua Cranmer

On 10/13/2014 10:10 PM, Andrew Sutherland wrote:

On 10/13/2014 07:06 PM, Joshua Cranmer  wrote:

I nominally agree with this sentiment, but there are a few caveats:
1. nsITreeView and xul:tree exist and are usable in Mozilla code 
today. No HTML-based alternative to these is so easily usable.


There are many lazy-rendering infinite tree/table/infinite list 
implementations out there:


I found far fewer when searching, but I suppose I'm just bad at coming 
up with search terms.

e) already existed and were generally maintained in toolkit/

This is a weird, NIH-ish requirement.  Why should Mozilla create and 
maintain an HTML tree widget when there are so many open source 
implementations that already exist?


I suppose the requirement I really meant was "does not require a massive 
toolkit to work properly". Taken to the extreme, we'd end up with a half 
a dozen large JS toolkits being installed when we install Firefox--see 
the current thread about Firefox installer size pondering. Also, I feel 
that a Mozilla-maintained (or at least Mozilla-blessed) toolkit is far 
more likely to solve issues that aren't normally in the thoughts of web 
developers, e.g., accessibility.


From another point of view: Mozilla, for over a decade, provided a 
relatively featureful toolkit for building UIs known as XUL. If the 
argument is that we should be using HTML instead of XUL, then wouldn't 
it make sense to provide an at-least-as-featureful HTML toolkit to make 
migration easy and relatively painless?


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using c++11 right angle bracket in template code?

2014-10-01 Thread Joshua Cranmer

On 10/1/2014 4:23 AM, Nicholas Nethercote wrote:

On Wed, Oct 1, 2014 at 1:08 AM, Cameron McCormack c...@mcc.id.au wrote:

On 01/10/14 17:57, Kan-Ru Chen (陳侃如) wrote:

It seems all the compilers we use support the C++11 >> in templates,
could we start using it in new code?

Yes we have some uses of that already.  It's also mentioned in
https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code.

Note the large, red warning at the top of that page: "This page is a
draft for expository and exploratory purposes. Do not trust the
information listed here."

I don't know why that page exists with such an authoritative-looking URL.


The warning was in part because I never got confirmation on our minimum 
supported versions, particularly minimum clang version, and in part 
because the C++11 library portion was never well organized.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Documenting uses of Github at Mozilla

2014-09-30 Thread Joshua Cranmer

On 9/30/2014 4:44 PM, Eric Shepherd wrote:
Last week the idea came up that it would be helpful to create a list 
on MDN of the Mozilla projects that are on GitHub, with links to those 
sites. I have two questions:


1. Do we already have such a list anywhere?

2. If you have a project (or projects) on Github, please let me know! 
I'd like to make sure people know where it is if they're looking for it.




http://github.com/mozilla-comm/ contains a few projects maintained by 
Gaia Productivity and Thunderbird/Lightning folks.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Running mozharness locally and be able to reach private files

2014-09-11 Thread Joshua Cranmer

On 9/11/2014 7:58 AM, Armen Zambrano G. wrote:

What would people want to see in the long term to make mozharness easier
for you?


A Dockerfile (or a container image) that produces a Ubuntu64 test slave.

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: WebCrypto API

2014-09-09 Thread Joshua Cranmer

On 9/9/2014 5:38 AM, Tim Taubert wrote:

helpcrypto helpcrypto wrote:

I'll love to know if Mozilla/Firefox is going to provide something (even
out-of-standard) to make possible using PKCS#11/NSS with Webcrypto.

The WebCrypto API basically exposes PKCS#11/NSS functionality with a DOM
API.


The current specification only provides encryption/decryption 
primitives, to my knowledge. Support for hardware tokens (getKeysByName, 
I think it was?) was pushed into a later draft, and I think it's this 
feature that the poster was asking for.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: ./mach build subdirectory doesn't work reliably any longer

2014-09-03 Thread Joshua Cranmer

On 9/3/2014 11:45 PM, Boris Zbarsky wrote:
I mean, if I add a new virtual function to nsINode and then only 
compile the subset of files that call the new function, I _know_ the 
resulting build if I linked libxul is busted: different parts of it 
think the vtable looks different.  But this is still a useful thing to 
be able to do as I iterate on my API addition!


It sounds to me like what you really want is support for a red squiggly 
line in your IDE, or the nearest equivalent to it in your development 
environment. This effectively requires being able to say, for any source 
file, the exact command and arguments needed to make it compile, plus 
appropriate hookups to your IDE. Being able to have moz.build spit this 
out has been an aspiration of mine for some time, and I believe we are 
capable of making this possible by the end of the year.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Switching to Visual Studio 2013

2014-08-26 Thread Joshua Cranmer

On 8/26/2014 10:09 AM, Ted Mielczarek wrote:

On 8/26/2014 11:03 AM, Ehsan Akhgari wrote:

I would like us to update the minimum supported MSVC version to 2012
as soon as possible.  That will give us access to the following C++
features which are all supported on gcc 4.4 (aka our Vintage Compiler)
and MSVC starting from 2012:

* Variadic templates

This is 2013, actually...

* Strongly typed enums
* Initializer lists

... as is this.

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Switching to Visual Studio 2013

2014-08-26 Thread Joshua Cranmer

On 8/26/2014 10:37 AM, Ehsan Akhgari wrote:

On 2014-08-26, 11:29 AM, Joshua Cranmer  wrote:

On 8/26/2014 10:09 AM, Ted Mielczarek wrote:

On 8/26/2014 11:03 AM, Ehsan Akhgari wrote:

I would like us to update the minimum supported MSVC version to 2012
as soon as possible.  That will give us access to the following C++
features which are all supported on gcc 4.4 (aka our Vintage Compiler)
and MSVC starting from 2012:

* Variadic templates

This is 2013, actually...

* Strongly typed enums
* Initializer lists

... as is this.


Really?  I was quoting from 
http://wiki.apache.org/stdcxx/C++0xCompilerSupport...




I just tried using variadic templates in my 2012 (non-CTP) install a 
week or so ago and it blew up in my face. The lines have "(nov'12)", which 
indicates the November CTP, not the standard install.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Switching to Visual Studio 2013

2014-08-26 Thread Joshua Cranmer

On 8/26/2014 6:20 PM, Neil wrote:

Ehsan Akhgari wrote:

I was talking about MSVC2012 + the November CTP.  We absolutely don't 
want to support older versions of 2012 (or 2013 for that matter.)


What does that mean and why isn't it mentioned on MDN?



The MSVC development team announced in 2012 that they would be working 
on bringing new C++11 features to Visual Studio faster via out-of-band 
Community Technology Previews (CTPs for short). I hadn't bothered to 
list the CTP as a minimum requirement because:
1. My understanding is that the CTP is basically intended to be 
alpha-quality releases.
2. The official guides on CTP explicitly advise against relying on them 
for production purposes.
3. They are not as easy to get installed as VS (they don't autoinstall 
like service packs, e.g.)

4. I thought it would make the page overly complicated.
5. When MSVS announced more frequent releases, I assumed that the need 
to worry about CTPs was minimal.


FWIW, I'm not entirely sure that a minimum dependency specifically on a 
CTP is a terribly good idea.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Running mochitests from a copy of the objdir?

2014-08-20 Thread Joshua Cranmer

On 8/20/2014 12:22 PM, L. David Baron wrote:

(I estimated that it was going to be faster to get that working than
to try to figure out how to use the packaged tests, since it was
possible to reverse-engineer from mochitest run inside mach, though
if there had been instructions on how to use packaged tests that
somebody had actually used before I'd likely have gone the other
way.)


Building packaged tests is easy (make package for the installer, make 
package-tests for the tests); running them is a little harder since you 
have to build the python runtests.py command line yourself. Or you can 
open up a tbpl log and grab the exact command line there. Certainly far 
easier than trying to work out how to run mozharness on a local system...


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Experiment with running debug tests less often on mozilla-inbound the week of August 25

2014-08-19 Thread Joshua Cranmer

On 8/19/2014 5:25 PM, Ehsan Akhgari wrote:
Yep, the debug tests indeed take more time, mostly because they run 
more checks.


Actually, the bigger cause of the slowdown is probably that debug tests 
don't have any optimizations, not more checks. An atomic increment on a 
debug build invokes something like a hundred instructions (including 
several call instructions) whereas the equivalent operation on an opt 
build is just one.

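(As a concrete illustration of the kind of operation in question:)

    #include <atomic>

    std::atomic<int> gRefCnt{0};

    void AddRef() {
      // Opt build: typically compiles down to a single "lock add" instruction.
      // Debug (-O0) build: several nested calls into the std::atomic machinery
      // before that same instruction is ever reached.
      gRefCnt.fetch_add(1, std::memory_order_relaxed);
    }
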

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Getting rid of already_AddRefed?

2014-08-12 Thread Joshua Cranmer

On 8/12/2014 9:59 AM, Benjamin Smedberg wrote:
Just reading bug 1052477, and I'm wondering what our intentions 
are for already_AddRefed.


In that bug it's proposed to change the return type of NS_NewAtom from 
already_AddRefed to nsCOMPtr. I don't think that actually saves any 
addref/release pairs if done properly, since you'd typically .forget() 
into the return value anyway. But it does make it slightly safer at 
callsites, because the compiler will guarantee that the return value 
is always released instead of us relying on every already_AddRefed 
being saved into a nsCOMPtr.


But now that nsCOMPtr/nsRefPtr support proper move constructors, is 
there any reason for already_AddRefed to exist at all in our codebase? 
Could we replace every already_AddRefed return value with a nsCOMPtr?


The rationale for why we still had it was that:
nsIFoo *foobar = ReturnsACOMPtr();

silently leaks. I've pointed out before that we could fix this by adding 
an nsCOMPtr<T>::operator T*() && = delete; operator, but that's a gcc 
4.8.1/msvc 2013 November CTP minimum requirement.

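To spell out both halves of that trade-off (nsIFoo and the two getters 
are hypothetical):

    nsCOMPtr<nsIFoo> ReturnsACOMPtr();
    already_AddRefed<nsIFoo> ReturnsAddRefed();

    void Demo() {
      nsIFoo* raw = ReturnsACOMPtr();     // compiles; the temporary nsCOMPtr
                                          // releases at end of statement, so
                                          // raw is left dangling
      // nsIFoo* bad = ReturnsAddRefed(); // does not compile: already_AddRefed
      //                                  // must land in a smart pointer
      nsCOMPtr<nsIFoo> good = ReturnsAddRefed(); // the intended pattern
      (void)raw; (void)good;
    }
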

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Getting rid of already_AddRefed?

2014-08-12 Thread Joshua Cranmer

On 8/12/2014 11:12 AM, Vladimir Vukicevic wrote:

It's unfortunate that we can't create a nsCOMPtr that will disallow 
assignment to a bare pointer without an explicit .get(), but will still 
allow conversion to a bare pointer for arg passing purposes.  (Or can 
we? I admit my C++-fu is not that strong in this area...)  It would 
definitely be nice to get rid of already_AddRefed (not least because 
the spelling of "Refed" always grates when I see it :).


The use of a method like
  operator T*() && = delete;

causes the conversion to fail if the nsCOMPtr is an rvalue (most 
temporaries). It still allows T *foo = localVariable; (there's no easy 
way around that).

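Spelled out as a simplified sketch (an nsCOMPtr-like class, not the real 
one; the trick is the C++11 ref-qualifier on the conversion operator):

    template <typename T>
    class Ptr {
      T* mPtr;
    public:
      explicit Ptr(T* p) : mPtr(p) {}         // refcounting omitted for brevity
      operator T*() const & { return mPtr; }  // lvalues may still convert...
      operator T*() const && = delete;        // ...but rvalues/temporaries may not
    };

    struct Foo {};
    Ptr<Foo> GetFoo();

    void Demo(Ptr<Foo>& local) {
      Foo* a = local;       // still compiles: local is an lvalue
      // Foo* b = GetFoo(); // error: selects the deleted rvalue conversion
      (void)a;
    }
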

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Getting rid of already_AddRefed?

2014-08-12 Thread Joshua Cranmer

On 8/12/2014 11:40 AM, Aryeh Gregor wrote:

On Tue, Aug 12, 2014 at 7:37 PM, Benjamin Smedberg
benja...@smedbergs.us wrote:

On 8/12/2014 12:28 PM, Joshua Cranmer  wrote:

The rationale for why we still had it was that:
nsIFoo *foobar = ReturnsACOMPtr();

silently leaks.

Really? I thought that in this case there would be no leak because the
(temporary-returned) nsCOMPtr destructor would properly free the object. The
problem is that `foobar` potentially points to an object which has already
been released.

Correct.  I assume that's what he meant.

Er, yes. I remembered there was a problem, I forgot the actual problem. :-[

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Build system changes to how library dependencies are declared

2014-07-23 Thread Joshua Cranmer

On 7/23/2014 12:59 AM, Mike Hommey wrote:

I just landed bug 1036894 and related bugs on mozilla-inbound. The short
story is that things should now be less cumbersome.


I would like to thank you for taking the time to post this information 
on m.d.platform, a courtesy which I fear many other developers do not 
bother to show these days.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to Transition from TBPL to Treeherder

2014-07-22 Thread Joshua Cranmer

On 7/22/2014 8:01 PM, Jonathan Eads wrote:

We’ve got lots of plans for useful bells and whistles in future releases, but 
the first step is reaching full feature parity with TBPL. We need to make sure 
sheriffs and developers can carry out business as usual.


I'd love to play with it for my uses, but bug 1035222 makes it 
impossible for me to use it and potentially find even more bugs.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-07-17 Thread Joshua Cranmer

On 7/17/2014 9:18 PM, Botond Ballo wrote:

std::shared_ptr is mostly unusable in Gecko code because there's no way to
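To make the size difference concrete, a hedged sketch of the 
empty-member trick (CompressedPair is purely illustrative, not 
mozilla::Pair's actual implementation):

    #include <cstdio>
    #include <utility>

    struct Empty {};  // e.g., a stateless deleter for a UniquePtr-like type

    template <typename A, typename B>
    struct CompressedPair : private B {  // an empty B contributes no storage
      A first;
      CompressedPair(A a, B b) : B(b), first(a) {}
      B& second() { return *this; }
    };

    int main() {
      // Typically prints "8 vs 16" on a 64-bit platform: std::pair must
      // store the Empty member, padded out to pointer alignment.
      std::printf("%zu vs %zu\n", sizeof(CompressedPair<int*, Empty>),
                  sizeof(std::pair<int*, Empty>));
    }
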
specify whether you need thread-safety or not (usually you don't). There
should be a way to specify whether you want to pay the cost of thread
safety when using it.

I would like to see this as well.

I talked to Jonathan Wakely (a libstdc++ maintainer) about this, and
he said that this has been discussed but rejected, for two reasons.
First, it encourages brittle code, where instances that don't need
thread safety at one point in time come to need it as the codebase
evolves. Second, it adds complexity to the library and diminishes its
teachability to have two different flavours of std::shared_ptr.


I know std::shared_ptr works in conjunction with 
std::enable_shared_from_this to provide some form of intrusive reference 
counting. I think it's unlikely that Mozilla will switch to vanilla 
std::enable_shared_from_this in part because our reference counting has 
better features (the non-atomic reference counting, and, in debug 
builds, we log reference counting to provide leak detection and logging, 
as well as also detecting violation of thread safety), and also in part 
because we have more complicated reference counting scenarios--cycle 
collection, for instance. I wonder if the committee would be open to 
having a smart pointer (maybe by overloading shared_ptr, maybe a new 
smart pointer) that allows for user-defined reference counting. This 
could also be useful for, e.g., wrapping gobject references in smart 
pointers.

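For illustration, a hedged sketch of what such a user-defined-refcounting 
pointer might look like; the Traits design and every name here are my own 
invention, not from any actual proposal:

    // A smart pointer whose AddRef/Release behavior is supplied by a traits
    // class, so the same template can wrap XPCOM objects, GObjects, etc.
    template <typename T, typename Traits>
    class CountedPtr {
      T* mPtr;
    public:
      explicit CountedPtr(T* p) : mPtr(p) { if (mPtr) Traits::AddRef(mPtr); }
      CountedPtr(const CountedPtr& o) : mPtr(o.mPtr) {
        if (mPtr) Traits::AddRef(mPtr);
      }
      CountedPtr& operator=(const CountedPtr&) = delete; // omitted for brevity
      ~CountedPtr() { if (mPtr) Traits::Release(mPtr); }
      T* get() const { return mPtr; }
    };

    // Hypothetical traits for a GObject-style C API:
    // struct GObjectTraits {
    //   static void AddRef(GObject* p) { g_object_ref(p); }
    //   static void Release(GObject* p) { g_object_unref(p); }
    // };
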

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Firefox/Thunderbird and GRE/XRE/XULRunner

2014-07-08 Thread Joshua Cranmer

On 7/8/2014 1:51 PM, Tobias Besemer wrote:

As far as I can remember, at the beginning when GRE was build, there was the 
try that Firefox/Thunderbird/Mozilla-Suite will use this ...

After Thunderbird is now back, a Mozilla Update Service exist and the Crash 
Reporter needs to be re-done, but keeps a standalone App for startup problems of 
Firefox/Thunderbird ...
Is there a chance, that Firefox & Thunderbird will share files (runtime 
environment / framework) together again on Windows ???
That chance is basically 0, even if you assume you are using FF and TB on 
the same version (say betas of both). The absolute minimum requirement 
would be being able to share the same libxul, which both Firefox 
developers and Thunderbird developers have had very little inclination 
to work towards making possible.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Try-based code coverage results

2014-07-07 Thread Joshua Cranmer

On 7/7/2014 11:39 AM, Jonathan Griffin wrote:

Hey Joshua,

That's awesome!

How long does the try run take that generated this data?  We should 
consider scheduling a periodic job to collect this data and track it 
over time.


Well, it depends on how overloaded try is at the moment. ^_^

The builds take an hour themselves, and the longest-running tests on 
debug builds can run long enough to encroach on the hard (?) 2 hour limit 
for tests. Post-processing of the try data can take another several 
hours (a large part of which is limited by the time it takes to download 
~3.5GB of data).


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Try-based code coverage results

2014-07-07 Thread Joshua Cranmer

On 7/7/2014 1:11 PM, Jonathan Griffin wrote:
I guess a related question is, if we could run this periodically on 
TBPL, what would be the right frequency?


Several years ago, I did a project where I ran code-coverage on roughly 
every nightly build of Thunderbird [1] (and I still have those 
results!). When I talked about this issue back then, people seemed to 
think that weekly was a good metric. I think Christian Holler was doing 
builds roughly monthly a few years ago based on an earlier version of my 
code-coverage-on-try technique until those builds fell apart [2].


[1] Brief aside: if you thought building mozilla code was hard, try 
building Mozilla code from two years ago (I was building 2008-era code 
in 2010)...
[2] I used to dump the code coverage data to stdout and have scripts to 
extract them from the tbpl logs. That stopped working when mochitest-1 
logs grew way too long, and it wasn't until blobber was up and running 
that anyone re-attempted the project.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Try-based code coverage results

2014-07-07 Thread Joshua Cranmer

On 7/7/2014 5:25 PM, Jonathan Griffin wrote:
Filed https://bugzilla.mozilla.org/show_bug.cgi?id=1035464 for those 
that would like to follow along.


Perhaps bug 890116 is a better measure of tracking.

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Try-based code coverage results

2014-07-06 Thread Joshua Cranmer
I don't know how many people follow code-coverage updates in general, 
but I've produced relatively up-to-date code coverage results based on 
http://hg.mozilla.org/mozilla-central/rev/81691a55e60f, and they may 
be found here: http://www.tjhsst.edu/~jcranmer/m-ccov/.


In contrast to earlier versions of my work, you can actually explore the 
coverage as delineated by specific tests, as identified by their TBPL 
identifier. Christian's persistent requests for me to limit the depth of 
the treemap view are still unresolved, because, well, at 2 AM in the 
morning, I just wanted to push a version that worked.


The test data was generated by pushing modified configs to try and using 
blobber features to grab the resulting coverage data. Only Linux32/64 is 
used, and only opt builds are represented (it's a --disable-optimize 
--disable-debug kind of build), the latter because I wanted to push a 
version out tonight and the debug .gcda tarballs are taking way too long 
to finish downloading.


Effectively, only xpcshell tests, and the M, M-e10s, and R groups are 
represented in the output data. M-e10s is slightly borked: only 
M-e10s(1) [I think] is shown, because, well, treeherder didn't 
distinguish between the five of them. A similar problem with the debug 
M(dt1/dt2/dt3) test suites will arise when I incorporate that data. C++ 
unit tests are not present because blobber doesn't run on C++ unit tests 
for some reason, and Jit-tests, jetpack tests, and Marionette tests 
await me hooking in the upload scripts to those testsuites (and 
Jit-tests would suffer a similar numbering problems). The individual 
testsuites within M-oth may be mislabeled because I can't sort names 
properly.


There's a final, separate issue with treeherder not recording the 
blobber upload artifacts for a few of the runs (e.g., Linux32 opt X), 
even though it finished without errors and tbpl records those artifacts. 
So coverage data is missing for the affected run. It's also worth noting 
that a few test runs are mired with timeouts and excessive failures, the 
worst culprit being Linux32 debug where half the testsuites either had 
some failures or buildbot timeouts (and no data at all).


If you want the underlying raw data (the .info files I prepare from 
every individual run's info), I can provide that on request, but the 
data is rather large (~2.5G uncompressed).


In short:
* I have up-to-date code-coverage on Linux 32-bit and Linux 64-bit. Opt 
is up right now; debug will be uploaded hopefully within 24 hours.

* Per-test [TBPL run] level of detail is visible.
* Treeherder seems to be having a bit of an ontology issue...

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Where is the document for jar.mn file format

2014-07-02 Thread Joshua Cranmer

On 7/2/2014 11:18 AM, Gregory Szorc wrote:
I find the current state extremely frustrating. I had big plans for 
the in-tree docs, including capturing JavaScript docs and having JSM 
APIs automatically published to MDN so we don't have to write docs 
twice. If anyone is in a position to nudge this project forward, I'd 
really appreciate the assist. We're mostly blocked on MDN accommodations.


The problem I always had was the lack of a JS documentation tool that 
could actually process Mozilla code...


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Where is the document for jar.mn file format

2014-07-02 Thread Joshua Cranmer

On 7/2/2014 12:01 PM, Gregory Szorc wrote:

On 7/2/14, 9:48 AM, Gijs Kruitbosch wrote:

On 02/07/2014 17:46, Joshua Cranmer  wrote:

On 7/2/2014 11:18 AM, Gregory Szorc wrote:

I find the current state extremely frustrating. I had big plans for
the in-tree docs, including capturing JavaScript docs and having JSM
APIs automatically published to MDN so we don't have to write docs
twice. If anyone is in a position to nudge this project forward, I'd
really appreciate the assist. We're mostly blocked on MDN
accommodations.


The problem I always had was the lack of a JS documentation tool that
could actually process Mozilla code...



Ditto. It might be nice to move the build docs to MDN, but most of our
code is not the build docs. Without good support for (our) JS and C++,
this is significantly less useful.


I'm not sure what exactly you mean by "less useful". I think you mean 
that because we don't have JS and C++ docs it is less useful.


Sphinx can capture C++ docs. I don't have it enabled because Doxygen 
is super slow. (I'm still waiting for someone to leverage Clang's 
superior tooling to replace Doxygen or at least output its XML format 
so Doxygen's Perl-based C++ parser can DIAF.)


Doxygen can leverage libclang (set CLANG_ASSISTED_PARSING to YES). 
There's still the not-insignificant problem for documentation that all 
of our build files are compiled with a crazy, insane, inconsistent set 
of command lines that makes the -I and -D state unpredictable, and I 
don't think Doxygen can tolerate per-file command lines.
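
For reference, the relevant Doxyfile knobs look roughly like this (the 
include path and define are illustrative, not anyone's real command 
line):

CLANG_ASSISTED_PARSING = YES
CLANG_OPTIONS          = -I/objdir/dist/include -DMOZILLA_INTERNAL_API

Note that CLANG_OPTIONS is a single global setting--which is exactly the 
per-file-command-line problem.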


(And Doxygen's parser isn't Perl-based, it's yacc-based.) The problem is 
that Doxygen does a very good job of turning comments into 
documentation; it is just extremely lousy at handling C++ code (the 
author, for example, has repeatedly refused to support the GCC 
__attribute__(()) productions...). It doesn't look like there's an easy 
way to bypass the parser and just have it operate on comments :-(


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Are you interested in doing dynamic analysis of JS code?

2014-06-29 Thread Joshua Cranmer

On 6/27/2014 5:38 PM, Sylvestre Ledru wrote:

On 25/06/2014 08:15, Jason Orendorff wrote:

We're considering building a JavaScript API for dynamic analysis of JS
code.
Here's the sort of thing you could do with it:

   - Gather code coverage information (useful for testing/release mgmt?)

Yes, I confirm that we would be happy to get a clean and efficient way
to instrument the Firefox and Firefox OS Javascript code.

Ultimately, in order to aggregate the result with the C/C++ coverage,
we would also need this API to be able to export the coverage results
into a standard format (gcno/gcda).


Ew, no. Don't use the gcno/gcda format--those files are incredibly wonky 
and difficult to read independently. Use something like LCOV's .info 
files instead.
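
For reference, an LCOV .info record is plain, readable text, roughly 
like this (a from-memory sketch of a subset of the record types; the 
path and counts are invented):

TN:mochitest-1
SF:/builds/mozilla-central/netwerk/base/src/nsIOService.cpp
DA:100,7
DA:101,0
LH:1
LF:2
end_of_record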


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Are you interested in doing dynamic analysis of JS code?

2014-06-25 Thread Joshua Cranmer

On 6/25/2014 10:15 AM, Jason Orendorff wrote:
We're considering building a JavaScript API for dynamic analysis of JS 
code.

Here's the sort of thing you could do with it:

  - Gather code coverage information (useful for testing/release mgmt?)


I've begged for this several times, and, as I mentioned in another 
recent thread, I've grown skeptical of any code coverage approach not 
based on the JS runtime engine itself.


If you add only one new feature, this is the one you should add.

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-22 Thread Joshua Cranmer

On 6/22/2014 5:51 PM, Botond Ballo wrote:

- Original Message -

From: Joshua Cranmer  pidgeo...@gmail.com

Is the status quo really that bad?

I think the fact that we're not seeing a proliferation of non-{header-only}
C++ libraries - that is, that people still view C as the go-to language
for ABI stability - is evidence that the status quo is bad.


I suspect you're misdiagnosing the problem. There are several reasons 
why people might use C instead of C++ for public APIs:

1. Their library is in C [because C++ is complex/slow/foreign/etc.]
2. C is more ABI stable than C++
3. Want to easily differentiate between a stable C interface and an 
unstable C++ interface.

4. C hooks into other languages' FFIs much more easily.

I can name several projects that use C++ public interfaces (Qt, 
SpiderMonkey, ICU, Boost; several GObject-based libraries also have C++ 
bindings). I think the real problem for ABIs is the difficulty of 
getting a working FFI for other languages--and the proposed draft does 
absolutely nothing to solve that problem. Standardizing even a small 
subset of C++ in terms of an equivalent C ABI (standard-layout structs, 
basic vtables, and related name manglings) would go a long way to making 
it more usable--something like the sketch below. And it also wouldn't 
require redefining an extern abi value. [1]
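
A minimal sketch of that kind of C-ABI surface (all names invented for 
illustration):

// C++ implementation, exposed through a flat C ABI that any FFI can
// consume; the class itself never crosses the library boundary.
class Frobnicator {
 public:
  int Frobnicate(int aTimes) { return mCount += aTimes; }
 private:
  int mCount = 0;
};

extern "C" {
  typedef struct frob_handle frob_handle;  // opaque to callers

  frob_handle* frob_new(void) {
    return reinterpret_cast<frob_handle*>(new Frobnicator());
  }
  int frob_frobnicate(frob_handle* aSelf, int aTimes) {
    return reinterpret_cast<Frobnicator*>(aSelf)->Frobnicate(aTimes);
  }
  void frob_delete(frob_handle* aSelf) {
    delete reinterpret_cast<Frobnicator*>(aSelf);
  }
}
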
In the absence of such a mechanism, we would essentially be requiring 
that all compiler vendors drop their current ABI (if different from 
the platform ABI), and target the platform ABI (which, recall, is 
supposed to be stable for the platform's lifetime). This would simply 
be impractical. As Herb describes in the paper, compilers today have a 
plethora of switches that affect ABI. These exist for a reason, and 
won't simply go away overnight. This is why we need a mechanism to say 
"for this part of the code, ignore all these switches, and just target 
the stable platform ABI". In particular, one would expect authors of 
separately-compiled libraries to mark their library interfaces in this 
way. Without this, the key benefit of the proposal - on any given 
platform, being able to combine binary components whose interface is 
marked in this way, no matter what compiler, compiler version, and 
compiler switches they have been compiled with - will not be realized 
in practice.


I think this is overstated. As I recall, the only compiler with 
significant issues is MSVC: part of that is the instability of its 
standard library, and part of that is the pointer-to-member-function in 
the MSVC ABI being completely braindead (its representation is 
completely dependent on how the class is defined, so in the absence of a 
definition, there are compiler options to reorganize it). If, instead of 
trying to define the complete ABI, you focused on a subset--and you 
defined that ABI in terms of a C ABI--you would get much of the 
desiderata without requiring too much pain from either compiler writers 
or library writers.
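
To make the pointer-to-member-function weirdness concrete, a hedged 
sketch (the sizes are implementation-defined; the comments describe the 
two ABIs as I understand them):

#include <cstdio>

struct Single { void f(); };
struct Base1  { void g(); };
struct Base2  { void h(); };
struct Multi : Base1, Base2 { void f(); };

int main() {
  // Under the Itanium ABI, both pointers below share one fixed two-word
  // representation. Under the MSVC ABI, the Multi pointer grows an
  // extra this-adjustment field, and for a forward-declared class the
  // compiler has to guess the representation--which is why the
  // __single_inheritance family of keywords and the /vmg and /vmb
  // switches exist at all.
  std::printf("%u %u\n",
              unsigned(sizeof(void (Single::*)())),
              unsigned(sizeof(void (Multi::*)())));
  return 0;
}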


What exactly would we be stuck with for decades? 


A useless std::abi.

[1] I know ELF makes providing multiple symbol names relatively easy, 
and I think Mach-O and PE/COFF have similar functionality, so supporting 
multiple name manglings for the same function is not difficult. There's 
a minor stumbling block in how MSVC does vtable layout when overloading 
is involved, but otherwise, most of the basic layout is pretty sane.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Code coverage [was: Re: Javascript code coverage ?]

2014-06-20 Thread Joshua Cranmer

On 6/20/2014 4:25 AM, Sylvestre Ledru wrote:

It takes around 26 hours on my workstation to run all the tests
and about 4 days on an (old?) MacBook Pro.
I haven't worked on improving this yet.


I am mildly distrustful of results that aren't running on as close to 
the same configuration as our builders as we can get--which is why my 
latest efforts have focused on trying to get magic try runs to output 
the data. In particular, after looking at some of the data, it 
definitely seems to me that part of your configuration is missing 
something (there is 0% coverage of ANGLE).


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-20 Thread Joshua Cranmer

On 6/20/2014 4:44 AM, Botond Ballo wrote:

Why object to this proposal, then? Even if it will, in practice, take
a very long time for some projects to adopt extern abi and std::abi,
this seems better than the status quo.


Is the status quo really that bad? MSVC can publish its ABI as is, and 
with the Itanium ABI published as well, that's effectively equivalent to 
saying that the platform ABIs are published. It doesn't necessarily 
solve gcc/MSVC compatibility issues, but that's for MinGW to work out. 
If gcc is fixing its std::string without std::abi, then it's not clear 
that std::abi is needed to make ABI guarantees in practice, to the 
degree that such guarantees are possible at all. libc++ appears to do a 
decent job of making two distinct standard C++ libraries interoperate at 
the ABI level anyway.


Part of my contention is that backwards compatibility isn't as valuable 
as it appears to be, particularly when you're stuck with it for decades. 
It's not clear to me that today is bad enough to warrant making 
something that we'll be stuck with for decades and that doesn't really 
solve the problem.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-19 Thread Joshua Cranmer

On 6/19/2014 5:55 PM, Botond Ballo wrote:

Are you saying that gcc - assuming that for some platforms, it is
considered the platform vendor, and therefore the provider of std::abi -
would likely ship their non-conforming std::string as std::abi::string
in order to maintain ABI compatibility between the two?


No. What I'm saying is that an implicit goal of this paper is to help 
gcc change its non-conforming std::string. And then I'm saying that the 
proposal doesn't actually solve that goal: gcc can't change the ABI 
because it would break existing programs and code, and adding a new 
explicitly-ABI-compatible interface won't work because existing programs 
won't use it yet.


Not sure what the point here is. If the ABI is published, people won't 
have to reverse engineer things. That's surely an improvement.


There are two points I wanted to make:
1. Mandating that you need to publish something doesn't mean it will 
actually get published. C++ requires that compilers publish the 
implementation-defined behavior decisions they make. I don't see any 
documents for that for MSVC [which only has C] or Clang.
2. A published specification isn't necessarily sufficient for 
interoperability. For example, in the OOXML specification, there's an 
attribute whose full documentation is basically "emulate the behavior of 
MS Word 95 on this text layout matter", without any description of what 
that behavior actually is.
This proposal, if accepted, would require Microsoft to create a stable 
ABI and document it publicly in order to be conforming. (This aspect 
of the proposal is viewed as a Good Thing even by people in the 
committee who have objections to other aspects of the proposal.)


See above.

It is out of scope of the Standard to make requirements related to 
intellectual property. However, Herb said - and I believe - that it is 
in the spirit of the proposal that platform vendors do not place IP 
hurdles in front of third parties implementing their ABI.


My point is that it's not necessarily the lack of official ABIs that 
block interoperability.


Do you have in mind a roadmap to an ABI that is portable across 
implementations on a given platform, that does not suffer from these 
issues


Sadly, no. I'm not sure such a thing can even exist.

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Javascript code coverage ?

2014-06-16 Thread Joshua Cranmer

On 6/16/2014 12:23 PM, Sylvestre Ledru wrote:

Hello,

I am working on providing weekly code coverage of Firefox code.
For now, I am able to do that for C/C++ code.

I would like to know if anyone tried to generate code coverage recently
on the Javascript code of Firefox (or Firefox OS)?


Define recently? :-)

I've made at least three different abortive attempts at JS code 
coverage. The really hard part is that Mozilla uses new (and 
non-standard) syntax fairly aggressively in its code--when I first 
started poking at it, the inability to process E4X was actually a hard 
blocker for me [1]. I also did some poking to figure out how to get it 
working on inline scripts in our XUL or XBL code.


My first attempt was using jscoverage, which worked poorly even back in 
2010 and 2011: it was based on an earlier version of SpiderMonkey's APIs 
and upgrading to newer parse APIs was a pain in the butt. I tried again 
at some point using the Reflect.parse APIs, but shied away from that 
because I didn't have the time to maintain a functional decompiler from 
the AST let alone a variant that added the instrumentation to that. When 
the SpiderMonkey PC counts API was added, I actually managed to build a 
working system, but then I was told that IonMonkey had broken that 
functionality before I could ever get it truly ready. I tried once again 
when the debugger API was added, but that again didn't work for some 
reason (I've forgotten why long ago... probably something to do with 
insufficiently exposing interesting globals?).


Over the years, I've come to the conclusion that inserting 
instrumentation into the source code is not a viable path to achieving 
JS code coverage metrics. Maintaining a functioning decompiler for the 
AST that works reasonably well on several million lines of JS code, some 
of which uses dialects not commonly found on the web, is a difficult 
task by itself. Add to that the insanity of how JS code can be expressed 
(including nasty things like the fact that you can't instrument 
prefs.js, or the presence of inline JS), and you end up spending far 
more time maintaining such an engine than most other projects would need 
for their own uses. On top of that, there is the not-insignificant 
problem that there's no standard way to do I/O in JS that lets you save 
the coverage information somewhere--especially daunting given the 
presence of XPCOM components, chrome workers, content workers, chrome 
and content windows, specifically sandboxed source files, and builtin JS 
code, to name the types I'm aware of.


[1] I concern myself more with Thunderbird's code coverage than with 
Firefox's, and we used E4X in one place before it was removed, and 
Lightning used it in another place.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-15 Thread Joshua Cranmer

On 6/9/2014 2:31 PM, Botond Ballo wrote:

Portable C++ ABI:
   http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4028.pdf



Perhaps a bit late to be saying this, but after reading this paper, I've 
come to object to it.


A portable ABI can mean one of several things:
1. A well-defined, predictable interface that can be used for, e.g., 
FFI-type libraries.
2. An interface that allows different compilers to compile some of the 
object files that make up a library and still have everything come out 
the same (up to code generation/optimization differences).
3. Binary compatibility (to use the term in Java's spec) for class 
implementations, which lets you define what can and can't be changed 
without breaking ABI compatibility.


The problem is that this paper achieves none of these. The main 
proposals appear to boil down to the following:
1. Compilers must publish an official language ABI. Using this requires 
extern abi linkage.
2. Declare a std::abi::* set of classes/functions/etc. This is 
guaranteed ABI-compatible, and std::* is not guaranteed.


It explicitly mentions the issue that gcc can't make std::string 
conforming to C++11 because doing so would require breaking ABI 
compatibility, and it opines that making this move would allow gcc to 
break the compatibility. This is a complete fallacy: breaking binary 
compatibility is as much about the informal compatibility guarantees as 
the formal ones. In Mozilla code, we explicitly stopped all binary 
compatibility guarantees in Gecko 4 (early 2011). Even then, we decided 
to make nsILocalFile an empty interface for a little while to avoid 
breaking extensions... and 2 years, 2 months, 10 days later, we still 
have 486 uses of this interface in mozilla-central alone. Our plans to 
kill off XPIDL interfaces for the now-WebIDL-based DOM don't even extend 
to nsIDOMElement and friends specifically for compatibility reasons. 
Looking at the factors that block gcc's use of a conforming std::string, 
this approach wouldn't expedite it at all.


Publishing official ABIs is rather meaningless: note that some details 
of the Win64 C ABI are not officially published and had to be reverse 
engineered (I think). It's also worth noting that MSVC's C++ ABI does 
not appear to have a formal design or internal documentation, judging by 
the presence of clear bugs in name mangling. It even may not be stable: 
a few name manglings require computing hashCode(something). What blocks 
interoperability with Win32's exception handling isn't the lack of a 
specification but rather the existence of a patent [which I believe 
expires this year].
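
For a flavor of the difference, a free function declared as void 
frobnicate(); mangles roughly as follows (reconstructed from memory, so 
treat the MSVC one as approximate--which is rather the point):

  _Z10frobnicatev       // Itanium ABI: documented and predictable
  ?frobnicate@@YAXXZ    // MSVC ABI: reverse engineered by everyone else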


The final--and biggest--flaw is that the envisioned mode of ABI 
compatibility is to create a subset of the code that guarantees ABI 
compatibility and make that subset grow steadily larger over time. I 
don't think this works (witness the failure of the 8-bit MIME scheme in 
this regard). extern abi can't be used by existing projects because it 
doesn't exist yet; they can't switch to it in a future version if doing 
so causes an ABI break, because that breaks their compatibility. And if 
it doesn't change the ABI, then there's no point to it in the first 
place. New projects can use it, but at moderate risk to compatibility 
with pre-existing projects (depending on the exact wording of 
specifications). And so long as projects that don't use it exist, tools 
that want to do FFI still need to worry about the unstable ABI. std::abi 
has the same issues, with particular emphasis on the compatibility 
mismatch between projects that use it and projects that don't.


So what we have, in short, is a paper that proposes an underdefined ABI 
guarantee that, even were it fully defined, wouldn't be sufficient to be 
usable by the people who most would want to use it.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standardized assertion methods

2014-06-03 Thread Joshua Cranmer

On 6/3/2014 8:39 AM, Gijs Kruitbosch wrote:

On 03/06/2014 14:16, Mike de Boer wrote:
Indeed, I’m used to the NodeJS/Mocha flow of writing tests as fast 
as, or even faster than, writing the implementation of a feature. I 
could group tests, isolate one, hook in a debugger at any point, and 
more. This is something I miss while working on Fx and, to be honest, 
still crave more than a year later.



So I'm not used to the NodeJS/Mocha 'flow' of writing tests. Can you 
explain what the big difference there is? I find it hard to believe 
the names of the assertion functions are the one big thing making 
tests take much longer to write...


I'm used to xpcshell tests more than mochitests, and the biggest 
difference by far between xpcshell and mocha that I'm aware of is that 
mocha counts tests at finer granularity: xpcshell tests work on a 
file-by-file basis, whereas mocha tests work at the level of individual 
test('Name', function() {}) calls. With the right framework support, 
this makes it much easier to debug and diagnose single failures when 
you're testing a parser function, since you can enable and test only a 
single instance instead of setting a breakpoint and continue'ing twelve 
times to get to the one you want [fwiw, I have 822 tests in just 6 files 
for one of my suites, although most of those are defined in giant array 
comparisons].


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-30 Thread Joshua Cranmer

On 5/30/2014 12:00 PM, Gervase Markham wrote:

On 28/05/14 17:49, Joshua Cranmer  wrote:

We have an excellent chance to try to rethink CA infrastructure in this
process beyond the notion of a trusted third-party CA system (which is
already more or less broken, but that's beside the point). My own view
on this matter is that the most effective notion of trust is some sort
of key pinning: using a different key is a better indicator of an attack
than having a valid certificate; under this model the CA system is
largely information about how to trust a key you've never seen before.
There is a minor gloss point here in that there are legitimate reasons
to need to re-key servers (e.g., Heartbleed or the Debian OpenSSL
entropy issue), and I don't personally have the security experience to
be able to suggest a solution here.

Forgive me, but that sounds like I'm going to propose a solution with
one glaring flaw that has always sunk it in the past, and then gloss
over that flaw by saying 'I don't have the security experience - someone
else fix it'.


Actually, that is essentially what I'm saying. I know other people at 
Mozilla have good security backgrounds and can discuss the issue, and I 
was hoping that they could weigh in with suggestions on this thread. I 
acknowledge that the re-keying is a difficult issue, but I also don't 
have the time to do the research myself on this topic, since I'm way 
backed up on a myriad of other obligations.

Doesn't the EFF's SSL Observatory already track the SSL certificates to
indicate potential MITMs?

The SSL Observatory's available data is a one-off dump from several
years ago. They are collecting more data as they go along, but it's not
public.


The EFF does things that aren't public?! :)

More seriously, are they actively attempting to detect potential MITMs, 
and would they announce if they did detect one? Andrew had in his 
proposal a note that reporting of fingerprints could be used to detect 
MITMs, and I was implying that this was duplicating work others were 
already doing.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Joshua Cranmer
 as well. ]


* We contact the trusted server, for example, 
certchecker.mozilla.org.  We tell it the domain we tried to contact, 
the IP, the port, the protocol, initial-TLS versus startTLS, and the 
certificate we got back.


Only over TLS, of course. And if that fails, the user has to execute 
$NOT_A_MERE_CLICKTHROUGH steps to fix it.



* The trusted server attempts to initiate the same connection.


Would it be feasible to ask of partners that they register at least the 
fingerprints of their certificates with the trusted server? Then we 
could require that the connection's certificate use the same fingerprint 
as the one on record, which ought to improve security in the face of 
MITMs (excepting firewall MITMs, which are really an entirely separate 
class of attack). On the other hand, it would require care in figuring 
out how to handle migration of public keys cleanly.
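
The fingerprint check itself is cheap and well-understood; here's a 
hedged sketch against NSS (FingerprintMatches is an invented name, and 
real code would want hash agility and real error handling):

#include <cstring>

#include "cert.h"      // CERTCertificate
#include "pk11pub.h"   // PK11_HashBuf
#include "secoidt.h"   // SEC_OID_SHA256

// Compare a certificate's SHA-256 fingerprint--a digest of its DER
// encoding--against the value registered with the trusted server.
static bool FingerprintMatches(CERTCertificate* aCert,
                               const unsigned char (&aExpected)[32]) {
  unsigned char digest[32];
  if (PK11_HashBuf(SEC_OID_SHA256, digest, aCert->derCert.data,
                   aCert->derCert.len) != SECSuccess) {
    return false;
  }
  return std::memcmp(digest, aExpected, sizeof(digest)) == 0;
}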


I'd like to imagine that, when DANE support becomes available, we could 
assume an untrusted or self-signed certificate is valid if it passes 
DNSSEC on the DANE entries.


Alternatively, we could not require pre-registration of fingerprints if 
it's already published via DANE. While a do-it-now setup would still 
require a trusted server to do the DANE lookups, it would allow for an 
eventual retiring of the trusted server middleman and Single Point of 
Failure and help prod people to roll out DANE. And maybe even make DANE 
more of a priority in Mozilla's codebase? :-)



== Proposed solution for exceptions / allowing connections

There are a variety of options here, but I think one stands above the 
others.  I propose that we make TCPSocket and XHR with mozSystem take 
a dictionary that characterizes one or more certificates that should 
be accepted as valid regardless of CA validation state. Ideally we 
could allow pinning via this mechanism (by forbidding all certificates 
but those listed), but that is not essential for this use-case.  Just 
a nice side-effect that could help provide tighter security guarantees 
for those who want it.


[ Similar concerns potentially exist for S/MIME, which is mostly the 
angle I've thought about this previously. ]


I've preferred to think of the ideal solution as the introduction of a 
pinning mechanism, but this needs to take into account revocation and 
key upgrades (both gradual these-keys-are-now-crackable upgrades and 
Heartbleed-level emergency key upgrades). I'd propose that your 
modifications to TCPSocket et al. not have the pinned certificates 
override certificates that fail a revocation check.


* Any solution that requires the user to manually verify a fingerprint 
for security seems guaranteed to not provide any security.


An unusable secure solution ought to be considered an oxymoron: 
usability and security are not orthogonal concepts.



== Other options?



I'm not sure where the best place to put these comments is, so here 
they go:


1. Any solution should permit the easy certificate override only during 
account configuration. This minimizes the scope for potential MITM 
attacks.


2. Any solution should also recognize that re-keying of servers is going 
to need to happen at some point. I don't know if this point will be 
before a better long-term solution can be put in place or not.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Joshua Cranmer

On 5/28/2014 7:13 PM, Andrew Sutherland wrote:
My imagined rationale for why someone would use a self-signed 
certificate amounts to laziness.  (We've been unable to determine what 
the rationale is for using invalid certificates in these cases as of 
yet.)  For example, they install dovecot on Debian/Ubuntu, it 
generates a self-signed certificate, they're fine with that.  Or they 
created a self-signed certificate years ago before they were free and 
don't want to update them now. Under this model, it's very unlikely 
that there's a server farm of servers each using different self-signed 
certificates, which would be the case where we want multiple 
certificates.  (Such a multi-exception scenario would also not work 
with my proposed trusted server thing.)


Two more possible rationales:
1. The administrator is unwilling to pay for an SSL certificate and is 
unaware of low-cost or free SSL certificate providers.
2. The administrator has philosophical objections to CAs, or to the CA 
trust model in general, and is unwilling to participate in it, 
neglecting the fact that encouraging click-through behavior in users can 
only weaken the trust model.


[ Discovered in the course of reading a few CACert root certificate 
request bugs. ]
[ Secondary note: most of my thoughts on X.509 certificates are geared 
towards its relation to S/MIME, which shares similar but not quite 
identical concerns. ]


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: using namespace

2014-05-20 Thread Joshua Cranmer

On 5/20/2014 8:37 PM, Ehsan Akhgari wrote:
FWIW, I argued against nested namespaces a few years ago (couldn't 
find a link to it through Google unfortunately) and people let me 
win that battle by allowing me to edit the coding style to prohibit 
nested namespaces in most cases 
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style#Namespaces, 
but unfortunately we never adhered to this rule in practice, and these 
days three-level nested namespaces are pretty common in the code base. 
We're just bending C++ in a way that it's not quite comfortable with 
here.


How about adding the rule that all new namespaces must be approved by 
[insert specific top-level superreviewer here], just like all new 
top-level directories need to be explicitly approved? If we could get 
reviewers to enforce that, it would hopefully cut down on people using 
outlandishly long namespaces.


I think there are valid reasons to have a two-level namespace (e.g., 
mozilla::mailnews [1]), but I find it deathly hard to justify going any 
deeper.


[1] Actually, I would probably prefer mozilla::comm had 
mozilla::mailnews not already had precedent when I started using C++ 
namespaces. :-)
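
For the record, the shape I'd want reviewers to hold the line at (class 
name invented for illustration):

namespace mozilla {
namespace mailnews {

// Two levels: project, then module. Anything deeper turns every
// out-of-line definition and forward declaration into a chore.
class HeaderReader final {
 public:
  void Parse() {}
};

} // namespace mailnews
} // namespace mozilla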


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


  1   2   >