Re: std::unique_ptr, std::move,

2013-07-31 Thread Brian Smith
On Wed, Jul 31, 2013 at 6:53 AM, Joshua Cranmer pidgeo...@gmail.com wrote:

 On 7/30/2013 10:39 PM, Brian Smith wrote:

 Yes: Then we can use std::unique_ptr in parts of Gecko that are intended
 to
 be buildable without MFBT (see below).


 One thing I want to point out is that, while compiler features are
 relatively easy to select based on checking compiler version macros, the
 C++ standard library is not, since compiler versions don't necessarily
 correlate with standard library versions. We basically support 4 standard
 libraries (MSVC, libstdc++, stlport, and libc++); under the right
 conditions, clang could be using any of those four. This means
 it's hard to tell when #include'ing a standard header will give us the
 feature or not. The C++ committee is actively working on a consensus
 solution to this issue, but it would not be rolled out to production
 compilers until 2014 at the earliest.


Basically, I'm proposing that we add std::unique_ptr, std::move,
std::forward, and some nullptr polyfill to STLPort with the intention that
we can assume these features work. That is, if some (compiler, standard
library) combination doesn't have these features then it would be an
unsupported combination.
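For illustration, the move/forward half of such a polyfill is tiny; a
minimal sketch (ignoring const/volatile corner cases, the rvalue overload
of forward, and unique_ptr itself, which is the bulk of the work; a real
STLPort patch would of course define these as std::move/std::forward in
the library's own headers):

    // Sketch only: hand-rolled equivalents of std::move and std::forward,
    // assuming the compiler already understands rvalue references.
    template<typename T> struct RemoveReference      { typedef T Type; };
    template<typename T> struct RemoveReference<T&>  { typedef T Type; };
    template<typename T> struct RemoveReference<T&&> { typedef T Type; };

    template<typename T>
    inline typename RemoveReference<T>::Type&&
    Move(T&& aValue)
    {
      // Cast anything to an rvalue reference so it can be moved from.
      return static_cast<typename RemoveReference<T>::Type&&>(aValue);
    }

    template<typename T>
    inline T&&
    Forward(typename RemoveReference<T>::Type& aValue)
    {
      // Preserve the value category of a forwarded template argument.
      return static_cast<T&&>(aValue);
    }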

More generally, nobody should be reasonably expected to write code that
builds with any combination that isn't used on mozilla-central's TBPL. So,
(clang, MSVC) is not really something to consider, for example.


 One of the goals of MFBT is to bridge over the varying support of
 C++11/C++14 in current compilers, although it also includes useful data
 structures that are not necessary for C++ compatibility. Since we have an
 increasing number of semi-autonomous C++ projects in mozilla-central, it
 makes sense that we should have a smallish (header-only, if possible?)
 compatibility bridge library, but if that is not MFBT, then I don't know
 what it is or should be. As it stands, we have a fair amount of duplication
 right now.


We should be more aggressive in requiring newer compiler versions whenever
practical, and we should choose to support as few compiler/library
combinations as we can get away with. That way we can use as many C++11/14
features (not just library features, but also language features) as
possible without any portability shims, and we can save developer effort by
avoiding adding code to MFBT that duplicates standard library
functionality. The only time we should be requiring less than the latest
version of any compiler on any platform is when that compiler is the
compiler used for official builds on that platform and the latest version
doesn't work well enough.

Anyway, it would be easier to swallow the dependency on MFBT if it wasn't
so large (over 100 files now), if it tried to be (just) a polyfill for
missing standard library features, and if it could easily be used
independently of the Gecko build system. But, none of those constraints is
reasonable to place on MFBT, so that means MFBT isn't a good choice for
most things that need to also be able to be built independently of Gecko.

Cheers,
Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: XPIDL Promises

2013-07-31 Thread Paolo Amadini
On 30/07/2013 22.40, Andreas Gal wrote:
 Whats the main pain point? Whether promises are resolved immediately or
 from a future event loop iteration?

That. The migration from core/promise.js to Promise.jsm should
address consumers expecting callbacks to be called immediately.

Promise.jsm conforms to Promises/A+ in guaranteeing that then
returns before the callbacks it registers are invoked. It seems DOM
Promises guarantee that too, so any possible migration from
Promise.jsm to DOM Promises seems easier.

Also, to clarify, event loop may refer to different things. We're not
really going back to the operating system's event loop while there are
still promises to resolve, for performance reasons (for example, when
iterating over a chain of resolved promises created by Task.jsm).

We have a Promise.jsm test suite that we should run on DOM Promises
before migration, though subtle differences in the event loop model
might still be uncaught in some edge cases. I expect this to have less
impact than the current core/promise.js to Promise.jsm migration.

It's still a pity that we're not putting enough resources on the
migration from core/promise.js to Promise.jsm (see bug 881047 and
the mentioned dependencies of bug 856878). Most code that blocks
migration is just made of test cases, rather than production code.

Accelerating on bug 881047 would be great, to at least reduce our
implementations from three to two :-)

Cheers,
Paolo
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-07-31 Thread Mike Hommey
On Wed, Jul 31, 2013 at 10:25:15AM +0200, Brian Smith wrote:
 We should be more aggressive in requiring newer compiler versions
 whenever practical, and we should choose to support as few
 compiler/library combinations as we can get away with. That way we can
 use as many C++11/14 features (not just library features, but also
 language features) as possible without any portability shims, and we
 can save developer effort by avoiding adding code to MFBT that
 duplicates standard library functionality. The only time we should be
 requiring less than the latest version of any compiler on any platform
 is when that compiler is the compiler used for official builds on that
 platform and the latest version doesn't work well enough.

I strongly oppose any requirement that would make ESR+2 (ESR31) not
build on the current Debian stable (gcc 4.7) and make ESR+1 (ESR24) not
build on the old Debian stable (gcc 4.4). We're not going to change the
requirements for the latter. And b2g still requires gcc 4.4 (with C++11
support) anyway, until they switch to the same toolchain as Android,
which is 4.7.

 Anyway, it would be easier to swallow the dependency on MFBT if it
 wasn't so large (over 100 files now), if it tried to be (just) a
 polyfill for missing standard library features, and if it could easily
 be used independently of the Gecko build system. But, none of those
 constraints is reasonable to place on MFBT, so that means MFBT isn't a
 good choice for most things that need to also be able to be built
 independently of Gecko.

I am of the opinion that anything that is not a header file under MFBT
should be moved into mozglue. The end result would be the same (MFBT is
actually built into mozglue, except for js standalone builds, for which
this would require some changes), but it would allow MFBT to just be
used independently. Note that I've been picking a few MFBT headers to
build the android linker independently without any problem, albeit
it's not cross-platform.

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-07-31 Thread Brian Smith
On Wed, Jul 31, 2013 at 12:34 PM, Mike Hommey m...@glandium.org wrote:

 I strongly oppose any requirement that would make ESR+2 (ESR31) not
 build on the current Debian stable (gcc 4.7) and make ESR+1 (ESR24) not
 build on the old Debian stable (gcc 4.4). We're not going to change the
 requirements for the latter. And b2g still requires gcc 4.4 (with C++11
 support) anyway, until they switch to the same toolchain as Android,
 which is 4.7.


Why are you so opposed? I feel like I can give a lot of good reasons why
such constraints are a net loss for us, but I am not sure what is driving
the imposition of such constraints on us.

Cheers,
Brian
--
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rethinking separate Mercurial repositories

2013-07-31 Thread Ben Hearsum
On 07/31/13 05:54 AM, Marco Bonardo wrote:
 On 29/07/2013 19:43, Gregory Szorc wrote:
 I'm proposing that we merge all the release repositories (central,
 aurora, beta, release, esr, and b2g) into a single Mercurial repository.
 The default branch/bookmark of this repository would be the equivalent
 of mozilla-central. At train uplift time, we create a new branch (or
 bookmark) called gecko-N (or similar) where N is the core gecko/platform
 release version. If default/central is on 25, Aurora changes land in
 gecko-24, Beta in gecko-23, etc. These could be supplemented with build
 and release tags/branches as appropriate.
 
 Now, maybe I'm wrong, but IIRC this is what we had before the rapid
 release, and we switched away from that cause:

Releases have always had their own repositories (see
https://hg.mozilla.org/releases/mozilla-1.9.1 and similar). We used to
have named branches in mozilla-central for specific Betas, but they were
quite short lived. Maybe that's what you're thinking of?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: std::unique_ptr, std::move,

2013-07-31 Thread Mike Hommey
On Wed, Jul 31, 2013 at 01:06:27PM +0200, Brian Smith wrote:
 On Wed, Jul 31, 2013 at 12:34 PM, Mike Hommey m...@glandium.org wrote:
 
  I strongly oppose any requirement that would make ESR+2 (ESR31)
  not build on the current Debian stable (gcc 4.7) and make ESR+1
  (ESR24) not build on the old Debian stable (gcc 4.4). We're not
  going to change the requirements for the latter. And b2g still
  requires gcc 4.4 (with C++11 support) anyway, until they switch to
  the same toolchain as Android, which is 4.7.
 
 
 Why are you so opposed? I feel like I can give a lot of good reasons
 why such constraints are a net loss for us, but I am not sure what is
 driving the imposition of such constraints on us.

Because Mozilla is not the only entity that builds and distributes
Gecko-derived products, including Firefox, and we can't demand that
everyone use the latest shiny compiler.

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: mozilla/StandardInteger.h is now dead, use stdint.h

2013-07-31 Thread Philip Chee
On 31/07/2013 00:35, Ehsan Akhgari wrote:
 bug 872127

I pushed a comm-central bustage fix:
https://hg.mozilla.org/comm-central/rev/e4c4ff49ed66

Phil

-- 
Philip Chee phi...@aleytys.pc.my, philip.c...@gmail.com
http://flashblock.mozdev.org/ http://xsidebar.mozdev.org
Guard us from the she-wolf and the wolf, and guard us from the thief,
oh Night, and so be good for us to pass.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rethinking separate Mercurial repositories

2013-07-31 Thread Ben Hearsum
On 07/31/13 08:43 AM, Marco Bonardo wrote:
 On 31/07/2013 13:14, Ben Hearsum wrote:
 On 07/31/13 05:54 AM, Marco Bonardo wrote:
 Now, maybe I'm wrong, but IIRC this is what we had before the rapid
 release, and we switched away from that cause:

 Releases have always had their own repositories (see
 https://hg.mozilla.org/releases/mozilla-1.9.1 and similar). We used to
 have named branches in mozilla-central for specific Betas, but they were
 quite short lived. Maybe that's what you're thinking of?

 
 IIRC we also used to add named branches before spawning the separate
 repositories, and periodically someone was taking care of trimming old
 named branches, cause there were too many and it was becoming confusing
 for everyone to find the right ones.
 I recall that multi-branch approach being quite a bad experience, at
 least for me.

These were named branches for every beta (eg 3.0b1). We still have
those, actually, but only in mozilla-beta, mozilla-release and ESR
repositories.

I can definitely see how this would be confusing/painful for developers.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rethinking separate Mercurial repositories

2013-07-31 Thread Chris Peterson

On 7/31/13 2:54 AM, Marco Bonardo wrote:

- handling queue of patches for different branches is a nightmare, I
often have patches in queues for aurora, beta and central at the same time


Wouldn't switching branches in the same repo clone touch many files and
trigger unfortunate clobber builds? Even with ccache and separate
per-branch objdirs, this seems like a problem.


chris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rethinking separate Mercurial repositories

2013-07-31 Thread Ehsan Akhgari

On 2013-07-29 4:07 PM, Gregory Szorc wrote:

On 7/29/13 12:49 PM, Ehsan Akhgari wrote:

On 2013-07-29 2:06 PM, Benjamin Smedberg wrote:

Given all the things that we could be doing instead, why is this
important to do now?


I share Benjamin's concern.


Legit concern. Probably low priority. I wanted to have a discussion on
it because I suspect it will be an issue down the road. e.g. it sounds
like things in Git land [1] may force the issue.


Not sure what that has to do with the Mercurial discussion.  The reason 
why I did that and why RelEng is doing this is that this is the Git Way 
of doing things.  That is not the case with Mercurial.



Also, before we can discuss this, we need to make sure that every
Mercurial command handles bookmarks sanely.  Last I checked, things such
as hg push did not do that (IIRC push just pushes everything on the
named branch you're on by default.)


If you are referring to applied mq patches, if you use [mq] secret=True
(recommended but not the default in Mercurial due to backwards
compatibility), this will set the phase of applied mq patches to
secret (as opposed to draft) which will prevent them from being
pushed. This will muck with try pushes and you'll need an extension to
work around this limitation - something I've been meaning to add to my
new Mercurial extension!
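For reference, that setting is just an hgrc entry (the mq extension
itself must already be enabled):

    [mq]
    secret = True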

I do concede push does have some additional wacky behavior, but it's
mostly around creating new bookmarks/branches/heads. Things also get
much weirder when you start pulling from multiple repos locally, as
Mercurial will try to push all non-remote changesets unless a specific
revision is specified. I created a pushtree command [2] in my custom
Mercurial extension to make this more intuitive. But the latter isn't a
concern if the local clone mirrors the single remote.


My main concern was that somebody should go and investigate that the 
right thing happens for all commands if the hg user has never used 
bookmarks before (which are a rather recent addition to hg.)


Ehsan

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rethinking separate Mercurial repositories

2013-07-31 Thread Ehsan Akhgari

On 2013-07-31 11:49 AM, Chris Peterson wrote:

On 7/31/13 2:54 AM, Marco Bonardo wrote:

- handling queue of patches for different branches is a nightmare, I
often have patches in queues for aurora, beta and central at the same
time


Wouldn't switching branches in the same repo clone touch many files and
trigger unfortunate clobber builds? Even with ccache and separate
per-branch objdirs, this seems like a problem.


Yes.

Ehsan

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: mozilla/StandardInteger.h is now dead, use stdint.h

2013-07-31 Thread Ehsan Akhgari

On 2013-07-31 8:47 AM, Philip Chee wrote:

On 31/07/2013 00:35, Ehsan Akhgari wrote:

bug 872127


I pushed a comm-central bustage fix:
https://hg.mozilla.org/comm-central/rev/e4c4ff49ed66


Thank you!  Sorry that I missed c-c.

Cheers,
Ehsan

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to Ship: Web Audio

2013-07-31 Thread Anne van Kesteren
On Tue, Jul 30, 2013 at 6:26 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
 Please let me know if you have any questions.

I'm not at all comfortable with adding data races to the platform. Or
did we solve them in some manner?


-- 
http://annevankesteren.nl/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-07-31 Thread Ehsan Akhgari

On 2013-07-31 1:41 PM, Joshua Cranmer  wrote:

With all of that stated, the questions I want to pose to the community
at large are as follows:
1. How much, and where, should we be using standard C++ library
functionality in Mozilla code?


I'm not sure if it's easy to have this discussion in general without 
talking about a specific standard library feature.



2. To what degree should our custom ADTs (like nsTArray) be
interoperable with the C++ standard library?


I think some people would like to make the code more understandable to 
newer contributors, and some people would prefer to keep existing 
convention intact.



3. How should we handle bridge support for standardized features not yet
universally-implemented?


I think MFBT has been working fine so far.


4. When should we prefer our own implementations to standard library
implementations?


Usually doing our own implementation is faster, but there is a 
significant lag from contributing something upstream (if that's even 
possible) until that gets released in a toolchain that people use.  So 
perhaps we should do both in parallel when there is the option.



5. To what degree should our platform-bridging libraries
(xpcom/mfbt/necko/nspr) use or align with the C++ standard library?


This is also hard to talk about without having a concrete thing under 
consideration.



6. Where support for an API we wish to use is not universal, what is the
preferred way to mock that support?


MFBT, I believe.

Cheers,
Ehsan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-07-31 Thread Justin Lebar
 1. How much, and where, should we be using standard C++ library
 functionality in Mozilla code?

We've tuned tarray, nsthashtable, strings, etc. to meet our precise
needs, and the implementations are consistent across all platforms.
I can imagine things becoming quite messy if we had three or four
different implementations of these classes (in the different stdlibs),
each with their own quirks, and if we couldn't change the
implementations to meet our needs.

I definitely think that some of our APIs could use some love, but it
seems unlikely to me that replacing a complex class like nsTArray with
std::vector would be a net win for us, as compared to simply improving
nsTArray's interface.  Even performing this experiment would be an
expensive endeavor.

By way of contrast, I think it's great that we're using simple
classes, functions, and types from stdlib, such as static_assert and
stdint.  The downside here is much smaller, since we don't have to
worry about the quirks of the different implementations, and since
there's nothing we might want to tune.
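For concreteness, the kind of low-risk stdlib use meant here is just
something like:

    #include <stdint.h>

    // Fixed-width types plus a compile-time check: nothing here depends
    // on the quirks of any particular standard library implementation.
    static_assert(sizeof(int64_t) == 8, "int64_t must be 8 bytes");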

-Justin

On Wed, Jul 31, 2013 at 12:08 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
 On 2013-07-31 1:41 PM, Joshua Cranmer  wrote:

 With all of that stated, the questions I want to pose to the community
 at large are as follows:
 1. How much, and where, should we be using standard C++ library
 functionality in Mozilla code?


 I'm not sure if it's easy to have this discussion in general without talking
 about a specific standard library feature.


 2. To what degree should our custom ADTs (like nsTArray) be
 interoperable with the C++ standard library?


 I think some people would like to make the code more understandable to newer
 contributors, and some people would prefer to keep existing convention
 intact.


 3. How should we handle bridge support for standardized features not yet
 universally-implemented?


 I think MFBT has been working fine so far.


 4. When should we prefer our own implementations to standard library
 implementations?


 Usually doing our own implementation is faster, but there is a significant
 lag from contributing something upstream (if that's even possible) until
 that gets released in a toolchain that people use.  So perhaps we should do
 both in parallel when there is the option.


 5. To what degree should our platform-bridging libraries
 (xpcom/mfbt/necko/nspr) use or align with the C++ standard library?


 This is also hard to talk about without having a concrete thing under
 consideration.


 6. Where support for an API we wish to use is not universal, what is the
 preferred way to mock that support?


 MFBT, I believe.

 Cheers,
 Ehsan

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-07-31 Thread Joshua Cranmer 

On 7/31/2013 2:08 PM, Ehsan Akhgari wrote:

On 2013-07-31 1:41 PM, Joshua Cranmer  wrote:

With all of that stated, the questions I want to pose to the community
at large are as follows:
1. How much, and where, should we be using standard C++ library
functionality in Mozilla code?


I'm not sure if it's easy to have this discussion in general without 
talking about a specific standard library feature.


I'm most particularly referring to nsTArray, nsTHashtable and friends 
from the STL, as well as ns*String as Mozilla ADTs. In terms of all of 
the other code, I'm mostly referring to the large list of new APIs I 
provided in C++11 and C++14 as things we might want to use--std::chrono, 
std::thread and all of its friends, std::unique_ptr, std::function, 
std::tuple, and std::optional are the ones that look the most useful 
(std::function in particular could be superior to function pointers in 
our crappy EnumerateForwards-like methods). std::string_view, and the 
Filesystem and Networking draft TSs are also APIs (not yet finalized to 
any degree) that might influence potential changes we could make to our 
current non-IDL API regime.
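A rough sketch of the std::function point (the enumeration helpers here
are made up for comparison, not the real Gecko signatures): a callback
taking std::function can be a capturing lambda, whereas a raw function
pointer forces the caller to thread state through a separate void*.

    #include <functional>
    #include <vector>

    // Hypothetical enumeration helpers, for comparison only.

    // Function-pointer style: extra state has to travel through a void*.
    void EnumerateWithPointer(const std::vector<int>& aItems,
                              bool (*aCallback)(int, void*), void* aClosure)
    {
      for (int item : aItems) {
        if (!aCallback(item, aClosure)) {
          return;
        }
      }
    }

    // std::function style: the callback can be a capturing lambda.
    void EnumerateWithFunction(const std::vector<int>& aItems,
                               const std::function<bool(int)>& aCallback)
    {
      for (int item : aItems) {
        if (!aCallback(item)) {
          return;
        }
      }
    }

    int SumWithLambda(const std::vector<int>& aItems)
    {
      int sum = 0;
      EnumerateWithFunction(aItems, [&sum](int aItem) {
        sum += aItem;   // captured state, no void* plumbing needed
        return true;    // keep enumerating
      });
      return sum;
    }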



3. How should we handle bridge support for standardized features not yet
universally-implemented?


I think MFBT has been working fine so far.


I should be more clear: I'm talking about library features, like type 
traits, that aren't available in the complete selection we have 
available. Since we generally haven't been working around incomplete 
library features (with the exception of std::atomic which is... touchy 
to say the least), we don't really have a good example of what an 
intermediate stage looks like. Basically, should we:
a) Implement mozilla::Duration to polyfill std::chrono::duration until 
it is available everywhere, then mass switch.
b) Implement mozilla::duration/mozilla::chrono::duration until available 
everywhere, then mass switch.
c) Implement mozilla::std::chrono::duration until available everywhere, 
then mass switch.
d) Implement std::chrono::duration until available everywhere, then 
delete our polyfill header.
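For concreteness, a (greatly simplified, hypothetical) sketch of what
the option d) header could look like, assuming a MOZ_HAVE_STD_CHRONO
configure-time check existed; the real std::chrono::duration also takes
a Period ratio parameter, which is ignored here:

    // ChronoPolyfill.h -- hypothetical sketch of option d)
    #if defined(MOZ_HAVE_STD_CHRONO)
    #  include <chrono>  // the real thing; eventually the only branch left
    #else
    namespace std {
    namespace chrono {
      // Minimal stand-in until every supported stdlib ships <chrono>.
      // (Injecting names into namespace std like this is technically
      //  undefined behavior, which is part of why options a)-c) exist.)
      template<typename Rep>
      class duration
      {
      public:
        explicit duration(Rep aCount) : mCount(aCount) {}
        Rep count() const { return mCount; }
      private:
        Rep mCount;
      };
    } // namespace chrono
    } // namespace std
    #endif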


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-07-31 Thread Joshua Cranmer 

On 7/31/2013 2:38 PM, Justin Lebar wrote:

1. How much, and where, should we be using standard C++ library
functionality in Mozilla code?

We've tuned tarray, nsthashtable, strings, etc. to meet our precise
needs, and the implementations are consistent across all platforms.
I can imagine things becoming quite messy if we had three or four
different implementations of these classes (in the different stdlibs),
each with their own quirks, and if we couldn't change the
implementations to meet our needs.


For what it's worth, I don't think we can tenably replace nsTArray, 
nsTHashtable, or ns*String with std:: counterparts across the entire 
tree and expect any kind of performance enhancement. We also have needs 
like sizeOfIncludingThis/sizeOfExcludingThis that can't be as easily 
satisfied with STL code. Replacing const nsA[C]String& with a 
std::string_view-esque class might provide a small performance 
enhancement, but it would need some experimentation and boils the ocean. 
It is possible to make our APIs match std::vector in usage, such that 
nsTArray and std::vector could be used fairly interchangeably, and that 
strikes me as a possibly worthwhile goal.
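For readers who haven't seen them, those memory-reporting hooks follow
roughly this pattern (simplified; the real nsTArray methods and the
mozilla::MallocSizeOf typedef differ in detail):

    #include <cstddef>

    // Simplified sketch of the memory-reporting hooks being discussed.
    // Gecko passes a mozilla::MallocSizeOf function pointer, typically
    // backed by a malloc_usable_size-style allocator query.
    typedef size_t (*MallocSizeOf)(const void* aPtr);

    template<typename T>
    class ExampleArray
    {
    public:
      // Heap memory owned by this object, not counting the object itself.
      size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
      {
        return mElements ? aMallocSizeOf(mElements) : 0;
      }

      // As above, plus the object itself (for heap-allocated arrays).
      size_t SizeOfIncludingThis(MallocSizeOf aMallocSizeOf) const
      {
        return aMallocSizeOf(this) + SizeOfExcludingThis(aMallocSizeOf);
      }

    private:
      T* mElements = nullptr;
      size_t mLength = 0;
    };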


On the other hand, we have an increasing number of semi-autonomous C++ 
side libraries which are already using the STL. Since I don't do 
development outside of the XPCOM-riddled hallways of Mozilla code, I 
don't know to what degree the incongruous APIs cause friction at these 
margins, but I would find it hard to believe that no friction is 
being generated.


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rethinking separate Mercurial repositories

2013-07-31 Thread Mike Hommey
On Wed, Jul 31, 2013 at 10:28:38AM -0700, Justin Lebar wrote:
  Wouldn't switching branches in the same repo clone touch many files
  and trigger unfortunate clobber builds? Even with ccache and
  separate per-branch objdirs, this seems like a problem.
 
  Yes.
 
 Nothing about this proposal forces you to have only one clone and
 switch back and forth between aurora and central, thus clobbering your
 objdir.  You could still have two trees, one used for aurora, and one
 used for central, if you prefer that.

Sadly, mercurial doesn't support having multiple working directories
from a single clone, which would be useful to avoid wasting so much disk
space on .hg. But git does ;) (albeit, with a script in contrib/)
I guess someone could patch mercurial to support such setups, which,
independently of what we do on the servers, could be useful on the
clients (pull different branches in the same local clone, use different
working directories for each).

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rethinking separate Mercurial repositories

2013-07-31 Thread Justin Lebar
 Sadly, mercurial doesn't support having multiple working directories
 from a single clone, which would be useful to avoid wasting so much disk
 space on .hg.

I'm not usually one to defend hg, but hg does have the |relink|
command, which gets you most of the way there in terms of saving disk
space.
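For anyone who hasn't tried it: relink is a bundled extension that has
to be enabled, and is then pointed at the clone whose store you want to
share disk with, roughly:

    # one-time, in ~/.hgrc:
    #   [extensions]
    #   relink =

    cd mozilla-aurora
    hg relink ../mozilla-central   # recreate hardlinks against central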

On Wed, Jul 31, 2013 at 5:57 PM, Mike Hommey m...@glandium.org wrote:
 On Wed, Jul 31, 2013 at 10:28:38AM -0700, Justin Lebar wrote:
  Wouldn't switching branches in the same repo clone touch many files
  and trigger unfortunate clobber builds? Even with ccache and
  separate per-branch objdirs, this seems like a problem.
 
  Yes.

 Nothing about this proposal forces you to have only one clone and
 switch back and forth between aurora and central, thus clobbering your
 objdir.  You could still have two trees, one used for aurora, and one
 used for central, if you prefer that.

 Sadly, mercurial doesn't support having multiple working directories
 from a single clone, which would be useful to avoid wasting so much disk
 space on .hg. But git does ;) (albeit, with a script in contrib/)
 I guess someone could patch mercurial to support such setups, which,
 independently of what we do on the servers, could be useful on the
 clients (pull different branches in the same local clone, use different
 working directories for each).

 Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rethinking separate Mercurial repositories

2013-07-31 Thread Gregory Szorc

On 7/31/13 5:59 PM, Justin Lebar wrote:

Sadly, mercurial doesn't support having multiple working directories
from a single clone, which would be useful to avoid wasting so much disk
space on .hg.


I'm not usually one to defend hg, but hg does have the |relink|
command, which gets you most of the way there in terms of saving disk
space.


There's also the share extension [1]. Although, it doesn't play well 
with mq and other history editing commands. Works great for automated 
environments, however.


[1] http://mercurial.selenic.com/wiki/ShareExtension
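For completeness, using it looks roughly like this (share also ships
with Mercurial but is disabled by default):

    # ~/.hgrc
    [extensions]
    share =

    # create a second working directory backed by mozilla-central's store
    hg share mozilla-central mozilla-aurora-workdir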


On Wed, Jul 31, 2013 at 5:57 PM, Mike Hommey m...@glandium.org wrote:

On Wed, Jul 31, 2013 at 10:28:38AM -0700, Justin Lebar wrote:

Wouldn't switching branches in the same repo clone touch many files
and trigger unfortunate clobber builds? Even with ccache and
separate per-branch objdirs, this seems like a problem.


Yes.


Nothing about this proposal forces you to have only one clone and
switch back and forth between aurora and central, thus clobbering your
objdir.  You could still have two trees, one used for aurora, and one
used for central, if you prefer that.


Sadly, mercurial doesn't support having multiple working directories
from a single clone, which would be useful to avoid wasting so much disk
space on .hg. But git does ;) (albeit, with a script in contrib/)
I guess someone could patch mercurial to support such setups, which,
independently of what we do on the servers, could be useful on the
clients (pull different branches in the same local clone, use different
working directories for each).

Mike

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-07-31 Thread Joshua Cranmer 

On 7/31/2013 9:19 PM, Mike Hommey wrote:

On Wed, Jul 31, 2013 at 12:41:12PM -0500, Joshua Cranmer wrote:

Thoughts/comments/corrections/questions/concerns/flames/insightful
discussion?

My feeling is that, while these are interesting questions, they are one
step ahead. I think we should step back and start by defining what we
want to achieve.

I think the end goal should be for our code to be more idiomatic, and
less boilerplate-y. Does that mean we should use more STL? maybe, but
I'm not convinced it's the main concern.


Probably most of our boilerplate issues come from the stilted nature of 
XPIDL and XPCOM; deCOMtamination would solve a lot of issues. In some 
cases, it may be desirable to add more C++-y APIs to things largely 
dominated by XPIDL (networking code is the prime example here); other 
places already have relatively tolerable C++ APIs ready to use 
(mozilla::Preferences, say). XPIDL is presently the only easy way to get 
both C++ and JS bindings to code, but the internals of how it works 
mean that the resulting APIs (for C++ in particular) are far from 
natural code, due in part to the need to have predictable ABIs.
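To make the relatively tolerable C++ API point concrete, compare
reading a boolean pref through the XPIDL interface with the static
Preferences wrapper (signatures recalled from memory and the pref name
is made up, so treat the details as approximate):

    #include "mozilla/Preferences.h"
    #include "nsCOMPtr.h"
    #include "nsIPrefBranch.h"
    #include "nsIPrefService.h"
    #include "nsServiceManagerUtils.h"

    bool GetPrefTheXPCOMWay()
    {
      bool value = false;
      nsCOMPtr<nsIPrefBranch> prefs =
        do_GetService(NS_PREFSERVICE_CONTRACTID);
      if (prefs) {
        prefs->GetBoolPref("browser.example.enabled", &value);
      }
      return value;
    }

    bool GetPrefTheCppWay()
    {
      // mozilla::Preferences wraps the same service behind a static call.
      return mozilla::Preferences::GetBool("browser.example.enabled", false);
    }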



I was recently mind-blown by the work the libreoffice people have been
doing to refactor their code, particularly by page 16 in
https://archive.fosdem.org/2013/schedule/event/challenges_libreoffice/attachments/slides/300/export/events/attachments/challenges_libreoffice/slides/300/2013_02_03_re_factoring.pdf


Doing massive refactorings in our code is certainly possible; we just 
need to actually commit ourselves to doing those refactorings. I have a 
minor goal I'm working towards of removing most uses of NSPR from Gecko...



Now think of all those NS_LITERAL_STRING() and other horrible
boilerplate we have.

... and my next target is s/PRUnichar/char16_t/, the last step of which 
basically amounts to killing NS_LITERAL_STRING. :-)
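A rough illustration of why that follows (the "after" half is
hypothetical; today the macro exists largely to paper over platforms
where a 16-bit string literal has to be spelled as L"..." or built some
other way):

    #include "nsString.h"

    void Example()
    {
      // Today: the macro hides how a 16-bit string literal is spelled on
      // each platform.
      nsString a;
      a.Assign(NS_LITERAL_STRING("hello"));

      // Once PRUnichar is char16_t everywhere, a plain u"" literal already
      // has the right character type on every platform, so the macro (and
      // much of this boilerplate) can go away:
      nsString b;
      b.Assign(u"hello");
    }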


--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standard C/C++ and Mozilla

2013-07-31 Thread Nicholas Nethercote
On Wed, Jul 31, 2013 at 3:22 PM, Joshua Cranmer  pidgeo...@gmail.com wrote:
 We also have needs like
 sizeOfIncludingThis/sizeOfExcludingThis that can't be as easily satisfied
 with STL code.

This is, unsurprisingly, a requirement that's close to my heart.  We
actually have a few instances of std:: classes already, which leads to
ridiculousness like
https://bugzilla.mozilla.org/show_bug.cgi?id=806514.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform