The next step is to s/nsnull/nullptr/ in the codebase, and get rid of nsnull.
Forgive my ignorance, but how does this affect NULL? Would that be
deprecated in favor of nullptr as well? Should we use nsnull instead
of NULL in new code, in anticipation of the nsnull -> nullptr switch?
-Justin
On Tue, Aug 14, 2012 at 3:14 PM, Ed Morley
bmo.takethis...@edmorley.co.uk wrote:
On Thursday, 9 August 2012 15:35:28 UTC+1, Justin Lebar wrote:
Is there a plan to mitigate the coalescing on m-i? It seems like that
is a big part of the problem.
Reducing the amount of coalescing permitted would just mean we end up with
a backlog of pending tests on the repo tip - which would result in tree
bholley and I have a script for doing this in git. With thanks to
glandium for telling us how to do it:
0. Fetch the prtypes change, and merge it into your local master branch.
1. Let your git checkout be directory |src|.
2. Save the script at the end of this message as src/../convert.sh.
3.
I landed bug 780970 today, which adds [infallible], a new XPIDL attribute.
You can only use [infallible] within interfaces marked as
[builtinclass] (which means the interface may not be implemented in
JS), and [infallible] is only applicable at the moment on XPIDL
attributes which return
nsCOMPtr<nsIFoo> foo;
int32_t f = foo->GetFoo();
Why was I expecting this to be Foo()? (Perhaps unreasonably.)
Yeah, it should be Foo().
File a bug?
I considered Foo(), but my concern was that, when we extend this to
attributes which return interfaces (e.g. nsIFoo), then Foo() versus
On Thu, Aug 30, 2012 at 3:34 AM, Mike Hommey m...@glandium.org wrote:
Ideally, we should make talos regressions visible on tbpl as oranges,
and star them as other oranges.
FWIW, making this possible is an explicit goal of the SfN effort.
-Justin
* This repo does not have an inbound branch like my mirror did, so if you
want a commit which is on mozilla-inbound but not on mozilla-central yet, I
guess you should wait until it gets merged to mozilla-central.
Although that repository doesn't have an inbound branch, there is a
separate
We've been adding a lot of new code lately, particularly as part of
B2G. But we have not been adding high-level in-source documentation
along with that code.
The result is that it's becoming increasingly difficult to find one's
way around our code.
To be clear, my beef here isn't so much with a
1) a publishing API and 2) read-only pages. Then, we need some middleware that
synchronizes the source tree to MDN. Of course, we should probably have
discussion on whether this is a good idea first.
On 9/14/12 2:56 PM, Justin Lebar wrote:
We've been adding a lot of new code lately
(Can you hear that thud, thud, thud? It's the sound of me beating my head
against my desk.)
One of the intriguing things about this benchmark is that it's open
source, and they're committed to changing it over time.
FWIW Paul Irish agrees the sieve is a bad test, although he doesn't
hate it to
with the PM that headed that effort up, do you all want to get some code
committed to help our numbers out?
- Daniel
On Tue, Sep 25, 2012 at 6:51 AM, Justin Lebar justin.le...@gmail.com
wrote:
(Can you hear that thud, thud, thud? It's the sound of me beating my
head
against my desk.)
One
For case 1., an idea that has been floated here and again (in Automation and
Tools and Release Engineering, anyway) is landing directly from try -
inbound (or central) for green try pushes. However, this isn't a small
endeavor, both for the reasons of building the infra + software to do this
For example, suppose
* we land W, X, Y, and Z all in a row within 10 minutes or so,
* csets W and X have try runs and csets Y and Z do not, and
* we have capacity for two builds
then I'd rather get builds for [WX and YZ] than for [W and XYZ].
Erm, I mean that I'd rather get [WXY, Z],
1) Build errors are hard to identify with make. Parallel execution can make
them even harder to track down. Poor output from invoked processes is also a
problem.
I have a script [1] which works well enough for my purposes in the
normal Mozilla build (I haven't tried it with mach). It
njn didn't want to call me out as the culprit here, but I'm happy to
own up to it. :)
Pushed to inbound is an important status to have indicated in the bug,
I don't feel like it's /always/ important.
On a bug that njn and I are the only ones watching and which gets
landed on m-i over the
What other concerns are there?
It took me a not insubstantial amount of effort to develop expertise
with our baroque and only half-documented make commands, and while I'm
happy to believe that the new ones are better, that still doesn't make
the switch simple.
If we want to deprecate the make
I suspect having the inbound changeset is useful for someone doing regression
hunting (ie, looking between merges)?
It's the same hash on inbound and central, so I don't see why this
would matter. For example,
http://hg.mozilla.org/mozilla-central/rev/8ebfc639c69f
By turning off Linux PGO testing, you really mean stop making and
distributing Linux PGO builds, right?
The main reason I'd want Linux PGO is for mobile. On desktop Linux,
most users (I expect) don't run our builds, so it's not a big deal if
they're some percent slower. (Unless distros commonly
2. Linux is the foundation of B2G and Firefox for Android, where we
*definitely* must deliver
the fastest product we can
I totally agree, but it's not clear to me whether continuing to do PGO
on desktop Linux has any effect on our ability to potentially do PGO
on Android/B2G. If we were to
Dear dev.platform,
If you'd like to help with B2G but don't know how, one way you could
be extremely helpful is to help us land our patches on Aurora.
(We're double-landing because B2G v1 will be built off FF18, which is
currently in Aurora.)
Fabrice built a page which will show you which bugs
If your patch falls in a range which
causes more than 4% Ts regression, it will be backed out by our sheriffs
together with the rest of the patches in that range, and you can only
reland after you fix the regression by testing locally or on the try
server.
Our tools for comparing talos
If we really wanted to know, either someone would have to spend some time
doing this over and over, or we'd have to use Telemetry with some A/B testing.
This would actually be a pretty easy thing to do, to a first
approximation anyway. Just turn off PGO on Windows for one nightly
build and see
Can you please file a bug (with STR, if you understand them) here? (I
presume this user is opening about:memory from Thunderbird?)
-Justin
On Mon, Oct 22, 2012 at 9:29 AM, Philipp Kewisch mozi...@kewis.ch wrote:
This is probably rather something worth reporting in dev.platform. That
memory
the alternative.
On Thu, Oct 25, 2012 at 11:49 AM, Henrik Skupin hsku...@gmail.com wrote:
Justin Lebar wrote on 10/25/12 10:06 AM:
I'd probably be a lot more sympathetic to this proposal if I
understood in a concrete way how making my life a little harder here
would make your life a little easier
Not a concern, but the obvious question is: Do you have any idea how
this affects compile times?
On Mon, Oct 29, 2012 at 7:44 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
I'd like to switch our coding style to use #pragma once instead of #include
guards. #pragma once is supported on all
On Tue, Oct 30, 2012 at 4:31 PM, Dave Townsend dtowns...@mozilla.com wrote:
This plan really worries me. [...] I'm worried that we'll just break the
platform b2g runs on in subtle ways that we might not notice before shipping.
It's pretty concerning to have a fundamental change to how
On Mon, Dec 3, 2012 at 5:39 PM, Norbert Lindenberg
mozillali...@lindenbergsoftware.com wrote:
Well, the first question is what size increase would be acceptable given the
benefits that ICU provides.
I have currently trimmed it to 9.7 MB for the data library and 3.1 MB for two
code
Is code like this safe in the C++11 Unordered model?
Thread 1:
int x = obj->v;
obj->Release();
Thread 2:
obj->Release();
where obj's destructor trashes obj->v.
The potential hazard is if thread 1's obj->Release() atomic decrement is
reordered to run before the obj->v load has completed,
Can you make NS_ENSURE_TRUE(foo, /* */) and NS_ENSURE_TRUE(foo,) errors?
Maybe there's a better way to do this, but I think we could do
template<class T>
MOZ_ALWAYS_INLINE void EnsureIsLvalue(T& t) {}
and then make NS_ENSURE_TRUE call EnsureIsLvalue on the second arg.
I'd want to check that this
Aren't anonymous mmap'ed pages automatically zeroed for you? (It has
to be this way for security reasons.) So I'd guess you could just
make an anonymous mmap (or the Windows equivalent) and you'd get what
you want.
Of course, I imagine your goal is not to pull in pages in RAM for
these zero
, 2013 at 11:17 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
On 2013-01-18 11:03 AM, Justin Lebar wrote:
Fri, Jan 18, 2013 at 10:35 AM, Ehsan Akhgari ehsan.akhg...@gmail.com
wrote:
On Fri, Jan 18, 2013 at 5:39 AM, L. David Baron dba...@dbaron.org
wrote:
So given
We've had bug 764220 open on the CPG memory regression for six months
now. Although it's a MemShrink:P1, it hasn't gotten the attention it
deserves.
FWIW, I think part of this stems from the fact that the MemShrink team
is often ineffective at getting others to fix bugs that we think are a
To save everyone having to look at the graph - the initial landing showed a
consistent 20% regression in trace malloc maxheap. If this were a 1-5%
regression, then I think it would be worth discussing the trade-off. At 20%,
I really don't see how we can take this, sorry! :-(
I hope it's not
, and I'm not convinced that's a safe
thing to do in general.
But this is a better discussion to have in the context of DDD than in
the context of this bug.
-Justin
On Wed, Feb 13, 2013 at 9:13 AM, Benjamin Smedberg
benja...@smedbergs.us wrote:
On 2/13/2013 3:12 AM, Justin Lebar wrote:
Don't we need to update our servers first due to phases? But either way,
yes, this should be a bigger priority.
The client bug that's fixed with the new version of hg is slowly and
irreversibly ruining our blame, so I don't think we should wait before
upgrading clients.
Based on experience,
Follow along at home at
https://bugzilla.mozilla.org/show_bug.cgi?id=843081
-Justin
On Thu, Feb 21, 2013 at 6:36 AM, Gervase Markham g...@mozilla.org wrote:
On 20/02/13 16:06, Justin Lebar wrote:
The client bug that's fixed with the new version of hg is slowly and
irreversibly ruining our blame, so
It sounds to me like people want both
1) Easier access to aggregated data so they can build their own
dashboards roughly comparable in features to the current dashboards.
2) Easier access to raw databases so that people can build up more
complex analyses, either by exporting the raw data from
1) something checked into mc anyone can easily author or run (for tracking
down regressions) without having to checkout a separate repo, or setup and
run a custom perf test framework.
I don't oppose the gist of what you're suggesting here, but please
keep in mind that small perf changes are
1) Preferences and all.js. We currently define most of the default
preferences in /modules/libpref/src/init/all.js. There are things in there
related to the browser, Necko, gfx, dom, etc. Pretty much the kitchen sink.
2) Telemetry histograms. They are all defined in
hg-git (the tool we use to synchronize Mercurial and Git repos) supports
subrepos. Although, I'm not sure how well it works.
Well, we should definitely figure this out before we move forward with
this plan.
If the hg support for git repos is decent, that might be a better way
to go, since then
https://bugzilla.mozilla.org/show_bug.cgi?id=699670
On Tue, Apr 2, 2013 at 11:58 AM, Patrick McManus pmcma...@mozilla.com wrote:
Today I noticed some (relatively) new CDF plots of telemetry histogram
data on metrics.mozilla.com. Maybe in the last week or so?
This makes it much easier to
In general you'll have much more success running these benchmarks on
tryserver rather than trying to run them locally. Even if you got the
test working, there's no guarantee that your local benchmark results
will have any bearing on the benchmark results on our servers. (In
particular, the
If anything this should improve the experience of bisecting, because
you'll be able to bisect known-good csets on m-c and only at the end
step in to the merge csets which may or may not be good.
Right now we say that when people push a patch queue to m-c every
patch should be green, but in
AIUI, on Windows the smallest block you can ask for with VirtualAlloc
is 4 KiB. However, no more than one VirtualAlloc block can exist per
64 KiB chunk. So if you ask for 4 KiB you'll end up wasting the
remaining 60 KiB of address space in the 64 KiB chunk.
Awesome memory, Nick.
MSDN seems
I see, so the hypothesis is that 100% of the fragmentation is coming from
VirtualAlloc/MapViewOfFile, not from allocations in general?
jemalloc does not make 4kb allocations, I think ever. So yes.
On Tue, Apr 9, 2013 at 9:23 AM, Kevin Gadd kevin.g...@gmail.com wrote:
I see, so the hypothesis
Right now the status and tracking flags for a version get hidden when
that version becomes old. If we switched away from using
target-milestone, we'd need to prevent this from happening.
On Wed, Apr 10, 2013 at 4:53 PM, Alex Keybl ake...@mozilla.com wrote:
* The need for a particular team to
I think the possibility of deleting user data should be taken
seriously. Exactly who is doing the deletion (configure vs. make) is
immaterial. It's also not right to argue that since a majority of
users don't expect to lose data, it's OK to silently delete data for a
minority of them.
I think
I think we should consider using much less JS in the parts of Gecko that are
used in B2G. I'd like us to consider writing new modules in C++ where
possible, and I'd like us to consider rewriting existing modules in C++.
I'm only proposing a change for modules which are enabled for B2G. For
Of course attachments don't work great on newsgroups. I've uploaded
the about:memory dumps I tried to attach to people.m.o:
http://people.mozilla.org/~jlebar/downloads/merged.json.xz
http://people.mozilla.org/~jlebar/downloads/unmerged.json.xz
On Sun, Apr 21, 2013 at 7:51 PM, Justin Lebar
from Mobile.
On Apr 22, 2013, at 7:05, Justin Lebar justin.le...@gmail.com wrote:
Of course attachments don't work great on newsgroups. I've uploaded
the about:memory dumps I tried to attach to people.m.o:
http://people.mozilla.org/~jlebar/downloads/merged.json.xz
http://people.mozilla.org
need to improve.
On Mon, Apr 22, 2013 at 10:31 AM, Mike Hommey m...@glandium.org wrote:
On Sun, Apr 21, 2013 at 07:51:18PM -0400, Justin Lebar wrote:
I think we should consider using much less JS in the parts of Gecko that are
used in B2G. I'd like us to consider writing new modules in C
does not solve
all of our problems, only the single largest.
On Mon, Apr 22, 2013 at 11:05 AM, Mike Hommey m...@glandium.org wrote:
On Mon, Apr 22, 2013 at 10:53:40AM -0400, Justin Lebar wrote:
How about pre-compiling JS in JITed form?
While significant, it seems that memory used for script
all can agree on doing that much, I'd be happy.
On Mon, Apr 22, 2013 at 10:40 AM, Boris Zbarsky bzbar...@mit.edu wrote:
On 4/21/13 7:51 PM, Justin Lebar wrote:
Since most of these features implemented in JS seem to be DOM features,
I'm
particularly interested in the opinions of the DOM folks
how many other high-priority projects
you have.
On Mon, Apr 22, 2013 at 1:36 PM, Terrence Cole tc...@mozilla.com wrote:
On 04/21/2013 04:51 PM, Justin Lebar wrote:
I think we should consider using much less JS in the parts of Gecko that are
used in B2G. I'd like us to consider writing new
There are a few things we're working on in SpiderMonkey that should improve
this situation quite a bit:
Thanks, but I again need to emphasize that these are large, long-term
plans. Terrence tells me that GGC is planned for sometime this
year. Lazy bytecode generation has been on the roadmap
This is all great stuff, but as mentioned elsewhere, B2G branched at
version 18 and so they need improvements that that can land quickly on
the relevant branches.
Well, to be clear, it would be great if we could land some
improvements for v1.1 (which is based off version 18), but we're
locking
The ratio of things landed on inbound which turn out to busted is really
worrying
* 116 of the 916 changesets (12.66%) were backed out
If 13% is really worrying, what do you think our goal should be?
On Tue, Apr 23, 2013 at 12:39 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
This was a
To close the loop on this thread, the consensus here seems to be that
1. We should continue to make JS slimmer. This is a high priority for
B2G even setting aside the memory consumption of B2G's chrome JS,
since of course B2G runs plenty of content JS.
The memory profile of B2G is different
One thing I love about the MoCo meetings is that if I don't go, I
don't miss anything except the chance to ask questions: mbrubeck & co
create detailed minutes (really, transcripts) of every meeting, which
I can read on my schedule. He then e-mails the transcript out to
everyone, so I don't even
One idea might be to give developers feedback on the consequences of a
particular push, e.g. the AWS cost, a proxy for time during which
developers couldn't push or some other measurable metric. Right now
each push probably feels as expensive as every other.
For tryserver, I proposed bug
It would be nice if we had data indicating how often tests fail on
just one version of MacOS, so we didn't have to guess how useful having
10.6, 10.7, and 10.8 tests are. That's bug 860870. It's currently
blocked on treeherder, but maybe it should be re-prioritized, since we
keep running into cases
are 10.7
Unfortunately, we have a lot of them down (maybe a dozen) trying to fix them
(broken hard drives, bad memory, NIC). They don't have warranty.
On 2013-04-25 1:55 PM, Justin Lebar wrote:
It would be nice if we had data indicating how often tests fail on
just one version of MacOS, so we
On 2013-04-25 2:39 PM, Justin Lebar wrote:
We could come to the compromise of running them on m-c, m-a, m-b and
m-r. Only this would help a lot since most of the load comes from m-i and
try. We could make it a non-by-default platform on try.
I wonder if we should do the same for debug 10.6
Sorry, I must have misunderstood what you meant.
If all you're saying is that sometimes, it's good to call a meeting to
make a decision, I don't think we disagree.
On Thu, Apr 25, 2013 at 4:56 PM, Milan Sreckovic msrecko...@mozilla.com wrote:
On 2013-04-25, at 2:07 PM, Justin Lebar justin.le
If apps are served from and signed by the marketplace, then any origin is
okay (after
review.)
I know that we rely on code review for a lot of security assurance
questions, but it seems to me that allowing /any origin/ opens us up
to attacks needlessly.
Could we allow any out of a whitelist
So what we're saying is that we are going to completely reverse our
previous tree management policy?
Basically, yes.
Although, due to coalescing, do you always have a full run of tests on
the tip of m-i before merging to m-c?
A better solution would be to let you trigger a full set of tests
to implement and *might* reduce the
load is to disable all debug jobs for 10.7.
cheers,
Armen
On 2013-04-26 11:29 AM, Justin Lebar wrote:
As a compromise, how hard would it be to run the Mac 10.6 and 10.7
tests on m-i occasionally, like we run the PGO tests? (Maybe we could
trigger them on the same
PM, Armen Zambrano G. arme...@mozilla.com wrote:
On 2013-04-26 12:14 PM, Justin Lebar wrote:
Would we be able to go back to where we disabled 10.7 altogether?
On m-i and try only, or everywhere?
The initial proposal was for disabling everywhere.
We could leave 10.7 opt jobs running
The current level of flakiness in the IndexedDB test suite (especially on
OSX) makes me concerned about what to expect if it starts getting heavier
use across the various platforms.
Is that just in the OOP tests, or everywhere?
I like that inbound2 would be open only when inbound is closed. That
way you don't have to make a decision wrt which tree to push to.
sgtm.
On Fri, Apr 26, 2013 at 3:17 PM, Ryan VanderMeulen rya...@gmail.com wrote:
As has been discussed at length in the various infrastructure meetings, one
3/ Being a git guy, I prefer having a try-like server where you don't get
push contention or closed tree, because we are creating a new head
every-time, and let the sheriffs cherry-pick the good changes which are not
source of conflicts. And ask developers to rebase their changes otherwise.
Is there sanity to this proposal or am I still crazy?
If we had a lot more project branches, wouldn't that increase the load
on infra dramatically, because we'd have less coalescing?
This is of course a solvable problem, but the fact that the problem
exists suggests to me that your proposal
Given the whole point of this thread is about how unreliable inbound is, why
are people trying to develop against it?
You still need a copy of inbound to rebase your patches against when pushing.
Whatever your personal opinions about git happen to be, I don't think
a git doesn't need a copy of
See https://bugzilla.mozilla.org/show_bug.cgi?id=809430#c39 for details.
As roc points out, this has broken |mach build dir|. Stay tuned in
the bug if you're interested in whether we resolve this by backing out
the change or fixing mach.
-Justin
Four points here.
1. We're assuming that MathJax is as good with MathML as it is without
it, but perhaps we could ask the MathJax folks to comment on whether
this is true. I'd certainly be a lot more comfortable dropping MathML
if the MathJax folks said there was no point.
2.
A suitable
I believe roc proposed just having an explicit hard coded list of things
to start up a while ago, and I'm tempted to say that's what we should do for
shutdown too. So just add an explicit call to some os.file thing
followed by a call to a telemetry function after profile-before-change
but not
This is only tangentially on topic, but I have a git pre-commit hook
which detects .orig files and trailing whitespace. It's saved me a
lot of embarrassment.
I also have a git tool which will fix trailing whitespace in your patch.
https://github.com/jlebar/moz-git-tools#pre-commit
For example, a public method (which we want to test in the unit test) has a
number of side effects, but we don't have the public accessors to examine all
of those private side effects/state.
I had this problem with the B2G process priority tests.
From a mochitest, I wanted to create a
tl;dr - Changes from bug 820686:
1. We renamed MOZ_NOT_REACHED to MOZ_ASSUME_UNREACHABLE.
2. In Gecko, please use MOZ_CRASH instead of MOZ_NOT_REACHED unless you care
about code size or perf.
3. In JS, we removed JS_NOT_REACHED in favor of MOZ_ASSUME_UNREACHABLE.
4. Don't put code after
no difference between
them.
Sorry for the confusion!
On Fri, Jun 28, 2013 at 7:07 PM, Kyle Huey m...@kylehuey.com wrote:
On Fri, Jun 28, 2013 at 7:04 PM, Justin Lebar justin.le...@gmail.com
wrote:
tl;dr - Changes from bug 820686:
1. We renamed MOZ_NOT_REACHED to MOZ_ASSUME_UNREACHABLE.
2
On Wed, Jun 5, 2013 at 3:25 AM, Philipp Kewisch mozi...@kewis.ch wrote:
I also agree to Randell and Joshua. I've been using both lately and there
are just a few things missing in git that I am used to in hg.
Mercurial Queues is the most prominent. I am used to switching the order of
patches
I can't see how they are a good alternative. With patch queues, I can
maintain a complex refactoring in a patch queue
containing dozens of smallish patches. In particular, I can easily realize I
made a mistake in patch 3 while working on patch
21 and make sure that the fix ends up in patch
On Fri, May 31, 2013 at 4:07 PM, Matt Brubeck mbrub...@mozilla.com wrote:
On 5/31/2013 12:32 PM, Boris Zbarsky wrote:
On 5/31/13 3:20 PM, Matt Brubeck wrote:
blame mobile/android/chrome/content/browser.xul:
git 1.015s
hg 0.830s
Was this a git blame -C (which would be more similar
One definition of insanity is doing the same thing twice and expecting
different results.
I recall that Taras has written basically this same e-mail before. We
seem to have this conversation every six months or so. Why do we
expect different results this time?
If I can propose something that's
On Wed, Jul 10, 2013 at 3:26 PM, Justin Lebar justin.le...@gmail.com
wrote:
If I can propose something that's perhaps different:
1) Write software to figure out who's slow with reviews.
2) We talk to those people.
We've done this before too.
But we should just do it again
cause you to lose work (unless you're versioning
your patch queue, which is a whole other can of worms).
On Wed, Jul 10, 2013 at 6:49 PM, Chris Peterson cpeter...@mozilla.com wrote:
On 7/10/13 3:01 PM, Justin Lebar wrote:
I can't see how they are a good alternative. With patch queues, I can
I may still be missing something, but afaict mq < git rebase -i < hg
qcrecord (from the crecord extension.) This is speaking as someone who
hasn't used git rebase -i much, but people who have seem to agree with me
after seeing a qcrecord/qcrefresh demo.
qcrecord is, as far as I'm aware (it's
We can't require any c++11 feature until we drop support for gcc 4.4.
[...] there are problems in the gcc 4.4 system headers that make using c++11
mode impossible (except on b2g/android).
Is there any reason to support gcc 4.4 outside of B2G/Android?
If we dropped support for gcc 4.4 on
The flip side of this, of course, is that build peers need to ensure
that they are not the long pole in reviews. But I presume you guys
are prepared to turn around these additional reviews quickly,
otherwise you wouldn't have asked for the extra load.
On Wed, Jul 17, 2013 at 5:00 PM, Gregory
Maybe we should call ours mozilla::move and mozilla::forward so that we can
change to std::move and std::forward with minimal pain?
On Jul 19, 2013 4:36 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
On 2013-07-19 7:04 PM, Mike Hommey wrote:
On Fri, Jul 19, 2013 at 03:45:13PM -0400, Ehsan
Maybe we should call ours mozilla::move and mozilla::forward so that we
can change to std::move and std::forward with minimal pain?
Won't that cause confusion if someone accidentally has both using namespace
mozilla; and using namespace std; at the same time?
That's a fair point. Maybe we
AIUI the new constructor would be something like
nsRefPtr<T>(nsRefPtr<T>&& aOther)
where && means r-value, which means temporary, so moveable.
But I'm not totally sure about being able to return nsRefPtr<T>.
Right now, if I do
already_AddRefed<T> GetFoo();
Foo* foo = GetFoo();
that's a compile
It seems really dangerous that there is an implicit conversion from a strong
ref ptr to a weak pointer. With C++11, you can thankfully require this
conversion to be explicit which should alleviate your concern.
Wouldn't disallowing this implicit conversion break code which does
void
As I understand it this is the case. From Mark's original post:
Ah, thanks. I missed that whole paragraph.
With some simple Gecko patches, we could pretty easily control which
process an iframe gets allocated into, so going down that route sounds
sane to me.
Alternatively, if you made the
Thanks for asking about this; we have a lot of unnecessary unlinking
code in our JS,
Let me share how I investigated your question.
$ git grep -i addmessagelistener -- '*.cpp'
content/base/src/nsFrameMessageManager.cpp:nsFrameMessageManager::AddMessageListener(const
nsAString& aMessage,
Only one
Just to be clear though, if I find they are *not* all being removed, I
should open a bug on that rather than just removing the listeners myself and
calling it done? ie, is it accurate to say that it *should* not be
necessary to remove these handlers (and, if I verify that is true, that I
1. How much, and where, should we be using standard C++ library
functionality in Mozilla code?
We've tuned nsTArray, nsTHashtable, strings, etc. to meet our precise
needs, and the implementations are consistent across all platforms.
I can imagine things becoming quite messy if we had three or four
space.
On Wed, Jul 31, 2013 at 5:57 PM, Mike Hommey m...@glandium.org wrote:
On Wed, Jul 31, 2013 at 10:28:38AM -0700, Justin Lebar wrote:
Wouldn't switching branches in the same repo clone touch many files
and trigger unfortunate clobber builds? Even with ccache and
separate per-branch
On Thu, Aug 1, 2013 at 6:50 PM, Nicholas Nethercote
n.netherc...@gmail.com wrote:
On Thu, Aug 1, 2013 at 6:29 PM, Gavin Sharp ga...@gavinsharp.com wrote:
Do you have specific issues you're worried about, or are you just speaking
about issues in general?
This AdBlock issue worries me