Re: PSA: mozilla::Pair is now a little more flexible

2015-03-13 Thread Seth Fowler

 On Mar 13, 2015, at 6:14 AM, Eric Rescorla e...@rtfm.com wrote:
 
 Sorry if this is a dumb question, but it seems like std::pair is fairly 
 widely used in our
 code base. Can you explain the circumstances in which you think we should be
 using mozilla::Pair instead?

It’s not at all a dumb question. It came up on IRC every time mozilla::Pair 
was mentioned, so a lot of people are wondering about it.

I’m not the person who originally introduced mozilla::Pair, so I wouldn’t 
consider my answer definitive, but I’ll give it a shot anyway.

mozilla::Pair is about avoiding implementation-quality issues with std::pair. 
Two quality issues in particular have bitten us in the past:

- Poor packing, particularly when one of the types stored in the pair has no 
members. In that situation the empty type should consume no space, but 
std::pair implementations sometimes don’t handle that case efficiently. (See 
the sketch after this list.)

- Poor or non-existent support for move semantics. I don’t know specifically 
about the case of std::pair, but this has bitten people using other STL 
containers quite recently. Obviously the same code can have significantly 
different performance characteristics depending on move-semantics support, so 
this is a serious problem.
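
To make the packing point concrete, here’s a minimal standalone sketch. It is 
not mozilla::Pair’s actual implementation, just an illustration of the empty 
base optimization a hand-rolled pair can exploit; the NaivePair and 
CompressedPair names are invented for the example.

  #include <cstdio>
  #include <utility>

  struct Empty {};  // a stateless type, e.g. a comparator or tag

  // A naive pair gives the empty member its own storage, so the whole
  // struct gets padded out beyond sizeof(int).
  struct NaivePair {
    Empty first;
    int second;
  };

  // A pair that inherits from the empty type instead can rely on the
  // empty base optimization, so it is no bigger than the int alone.
  struct CompressedPair : private Empty {
    int second;
  };

  int main() {
    std::printf("std::pair<Empty, int>: %zu\n", sizeof(std::pair<Empty, int>)); // often 8
    std::printf("NaivePair:             %zu\n", sizeof(NaivePair));             // often 8
    std::printf("CompressedPair:        %zu\n", sizeof(CompressedPair));        // typically 4
    return 0;
  }

Whether std::pair<Empty, int> comes out as small as CompressedPair is entirely 
up to the standard library implementation, which is exactly the portability 
concern above.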

Until we know that we can rely on high quality std::pair implementations 
everywhere, my recommendation would be to always use mozilla::Pair.

- Seth




Re: What are your pain points when running unittests?

2015-03-13 Thread Andrew Halberstadt

On 12/03/15 07:17 PM, Gijs Kruitbosch wrote:

IME the issue is not so much running tests that aren't identical to the
ones on CI, but the OS environment not matching, and then trying to
reproduce intermittent failures.

If a failure happens once in 100 builds, it is very annoying for the
sheriffs (happens multiple times a day) and needs fixing or backing out
- but running-by-dir, say, mochitest-browser for
browser/base/content/test/general/ 100 times takes way too long, and OS
settings / screen sizes / machine speed / (...) differences mean you
might not be able to reproduce anyway (or in the worst case, that you
get completely different failures).

It'd be better if we could more easily get more information about
failures as they happened on infra (replay debugging stuff a la what roc
has worked on, or better logs, or somehow making it possible to
remote-debug the infra machines as/when they fail).


This is an excellent point. Being able to accurately reproduce the 
configuration used in production is obviously a good thing, but it only helps 
in limited circumstances. Knowing the command line for an OS X job isn't 
going to be any use to you if you don't have an OS X machine to try it on.


Being able to remote-debug C++/JS/Python seems like it would be the holy 
grail (short of rr everywhere) for intermittent or non-reproducible test 
failures. Currently it would be difficult, if not impossible, to implement 
because the slaves are heavily locked down. I've heard that this is something 
that might be easier with TaskCluster. It would definitely be worth 
investigating.


-Andrew


Re: What are your pain points when running unittests?

2015-03-13 Thread Andrew Halberstadt

On 12/03/15 07:50 PM, Xidorn Quan wrote:

I wonder if it is possible to trigger a particular single unittest on which
we observe intermittent failures, instead of the whole test set. I guess it
would save time. I sometimes disable all tests I do not need to check
before pushing to try, to make it end faster.

- Xidorn


Yes, this is currently possible. E.g., for mochitest you can add 
--test-path=path/to/test here:

https://dxr.mozilla.org/mozilla-central/source/testing/config/mozharness/linux_config.py#27

And then push that to try with a Linux m-1 job.



Re: Chrome removed support for multipart/x-mixed-replace documents. We should too.

2015-03-13 Thread Gervase Markham
On 12/03/15 16:04, Seth Fowler wrote:
 It looks like it doesn’t anymore, because it works fine in Chrome.

It does; it browser-sniffs.

Gerv



Re: What are your pain points when running unittests?

2015-03-13 Thread Gregory Szorc
Another pain point: running all relevant tests.

Many features have relevant tests across many test suites, sometimes spread
across different directories. When people are hacking on a feature, they
should have a way to run all the tests relevant to that feature. All too often
I've submitted something to Try (or worse, landed without Try) only to find
out I wasn't locally running all the tests relevant to the thing that changed.

I did some work in bug 987360 around test tagging. E.g. `mach test fhr`
would run all tests tagged as relevant to Firefox Health Report.

Of course, this subject overlaps with intelligent scheduling of automation
jobs based on what changed.

A corollary is making tests easier to run. The N variants of mochitest-*
mach commands IMO should be excised and consolidated into `mach mochitest`.
That would be better, but still less ideal than `mach test` (which exists today
and automatically runs the test runner relevant to the requested tests).
`mach test` is in turn less ideal than `mach <filename>`, which realizes the
argument is a test file and invokes the test harness appropriate for that
test.

FWIW, I don't think the documentation around the mach commands for test
selection is that great. E.g. I'm not sure how many people realize that
they can run `mach xpcshell-test test_foo.js` from the topsrcdir and have all
`test_foo.js` files under source control executed. Perhaps we could add
some docs to `mach help` or drop some inline hints that people don't have
to type so much.

On Thu, Mar 12, 2015 at 3:51 PM, Jonathan Griffin jgrif...@mozilla.com
wrote:

 The A-Team is embarking on a project to improve the developer experience
 when running unittests locally.  This project will address the following
 frequently-heard complaints:

 * Locally developers often use mach to run tests, but tests in CI use
 mozharness, which can result in different behaviors.
 * It's hard to reproduce a try job because it's hard to set up the test
 environment and difficult to figure out which command-line arguments to
 use.
 * It's difficult to run tests from a tests.zip package if you don't have a
 build on that machine and thus can't use mach.
 * It's difficult to run tests through a debugger using a downloaded build.

 The quintessential use case here is making it easy to reproduce a try run
 locally, without a local build, using a syntax something like:

 * runtests --try 2844bc3a9227

 Ideally, this would download the appropriate build and tests.zip package,
 bootstrap the test environment, and run the tests using the exact same
 arguments as are used on try, optionally running it through an appropriate
 debugger.  You would be able to substitute a local build and/or local
 tests.zip package if desired.  You would be able to override command-line
 arguments used in CI if you wanted to, otherwise the tests would be run
 using the same args as in CI.

 What other use cases would you like us to address, which aren't derivatives
 of the above issues?

 Thanks for your input,

 Jonathan



Re: What are your pain points when running unittests?

2015-03-13 Thread Gregory Szorc
On Fri, Mar 13, 2015 at 3:49 PM, L. David Baron dba...@dbaron.org wrote:

 On Friday 2015-03-13 15:34 -0700, Gregory Szorc wrote:
  1. Create a commit that introduces a new test
  2. Test it
  3. Create a commit that purportedly fixes the test
  4. Build
  5. Test and verify
  6. Fold the commits

 Sure, that's what I'd do in an ideal world.  But in reality I
 sometimes start with 3 (especially if it's a bug that I notice by
 code inspection), at which point the obvious order to do the rest of
 the steps quickly and correctly is 1-2-4-5.  (And I prefer not to do
 6, actually, and to order the test as the earlier commit and then
 have the code patch actually remove the todo/fails annotation.)


(I prefer to leave the commits separate as well - I didn't want to add that
complication.)

If you start with 3, why can't you reorder the commits? Is this a case of
"rebuilds take too long and I'd prefer the build system didn't add overhead"?


Re: What are your pain points when running unittests?

2015-03-13 Thread L. David Baron
On Friday 2015-03-13 16:06 -0700, Gregory Szorc wrote:
 On Fri, Mar 13, 2015 at 3:49 PM, L. David Baron dba...@dbaron.org wrote:
 
  On Friday 2015-03-13 15:34 -0700, Gregory Szorc wrote:
   1. Create a commit that introduces a new test
   2. Test it
   3. Create a commit that purportedly fixes the test
   4. Build
   5. Test and verify
   6. Fold the commits
 
  Sure, that's what I'd do in an ideal world.  But in reality I
  sometimes start with 3 (especially if it's a bug that I notice by
  code inspection), at which point the obvious order to do the rest of
  the steps quickly and correctly is 1-2-4-5.  (And I prefer not to do
  6, actually, and to order the test as the earlier commit and then
  have the code patch actually remove the todo/fails annotation.)
 
 
 (I prefer to leave the commits separate as well - didn't want to add the
 complication.)
 
  If you start with 3, why can't you reorder the commits? Is this a case of
  "rebuilds take too long and I'd prefer the build system didn't add overhead"?

I do often reorder the commits, but I don't think that's related to
the problem.  The problem is just that I've modified both test and
code (and done appropriate version control mechanics, whatever they
are), and I'd like to try the test first before rebuilding the code.

It is indeed a case where what I'm trying to optimize away is an
extra rebuild.

-David

-- 
𝄞   L. David Baron http://dbaron.org/   𝄂
𝄢   Mozilla  https://www.mozilla.org/   𝄂
 Before I built a wall I'd ask to know
 What I was walling in or walling out,
 And to whom I was like to give offense.
   - Robert Frost, Mending Wall (1914)




Re: Project Silk on Desktop

2015-03-13 Thread Mike de Boer
Yeah, Jared makes a good point - exactly the reason I reacted so 
enthusiastically to this announcement is that I’d never heard of Project Silk, 
even though I read HN (headlines, not comments ;-)) on a daily basis. Or maybe 
it did show up there, but its being all about B2G and sort of an 
‘Android-parity’ kind of thing made me skip the item.

Regardless, anything cool hitting Fx Desktop - our most-used product - is 
worthy of special treatment, IMHO.

 On 13 Mar 2015, at 02:22, Jared Wein j...@mozilla.com wrote:
 
 Within the small circle of Mozilla contributors it may feel spammy or 
 repetitive, but I wouldn't be surprised if people outside of the Mozilla 
 project think of B2G and Firefox desktop as separate user bases with 
 separate impact.
 
 On Thu, Mar 12, 2015 at 6:59 PM, Mason Chang mch...@mozilla.com wrote:
 Yeah it is, but I don’t really want to do another PR run when lots of people 
 have already read about Silk on b2g. Feels spammy to me to do another one 
 just a month after the previous one, but that’s my 2 cents.
 
 Mason
 
  On Mar 12, 2015, at 3:17 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 
  On Fri, Mar 13, 2015 at 5:28 AM, Mason Chang mch...@mozilla.com wrote:
  Hi Mike,
 
   This sounds like a massive improvement to our rendering pipeline, 
   definitely worthy of some PR effort! Is that being considered?
 
  We already had a PR effort when Silk landed on B2G. It hit Hacker News 
  and IIRC received over 10K views on Mozilla Hacks, so I’m not keen on doing 
  another one.
 
  Isn't Hacker News and 10K views on h.m.o a *good* thing?
 
  Rob
  --
  oIo otoeololo oyooouo otohoaoto oaonoyooonoeo owohooo oioso oaonogoroyo
  owoiotoho oao oboroootohoeoro oooro osoiosotoeoro owoiololo oboeo
  osouobojoeocoto otooo ojouodogomoeonoto.o oAogoaoiono,o oaonoyooonoeo 
  owohooo
  osoaoyoso otooo oao oboroootohoeoro oooro osoiosotoeoro,o o‘oRoaocoao,o’o 
  oioso
  oaonosowoeoroaoboloeo otooo otohoeo ocooouoroto.o oAonodo oaonoyooonoeo 
  owohooo
  osoaoyoso,o o‘oYooouo ofolo!o’o owoiololo oboeo oiono odoaonogoeoro 
  ooofo
  otohoeo ofoioroeo ooofo ohoeololo.
 


Re: Using rr with test infrastructure

2015-03-13 Thread Ted Mielczarek
On Thu, Mar 12, 2015, at 08:50 PM, Robert O'Callahan wrote:
 On Fri, Mar 13, 2015 at 12:34 PM, Seth Fowler s...@mozilla.com wrote:

 To work around these issues, I would like to have a dedicated machine
 that
 continuously downloads builds and runs tests under rr. Ideally it would
 reenable tests that have been disabled-for-orange. When it finds
 failures,
 we would match failures to bugs and notify in the bug that an rr trace is
 available. Developers could then ssh into the box to get a debugging
 session. This should be reasonably easy to set up, especially if we start
 by focusing on the simpler test suites and manually update bugs.

Before we go buying a machine and sticking it under someone's desk
(although let's not rule that out entirely!), I filed a bug [1] to see if
we have any existing virtual machine hosts with a usable CPU, so that we
could enable performance counters [2] and run rr in a VM there.

Regardless of what hardware we wind up running it on, we'll still need
to sort out the actual automation here. Historically we had people using
a VMware record-and-replay setup on a physical machine in the MV office.
AIUI that was entirely manual--someone would do a build, run the test
harness with --run-until-failure, and let it churn until it hit a
failure, at which point debugging would commence. Replicating this setup
with rr seems pretty doable, but obviously a more automated setup would
be preferable.

The other question I have is: what percentage of our intermittent
failures occur on Linux? If it's not that high then this is a lot of
investment for minimal gain.

-Ted

1. https://bugzilla.mozilla.org/show_bug.cgi?id=1142947
2. http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2030221


Re: PSA: mozilla::Pair is now a little more flexible

2015-03-13 Thread Eric Rescorla
Sorry if this is a dumb question, but it seems like std::pair is fairly
widely used in our
code base. Can you explain the circumstances in which you think we should be
using mozilla::Pair instead?

Ekr


On Thu, Mar 12, 2015 at 5:53 PM, Seth Fowler s...@mozilla.com wrote:

 I thought I’d let everyone know that bug 1142366 and bug 1142376 have
 added some handy new features to mozilla::Pair. In particular:

 - Pair objects are now movable. (It’s now a requirement that the
 underlying types be movable too. Every existing use satisfied this
 requirement.)

 - Pair objects are now copyable if the underlying types are copyable.

 - We now have an equivalent of std::make_pair, mozilla::MakePair. This
 lets you construct a Pair object with type inference. So this code:

  Pair<Foo, Bar> GetPair() {
    return Pair<Foo, Bar>(Foo(), Bar());
  }

 Becomes:

  Pair<Foo, Bar> GetPair() {
    return MakePair(Foo(), Bar());
  }

 Nice! This can really make a big difference for long type names or types
 which have their own template parameters.

 These changes should make Pair a little more practical to use. Enjoy!

 - Seth



Re: What are your pain points when running unittests?

2015-03-13 Thread Ted Mielczarek
On Thu, Mar 12, 2015, at 07:48 PM, Boris Zbarsky wrote:
 On 3/12/15 6:51 PM, Jonathan Griffin wrote:
  What other use cases would you like us to address, which aren't derivatives
  of the above issues?
 
 I ran into a problem just yesterday: I wanted to run mochitest-browser 
 locally, to debug an error that happened very early in the test run
 startup.
 
 So I did:
 
mach mochitest-browser --debugger=gdb
 
 and hit my breakpoint and so forth... then quit the debugger.
 
 Then the test harness respawned another debugger to run more tests.  And 
 then another.

Can you file a bug on this specific issue? This is pretty dumb behavior.
(We started running Mochitests in "run by dir" mode, which runs a
separate browser per directory chunk to reduce interference between
tests, and this is an obviously undesirable side effect.)

-Ted


Re: What are your pain points when running unittests?

2015-03-13 Thread James Graham
On 12/03/15 22:51, Jonathan Griffin wrote:
 The A-Team is embarking on a project to improve the developer experience
 when running unittests locally.  This project will address the following
 frequently-heard complaints:
 
 * Locally developers often use mach to run tests, but tests in CI use
 mozharness, which can result in different behaviors.
 * It's hard to reproduce a try job because it's hard to set up the test
 environment and difficult to figure out which command-line arguments to use.
 * It's difficult to run tests from a tests.zip package if you don't have a
 build on that machine and thus can't use mach.
 * It's difficult to run tests through a debugger using a downloaded build.

So my problems might be slightly unusual because I am often working on
the test harnesses themselves rather than the browser code. But a fairly
common scenario for me is that tests fail on try on some platform I
don't have access to locally (typically Windows). At that point I
usually loan a slave and try to debug there. The main problems I have
are:

* Getting mozharness to run in almost the same way as production, but
without the bits that require it to actually be running in production.
There is documentation for this, but it's still far from simple.

* After reproducing the problems using the above setup it's usually
necessary to add some logging, or other changes, to the tests or the
harness. But it usually takes a couple of attempts to work out how to
get mozharness to not overwrite my edited files with a freshly
downloaded copy.

* Once I've done this, if I want to actually land my changes I need to
manually move them over from the unpacked version of tests.zip created
by mozharness to a source tree and start a whole new try run that will
go through a whole build cycle even if nothing in the browser itself has
changed.



Re: What are your pain points when running unittests?

2015-03-13 Thread kgupta
On Thursday, March 12, 2015 at 6:51:26 PM UTC-4, Jonathan Griffin wrote:
 The quintessential use case here is making it easy to reproduce a try run
 locally, without a local build, using a syntax something like:
 
 * runtests --try 2844bc3a9227
 
 Ideally, this would download the appropriate build and tests.zip package,
 bootstrap the test environment, and run the tests using the exact same
 arguments as are used on try, optionally running it through an appropriate
 debugger.  You would be able to substitute a local build and/or local
 tests.zip package if desired.

I think this would be a *huge* help - the hardest part for me is just 
reproducing the stuff that happens on try. The platform I usually have the most 
trouble with is Fennec, because running things locally (when it works) never 
gives me the same results as the tryserver.

There's some mention in this thread of getting rr recordings of stuff, and I 
think that would help with intermittents. However, rr is limited to Linux 
(where it's relatively straightforward to run the same test locally), so I'm not 
convinced that spending resources on rr support *at the expense of other 
things* is the best idea. If you have somebody sitting around with nothing to 
do, then great. If not, I think bringing some of the less-easy-to-work-with 
platforms (B2G and Fennec) up to the same level as the desktop platforms would 
provide better value.

kats


Re: Using rr with test infrastructure

2015-03-13 Thread Jonathan Griffin
OrangeFactor suggests that Linux is about equal to our other platforms in
terms of catching intermittents:
http://brasstacks.mozilla.com/orangefactor/?display=BugCounttree=trunkincludefiltertype=quicksearchincludefilterdetailsexcludeResolved=falseincludefilterdetailsexcludeDisabled=falseincludefilterdetailsquicksearch=includefilterdetailsnumbugs=0includefilterdetailsresolvedIds=excludefiltertype=quicksearchexcludefilterdetailsquicksearch=excludefilterdetailsnumbugs=0excludefilterdetailsresolvedIds=startday=2015-03-05endday=2015-03-13

Jonathan

On Fri, Mar 13, 2015 at 5:26 AM, Ted Mielczarek t...@mielczarek.org wrote:


 The other question I have is: what percentage of our intermittent
 failures occur on Linux? If it's not that high then this is a lot of
 investment for minimal gain.

 -Ted




Re: Using rr with test infrastructure

2015-03-13 Thread Mats Palmgren

On 03/13/2015 12:26 PM, Ted Mielczarek wrote:

The other question I have is: what percentage of our intermittent
failures occur on Linux? If it's not that high then this is a lot of
investment for minimal gain.


FYI, there have been several intermittent crashes reported on Linux test
runs lately, e.g. bug 1142662.  If this setup can help us fix just one of
those crashes (and I expect it will), the investment will have paid off, IMO.

/Mats



Re: Project Silk on Desktop

2015-03-13 Thread Dirkjan Ochtman
On Fri, Mar 13, 2015 at 3:39 PM, Mike Conley mcon...@mozilla.com wrote:
 Perhaps it's worth talking to comms to get their input on whether or not
 it's a good time.

 But I have to agree with Jared and Mike - I think showing progress,
 especially on the performance front (perceived or not), can only be a
 good thing.

+1 from me; Project Silk had fallen off my radar even though I'm a
pretty close follower of the Mozilla/Firefox ecosystem.

So although the initial impression has been less than stellar (bug
1142957), I'm pretty excited.

Cheers,

Dirkjan