Re: Git pushes to mercurial, early testers needed

2014-12-19 Thread Nicolas B. Pierron

On 12/18/2014 12:13 PM, Mike Hommey wrote:

I just published initial support for pushing to mercurial from git. It's
still experimental and as such I'm looking for volunteers who would want
to try it and confirm that it doesn't break stuff.

See http://glandium.org/blog/?p=3405 for more details.

(Yes, it's a bummer that it only allows pushing to a local mercurial
clone, but it's better than breaking hg.mozilla.org repositories)


This is great!

I will look into whether I can use it as a substitute for mq in the moz-git-tools.

--
Nicolas B. Pierron
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Flaky timeouts in mochitest-plain now fail newly added tests

2014-12-19 Thread Ehsan Akhgari
Let me try to rephrase the problem in different terms, to hopefully make 
it clearer why using timers like this is a bad idea.


setTimeout(foo, 1000) may seem to suggest "run foo after 1 second", but 
that is *not* what that function does, at all.  What it does is "run foo 
after 1 second, or perhaps at any other point in time later than that", 
and it could even be *minutes* later.


A lot of people are under the assumption that just because their local 
timings pan out, and they've added a, let's say, 10x error margin to 
their timers, their use of timers is fine.  It's not, as has been 
demonstrated by intermittent oranges over the years.  We have no way of 
knowing whether those calculations hold under various types of machine 
load, on new platforms, or on much lower powered machines (think of 
running, on a low-end phone, tests whose timings were measured on a 
beefy desktop.)  And increasing the error margin won't save us; it will 
just move the problem further into the future.
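The lower-bound-only behavior is easy to demonstrate outside the test harness. Here is a minimal sketch in plain Node (not mochitest); the busy-wait stands in for a loaded machine:

```javascript
// Sketch: setTimeout guarantees only a lower bound on the delay.
// Under event-loop load the callback can fire arbitrarily late.
const start = Date.now();
let elapsed = -1;
setTimeout(() => { elapsed = Date.now() - start; }, 100);

// Simulate a loaded machine: block the event loop for ~300 ms.
while (Date.now() - start < 300) {}

// The 100 ms timer still has not fired; it can only run after the
// script (and whatever else is hogging the event loop) yields.
console.log(elapsed === -1);  // true: callback delayed well past 100 ms
```

Any margin chosen here is just a guess about how badly the event loop can be starved, which is exactly the problem.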


Also, please note that letting the test time out if it doesn't succeed 
is a perfectly valid failure scenario; we have numerous tests that do 
things like fire an async event, set up a listener, wait for the 
listener to fire, and call SimpleTest.finish() there, without any timers 
to catch the case where the event does not fire.  Logging sufficiently 
is almost always enough to not have to use these timers, as those tests 
have demonstrated in practice.


Cheers,
Ehsan

On 2014-12-19 1:07 AM, Boris Zbarsky wrote:

On 12/18/14, 2:28 PM, Nils Ohlmeier wrote:

Well there is an event listener waiting for the event to fire.
But how else then through a timeout, obviously with a high timeout
value like
30 or 60 seconds


We've had quite a number of random oranges from precisely this scenario.
  It seems that it's not uncommon for the test VMs to completely lose
the timeslice for 60 seconds or more.

If the test is in fact expecting the event to fire, the right thing to
do is to just have the test time out per the harness timeout (which can
be globally adjusted as needed depending on the characteristics of the
test environment) if the event doesn't fire.  Rolling your own shorter
timer just means contributing to random orange.


Sure I can print an info message that my test is now waiting for an
event to pop,


Sure, and another one when the event comes in.  Then you can check the
log to see whether the timeout was for this reason in the event of a
harness timeout on this test.


But as I tried to describe in my other
email having a long living timer which pops complaining that event X
is missing
I think is a legit use case for setTimeout in test.


Given the number of random oranges we've had from _precisely_ this use
of setTimeout, I don't think it's legit.

-Boris


Re: Updating the policy for Talos performance regression in 2015

2014-12-19 Thread Ehsan Akhgari

This looks good overall.  Two questions though:

On 2014-12-18 6:47 AM, jmaher wrote:

Mozilla - 2015 Talos performance regression policy

Over the last year and a half the Talos tests have been rewritten to be more 
useful and meaningful.  This means we need to take them seriously and cannot 
just ignore real issues when we don't have time.  This does not mean we need to 
fix or backout every changeset that caused a regression.

Starting in 2015, when a regression is identified to be related to a specific 
changeset, the patch author will be asked for information via the needinfo flag.  
We expect a response and reasonable dialog within 72 hours (3 business days) of 
requesting information.  If no response is given, we will back out the patch(es) 
in question, and the patch author can investigate when they have time and reland.

Some requirements before requesting needinfo:
* On integration branches (higher volume), a talos sheriff will have verified 
the root cause within 1 week of the patch landing
* a patch or set of patches from a bug must be identified as the root cause.  
This can take place through retriggers on the tree, or, in the case of many 
patches landing at once, through a push to try backing out the suspected 
patch(es)
* links in the bug to document the regression (and any related 
regressions/improvements)
* if we are confident this is the root cause and it meets a 3% regression 
threshold, then the needinfo request will mention that this policy will be 
enforced
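The 3% threshold is a simple relative delta against the baseline. A hypothetical sketch of the check (not actual Talos tooling; the function name and numbers are illustrative):

```javascript
// Relative regression, in percent, of a candidate measurement against
// a baseline (for Talos, typically medians of many runs).
function regressionPercent(baseline, candidate) {
  return ((candidate - baseline) / baseline) * 100;
}

// e.g. a pageload test moving from 250 ms to 260 ms:
const pct = regressionPercent(250, 260);
console.log(pct.toFixed(1));  // "4.0"
console.log(pct >= 3);        // true: above the 3% policy threshold
```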

Acceptable outcomes:
* A promise to attempt a fix at the bug is agreed upon, the bug is assigned to 
someone and put in a queue.


How do we ensure that the follow-up bug actually does get fixed and it 
fixes the regression completely?



* The bug will contain enough details and evidence to support accepting this 
regression; we will mark it as wontfix
* It is agreed that this should be backed out


Do we plan to have a different approach towards more severe regressions? 
 For example, if a patch regresses startup time by 50%, would we still 
accept evidence to support that the regression should be accepted, or 
would we tolerate it in the tree for a few weeks before it gets fixed?



Other scenarios:
* A bug related to the alert is not filed within 1 week of the patch landing.  
This removes the urgency and required action.
* We only caught a regression at uplift time.  There is a chance the root cause 
isn't easily determined; this will be documented, and the identified patch 
authors will use their judgement to fix the bug
* Regression is unrelated to code (say, a PGO issue) - this should be documented 
in the bug and closed as wontfix.
* When we uplift to Aurora or Beta, all regressions filed before the uplift 
that show up on the upstream branch will have a needinfo flag set and require 
action to be taken.


Please take a moment to look over this and outline any concerns you might have.

Thanks,
Joel


Re: Updating the policy for Talos performance regression in 2015

2014-12-19 Thread Benjamin Smedberg


On 12/19/2014 10:05 AM, Ehsan Akhgari wrote:




Acceptable outcomes:
* A promise to attempt a fix at the bug is agreed upon, the bug is 
assigned to someone and put in a queue.


How do we ensure that the follow-up bug actually does get fixed and it 
fixes the regression completely?


Avi/Vladan will be tracking these and nagging as appropriate.




* The bug will contain enough details and evidence to support 
accepting this regression, we will mark it as wontfix

* It is agreed that this should be backed out


Do we plan to have a different approach towards more severe 
regressions?  For example, if a patch regresses startup time by 50%, 
would we still accept evidence to support that the regression should 
be accepted, or would we tolerate it in the tree for a few weeks 
before it gets fixed?


I don't think this can be answered in advance. If we're in this 
situation, it will be because we're making some huge cost/benefit 
tradeoff and we have high confidence that the regression can be fixed or 
that it's worth the corresponding benefit. Product managers would likely 
be involved in making the final decision based on technical 
recommendations from the engineers and the performance team.


--BDS



Re: PSA: Support for Visual C++ 2010 has been dropped

2014-12-19 Thread Neil

Neil wrote:


Neil wrote:


Mike Hommey wrote:


On Wed, Dec 17, 2014 at 06:06:25PM +, Neil wrote:

I downloaded the MSVC 2013 Community Edition, but there was no sign 
of an SDK, so I downloaded that separately. Is this expected? If 
so, I'll update MDN.


The SDK comes with it.


So you say, but it wasn't there (no windows.h).


My bad, I looked in /include/, but apparently these days it's in /um/.


(Somewhat strange that mozillabuild didn't find it either; having 
installed and now uninstalled the SDK it still finds it. Go figure.)


--
Warning: May contain traces of nuts.


Re: PSA: Flaky timeouts in mochitest-plain now fail newly added tests

2014-12-19 Thread Gijs Kruitbosch

On 19/12/2014 14:56, Ehsan Akhgari wrote:

Logging sufficiently is
almost always enough to not have to use these timers, as those tests
have demonstrated in practice.


Who's working on improving the log output from tinderbox? Timestamps get 
mashed [0], sometimes only the failing assertion is printed with no 
other logs at all [1], ... it's not really such a happy world as you assert.


~ Gijs

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1049545#c0
[1] 
https://treeherder.mozilla.org/ui/logviewer.html#?repo=fx-team&job_id=1494921



Re: Gmail calendar information for Platform Engineering meeting

2014-12-19 Thread Chris Peterson

On 12/19/14 12:25 AM, Axel Hecht wrote:

On 12/19/14 6:20 AM, Chris Peterson wrote:

Here is the new Gmail calendar information for the weekly Platform
Engineering meeting. If these calendar links don't work, please let me
know.


* iCal ics link:

https://www.google.com/calendar/ical/mozilla.com_p51ksu9ddgqt72f0jl21vaq76o%40group.calendar.google.com/public/basic.ics



* Atom/XML feed:

https://www.google.com/calendar/feeds/mozilla.com_p51ksu9ddgqt72f0jl21vaq76o%40group.calendar.google.com/public/basic




Isn't the meeting at 11am? Calendar says 10.


Yes, 11am is the correct time. Thanks for catching that! I've updated 
the calendar.



chris


Re: Updating the policy for Talos performance regression in 2015

2014-12-19 Thread jmaher
Great questions folks.

:bsmedberg has answered the questions quite well, let me elaborate:
Before a bug can be marked as resolved:fixed we need to verify the regression 
is actually fixed.  In many cases we will fix a large portion of the regression 
and accept the small remainder.

We do keep track of all the bugs filed per version (firefox 36 example: 
https://bugzilla.mozilla.org/show_bug.cgi?id=1084461)

these get looked at more specifically during each uplift.

I will update the verbiage next week to call out how these will be followed up, 
and post it to:
https://www.mozilla.org/hacking/regression-policy.html

Do speak up if this should be posted elsewhere or linked from a specific 
location.


Is anyone still using JS strict warnings?

2014-12-19 Thread Jason Orendorff
So if you go to about:config and set the javascript.options.strict pref,
you'll get warnings about accessing undefined properties.

js> Math.TAU
undefined
/!\ ReferenceError: reference to undefined property Math.TAU

(It says ReferenceError, but your code still runs normally; it really is
just a warning.)

Is anyone using this? Bug 1113380 points out that the rules about what kind
of code can cause a warning are a little weird (on purpose, I think). Maybe
it's time to retire this feature.

https://bugzilla.mozilla.org/show_bug.cgi?id=1113380

Please speak up now, if you're still using it!

-j


Re: Is anyone still using JS strict warnings?

2014-12-19 Thread J. Ryan Stinnett
Some prior discussion of this feature happened in the platform thread
"Disabling strict warnings as errors in xpcshell" [1].

A few people argued for the extra warnings to be removed, while one person
said they were useful.

No clear conclusion was reached.

[1]:
https://groups.google.com/d/topic/mozilla.dev.platform/znIkVsh5YYA/discussion

On Fri, Dec 19, 2014 at 2:19 PM, Jason Orendorff jorendo...@mozilla.com
wrote:

 So if you go to about:config and set the javascript.options.strict pref,
 you'll get warnings about accessing undefined properties.

 js Math.TAU
 undefined
 /!\ ReferenceError: reference to undefined property Math.TAU

 (It says ReferenceError, but your code still runs normally; it really is
 just a warning.)

 Is anyone using this? Bug 1113380 points out that the rules about what kind
 of code can cause a warning are a little weird (on purpose, I think). Maybe
 it's time to retire this feature.

 https://bugzilla.mozilla.org/show_bug.cgi?id=1113380

 Please speak up now, if you're still using it!

 -j


Re: Is anyone still using JS strict warnings?

2014-12-19 Thread William McCloskey
I was the person in the previous thread who found them useful, and I still
do. Some of the extraWarnings stuff is of questionable value, but the
undefined property stuff is really useful.

I don't really know if other people use this stuff. extraWarnings are
enabled by default in debug builds, and I think a lot of people don't
realize the difference between these extra warnings and normal JS errors.

Writing front-end JS code for Firefox is generally a painful experience:
terrible error messages, often no filename or line number information, or
even no output at all. But I think we should be making things better, not
worse. Removing features like this takes us in the wrong direction. If the
undefined property warning is broken, we should fix it.

-Bill

On Fri, Dec 19, 2014 at 12:32 PM, J. Ryan Stinnett jry...@gmail.com wrote:

 Some prior discussion of this feature happened in the platform thread
 Disabling
 strict warnings as errors in xpcshell[1].

 A few people argued for the extra warnings to be removed, while one person
 said they were useful.

 No clear conclusion was reached.

 [1]:

 https://groups.google.com/d/topic/mozilla.dev.platform/znIkVsh5YYA/discussion

 On Fri, Dec 19, 2014 at 2:19 PM, Jason Orendorff jorendo...@mozilla.com
 wrote:
 
  So if you go to about:config and set the javascript.options.strict pref,
  you'll get warnings about accessing undefined properties.
 
  js Math.TAU
  undefined
  /!\ ReferenceError: reference to undefined property Math.TAU
 
  (It says ReferenceError, but your code still runs normally; it really
 is
  just a warning.)
 
  Is anyone using this? Bug 1113380 points out that the rules about what
 kind
  of code can cause a warning are a little weird (on purpose, I think).
 Maybe
  it's time to retire this feature.
 
  https://bugzilla.mozilla.org/show_bug.cgi?id=1113380
 
  Please speak up now, if you're still using it!
 
  -j


Re: Is anyone still using JS strict warnings?

2014-12-19 Thread Chris Peterson

On 12/19/14 12:45 PM, William McCloskey wrote:

I don't really know if other people use this stuff. extraWarnings are
enabled by default in debug builds, and I think a lot of people don't
realize the difference between these extra warnings and normal JS errors.

Writing front-end JS code for Firefox is generally a painful experience:
terrible error messages, often no filename or line number information, or
even no output at all. But I think we should be making things better, not
worse. Remove features like this takes us in the wrong direction. If the
undefined property warning is broken, we should fix it.


I don't know if Firefox front-end developers typically use debug builds, 
but I see many warnings in front-end JS when I run debug builds. If 
extraWarnings are useful, they could be enabled for Nightly release 
builds (specifically to aid Firefox developers) instead of just debug 
builds.



Re: PSA: Flaky timeouts in mochitest-plain now fail newly added tests

2014-12-19 Thread Nils Ohlmeier

 On Dec 19, 2014, at 6:56 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
 
 Let me try to rephrase the problem in different terms, to hopefully make it 
 clearer why using timers like this is a bad idea.
 
 setTimeout(foo, 1000) may seem to suggest run foo after 1 second, but that is 
 *not* what that function does, at all.  What it does is, run foo after 1 
 second, or perhaps at any other point in time later than that, and it could 
 even be *minutes* later.
 
 A lot of people are under the assumption that just because their local 
 timings pan out, and they've added a, let's say, 10x error margin to their 
 timers, their use of timers is fine.  It's not, as has been demonstrated as 
 intermittent oranges over the years.  We have no way of knowing whether those 
 calculations hold under various types of machine loads, on new platforms, on 
 much lower powered machines (think of running the tests where the such 
 measurements have been done on a beefy desktop on a low-end phone.)  And 
 increasing the error margin won't save us, it will just move the problem 
 further into the future.
 
Ok. That’s fine by me.
First of all, I assume that the slowness of the machine also affects mochitest’s 
overall timer, right? If not, then we are in big trouble.
Secondly, remember that I’m NOT doing anything useful when my 
“missing-event-timer” pops. I’m just logging a message that the event is 
missing, and then I’m trying to exit the test early (earlier than mochitest’s 
general test timeout).

The only problem I can see with that approach is: if the “missing-event-timer” 
actually pops more or less precisely after the requested time, BUT the actual 
work is running in super slow motion for some reason, then the event would have 
arrived, but after the event timeout.
So if my “missing-event-timers” are popping on time, because they are for some 
bizarre reason not affected by the overall slowness of the machine, then we 
have a problem with this. But in that case we should have a problem with the 
overall test timeout of mochitest as well, wouldn’t we?

 Also, please note that letting the test time out if it doesn't succeed is a 
 perfectly valid failure scenario, we have numerous tests that do things like 
 fire an async event, setup a listener and wait for the listener to fire and 
 SimpleTest.finish() there, without any timers to catch the case where the 
 event does not fire.

My tests are not about testing this one timer. Doing that would be relatively 
easy.
In WebRTC I need to orchestrate two participants to establish a real time audio 
video call. Lots of the tests need to establish a full call before assessing 
whether the call got properly established under the given criteria. The most 
common scenario for intermittent failures is that the call does not even get 
established. From start to a full call I need to verify several state machines 
and lots of events firing, and all of that twice for the two participants of 
each call.
So yes, I could simply try to establish a call (and not worry about all the 
intermediary steps and events) and then do the verification/assessment, and let 
call failures be caught by the generic mochitest timeout. But then I would 
spend hours and hours staring at pretty useless log files (see below) every 
time a call got stuck.

  Logging sufficiently is almost always enough to not have to use these 
 timers, as those tests have demonstrated in practice.
 

I disagree. Especially as the logging of mochitest is really poor:
- the timestamps of log messages are lost, as described in bug 1020516
- SimpleTest only has one log level: info(). Which I think is the main reason 
the log files get big, because test writers can’t fine-tune the logging.
- Log messages from mochitest are not printed together with the stdout/stderr 
of the browser (so correlating the two is almost impossible)
- Logging behaves differently on your local machine than on the build servers

BTW I understand that WebRTC has pretty special and probably unique 
requirements compared to what the rest of the mochitest users do with it. 
That’s why I was trying to make the case for adding a special convenience 
function for setting up these special “missing-event-timers”, which BTW would 
also allow us to increase or decrease the timeout values based on the 
platform/machine the test currently gets executed on.

But as I don’t seem to be able to make my case here I’ll simply live with 
requesting flaky timeouts for all WebRTC tests.

Best
  Nils

 Cheers,
 Ehsan
 
 On 2014-12-19 1:07 AM, Boris Zbarsky wrote:
 On 12/18/14, 2:28 PM, Nils Ohlmeier wrote:
 Well there is an event listener waiting for the event to fire.
 But how else then through a timeout, obviously with a high timeout
 value like
 30 or 60 seconds
 
 We've had quite a number of random oranges from precisely this scenario.
  It seems that it's not uncommon for the test VMs to completely lose
 the timeslice for 60 seconds or more.
 
 If the test is in fact expecting the 

Re: Is anyone still using JS strict warnings?

2014-12-19 Thread Nick Fitzgerald
I generally don't find them useful; instead they are annoying and provide a
lot of noise to filter out to find the actual relevant errors. This includes
the undefined property errors. It is a common JS style to pass
around configuration/option objects that will be missing many properties
that get accessed by the functions they are passed to. My understanding is that
part of this is special-cased not to generate these messages, but in my
experience it isn't close to enough.

At minimum, we should mark them all JSEXN_NONE in js.msg so that they don't
appear as a misleading ReferenceError or whatever other kind of error.

We also shouldn't call them strict errors, since they have nothing to do
with strict mode.


Additionally, whether these messages show up depends on how the JS is
loaded. I haven't looked at the specifics in a while, but IIRC it depended
upon whether you were using JSMs or Cu.evalInSandbox, etc. We have tests in
devtools that fail if there are any errors in the console, and so changing
the way we load JS sources with no other changes to the code, results in
tests breaking for perfectly good code. It can be *very* frustrating.

I like the idea of providing optional JSHint-style linting with these
messages, but it isn't really optional right now; even if you don't
want these messages, they are still forced upon you.

Personally, I'm A-OK with retiring these messages, but it seems at least
some people do get value from them. Maybe this should be a flag?
--enable-extra-js-warnings? A pref?

Just not enabled by default, please...


Re: Is anyone still using JS strict warnings?

2014-12-19 Thread Gijs Kruitbosch

On 19/12/2014 20:45, William McCloskey wrote:

I was the person in the previous thread who found them useful, and I still
do. Some of the extraWarnings stuff is of questionable value, but the
undefined property stuff is really useful.


Can you give an example of a useful undefined property warning? Because 
my experience is the same as fitzgen's in that they are basically never 
useful to me.



Writing front-end JS code for Firefox is generally a painful experience:
terrible error messages, often no filename or line number information, or
even no output at all. But I think we should be making things better, not
worse.


Agreed that we should be improving this, but I think we differ in that I 
don't think this change makes things worse - in fact, removing these is 
better, IMO, in that real errors now don't get hidden in between warnings 
pretending to be errors.


We should be fixing the filename/line number info (fwiw, that part is 
unrecognizable to me in the general sense - have you filed bugs?), and 
parse errors generating no errors at all (I think there are bugs on file 
for this), but keeping these warnings won't help us do that.


~ Gijs


Re: Is anyone still using JS strict warnings?

2014-12-19 Thread Jim Porter
On 12/19/2014 02:19 PM, Jason Orendorff wrote:
 So if you go to about:config and set the javascript.options.strict pref,
 you'll get warnings about accessing undefined properties.
 
 Please speak up now, if you're still using it!

I find these warnings quite useful (granted, I'm the sort of person who
compiles C++ with `-Wall -Wextra -Werror -pedantic`), and I've often
wished that I could make these errors show up at compile/lint time in
Firefox OS.

In particular, I really like seeing warnings about accessing undefined
properties when I'm refactoring code or changing an API. It's easy to
forget to change a single instance of a property when refactoring, and
having a warning at least gets me partway to being able to verify that I
did things correctly. (Tests help too, of course, but even a good test
suite probably won't catch *every* possible bug.)

- Jim


Re: Is anyone still using JS strict warnings?

2014-12-19 Thread David Rajchenbach-Teller
I am going to suggest, once again, that warnings are generally noise and
should be replaced by actionable errors, at least when the code is
executed in a test suite.

See
https://groups.google.com/forum/#!topic/mozilla.dev.platform/gqSIOc5b-BI

Cheers,
 David

On 19/12/14 21:19, Jason Orendorff wrote:
 So if you go to about:config and set the javascript.options.strict pref,
 you'll get warnings about accessing undefined properties.
 
 js Math.TAU
 undefined
 /!\ ReferenceError: reference to undefined property Math.TAU
 
 (It says ReferenceError, but your code still runs normally; it really is
 just a warning.)
 
 Is anyone using this? Bug 1113380 points out that the rules about what kind
 of code can cause a warning are a little weird (on purpose, I think). Maybe
 it's time to retire this feature.
 
 https://bugzilla.mozilla.org/show_bug.cgi?id=1113380
 
 Please speak up now, if you're still using it!
 
 -j
 


-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla





Re: Is anyone still using JS strict warnings?

2014-12-19 Thread William McCloskey
On Fri, Dec 19, 2014 at 2:34 PM, Gijs Kruitbosch gijskruitbo...@gmail.com
wrote:

 Can you give an example of a useful undefined property warning? Because my
 experience is the same as fitzgen's in that they are basically never useful
 to me.


I can't cite any bugzilla bugs, no. They're more useful while still
developing patches. Say you have a counter that's a property of your object
and you accidentally write this.conter++. You'll end up with the property
set to NaN, which probably won't show up until some later time. Without
warnings, you need to stick print statements all over the place to figure
out where the problem is. The warning tells you exactly where to look.
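That failure mode can be sketched like this (the class and property names are invented for illustration):

```javascript
// A typo'd property read silently yields undefined instead of throwing,
// so the counter becomes NaN and nothing signals where it went wrong.
class Tally {
  constructor() { this.counter = 0; }
  bump() { this.conter++; }  // typo: should be this.counter
}

const t = new Tally();
t.bump();
console.log(t.counter);               // 0: the real counter never moved
console.log(Number.isNaN(t.conter));  // true: undefined++ produced NaN
```

With the undefined-property warning enabled, the read of `this.conter` is flagged at the line of the typo rather than wherever the NaN finally surfaces.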

 Writing front-end JS code for Firefox is generally a painful experience:
 terrible error messages, often no filename or line number information, or
 even no output at all. But I think we should be making things better, not
 worse.


 Agreed that we should be improving this, but I think we differ in that I
 don't think this change makes things worse - in fact, removing these is
 better, IMO, in that real errors now don't get hidden inbetween warnings
 pretending to be errors.


I'm definitely sympathetic to that point of view. I run with MOZ_QUIET and
MOZ_IGNORE_WARNINGS for this reason. It seems fine to me to have these
warnings be optional. I suspect that people who value static types are the
ones who like these warnings, and everybody else doesn't.


 We should be fixing the filename/line number info (fwiw, that part is
 unrecognizable to me in the general sense - have you filed bugs?)


Bug 1067942 is the worst example of that that I'm aware of right now. I try
to file these problems as I see them.

-Bill


Re: Is anyone still using JS strict warnings?

2014-12-19 Thread Jim Blandy
The bug is surprising, in that it claims that the bytecode that consumes
the value determines whether a warning is issued (SETLOCAL;CALL), rather
than the bytecode doing the fetch.

Is that the intended behavior? I can't see how that makes much sense.
On Dec 19, 2014 2:55 PM, David Rajchenbach-Teller dtel...@mozilla.com
wrote:

 I am going to suggest, once again, that warnings generally noise and
 should be replaced by actionable errors, at least when the code is
 executed in a test suite.

 See
 https://groups.google.com/forum/#!topic/mozilla.dev.platform/gqSIOc5b-BI

 Cheers,
  David

 On 19/12/14 21:19, Jason Orendorff wrote:
  So if you go to about:config and set the javascript.options.strict pref,
  you'll get warnings about accessing undefined properties.
 
  js Math.TAU
  undefined
  /!\ ReferenceError: reference to undefined property Math.TAU
 
  (It says ReferenceError, but your code still runs normally; it really
 is
  just a warning.)
 
  Is anyone using this? Bug 1113380 points out that the rules about what
 kind
  of code can cause a warning are a little weird (on purpose, I think).
 Maybe
  it's time to retire this feature.
 
  https://bugzilla.mozilla.org/show_bug.cgi?id=1113380
 
  Please speak up now, if you're still using it!
 
  -j
 


 --
 David Rajchenbach-Teller, PhD
  Performance Team, Mozilla






Re: Is anyone still using JS strict warnings?

2014-12-19 Thread Jason Orendorff
On Fri, Dec 19, 2014 at 5:13 PM, Jim Blandy j...@red-bean.com wrote:

 The bug is surprising, in that it claims that the bytecode that consumes
 the value determines whether a warning is issued (SETLOCAL;CALL), rather
 than the bytecode doing the fetch.

 Is that the intended behavior? I can't see how that makes much sense.


That's intentional, yes. The idea is that phrases like these:

if (obj.prop === undefined)  // ok

if (obj.prop  obj.prop.prop2)  // ok

shouldn't cause warnings. These are "detecting" uses: obj.prop is fetched
for the purpose of determining whether or not it exists.

But to distinguish detecting uses from others, we have to look at the
bytecode that consumes the value. All this is one reason I'd like to remove
the feature --- the bytecode inspection here is pretty sloppy. See comment
4 in the bug.
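For concreteness, here is a small sketch of the patterns in question (my own illustration, not code from the bug, using a hypothetical `config` object with a deliberately missing `timeout` property); whether each read warns under javascript.options.strict follows the consuming-bytecode rule described above:

```javascript
// Hypothetical `config` object; `timeout` is deliberately missing.
const config = { verbose: true };

// "Detecting" uses: the fetched value feeds an existence check, so
// under javascript.options.strict these should not warn.
const hasTimeout = config.timeout !== undefined;
const retries = config.timeout && config.timeout.retries;

// Non-detecting use: the fetched value is consumed directly (here,
// passed to a call), which is the case that triggers the warning.
function start(t) { return t; }
const t = start(config.timeout);

console.log(hasTimeout, retries, t); // false undefined undefined
```

Only the last read would be flagged; the first two run silently, which is exactly why the check has to look at the bytecode consuming the value rather than the fetch itself.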

-j


Re: PSA: Flaky timeouts in mochitest-plain now fail newly added tests

2014-12-19 Thread Ehsan Akhgari

On 2014-12-19 4:40 PM, Nils Ohlmeier wrote:



On Dec 19, 2014, at 6:56 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

Let me try to rephrase the problem in different terms, to hopefully make it 
clearer why using timers like this is a bad idea.

setTimeout(foo, 1000) may seem to suggest "run foo after 1 second", but that is 
*not* what that function does, at all.  What it does is "run foo after 1 
second, or perhaps at any other point in time later than that, and it could 
even be *minutes* later."

A lot of people are under the assumption that just because their local timings 
pan out, and they've added a, let's say, 10x error margin to their timers, 
their use of timers is fine.  It's not, as has been demonstrated as 
intermittent oranges over the years.  We have no way of knowing whether those 
calculations hold under various types of machine loads, on new platforms, on 
much lower-powered machines (think of running on a low-end phone the tests 
whose measurements were done on a beefy desktop).  And 
increasing the error margin won't save us, it will just move the problem 
further into the future.


Ok. That’s fine by me.
First of all, I assume that the slowness of the machine also affects mochitest’s 
overall timer, right?


That's correct.

 If not, then we are in big trouble.

Secondly, remember that I’m NOT doing anything useful when my 
“missing-event-timer” pops. I’m just logging a message that the event is 
missing and then I’m trying to exit the test early (earlier than mochitest’s 
general test timeout).


Your approach is _still_ faulty in that you are assuming that section of 
your test will reasonably finish in 60s, but if the test slave is 
starved, the 60s timer may fire *before* the event you are waiting for 
has a chance to be dispatched.  Based on that, your approach is prone to 
intermittent failures.


Of course, the exact level of problem depends on what you mean by "log" 
above.  If you mean something that is *actually* side-effect free such 
as dump() or info(), then the worst thing that can happen is that you 
may see misleading log messages sometimes.  But if you do *anything* 
which can have side-effects, you should stop using setTimeout ASAP. 
That includes things such as ok(false, "the event didn't arrive"), 
SimpleTest.finish(), etc.


At this point though, you should be asking yourself, why go through the 
pain of ensuring those timer callbacks are side-effect-free and that 
they remain so for the years to come?  Really, the problem that you are 
trying to protect yourself against doesn't exist.  So, please just don't 
use setTimeouts like that, ever!  :-)  That will keep things much 
simpler, and less footgun-ish.
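As a self-contained illustration of the race being described here (my own sketch, not code from any test, with millisecond delays standing in for the real 60 s timer and a starved slave), a 10 ms guard timer can spuriously win against an event that does arrive, just 50 ms late:

```javascript
// A promise that resolves when the awaited "event" finally arrives.
function waitForEvent(delayMs) {
  return new Promise(resolve =>
    setTimeout(() => resolve("event arrived"), delayMs));
}

// The guard timer a test might add to "catch" a missing event.
function guardTimer(delayMs) {
  return new Promise(resolve =>
    setTimeout(() => resolve("guard fired: event missing"), delayMs));
}

async function race() {
  // The event is merely late (50 ms), but the guard (10 ms) wins the
  // race, so a side-effecting guard callback would fail the test
  // spuriously even though the event was on its way.
  return Promise.race([waitForEvent(50), guardTimer(10)]);
}
```

Here race() resolves to the guard's message; the absolute delays are arbitrary, the point is only that nothing guarantees their relative order under load.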



The only problem I can see with that approach is: if the “missing-event-timer” 
actually pops more or less precisely after the requested time, BUT the actual 
work is running in super slow motion for some reason. In that case the event 
would have arrived, but only after the event-timeout.
So if my “missing-event-timers” are popping on time, because they are for some 
bizarre reason not affected by the overall slowness of the machine, then we 
have a problem with this. But in that case we should have a problem with the 
overall test timeout of mochitest as well, or?


The default mochitest timeout is 5 minutes.  And you're right, if your 
test takes longer than that, it will fail.  If it does so 
intermittently, then it will fail intermittently.  And if that happens, 
the solution is to either use SimpleTest.requestLongerTimeout() to 
request a longer timeout, or to split up the test.  Years of experience 
has shown that the default 5 minute timeout is sufficient for all of our 
platforms in practice.  And in the future if that proves to be too 
short, we can change it globally with a one line change (in 
TestRunner.js).  If individual test authors roll out their own clones of 
the harness timeout mechanism, updating them in that situation will 
become very very hard, if possible at all.



Also, please note that letting the test time out if it doesn't succeed is a 
perfectly valid failure scenario, we have numerous tests that do things like 
fire an async event, setup a listener and wait for the listener to fire and 
SimpleTest.finish() there, without any timers to catch the case where the event 
does not fire.


My tests are not about testing this one timer. Doing that would be relatively 
easy.
In WebRTC I need to orchestrate two participants to establish a real time audio 
video call. Lots of the tests need to establish a full call, before assessing 
if the call got properly established under the given criteria. The most common 
scenario for intermittent failures is that the call does not even get 
established. From start to a full call I need to verify several state machines 
and lots of events firing and all of that twice for the two participants of 
each call.
So yes I could simply try to establish a call (and not worrying about all the 
intermediary steps 

Re: Is anyone still using JS strict warnings?

2014-12-19 Thread ISHIKAWA,Chiaki

(2014/12/20 5:19), Jason Orendorff wrote:

So if you go to about:config and set the javascript.options.strict pref,
you'll get warnings about accessing undefined properties.

 js> Math.TAU
 undefined
 /!\ ReferenceError: reference to undefined property "Math.TAU"

(It says ReferenceError, but your code still runs normally; it really is
just a warning.)

Is anyone using this? Bug 1113380 points out that the rules about what kind
of code can cause a warning are a little weird (on purpose, I think). Maybe
it's time to retire this feature.

 https://bugzilla.mozilla.org/show_bug.cgi?id=1113380

Please speak up now, if you're still using it!


Thunderbird relies on many JS source files, and
sometimes a bug creeps in, for example,

 - attributes which were set before, do not get set any more
   after a modification, and

 - some other files are still referencing the now non-existing
   attributes.

JS strict warnings are the only way for me to realize such bugs exist.
Most of the code was written many years ago, and frankly nobody owns it 
any more, so to speak.


(I also notice that there are typos that can only be uncovered by
 JS strict warnings, etc.)

From the software engineering point of view, it is essential to keep the 
large number of JS source files in TB in shape, IMHO.


I use the JS strict warnings that are printed from the DEBUG build of 
C-C TB to find and fix remaining and newly introduced bugs in TB all the 
time. (The number of such warnings and errors printed during |make 
mozmill| tests are staggering, and I created a script to sort them and 
prioritize them. I see probably a few dozen different bugs and attack 
the most frequent or most disturbing-looking bugs/warnings first.)


TIA



-j






Re: Is anyone still using JS strict warnings?

2014-12-19 Thread Jim Blandy
On Fri, Dec 19, 2014 at 2:22 PM, Nick Fitzgerald nfitzger...@mozilla.com
wrote:

 I generally don't find them useful, but instead annoying, and that they
 provide a lot of noise to filter out to find actual relevant errors. This
includes the undefined property errors. It is a common JS style to pass
 around configuration/option objects that will be missing many properties
 that get accessed by functions they are passed to. My understanding is that
 part of this is special cased not to generate these messages, but in my
 experience it isn't close to enough.


In a recent message, Jason explained that forms that test for the presence
of a property:

  if (obj.prop)
  if (obj.prop === undefined)
  if (obj.prop  ...)

are not supposed to generate warnings.

Do you find that you're getting these warnings from code that has one of
those forms?