Re: ERROR: Logon failure: unknown user name or bad password. on mochitests

2012-09-26 Thread Armen Zambrano G.

On 12-09-26 6:45 AM, Neil wrote:

Armen Zambrano G. wrote:


Hi all,
Recently we've set up some new XP slaves that fail on mochitest jobs
due to this error:
ERROR: Logon failure: unknown user name or bad password.

Unfortunately, I have not been able to figure out what is triggering that
error (I can't find the strings in our code).

If you're curious or have any ideas please let me know in bug 788382


Perhaps the file system permissions are incorrect and NT
AUTHORITY\NETWORK SERVICE doesn't have permission to access
C:\WINDOWS\system32\wbem\wmiprvse.exe ?



You might be right.
How can I check this? I'm not very familiar with this concept.

Every time I run tasklist /svc I get an error logged:

Event Type: Error
Event Source:   DCOM
Event Category: None
Event ID:   1
Date:   9/25/2012
Time:   2:25:36 PM
User:   NT AUTHORITY\NETWORK SERVICE
Computer:   TALOS-R3-XP-082
Description:
Unable to start a DCOM Server: {73E709EA-5D93-4B2E-BBB0-99B7938DA9E4}. 
The error:

Access is denied. 
Happened while starting this command:
C:\WINDOWS\system32\wbem\wmiprvse.exe -Embedding
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Robohornet browser benchmark

2012-09-26 Thread Daniel Buchner
This was not pitched to me by the folks involved as being our official
buy-in for this test. They asked me to try and get our people involved and
help. I helped get some of the tests working that were crashing in Fx. I
thought I was doing something good to help us not get docked for crashing
tests.

I can get my name off that list; I wasn't aware this would be used in the
press this way.

On Tue, Sep 25, 2012 at 7:56 AM, Justin Lebar justin.le...@gmail.com wrote:

 Maybe this is naive of me, but I for one don't really believe in
 tweaking benchmarks for the purposes of making Firefox look better.

 If we look bad in a benchmark, badmouthing it seems somehow more
 gentlemanly than stacking it.  :)

 Anyway, I filed a bug on getting rid of the microbenchmarks, which I
 think we all agree is important, regardless of how that affects
 Firefox's score.

 https://github.com/robohornet/robohornet/issues/67

 On Tue, Sep 25, 2012 at 10:46 AM, Daniel Buchner dbuch...@mozilla.com
 wrote:
  I've got private access to the RoboHornet repo and have been in
 discussions
  with the PM that headed that effort up, do you all want to get some code
  committed to help our numbers out?
 
  - Daniel
 
 
 
  On Tue, Sep 25, 2012 at 6:51 AM, Justin Lebar justin.le...@gmail.com
  wrote:
 
   (Can you hear that thud, thud, thud? It's the sound of me beating my
   head against my desk.)
 
  One of the intriguing things about this benchmark is that it's open
  source, and they're committed to changing it over time.
 
  FWIW Paul Irish agrees the sieve is a bad test, although he doesn't
  hate it to the extent you or I would think is deserved.
  https://github.com/robohornet/robohornet/issues/20#issuecomment-8837867
   So maybe all hope is not lost.
 
  It's really laughable that they count the sieve as a test of
  handlebars.js performance.  Instead of, you know, actually using
  handlebars.js.  But I again wonder how much we can mold this into
  something interesting.
 
  -Justin
 
  On Tue, Sep 25, 2012 at 1:41 AM, Nicholas Nethercote
  n.netherc...@gmail.com wrote:
   On Mon, Sep 24, 2012 at 5:22 PM, Tim timvk...@gmail.com wrote:
   So there's a new benchmark out, seemingly from google.
  
   It is designed to test performance in web app bottlenecks, especially
   DOM, canvas API methods, SVG.
  
   Paul Irish from Google's Chrome team is in charge of it. He blogged on
   it here:
  
  
  
 http://paulirish.com/2012/a-browser-benchmark-that-has-your-back-robohornet/
  
   I'm horrified by this.  Quoting my Hacker News comments
   (http://news.ycombinator.com/item?id=4567796):
  
   Oh god, just when web people were starting to understand how to create
   good benchmarks
   (https://blog.mozilla.org/nnethercote/2012/08/24/octane-minus...),
   now we're going back to 1980s microbenchmark hell.
  
   Doesn't anyone read Hennessy and Patterson any more? The best benchmarks
   are real apps, not crappy little microbenchmarks that measure a single
   thing.
  
   (Can you hear that thud, thud, thud? It's the sound of me beating my
   head against my desk.)
  
   Also, one of the tests is basically a no-op executed many times
   (https://bugzilla.mozilla.org/show_bug.cgi?id=793913#c7).
  
   Even better, there's a prime numbers calculation test, apparently to
   test math.  This is grimly hilarious:  Hennessy and Patterson
   specifically cite the Sieve of Eratosthenes as an example of a toy
   (and thus crap) benchmark.  Sigh.
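For reference, the sieve being mocked here is tiny; a minimal Python sketch of the classic algorithm (RoboHornet's actual test is JavaScript and differs in details):

```python
def sieve(n):
    """Classic Sieve of Eratosthenes: all primes up to and including n."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            # Every multiple of i from i*i upward is composite
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [i for i, prime in enumerate(is_prime) if prime]
```

sieve(30) returns [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]. The whole thing is tight integer loops and array writes, which is exactly the kind of single-behavior toy Hennessy and Patterson warn against treating as a benchmark.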
  
   Daniel Buchner is apparently Mozilla's official representative on the
   RoboHornet committee of JavaScript experts
   (https://github.com/robohornet/robohornet/wiki/Committee-Membership).
   I don't know what his role is, but the thought of Mozilla officially
   blessing RoboHornet fills me with dread.
  
   While the suite may push us into some useful improvements, I worry
   that we'll end up implementing some stupid benchmarketing features
   that we will then carefully have to avoid regressing for the next 10
   years.
  
   Nick


Re: Robohornet browser benchmark

2012-09-26 Thread Daniel Buchner
I will talk with Alex about all of the concerns you've raised (hopefully
this week).

Hopefully we can get everything straightened out and produce another great
benchmark option that we can all consume/contribute to!

- Daniel
On Sep 25, 2012 7:03 PM, Nicholas Nethercote n.netherc...@gmail.com
wrote:

 On Tue, Sep 25, 2012 at 6:23 PM, Daniel Buchner dbuch...@mozilla.com
 wrote:
 I know the principal Google PM, Alex K, who heads up RoboHornet - he has
 been extremely helpful with our Web Components initiative. I believe he had
 good intentions with RoboHornet, and his personal posts (and those of Paul
 Irish) did not claim Google had Mozilla's official organizational support
 for the benchmark/tests (it was supposed to be by-devs-for-devs).

 Oh, absolutely.  I just didn't want your presence on that list to be
 perceived as any kind of support from Mozilla.  I read that a
 Microsoft person removed their name just today for the same reason.


 I believe Alex would be very amenable to changes to the tests, general
 benchmark strategy, and our contributions. I can help make sure our
 concerns are addressed - I'd be more than happy to do so.

 https://github.com/robohornet/robohornet/issues/67 has a good summary
 of the concerns, via comments from jlebar and me.  I guess we can wait
 and see if they respond further.  If you want to also press him
 privately for a response to those comments, that wouldn't hurt.  If
 they are willing to redo things from scratch, avoiding
 microbenchmarks, then there's potential for a good outcome here.

 Nick





Re: UA Override List Policy

2012-09-26 Thread Robert Kaiser

Gervase Markham wrote:

Does this strike the right balance?


I think it does. This looks like a good way forward to me.

And yes, IMHO, we can have very short timeouts on #2 for sites we 
already know about, esp. in the time before we finish up the first 
release and are still able to remove a pref before actually shipping.


Robert Kaiser


mach has landed

2012-09-26 Thread Gregory Szorc
The next time you pull mozilla-central, you'll find a new tool in the 
root directory: mach


$ ./mach build


0.09 /usr/bin/make -f client.mk -j8 -s
0.25 Adding client.mk options from /Users/gps/src/services-central/.mozconfig:
0.25 MOZ_OBJDIR=$(TOPSRCDIR)/objdir
0.25 MOZ_MAKE_FLAGS=-j9 -s
0.33 TEST-PASS | check-sync-dirs.py | /Users/gps/src/services-central/js/src/build 
= /Users/gps/src/services-central/build
0.34 Generating /Users/gps/src/services-central/configure using autoconf
1.55 cd /Users/gps/src/services-central/objdir
1.55 /Users/gps/src/services-central/configure
1.57 Adding configure options from /Users/gps/src/services-central/.mozconfig:
1.57   --enable-application=browser
1.57   --enable-optimize
1.70 loading cache ./config.cache
snip

 335.19 309 compiler warnings present.

$ ./mach warnings-summary

 snip

10  -Wlogical-op-parentheses
10  -Wtautological-compare
13  -Wsometimes-uninitialized
26  -Wconversion
30  -Wunused-variable
34  -Wunused-function
34  -Wdeprecated-declarations
47  -Wtautological-constant-out-of-range-compare
67  -Wunused-private-field
309 Total


$ ./mach warnings-list

 snip

content/base/src/nsContentUtils.cpp:279:15 [-Wunused-private-field] private 
field 'mKey' is not used
content/base/src/nsDocumentEncoder.cpp:2014:78 [-Wlogical-op-parentheses] '&&' 
within '||'
content/xbl/src/nsBindingManager.cpp:433:1 [-Wdelete-non-virtual-dtor] delete 
called on 'nsBindingManager' that has virtual functions but non-virtual 
destructor
content/xbl/src/nsXBLProtoImpl.cpp:22:21 [-Wunused-private-field] private field 
'cx' is not used
content/xbl/src/nsXBLProtoImpl.cpp:23:13 [-Wunused-private-field] private field 
'versionBefore' is not used
snip


$ ./mach mochitest-browser browser/base/content/test/

 expected mochitest output here

(Yes, you can tab complete test paths in your shell instead of going 
through some silly environment variable dance).


If you get lost:

$ ./mach help

(Try doing that with make!)

Read more about mach at [1].

I want to stress that mach is still in its infancy. It is currently only 
a fraction of what I want it to be. However, mach today is infinitely 
more than what it was yesterday: non-existent. That's progress. If you 
have a feature request or a workflow idea, please file a bug. Even better, 
contribute a patch! You only need to know beginner's Python to add new 
commands to mach.
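For anyone curious what "beginner's Python" buys you: the following is not mach's actual API (the real command decorators live in the tree), just a self-contained sketch of the registered-subcommand pattern it builds on, with invented names throughout:

```python
import argparse

# Hypothetical registry mapping command name -> (handler, help text)
COMMANDS = {}

def command(name, help=""):
    """Decorator that registers a function as a mach-style subcommand."""
    def register(func):
        COMMANDS[name] = (func, help)
        return func
    return register

@command("warnings-summary", help="Summarize compiler warnings")
def warnings_summary(args):
    # A real command would parse the build log; here we just echo the input
    return "summarizing %s" % args.logfile

def main(argv):
    parser = argparse.ArgumentParser(prog="mach")
    sub = parser.add_subparsers(dest="cmd")
    for name, (_, helptext) in COMMANDS.items():
        p = sub.add_parser(name, help=helptext)
        p.add_argument("logfile", nargs="?", default="build.log")
    args = parser.parse_args(argv)
    return COMMANDS[args.cmd][0](args)
```

Adding a new command is then just writing one decorated function, which is the property that makes this style of driver easy to extend.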


I hope you enjoy using mach. I look forward to many new and exciting 
features in the near future.


[1] http://gregoryszorc.com/blog/2012/09/26/mach-has-landed/

Gregory


Re: ERROR: Logon failure: unknown user name or bad password. on mochitests

2012-09-26 Thread Clint Talbert

On 9/26/2012 8:32 AM, Armen Zambrano G. wrote:

On 12-09-26 6:45 AM, Neil wrote:

Perhaps the file system permissions are incorrect and NT
AUTHORITY\NETWORK SERVICE doesn't have permission to access
C:\WINDOWS\system32\wbem\wmiprvse.exe ?



You might be right.
How can I check this? I'm not very familiar with this concept.

You can right-click the file, select the Security tab, and see which 
users have which permissions on it. As a test, simply grant Everyone 
full access to the file and try to run it.
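If clicking through the GUI on every slave is tedious, the same check can be scripted. A hedged sketch (the helper name is mine; cacls is the ACL tool that ships with XP, later replaced by icacls):

```python
import subprocess

# Path from the DCOM error in the original report
TARGET = r"C:\WINDOWS\system32\wbem\wmiprvse.exe"

def acl_command(path):
    # cacls prints the file's ACL; in its output, look for an entry
    # granting "NT AUTHORITY\NETWORK SERVICE" at least Read (R) access
    return ["cacls", path]

# On the affected slave:
#   subprocess.call(acl_command(TARGET))
```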


It could be that one of the OPSI scripts either deleted or changed 
permissions for one of the users.


Mochitest itself doesn't log into anything, so this isn't from the 
harness. We have removed this method of killing tasks on Windows in 
mozbase, and this should get much more stable once we port mochitest to 
use the mozbase toolchain, tentatively slated for this coming quarter.


Clint


Re: [b2g] UA Override List Policy

2012-09-26 Thread Jonas Sicking
On Wed, Sep 26, 2012 at 5:55 AM, Gervase Markham g...@mozilla.org wrote:
 This is a proposed lightweight policy for adding sites to the UA string
 override list. (Follow up to dev-platform; let me know if I've not spread
 this wide enough.)

 I propose that we agree this policy, then switch the B2G default UA back to
 the OS-agnostic one (no Android; bug 787054). If we hit problems with
 specific sites, we file bugs, run through this process, and get an override
 in place if appropriate. It's important for a number of reasons to fix the
 B2G UA ASAP, and I suspect that if we resolve it for v1 to include
 Android, we'll never manage to get it out of there, with all of the
 problems that will entail long term.

 We want to balance the ability to react to problems users are experiencing,
 with a requirement to check that we are aiming before shooting, that we
 don't actually degrade users' UX (e.g. by bypassing a check which kept them
 from a very broken site) and that we are making sure the list does not grow
 without any organized efforts to shrink it again.

 Sites should be added to the list if/when:

 1) An evangelism bug has been opened and the site has been contacted;

 2) The site has proved unresponsive or unwilling to accommodate us (how long
 we wait for this will depend on factors such as the popularity of the site
 and the extent of breakage);

 3) There is a specific proposed alternative UA for each broken product which
 has minimal changes from the default;

 4) Either: Deep testing (not just the front page) has shown that a UA
 override for that UA in that product leads to a significant UX improvement
 on the site; or we know that the fix works because it restores a UA which
 that product was previously using;

 5) The override is only for the broken products;

 6) The entry in the prefs file comes with a comment with a reference to the
 bug in question.
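 For criterion 6, an entry might look like the following sketch; the domain,
 UA string, and bug number are invented for illustration, and the pref name
 follows the site-specific override mechanism (bug 782453) as I understand it:

```
// Bug NNNNNN - example.com sniffs for Android and serves a broken site
pref("general.useragent.override.example.com",
     "Mozilla/5.0 (Android; Mobile; rv:18.0) Gecko/18.0 Firefox/18.0");
```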

 Criterion 2 and criterion 4 are the only ones which could potentially lead
 to significant delay. 4 is unavoidable; we don't want to be doing an
 override without checking that it actually improves things. Sites required
 by teams for functional testing or demoing of B2G would have a very short
 timeout for criterion 2.

 Sites should be removed from the list, for all active branches (I propose:
 including stable and ESR), once the site has confirmed that they have fixed
 it, or deep testing makes us believe they have.


 Does this strike the right balance?

This sounds like a good policy Gerv.

However, I'm very worried that this is happening so late in the B2G
development cycle. The feature freeze for B2G is on Friday this week,
and while we have a few features that will land after that, that list
is very short and limited to things we simply can't ship a phone
without (like 3rd-party app updates).

When do you expect this feature to be implemented and land?

/ Jonas


Re: [b2g] UA Override List Policy

2012-09-26 Thread Gavin Sharp
The site-specific UA override functionality has already landed on
mozilla-central and aurora (bug 782453). See also bug 787054 for its
intended use in b2g.

Gavin
