[chromium-dev] Re: cygwin dependence missing?

2009-07-03 Thread Dirk Pranke

Debugging this a bit more ... it seems to be some sort of XP vs. Vista
thing. If I run wdiff exp.txt act.txt in a bash shell or a command
prompt on Vista, it works. On XP, it works in a bash shell, but in a
command prompt, I get the /tmp error. It looks like XP is expecting
to find the cygwin environment, but isn't, and it is finding it on
Vista. I don't see anything obvious in my environment that is
different.

I am running Cygwin 1.5.25, though. At this point, my knowledge of
cygwin falters, so I'm out of ideas. Anyone else?

-- Dirk

On Thu, Jul 2, 2009 at 6:08 PM, Dirk Pranke dpra...@google.com wrote:
 Oh, I should add that if I run the same binaries under Vista,
 everything works fine (no error from wdiff).

 -- Dirk

 On Thu, Jul 2, 2009 at 6:07 PM, Dirk Pranke dpra...@google.com wrote:
 Hi Marc-Antoine,

 I am getting the same wdiff: /tmp/t101c.0: No such file or directory
 errors ... I'm running an XP VM on a Vista 64 host, but I've tried
 both local files (running the tests on a virtual drive) as well as a
 network share to the host VM. I've tried the /etc/fstab, the
 CYGWIN=nontsec, and the changing of directory ACLs, all to no avail.
 Any other ideas?

 -- Dirk

 On Thu, Jul 2, 2009 at 5:53 PM, Marc-Antoine Ruel mar...@chromium.org wrote:
 2009/7/1 Bradley Nelson bradnel...@google.com

 gyp should be setting CYGWIN=nontsec for actions and rules (unless you use
 the msvs_use_cygwin_shell:0).

 FYI, cygwin 1.7 doesn't honour CYGWIN=NONTSEC anymore. You need to modify
 /etc/fstab, e.g. c:\cygwin\etc\fstab, to add something like:
 none /cygdrive cygdrive binary,posix=0,user,noacl 0 0
 to have the same effect.
 M-A
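For Cygwin 1.7, the edit M-A describes could be scripted roughly as below. This is an illustration only: it writes to ./fstab in the current directory so it is safe to try; on a real Cygwin install the target would be /etc/fstab (i.e. c:\cygwin\etc\fstab).

```shell
# Illustration only: writes to ./fstab here; on a real Cygwin 1.7 install
# the target would be /etc/fstab (c:\cygwin\etc\fstab).
FSTAB=./fstab
LINE='none /cygdrive cygdrive binary,posix=0,user,noacl 0 0'
touch "$FSTAB"
# Append the noacl mount entry only if it is not already present,
# so re-running the script does not duplicate it.
grep -qxF "$LINE" "$FSTAB" || echo "$LINE" >> "$FSTAB"
cat "$FSTAB"
```

Running it twice leaves a single copy of the entry in place.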
 




--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: cygwin dependence missing?

2009-07-03 Thread Dirk Pranke

Hi Marc-Antoine,

I am getting the same wdiff: /tmp/t101c.0: No such file or directory
errors ... I'm running an XP VM on a Vista 64 host, but I've tried
both local files (running the tests on a virtual drive) as well as a
network share to the host VM. I've tried the /etc/fstab, the
CYGWIN=nontsec, and the changing of directory ACLs, all to no avail.
Any other ideas?

-- Dirk

On Thu, Jul 2, 2009 at 5:53 PM, Marc-Antoine Ruel mar...@chromium.org wrote:
 2009/7/1 Bradley Nelson bradnel...@google.com

 gyp should be setting CYGWIN=nontsec for actions and rules (unless you use
 the msvs_use_cygwin_shell:0).

 FYI, cygwin 1.7 doesn't honour CYGWIN=NONTSEC anymore. You need to modify
 /etc/fstab, e.g. c:\cygwin\etc\fstab, to add something like:
 none /cygdrive cygdrive binary,posix=0,user,noacl 0 0
 to have the same effect.
 M-A
 





[chromium-dev] Re: Clobber third_party/ffmpeg/binaries before syncing

2009-07-06 Thread Dirk Pranke

On Mon, Jul 6, 2009 at 1:49 PM, Andrew Scherkus scher...@chromium.org wrote:
 I just checked in a Chromium-specific version of FFmpeg that only includes
 Ogg+Theora+Vorbis support.
 If you previously had any binaries located in third_party/ffmpeg/binaries,
 you may have to clobber that entire directory and force sync:
 rm -rf third_party/ffmpeg/binaries
 svn co third_party/ffmpeg/binaries
 gclient sync --force
 Andrew

If you do this, and keep getting the message WARNING:
src/third_party/ffmpeg/binaries is no longer part of this client.
It is recommended that you manually remove it., you might try
deleting your .gclient_entries file as well, before you do the gclient
sync --force.
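Put together, the whole cleanup might look like the sketch below. The paths are assumptions (run from the directory containing your .gclient file, with the checkout under src/), and the gclient step is guarded since gclient may not be on your PATH.

```shell
# Hypothetical cleanup sequence; assumes you run it from the directory
# that contains your .gclient file, with the checkout under src/.
rm -rf src/third_party/ffmpeg/binaries   # clobber the stale binaries
rm -f .gclient_entries                   # make gclient forget the old entry
if command -v gclient >/dev/null 2>&1; then
  gclient sync --force
else
  echo "gclient not on PATH; run 'gclient sync --force' manually"
fi
```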

-- Dirk




[chromium-dev] Re: cygwin dependence missing?

2009-07-06 Thread Dirk Pranke

Okay, I think I've figured this out and it's a relatively undocumented
aspect of our win builds (from what I can tell).

The third_party/cygwin install we have has a modified version of
cygwin1.dll, which has been patched to tell cygwin that the root is
third_party/cygwin rather than /. In order for this to work correctly,
you need to run third_party/cygwin/setup_mount.bat to stuff the right
entries into the registry.

We should probably change run_layout_tests to automatically run this
script prior to running the regression (running it every time should
be harmless).
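The mount lookup that setup_mount.bat enables can be modeled roughly as below. This is a toy sketch, not the real cygwin1.dll logic: the mount entries normally live in registry keys, and the Windows paths here are purely illustrative.

```python
# Toy model of cygwin path resolution via a mount table. The real table is
# stored in registry entries written by setup_mount.bat; the entries and
# Windows paths below are illustrative only.
def resolve(posix_path, mounts):
    """Map a POSIX path to a Windows path via the longest matching mount."""
    matches = [m for m in mounts
               if posix_path == m or posix_path.startswith(m.rstrip('/') + '/')]
    best = max(matches, key=len)          # longest mount prefix wins
    suffix = posix_path[len(best):].lstrip('/')
    win = mounts[best].rstrip('\\')
    return win + '\\' + suffix.replace('/', '\\') if suffix else win

mounts = {
    '/':    r'c:\chromium\src\third_party\cygwin',
    '/tmp': r'c:\chromium\src\third_party\cygwin\tmp',
}
print(resolve('/tmp/t101c.0', mounts))   # the path wdiff was failing to find
```

Without the registry entries, the lookup falls back to a root that does not exist on the machine, which is consistent with the /tmp errors above.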

-- Dirk

On Thu, Jul 2, 2009 at 7:27 PM, Dirk Pranke dpra...@google.com wrote:
 Debugging this a bit more ... it seems to be some sort of XP vs. Vista
 thing. If I run wdiff exp.txt act.txt in a bash shell or a command
 prompt on Vista, it works. On XP, it works in a bash shell, but in a
 command prompt, I get the /tmp error. It looks like XP is expecting
 to find the cygwin environment, but isn't, and it is finding it on
 Vista. I don't see anything obvious in my environment that is
 different.

 I am running Cygwin 1.5.25, though. At this point, my knowledge of
 cygwin falters, so I'm out of ideas. Anyone else?

 -- Dirk

 On Thu, Jul 2, 2009 at 6:08 PM, Dirk Pranke dpra...@google.com wrote:
 Oh, I should add that if I run the same binaries under Vista,
 everything works fine (no error from wdiff).

 -- Dirk

 On Thu, Jul 2, 2009 at 6:07 PM, Dirk Pranke dpra...@google.com wrote:
 Hi Marc-Antoine,

 I am getting the same wdiff: /tmp/t101c.0: No such file or directory
 errors ... I'm running an XP VM on a Vista 64 host, but I've tried
 both local files (running the tests on a virtual drive) as well as a
 network share to the host VM. I've tried the /etc/fstab, the
 CYGWIN=nontsec, and the changing of directory ACLs, all to no avail.
 Any other ideas?

 -- Dirk

 On Thu, Jul 2, 2009 at 5:53 PM, Marc-Antoine Ruel mar...@chromium.org wrote:
 2009/7/1 Bradley Nelson bradnel...@google.com

 gyp should be setting CYGWIN=nontsec for actions and rules (unless you use
 the msvs_use_cygwin_shell:0).

 FYI, cygwin 1.7 doesn't honour CYGWIN=NONTSEC anymore. You need to modify
 /etc/fstab, e.g. c:\cygwin\etc\fstab, to add something like:
 none /cygdrive cygdrive binary,posix=0,user,noacl 0 0
 to have the same effect.
 M-A
 








[chromium-dev] Re: cygwin dependence missing?

2009-07-06 Thread Dirk Pranke

2009/7/6 David Jones ds...@163.com:
Okay, I think I've figured this out and it's a relatively undocumented
aspect of our win builds (from what I can tell).

The third_party/cygwin install we have has a modified version of
cygwin1.dll, which has been patched to tell cygwin that the root is
third_party/cygwin rather than /. In order for this to work correctly,
you need to run third_party/cygwin/setup_mount.bat to stuff the right
entries into the registry.

We should probably change run_layout_tests to automatically run this
script prior to running the regression (running it every time should
be harmless).

-- Dirk

 Do you mean my layout tests actually use the cygwin under depot_tools, but
 not the third_party/cygwin?
 Also, what does setup_mount.bat do?


There isn't a cygwin under depot_tools, unless you have something I
don't. setup_mount.bat creates a bunch of registry entries that tell
cygwin where to find the root of its source tree. Read the source for
that file and the README.google in that directory; they're pretty
clear about what's going on (enough that my repeating it here
wouldn't add anything).

Unless you either had those files added at some point, or somehow
managed to install cygwin into the same path, I'm not sure why it
would be working for you on one machine and not the other.




[chromium-dev] running layout_tests on vista and windows 7

2009-07-13 Thread Dirk Pranke

Hi all,

(If you don't ever care to run the webkit layout tests, you can skip this note).

As most of you are no doubt aware, we currently can only run the
webkit layout_tests on windows XP. For some of us who primarily
develop on 64-bit Vista, this is inconvenient at best, and this is
only going to get worse over time as more of us migrate to 64-bit
machines and (eventually) Windows 7.

So, I'm working on porting the layout tests to Vista. This note is a
writeup of the approach I'm thinking of taking, and I'm looking for
feedback and suggestions, especially since most of you have been on
this code base a lot longer than me.

My basic approach is to try and get something up as quickly as
possible as a proof of concept, and then work to try and reduce the
maintenance over time. So, I've started by cloning the chromium-win
port over to vista, and modifying the test scripts to be aware of the
new set of test expectations. I will then tweak the tests to get
roughly the same list of tests passing on Vista as on Windows. The
main differences will have to do with how the theming renders scroll
bars and a few other native controls. I have most of this now, and
should have the rest of this in a day or two, but this is not a
maintainable solution without a lot of manual overhead.

Next, we'll get a buildbot set up to run on Vista.

While we're doing this, I'll start working on reducing the test set
duplication between XP and Vista. The best way to do this (we
currently think) will be to modify test_shell to *not* draw the native
controls, but rather stub them out in a platform-independent way for
test purposes (e.g., just painting a grey box instead of a real scroll
bar). Then we can write a few platform-specific unit tests to ensure
that the widgets do work correctly, but the bulk of the tests will
become (more) platform-independent. My hope is that we'll have
something that I can demonstrate here in a week or two, and that it
will extend trivially to Win 7.

A stretch hope is that we can get the rendering platform-independent
enough that we may be able to share the baselines across the linux and
mac ports as well. I don't know if this is realistic, as many of the
tests may differ just due to font rendering and other minor
differences.

An alternative strategy is to start looking at more and more of the
tests and making sure they are written to be as platform-independent
as possible. First we'd this by making sure that we don't rely on
pixel-based tests where text-based tests would do. Another option
would be to switch to writing two tests just to ensure that page A
renders the same way as page B (where A and B use two different sets
of layout but should produce the same output). Both of these options
are significantly more work up front, but will pay off in much less
maintenance down the line. All of this work will also overlap
with the webkit test suites, so it'll need to be coordinated with our
upstream buddies.

Comments? Thoughts?

-- Dirk




[chromium-dev] Re: running layout_tests on vista and windows 7

2009-07-13 Thread Dirk Pranke

Yup, I've already adopted that. Thanks!

On Mon, Jul 13, 2009 at 12:55 PM, Thomas Van Lenten thoma...@chromium.org wrote:
 Quick skimmed reply: Mac already has expectations per OS where we need them,
 so you might be able to follow that basic model (and maybe small tweaks to
 the scripts to use it).  It looks for version specific and then falls back
 to generic platform files so we only have to dup the ones that are os
 version specific.
 TVL
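The fallback TVL describes can be sketched as a simple search-path check. The directory names below are made up for illustration, not the actual layout-test tree layout.

```python
# Sketch of per-OS baseline fallback: try the version-specific directory
# first, then fall back to the generic platform directory.
def baseline_search_path(platform, os_version=None):
    dirs = []
    if os_version:
        dirs.append(f'platform/{platform}-{os_version}')  # e.g. chromium-win-vista
    dirs.append(f'platform/{platform}')                   # generic fallback
    return dirs

def find_expected(test, platform, os_version, checked_in):
    """Return the first baseline present in the checked-in set, or None."""
    for d in baseline_search_path(platform, os_version):
        candidate = f'{d}/{test}-expected.txt'
        if candidate in checked_in:
            return candidate
    return None

checked_in = {'platform/chromium-win/scrollbar-expected.txt',
              'platform/chromium-win-vista/scrollbar-expected.txt'}
print(find_expected('scrollbar', 'chromium-win', 'vista', checked_in))
```

Only the baselines that really differ per OS version need to be duplicated; everything else falls back to the generic platform file.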

 On Mon, Jul 13, 2009 at 3:50 PM, Dirk Pranke dpra...@google.com wrote:

 Hi all,

 (If you don't ever care to run the webkit layout tests, you can skip this
 note).

 As most of you are no doubt aware, we currently can only run the
 webkit layout_tests on windows XP. For some of us who primarily
 develop on 64-bit Vista, this is inconvenient at best, and this is
 only going to get worse over time as more of us migrate to 64-bit
 machines and (eventually) Windows 7.

 So, I'm working on porting the layout tests to Vista. This note is a
 writeup of the approach I'm thinking of taking, and I'm looking for
 feedback and suggestions, especially since most of you have been on
 this code base a lot longer than me.

 My basic approach is to try and get something up as quickly as
 possible as a proof of concept, and then work to try and reduce the
 maintenance over time. So, I've started by cloning the chromium-win
 port over to vista, and modifying the test scripts to be aware of the
 new set of test expectations. I will then tweak the tests to get
 roughly the same list of tests passing on Vista as on Windows. The
 main differences will have to do with how the theming renders scroll
 bars and a few other native controls. I have most of this now, and
 should have the rest of this in a day or two, but this is not a
 maintainable solution without a lot of manual overhead.

 Next, we'll get a buildbot set up to run on Vista.

 While we're doing this, I'll start working on reducing the test set
 duplication between XP and Vista. The best way to do this (we
 currently think) will be to modify test_shell to *not* draw the native
 controls, but rather stub them out in a platform-independent way for
 test purposes (e.g., just painting a grey box instead of a real scroll
 bar). Then we can write a few platform-specific unit tests to ensure
 that the widgets do work correctly, but the bulk of the tests will
 become (more) platform-independent. My hope is that we'll have
 something that I can demonstrate here in a week or two, and that it
 will extend trivially to Win 7.

 A stretch hope is that we can get the rendering platform-independent
 enough that we may be able to share the baselines across the linux and
 mac ports as well. I don't know if this is realistic, as many of the
 tests may differ just due to font rendering and other minor
 differences.

 An alternative strategy is to start looking at more and more of the
 tests and making sure they are written to be as platform-independent
 as possible. First we'd do this by making sure that we don't rely on
 pixel-based tests where text-based tests would do. Another option
 would be to switch to writing two tests just to ensure that page A
 renders the same way as page B (where A and B use two different sets
 of layout but should produce the same output). Both of these options
 are significantly more work up front, but will pay off in much less
 maintenance down the line. All of this work will also overlap
 with the webkit test suites, so it'll need to be coordinated with our
 upstream buddies.

 Comments? Thoughts?

 -- Dirk

 






[chromium-dev] Re: Proposal for adding ChangeLog files to Chromium

2009-07-22 Thread Dirk Pranke

-1000 to manual changelog updates

+1 to a changelog populated by a commit trigger. Having a
local/offline copy of the change history can be useful, in the absence
of git.

-100 to reverts deleting stuff from changelogs. changelogs should be
(except in exceptional circumstances) append only, just like version
control.

Seems to me that any substantive change or feature addition should be
tracked by a bug, and that bug should have a 'feature' or
'release_note' flag associated with it. Then writing a script to pull
all the relevant notes would be pretty easy.
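That script might be as simple as filtering exported bug records on the flag. The record shape and flag name below are hypothetical, invented for illustration.

```python
# Hypothetical sketch: generate release notes by filtering bug-tracker
# records on a 'release_note' flag. The data format is invented.
bugs = [
    {'id': 101, 'summary': 'Ogg/Theora support', 'flags': ['release_note']},
    {'id': 102, 'summary': 'Internal refactor', 'flags': []},
    {'id': 103, 'summary': 'New omnibox keyword UI', 'flags': ['release_note']},
]

def release_notes(bugs):
    """Collect one note line per bug flagged for release notes."""
    return [f"issue {b['id']}: {b['summary']}"
            for b in bugs if 'release_note' in b['flags']]

print('\n'.join(release_notes(bugs)))
```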

-- Dirk


On Wed, Jul 22, 2009 at 10:13 AM, Darin Fisher da...@chromium.org wrote:
 On Wed, Jul 22, 2009 at 10:09 AM, Peter Kasting pkast...@google.com wrote:

 ...
 Actually, I've never been too sure about reverting in WebKit: does one
 revert the ChangeLog file too or add another ChangeLog entry at the
 top describing the revert?

 Unless my memory is faulty, according to the Apple folks who have guided
 me through reverts (in particular, bdash), you add a new entry at top saying
 you're reverting; you never remove the old CL entry.

 Oh, good to know!
 -Darin
 





[chromium-dev] Re: Generating files in source tree considered harmful

2009-07-22 Thread Dirk Pranke

Being the person who perpetrated this crime, if someone could even
tell me how to fix it, that would be an improvement. It seems like
nsylvain is the only one with the appropriate mojo (at least in the
evenings...)

-- Dirk

On Wed, Jul 22, 2009 at 8:27 PM, Dan Kegel d...@kegel.com wrote:

 Stop me if you've heard this one before.

 Today, a new directory was added to the source tree, and shortly
 thereafter was reverted.
 Should have been no problem, but... because the new directory
 contained a gyp file, a file was generated in that directory,
 and svn couldn't delete the directory when the revert landed.
 This caused a build breakage, and I gather from nsylvain's
 comments that this wasn't the first time this has happened.

 At some point soon, it'd be good to teach gyp not to generate
 files in the source tree.

 





[chromium-dev] Re: FF's search engine keyword

2009-07-23 Thread Dirk Pranke

(cc'ing chromium-discuss, bcc'ing chromium-dev)

It is? How do you specify keywords in Chrome's Bookmarks editor?

-- Dirk

On Wed, Jul 22, 2009 at 12:49 PM, Peter Kasting pkast...@google.com wrote:
 On Wed, Jul 22, 2009 at 12:46 PM, Igor Gatis igorga...@gmail.com wrote:

 (please forgive me if this not the right list)

 It's not.  chromium-discuss@ is the right list, or the Help Center.

 So when I want to google something, I just type g something, when I
 want to lookup the meaning of a certain word, I just type dc word and so
 on.
 Will something like that be supported by chromium?

 It already is, and furthermore if you imported all your settings from
 Firefox during install, you have those same keywords set up for you.
 PK
 





generated files in the tree (was Re: [chromium-dev] Fwd: Make workers functional on OSX and Linux.)

2009-08-04 Thread Dirk Pranke

On Tue, Aug 4, 2009 at 10:10 AM, Darin Fisher da...@chromium.org wrote:
 I think we need to make it possible for the buildbots to run in a mode where
 there are absolutely no generated files output into the source directory.
 How can we make this happen?
 (I understand that some users like having files output to the source
 directory, so that's why I said this only has to be a mode usable by the
 buildbots.)

Insofar as I've had a couple of checkins break builds because of this,
I support the idea of fixing this, but I'm not sure that moving
generated project files out of the source tree is the way to go. What
problem are you really trying to solve? Not having the files show up
in svn status? Or making sure that files actually get deleted? Maybe
we need a more selective make clean command instead?

More importantly, I would really prefer it if we just had one way of
doing things, rather than multiple configurable ways. All of these
options make the build and test systems hard to understand and harder
to debug, and I've broken the build more times because of this than I
have because of generated files ;)

-- Dirk




[chromium-dev] Re: PSA: irc awareness [Was: Trick question: Who is responsible for repairing a red tree?]

2009-08-05 Thread Dirk Pranke

I would love to enable that feature ... anyone know how to do that for
Adium on the Mac (IRC support is new in the 1.4 beta)?
Failing that, Colloquy?

-- Dirk

On Wed, Aug 5, 2009 at 5:48 PM, Marc-Antoine Ruel mar...@chromium.org wrote:
 Most irc clients have an option to beep, flash or shutdown your computer
 when your nick is mentioned on a channel. It may not be enabled by default
 so please enable this. Ask around if you can't find how to enable this.
 Thanks,
 M-A

 On Wed, Aug 5, 2009 at 5:48 PM, Dan Kegel d...@kegel.com wrote:

 On Wed, Aug 5, 2009 at 2:45 PM, Tim Steele t...@chromium.org wrote:
  you have to keep asking, unless you're always on IRC and can cleverly
  search
  the window contents.  A constant place to go looking for this would make
  it
  easier, at least in my opinion.  Like right now I don't know what's up
  with
  Chromium Mac (valgrind)

 You need to be on IRC and scroll back:

 [13:55] dkegel        mac valgrind unit - rohitrao?
 [13:57] motownavi     dkegel: very likely
 [13:58] dkegel        emailed rohit
 [14:02] rohitrao      dkegel, jar: looking
 [14:30] rohitrao      jar, dkegel: reverted 22517

 I doubt anything more durable than IRC is going to help...
 IRC and the tree status are the place to go, I'm afraid.




 





[chromium-dev] Re: gcl change that you should be aware of

2009-08-06 Thread Dirk Pranke

I also fear that I may have unwanted files sneaking in. This was less
of an issue with Perforce since you have to manually 'p4 edit' files
first anyway.

I'll be curious to see if there's a consensus preference one way or
another. Maybe we can make this a gclient config setting or something?

-- Dirk

On Thu, Aug 6, 2009 at 12:20 AM, Darin Fisher da...@chromium.org wrote:
 This seemed really great at first, but after some use, I find it a bit
 frustrating since each time I run gcl change, I now have to be mindful
 that gcl may add unwanted files to the CL.  The only way I've found to avoid
 this is to make sure that unwanted files are part of a dummy CL :-(
 -Darin



 On Wed, Aug 5, 2009 at 5:56 PM, Anthony LaForge lafo...@google.com wrote:

 Howdy,
 Quick little change to gcl that everyone should be aware of.  When you
 execute the script it will now automatically pull all physically modified
 files into the Paths in this changelist section, which means no more
 copying and pasting the files you changed into your CL.  The behavior is
 closer to that of P4 (where we delete files as opposed to adding them).  All
 the unchanged files are still below.
 Kind Regards,

 Anthony Laforge
 Technical Program Manager
 Mountain View, CA




 





[chromium-dev] Handling layout test expectations for failing tests

2009-08-21 Thread Dirk Pranke

Hi all,

As Glenn noted, we made great progress last week in rebaselining the
tests. Unfortunately, we don't have a mechanism to preserve the
knowledge we gained last week as to whether or not tests need to be
rebaselined or not, and why not. As a result, it's easy to imagine
that we'd need to repeat this process every few months.

I've written up a proposal for preventing this from happening again,
and I think it will also help us notice more regressions in the
future. Check out:

http://dev.chromium.org/developers/design-documents/handlinglayouttestexpectations

Here's the executive summary from that document:

We have a lot of layout test failures. For each test failure, we have
no good way of tracking whether or not someone has looked at the test
output lately, and whether or not the test output is still broken or
should be rebaselined. We just went through a week of rebaselining,
and stand a good chance of needing to do that again in a few months
and losing all of the knowledge that was captured last week.

So, I propose a way to capture the current broken output from
failing tests, and to version control them so that we can tell when a
test's output changes from one expected failing result to another.
Such a change may reflect that there has been a regression, or that
the bug has been fixed and the test should be rebaselined.

Changes:

- We modify the layout test scripts to check for 'foo-bad' as well as
  'foo-expected'. If the output of test foo does not match
  'foo-expected', we check whether it matches 'foo-bad'. If it does, we
  treat it as we treat test failures today, except that there is no
  need to save the failed test result (since a version of the output is
  already checked in). Note that although '-bad' behaves much like
  another platform, we cannot actually use a different platform, since
  we need up to N different '-bad' versions, one for each supported
  platform that a test fails on.
- We check in a set of '*-bad' baselines based on current output from
  the regressions. In theory, they should all be legitimate.
- We modify the tests to also report regressions from the '*-bad'
  baselines. In the cases where we know the failing test is also flaky
  or nondeterministic, we can mark it as NDFAIL in the test
  expectations to distinguish it from a regular deterministic FAIL.
- We modify the rebaselining tools to handle '*-bad' output as well as
  '*-expected'.
- Just like we require each test failure to be associated with a bug,
  we require each '*-bad' output to be associated with a bug - normally
  (always?) the same bug. The bug should contain comments about what
  the difference is between the broken output and the expected output,
  and why, e.g., something like "Note that the text is on two lines in
  the -bad output, and it should all be on one line without wrapping."
- The same approach can be used here to justify platform-specific
  variances in output, if we decide to become even more picky about
  this, but I suggest we learn to walk before we try to run.
- Eventually (?) we modify the layout test scripts themselves to fail
  if the '*-bad' baselines aren't matched.
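The core comparison step of the proposal could be sketched as below; the function name and result labels are illustrative, not actual layout-test script code.

```python
# Sketch of the proposed three-way check: actual output vs. the -expected
# baseline, then vs. a checked-in -bad baseline.
def classify(actual, expected, known_bad=None):
    if actual == expected:
        return 'PASS'
    if known_bad is not None and actual == known_bad:
        # Matches the checked-in broken output: a known failure, so there
        # is no new diff to save.
        return 'KNOWN_FAIL'
    # Output changed: either a regression, or a fix that needs rebaselining.
    return 'CHANGED'

# The wrapped two-line output matches the checked-in -bad copy.
print(classify('line1\nline2', 'line1 line2', known_bad='line1\nline2'))
```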

Let me know what you think. If it's a thumbs' up, I'll probably
implement this next week. Thanks!

-- Dirk




[chromium-dev] Re: Handling layout test expectations for failing tests

2009-08-21 Thread Dirk Pranke

On Fri, Aug 21, 2009 at 4:47 PM, Peter Kasting pkast...@chromium.org wrote:
 On Fri, Aug 21, 2009 at 2:33 PM, Pam Greene p...@chromium.org wrote:

 I'm not convinced that passing tests we used to fail, or failing tests
 differently, happens often enough to warrant the extra work of producing,
 storing, and using expected-bad results. Of course, I may be completely
 wrong. What did other people see in their batches of tests?

 There were a number of tests in my set that were affected by innocuous
 upstream changes (the type that would cause me to rebaseline) but were also
 affected by some other critical bug that meant I couldn't rebaseline.  I
 left comments about these on the relevant bugs and occasionally in the
 expectations file.
 Generally when looking at a new test I can tell whether it makes sense to
 rebaseline or not without the aid of when did we fail this before?, since
 there are upstream baselines and also obvious correct and incorrect outputs
 given the test file.
 I agree that the benefit here is low (for me, near zero) and the cost is
 not.
 PK

This is all good feedback, thanks! To clarify, though: what do you
think the cost will be? Perhaps you are assuming things about how I
would implement this that are different than what I had in mind.

-- Dirk




[chromium-dev] Re: Handling layout test expectations for failing tests

2009-08-21 Thread Dirk Pranke

On Fri, Aug 21, 2009 at 6:43 PM, Ojan Vafai o...@chromium.org wrote:
 On Fri, Aug 21, 2009 at 4:54 PM, Peter Kasting pkast...@chromium.org
 wrote:

 On Fri, Aug 21, 2009 at 4:50 PM, Dirk Pranke dpra...@chromium.org wrote:

 This is all good feedback, thanks! To clarify, though: what do you
 think the cost will be? Perhaps you are assuming things about how I
 would implement this that are different than what I had in mind.

 Some amount of your time, and some amount of space on the bots.

 Also, some amount of the rest of the team's time to follow this process.
 Ojan

Okay, it sounds like there's enough initial skepticism that it's
probably worth doing a hack before pushing this fully through. I think
I'll try to take a few snapshots of the layout test failures over a
few days and see if we see any real diffs, and then report back.
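The snapshot comparison could be as simple as diffing two sets of (test, failure-output) pairs; the data format below is invented for illustration.

```python
# Hypothetical sketch: diff two snapshots of layout-test results to spot
# tests whose failure output changed between runs.
def changed_failures(old, new):
    """Return tests present in both snapshots whose result text differs."""
    return sorted(t for t in old.keys() & new.keys() if old[t] != new[t])

old = {'fast/a.html': 'diff-1', 'fast/b.html': 'diff-2'}
new = {'fast/a.html': 'diff-1', 'fast/b.html': 'diff-2b', 'fast/c.html': 'diff-3'}
print(changed_failures(old, new))   # tests whose broken output drifted
```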




[chromium-dev] Re: Handling layout test expectations for failing tests

2009-08-24 Thread Dirk Pranke

On Mon, Aug 24, 2009 at 11:37 AM, Ojan Vafai o...@chromium.org wrote:
 The end goal is to be in a state where we have near zero failing tests that
 are not for unimplemented features. And new failures from the merge get
 addressed within a week.
 Once we're at that point, would this new infrastructure be useful? I
 completely support infrastructure that sustainably supports us being at near
 zero failing tests (e.g. the rebaseline tool). All infrastructure/process
 has a maintenance cost though.

True enough. There are at least two counterexamples that are worth
considering. The first is that we probably won't be at zero failing tests
any time soon (where any time soon == next 3-6 months), and so there
may be intermediary value. The second is that we have a policy of
running every test, even tests for unimplemented features, and so we
may catch regressions for the foreseeable future.

That said, I don't know if the value will offset the cost. Hence the
desire to run a couple of cheap experiments :)

-- Dirk




[chromium-dev] Re: Handling layout test expectations for failing tests

2009-08-24 Thread Dirk Pranke

On Mon, Aug 24, 2009 at 1:52 PM, David Levin le...@google.com wrote:


 On Mon, Aug 24, 2009 at 1:37 PM, Dirk Pranke dpra...@chromium.org wrote:

 On Mon, Aug 24, 2009 at 11:37 AM, Ojan Vafai o...@chromium.org wrote:
  The end goal is to be in a state where we have near zero failing tests
  that
  are not for unimplemented features. And new failures from the merge get
  addressed within a week.
  Once we're at that point, would this new infrastructure be useful? I
  completely support infrastructure that sustainably supports us being at
  near
  zero failing tests (e.g. the rebaseline tool). All
  infrastructure/process
  has a maintenance cost though.

 True enough. There are at least two counterexamples that are worth
 considering. The first is that we probably won't be at zero failing tests
 any time soon (where any time soon == next 3-6 months), and so there
 may be intermediary value. The second is that we have a policy of
 running every test, even tests for unimplemented features, and so we
 may catch regressions for the foreseeable future.

 That said, I don't know if the value will offset the cost. Hence the
 desire to run a couple of cheap experiments :)

 What do the cheap experiments entail?  Key concern: If the cheapness is to
 put more work on the webkit gardeners, it isn't cheap at all imo.


Cheap experiments == me snapshotting the results of tests I run
periodically and comparing them. No work for anyone else.
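(A minimal sketch of what such a snapshot comparison could look like. The one-test-path-per-line snapshot format and the function names are assumptions for illustration, not the actual tooling.)

```python
# Compare two snapshots of failing layout tests and report the churn.
# Snapshot format (one failing test path per line) is hypothetical.

def load_snapshot(lines):
    """Parse a snapshot into a set of failing test paths."""
    return {line.strip() for line in lines if line.strip()}

def compare_snapshots(old_lines, new_lines):
    """Return newly-failing, newly-passing, and still-failing tests."""
    old, new = load_snapshot(old_lines), load_snapshot(new_lines)
    return {
        'new_failures': sorted(new - old),  # regressions since last snapshot
        'fixed': sorted(old - new),         # tests that started passing
        'still_failing': sorted(old & new),
    }
```

With this, tracking "are we gaining or losing ground" is just a set difference between two periodic runs.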

-- Dirk




[chromium-dev] Re: Layout tests can now be run on both XP and Vista

2009-09-15 Thread Dirk Pranke

On Tue, Sep 15, 2009 at 2:15 PM, Ojan Vafai o...@chromium.org wrote:
 On Tue, Sep 15, 2009 at 4:26 AM, Dirk Pranke dpra...@chromium.org wrote:

 I have just landed a patch that enables us to run layout tests on
 Vista as well as XP.

 Thanks for doing this! Needing to run the tests on a 32bit XP machine sucks.
 I don't think we can call running the tests on Vista supported until the
 tooling people rely on also supports it. Otherwise, we are causing more
 burden than benefit (e.g. we'll essentially be requiring everyone to have
 both Vista and XP machines). Things that come to mind:

 Vista bot on the primary (non-fyi) waterfall
 Vista layout test try server
 Vista canary bot for tip of tree webkit
 Rebaseline tool support


Well, practically, there's very little difference between XP and Vista
(about 10 baselines). Victor is updating the rebaselining tool to
properly support Vista, and we need to get a release mode bot as well.
While I agree that it would be good to have a layout test try server
on vista and a canary bot, we didn't have those until very recently
for XP, either :)

I think it's a more general issue that we should discuss how we want
to divide our build machines between XP, Vista, and, soon, Win 7 (as
you say below). Just doubling or tripling the number of machines seems
like it has minimal ROI. On the other hand, flipping the machines to
64-bit Vista may speed them up substantially ...

 We'll also need the same before we consider Win7 supported. I'm OK with
 adding Win7 baselines and a Win7 fyi bot before that, but the team should
 not be expected to support it until the tools do.
 It's not clear to me what our desired end result should be. Do we need a
 full set of bots for WebKit Vista and WebKit Win7? Is just having release
 bots for each enough? The waterfall is already very crowded. My first
 intuition is that we should have release and debug bots for the latest
 supported platform (which will soon be Vista) and just release bots for
 other platforms (definitely XP, what about Win7?).

 Also, the

 checkin involved updating > 700 images, so I didn't have anyone but me
 review them.

 I'm confused by this. Are you saying you didn't get this code reviewed? If
 so, why not? Can you provide a link to the checkin so it can get reviewed?

The code that I changed did get reviewed, but only I reviewed the
images themselves. I did this after discussing the pros and cons with
Darin yesterday. I'm not sure that there's a lot of value in two
people staring at the sets of images.

The link to the checkin is here:
http://src.chromium.org/viewvc/chrome?view=rev&revision=26204 .

I can also probably pull you a couple of tarballs of the chromium-win
directory pre- and post- checkin if you're volunteering to review them
after the fact :)

 Test rebaselines should not get checked in without review. Simple, mistakes
 are made all the time.

Generally speaking, I agree. In this case, the changes were so
repetitive in nature it was pretty easy to be sure that things were
okay.

-- Dirk




[chromium-dev] Re: Layout tests can now be run on both XP and Vista

2009-09-15 Thread Dirk Pranke

Hi Hironori,

That's not a stupid question at all. I haven't actually tried using XP
mode, or Win 7 at all yet. I will be installing Win 7 as soon as I can
get my hands on a copy, and then we should get some Win 7 baselines.
Once we have those, and can get some Win 7 build bots configured, then
you should be able to stop using XP mode.

I will send out more notes to the list as we get closer to this.

-- Dirk

2009/9/15 Hironori Bono (坊野 博典) hb...@chromium.org:
 Hi Dirk,

 Thank you so much for your great work!
 By the way, I have been using XP Mode of Windows 7 (*1) to run layout
 tests since I changed my development PC to Windows 7. (It's not so
 fast, but it's OK.) Do these Win7 baselines mean we don't need to
 use XP Mode? (I assume yes.)

 (*1) http://www.microsoft.com/windows/virtual-pc/

 Sorry for my stupid question in advance.

 Regards,

 Hironori Bono
 E-mail: hb...@chromium.org

 On Tue, Sep 15, 2009 at 8:26 PM, Dirk Pranke dpra...@chromium.org wrote:

 Hi all,

 I have just landed a patch that enables us to run layout tests on
 Vista as well as XP.

 In theory, you should continue to use the tools just as you have in
 the past, and everything will work transparently. In practice, I may
 have screwed up something, so if you notice problems please let me
 know.

 One important change is that we now have a few XP-specific baselines
 in webkit/data/layout_tests/platform/chromium-win-xp (mostly in the
 way we handle various international characters and font differences
 between XP and Vista). We do not have any Vista-specific baselines
 (although one could argue that if there is a baseline in chromium-win
 and a baseline in chromium-win-xp then the chromium-win one is
 Vista-specific). We will be following the WebKit convention that the
 generic platform directory represents the most recent version of the
 platform (meaning that until Win 7 is released, all Vista baselines go
 in chromium-win. When Win 7 is released, Vista-specific baselines will
 go in chromium-win-vista).

 In practice, this means you might need to be careful about where your
 new baselines end up when using the rebaselining tool. You should make
 sure they end up in chromium-win unless you are sure they are
 XP-specific (in which case you will be responsible for landing
 baselines for both XP and Vista).
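(The fallback convention described above can be sketched as a tiny lookup. The directory names come from the message; the helper itself is illustrative, not the real rebaselining-tool code.)

```python
# Baseline search order per platform, following the convention above:
# the platform-specific directory first, then generic chromium-win,
# which holds the most recent platform's (i.e. Vista's) baselines.
SEARCH_PATHS = {
    'chromium-win-xp': ['chromium-win-xp', 'chromium-win'],
    'chromium-win-vista': ['chromium-win'],
}

def find_baseline(platform, baseline_name, existing_files):
    """Return the first path on the search path that holds the baseline,
    or None. existing_files is a set of 'dir/file' paths standing in
    for the real filesystem."""
    for directory in SEARCH_PATHS[platform]:
        candidate = '%s/%s' % (directory, baseline_name)
        if candidate in existing_files:
            return candidate
    return None
```

On XP, a test with only a chromium-win baseline falls back to it, while a chromium-win-xp copy shadows it — which is why a misplaced new baseline silently changes what XP compares against.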

 If you have any questions about this, or run into problems, please let
 me know ASAP.

 One last thing for those who might look at this stuff in detail -
 test_shell has been changed to use a generic theme for rendering
 form controls, scroll bars, and other native widgets, in order to not
 have any differences from the different themes on the different
 versions of Windows. If you are wondering why the scroll bars and
 other controls in the baselines look really odd, that's why. Also, the
 checkin involved updating > 700 images, so I didn't have anyone but me
 review them. Let me know if you see anything that doesn't look right
 :)

 Also, we will probably be landing Win 7 baselines Real Soon Now, since
 adding them is a very small additional amount of work on top of the
 stuff I just landed.

 Cheers,

 -- Dirk

 






[chromium-dev] Re: [LTTF] Goals for the Layout Tests Task Force

2009-09-22 Thread Dirk Pranke

Yes, exactly. I'm working on some additional reports and dashboards
that will allow us to track the funnel of finds/fixes better as well.

-- Dirk

On Tue, Sep 22, 2009 at 11:38 AM, Dimitri Glazkov dglaz...@chromium.org wrote:

 Yep. Dirk was the one to suggest bringing it back. I didn't put this
 in the documentation, but only because I wasn't yet sure whether we'll
 track them by bug milestone or explicitly using the tag.

 :DG

 On Tue, Sep 22, 2009 at 11:33 AM, Ojan Vafai o...@chromium.org wrote:
 On Tue, Sep 22, 2009 at 10:26 AM, Jeffrey Chang jeffr...@google.com wrote:

 Fix all Windows layout tests: make test_expectations.txt only contain
 items that we will never fix, features we have not yet implemented, or bugs
 less than one week old that are a result of a recent WebKit merge.
 Set up a public dashboard which tracks the number of failing layout tests
 over time on the Chromium site.

 How do you intend to track these numbers? Right now we have no way to
 distinguish between failures that need fixing versus failures due to
 unimplemented features. One way would be to use DEFER again. All the support
 is still there, and the initial code for the tracking dashboard exposes it.
 It would mostly just work.
 I propose that we bring back DEFER, but use it *only* for tests that fail
 due to unimplemented features.
 Ojan
 


 





[chromium-dev] Re: Getting pixel tests running on the Mac

2009-09-23 Thread Dirk Pranke

No, there's no way to do that but it would be easy enough to add.

-- Dirk

On Wed, Sep 23, 2009 at 12:16 PM, Avi Drissman a...@google.com wrote:
 I've been looking into the pixel test situation on the Mac, and it isn't bad
 at all. Of ~5300 tests that have png results, we're failing ~800, most of
 which fall into huge buckets of easily-separable fail.

 Is there a way to specify that we're expecting an image compare to fail but
 still want the layout to succeed? We don't want to turn off the tests
 entirely while we fix them and run the chance of breaking something that
 layout would have caught.

 Avi

 





[chromium-dev] Re: Getting pixel tests running on the Mac

2009-09-24 Thread Dirk Pranke

 On Thu, Sep 24, 2009 at 3:11 PM, Ojan Vafai o...@chromium.org wrote:

 I don't think this is just about ignoring image-only results on mac for
 the short-term. My subjective sense is that we have many tests that start
 out as failing only image comparisons (e.g. due to theming), but over time
 another failure creeps in that causes a text failure that goes unnoticed.
 We'd ideally like to notice that extra regression when it happens as it
 might be easy to identify and fix right at the time the regression occurs.

 That's pretty much where we are with the Mac pixel tests--we do want to
 know if layout geometry regresses, even if there's a known pixel expectation
 failure (a color, or a missing spelling underline, or whatever it is for
 that test).
 --Amanda



I'm on record earlier as wanting to know when even the expected errors
change, so this seems to me like a nice half-way compromise.

Pam is right, of course, that we should be targeting zero errors, but
I don't think this will really slow us much in that effort (and it'll
probably help).

-- Dirk




[chromium-dev] new values for failures in test_expectations.txt

2009-09-25 Thread Dirk Pranke

Hi all,

If you don't run layout_tests or ever need to modify
test_expecations.txt, you can ignore this ...

As discussed earlier this week, we've added the ability to indicate
whether or not a test is expected to produce incorrect text output
(either a bad render tree or bad simplified text output), incorrect
images, or both. The keywords in test expectations are 'TEXT',
'IMAGE', and 'IMAGE+TEXT', respectively.

Specifying a test expectation as 'FAIL' will continue to indicate any
one of the above three choices might be happening. However, we
intend to migrate all FAILs to one of the three choices. Once that
is complete, we'll flip 'FAIL' to mean 'IMAGE+TEXT', and remove the
'IMAGE+TEXT' option. I expect this'll probably happen in the next week
or two.
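(For illustration, a minimal parser for lines in the expectations syntax shown elsewhere in these threads, e.g. "BUG11239 MAC DEBUG : test.svg = FAIL PASS". This is a hedged sketch, not the real test_expectations.py logic; the function names are made up.)

```python
def parse_expectation_line(line):
    """Split 'BUG1234 MAC DEBUG : path.html = FAIL PASS' into
    (modifiers, test, expectations)."""
    left, _, right = line.partition(':')
    modifiers = left.split()           # bug tag, platforms, build type
    tokens = right.split()
    test = tokens[0]
    assert tokens[1] == '='
    expectations = set(tokens[2:])     # e.g. {'TEXT'}, {'IMAGE+TEXT'}, ...
    return modifiers, test, expectations

def effective_expectations(expectations, fail_means_image_text=False):
    """Model the planned migration: once complete, FAIL is flipped to
    mean IMAGE+TEXT."""
    if fail_means_image_text and 'FAIL' in expectations:
        return (expectations - {'FAIL'}) | {'IMAGE+TEXT'}
    return expectations
```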

Thanks, and let me know if you see any problems!

-- Dirk




[chromium-dev] Re: Shouldn't the Mac ignore the platform/win tests?

2009-10-01 Thread Dirk Pranke

Ojan,

As you know, run_webkit_tests doesn't have the concept of "it's okay
that there are no expected results."

Several other people have also mentioned to me that it would be nice
if it did, but I don't feel strongly about it one way or another. I
don't know that I would consider tests marked WONTFIX special in this
regard; tests that are marked FAIL seem like they could also qualify
under some reasoning. If people agree that some form of this would be
a useful feature, I'll be happy to add it.

In the mean time, I agree that marking platform/X as SKIP for all
platforms != X is a very reasonable thing to do.

-- Dirk

On Thu, Oct 1, 2009 at 11:34 AM, Ojan Vafai o...@chromium.org wrote:
 They are marked WONTFIX. We run them to ensure they don't crash. This looks
 like a bug in run_webkit_tests to me. For tests that are WONTFIX, we
 shouldn't care if the expected results are missing.
 Seems a bit silly to me that we run them at all though. I would be a fan of
 just skipping the platform tests that don't apply to the current platform.
 In practice, running them adds maintenance cost and cycle time (although,
 very little) but doesn't really catch crashes.
 Ojan

 On Thu, Oct 1, 2009 at 11:20 AM, Avi Drissman a...@google.com wrote:

 The output of a test run of the Mac pixel tests is at:


 http://build.chromium.org/buildbot/try-server/builders/layout_mac/builds/85/steps/webkit_tests/logs/stdio

 What's weird is the line:

 Missing expected results (2):
   LayoutTests/fast/forms/menulist-style-color.html
   LayoutTests/platform/win/accessibility/scroll-to-anchor.html


 Why is it running a platform/win test? That's where the outputs are, and
 not even the ones for the correct platform...

 Avi


 





[chromium-dev] Re: FAIL not catching image failures

2009-10-02 Thread Dirk Pranke

Stephen's right; if that doesn't fix things, let me know and I'll look at it.

-- Dirk

On Fri, Oct 2, 2009 at 9:59 AM, Stephen White senorbla...@chromium.org wrote:
 I think that's a Release builder, and the tests are marked DEBUG, no?
 Stephen

 On Fri, Oct 2, 2009 at 12:34 PM, Avi Drissman a...@google.com wrote:

 Latest Mac pixel test result is here:
 http://build.chromium.org/buildbot/waterfall/builders/Webkit%20Mac10.5/builds/4420/steps/webkit_tests/logs/stdio:

 Regressions: Unexpected failures (2):
   LayoutTests/svg/custom/js-late-marker-and-object-creation.svg = FAIL
   LayoutTests/svg/hixie/error/012.xml = FAIL

 Those are both image mismatches, but are both accounted for in the
 expectations file:

 // Flaky. The width of containing RenderBlocks sometimes becomes larger
 BUG21958 WIN MAC LINUX DEBUG : LayoutTests/svg/hixie/error/012.xml = FAIL
 PASS

 and

 // Regressions from WebKit Merge 42932:42994
 BUG11239 MAC DEBUG :
 LayoutTests/svg/custom/js-late-marker-and-object-creation.svg = FAIL PASS

 Why is the FAIL in those lines not catching the image failure?

 Avi

 



 --
 All truth passes through three stages. First, it is ridiculed. Second, it is
 violently opposed. Third, it is accepted as being self-evident. --
 Schopenhauer





[chromium-dev] Re: [LTTF][WebKit Gardening]: Keeping up with the weeds.

2009-10-13 Thread Dirk Pranke

On Tue, Oct 13, 2009 at 10:31 AM, Dimitri Glazkov dglaz...@chromium.org wrote:

 I think we need to change something. I am not sure what -- I have
 ideas, but -- I would appreciate some collective thinking on this.

 PROBLEM: We accumulate more test failures via WebKit rolls than we fix
 with our LTTF effort. This ain't right.

 ANALYSIS:

 Ok, WebKit gardening is hard. So is fixing layout tests. You can't
 call it a successful WebKit roll if it breaks layout tests. But we
 don't revert WebKit rolls. It's a forward-only thing. And we want to
 roll quickly, so that we can react to next big breaker faster. So
 we're stuck with roll-now/clean-up-after deal. This sucks, because the
 clean-up-after is rarely fully completed. Which brings failing
 layout tests, which brings the suffering and spells asymptotic doom to
 the LTTF effort.

 POSSIBLE SOLUTIONS:

 * Extend WebKit gardener's duties to 4 days. First two days you roll.
 Next two days you fix layout tests. Not file bugs -- actually fix
 them. The net result of 4 days should be 0 (or less!) new layout test
 failures. This solution kind of expects the gardener to be part of
 LTTF, which is not always the case. So it may not seem totally fair.

 * Assign LTTF folks specifically for test clean-up every day. The idea
 here is to slant LTTF effort aggressively toward fixing newer
 failures. This seems nice for the gardeners, but appears to separate
 the action/responsibility dependency: no matter what you roll, the
 LTTF elves will fix it.

 * [ your idea goes here ]


* Stop WebKit committers from checking in changes that break our code?

Granted, I don't know how to do this, but it seems like this gets
closer to the real problem. Do we need more stuff running at
webkit.org? Do we need someone watching the webkit buildbots and
asking them to revert changes that break our builds? Are there other
things we can do here?

-- Dirk




[chromium-dev] talk on Mozilla's Content Security Policy @ Stanford today @ 4:30

2009-10-13 Thread Dirk Pranke

Hi all,

Someone from Mozilla is talking about their proposed new security
spec, CSP, today at Stanford.

I'm planning to go; was anyone else from MTV aware of this and hoping
to go? I can send out a summary of the talk afterwards if there's
interest.

https://wiki.mozilla.org/Security/CSP/Spec

I have not heard of any discussion of this spec, or whether we plan to
implement it. Anyone have any thoughts?

-- Dirk

 Title: Shutting Down XSS with Content Security Policy

 Speaker: Sid Stamm, Mozilla

 Abstract:

 The last 3 years have seen a dramatic increase in both awareness and
 exploitation of Web Application Vulnerabilities. 2008 saw dozens of
 high-profile attacks against websites using Cross Site Scripting (XSS)
 and Cross Site Request Forgery (CSRF) for the purposes of information
 stealing, website defacement, malware planting, etc. While an ideal
 solution may be to develop web applications free from any exploitable
 vulnerabilities, real world security is usually provided in layers.
 We present Content Security Policy (CSP), which intends to be one
 such layer. CSP is a content restrictions policy language and
 enforcement system that allows site designers or server administrators to
 specify how content interacts on their web sites. We also discuss the
 long road traveled to a useful policy definition and lessons learned
 along the way to an implementation in Firefox.

 13 Oct (Tuesday) at 1630 hrs
 Gates 4B (opposite 490)




[chromium-dev] Re: [LTTF][WebKit Gardening]: Keeping up with the weeds.

2009-10-14 Thread Dirk Pranke

+1

On Tue, Oct 13, 2009 at 10:20 PM, Pam Greene p...@chromium.org wrote:
 If there are areas that nobody knows anything about, that's a lack that's
 hobbling us. Suppose we take the entire list of directories, slap it into a
 doc, and assign at least one owner to everything. For the areas that don't
 yet have anyone knowledgeable, we take volunteers to become knowledgeable if
 needed. It will be a valuable investment.
 Tests often fail due to problems outside their nominal tested areas, but the
 area owner would still be better than an arbitrary gardener at recognizing
 that and reassigning the bug.
 - Pam

 On Tue, Oct 13, 2009 at 10:09 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:

 Ownership is a great concept. I started out planning LTTF as
 ownership-based. Unfortunately, the types of failures are scattered
 far and wide across the directories, some clustered, some not. After a
 few initial passes, I walked away thinking that it's not as simple as
 drawing the lines and basically gave up. That's how Finders/Fixers
 idea was born.

 :DG

 On Tue, Oct 13, 2009 at 4:24 PM, Yaar Schnitman y...@chromium.org wrote:
  I think ownership might actually help with flakiness.
  Today, in order to distinguish flakiness from real bugs, the gardener
  needs
  to have intimate knowledge of the relevant part of the code base and its
  history. That is beyond the capabilities of the average webkit gardener.
  Now, imagine a world were every layout test has an owner who can decide
  intelligently that the bug is flakey and advise the gardener what to do
  with
  it. Wouldn't it make gardening much easier?
  [Flakiness dashboard is very helpful in making the decision, but
  specialized
  knowledge topples generic statistics, especially if a test just started
  flaking]
  On Tue, Oct 13, 2009 at 1:21 PM, Julie Parent jpar...@chromium.org
  wrote:
 
  I like the idea of ownership of groups of layout tests.  Maybe these
  directory owners could be more like the finders?  An owner
  shouldn't
  have to necessarily fix everything in a group/directory, but they
  should be
  responsible for triaging and getting meaningful bugs filed for them,
  to
  keep things moving along. (I volunteer for editing/)
  Another complicating factor -
  The state of the main Chrome tree has a lot of effect on the gardener.
   If
  the tree is already filled with flakiness, then the webkit roll is
  likely to
  show failures, which may or may not have been there before the roll.
   This
  was largely the case in the situation pkasting was referring to, when
  he
  took over as sheriff, he inherited a tree with a lot of flakiness not
  reflected in test_expectations/disabled ui tests.  I think very few (if
  any)
  of the tests he added to test_expectations had anything to
  do with the roll.
  Any policy we make needs to keep in mind that main tree sheriffs deal
  with
  flakiness differently; some cross their fingers and hope it goes away,
  and
  some do clean up.  Maybe we need to get better at enforcing (or
  automating)
  adding flaky tests to expectations, so we at least have a clean slate
  for gardeners to start with.
  On Tue, Oct 13, 2009 at 11:53 AM, Stephen White
  senorbla...@chromium.org
  wrote:
 
  I agree with Dimitri that we're fighting a losing battle here.
  In my last stint as gardener, I did informally what I proposed
  formally
  last time:  I spent basically 1 full day just triaging failures from
  my 2
  days gardening.  Not fixing, but just running tests locally,
  analyzing,
  grouping, creating bugs, assigning to appropriate people (when I knew
  who
  they were, cc'ing poor dglazkov when I didn't).  So at least I didn't
  leave
  a monster bug with layout tests broken by merge #foo but at least
  grouped
  by area.  That was manageable, but I don't know if another day would
  actually be enough for a meaningful amount of fixing.
  I also agree with Drew that actively fixing all the broken tests is
  usually beyond the skills of any one gardener.
  Perhaps we should start taking ownership of particular groups of
  layout
  tests?  And maybe automatically assign them (or least cc them), the
  same way
  Area-Foo causes automatic cc'ing in bugs.chromium.org (I think?)  That
  way,
  the gardener wouldn't have to know who to assign to.
 
  I've basically taken responsibility for fixing all layout tests broken
  by
  Skia rolls, which can get pretty heavy on its own, but I'm willing to take
  ownership of a directory or two.
  BTW, the layout test flakiness dashboard has become an invaluable tool
  for analyzing failures:  searching for a test by name is
  lightning-fast, and
  you can clearly see if a test has become flaky, on which platforms,
  and
  which WebKit merge was responsible, which can also help with grouping.
   (Props to Ojan for that).
  Also, it may be Gilbert-and-Sullivan-esque of me, but I think everyone
  who contributes patches to WebKit for chromium should be on the WebKit
  gardener rotation.
  

[chromium-dev] Re: LTTF helping the GTTF make cycle times *minutes* faster

2009-10-16 Thread Dirk Pranke

Hm. I actually wrote that test, and I'm surprised that it's timing
out. I'll take a look at it as well. It is a slow test, because it's
basically trying to reproduce a stack overflow.

-- Dirk

On Fri, Oct 16, 2009 at 12:07 AM, Yuta Kitamura yu...@chromium.org wrote:
 I'm also looking at LayoutTests/fast/css/large-list-of-rules-crash.html.
 Yuta
 2009/10/16 Yuta Kitamura yu...@chromium.org

 I'm currently working on tests under LayoutTests/http/tests/navigation/.
 Thanks,
 Yuta
 2009/10/16 Ojan Vafai o...@google.com

 There are a lot of tests that consistently (i.e. not flakily) time out. They
 eat up a significant percentage (~10%!) of the cycle time for the test bots
 (e.g., 1 minute on Windows release). If LTTF folk focus some effort on
 fixing these first, it would help all of us move forward faster as the bot
 cycle times would be improved as would the times to run the tests locally.
 To make this easier, I compiled the list of all the tests that
 consistently timeout. I excluded the flaky timeouts since the LTTF is
 currently focused on non-flaky failures. Any takers?
 Ojan

 ALL PLATFORMS:
 LayoutTests/fast/dom/Window/window-property-shadowing-name.html
 LayoutTests/fast/dom/cssTarget-crash.html
 LayoutTests/fast/events/add-event-without-document.html
 LayoutTests/http/tests/history/back-to-post.php

 LayoutTests/http/tests/loading/deleted-host-in-resource-load-delegate-callback.html
 LayoutTests/http/tests/navigation/onload-navigation-iframe-2.html
 LayoutTests/http/tests/navigation/onload-navigation-iframe-timeout.html
 LayoutTests/http/tests/navigation/onload-navigation-iframe.html
 LayoutTests/http/tests/security/cross-frame-access-document-direct.html
 LayoutTests/http/tests/security/xss-DENIED-defineProperty.html
 LayoutTests/http/tests/xmlhttprequest/methods-async.html
 LayoutTests/loader/go-back-to-different-window-size.html
 LayoutTests/media/audio-constructor-src.html
 LayoutTests/media/audio-play-event.html
 LayoutTests/media/controls-drag-timebar.html
 LayoutTests/media/event-attributes.html
 LayoutTests/media/video-no-audio.html
 LayoutTests/media/video-source-add-src.html

 LayoutTests/platform/gtk/scrollbars/overflow-scrollbar-horizontal-wheel-scroll.html
 LayoutTests/storage/domstorage/localstorage/iframe-events.html
 LayoutTests/storage/domstorage/sessionstorage/iframe-events.html
 WIN RELEASE+DEBUG:
 LayoutTests/http/tests/cache/subresource-expiration.html
 WIN DEBUG:
 LayoutTests/http/tests/xmlhttprequest/redirect-cross-origin-tripmine.html
 LINUX RELEASE+DEBUG:
 LayoutTests/fast/loader/local-JavaScript-from-local.html
 LayoutTests/http/tests/misc/timer-vs-loading.html
 LINUX DEBUG:
 LayoutTests/fast/css/large-list-of-rules-crash.html
 LayoutTests/fast/frames/frame-limit.html
 MAC RELEASE+DEBUG:
 LayoutTests/fast/loader/local-JavaScript-from-local.html
 LayoutTests/http/tests/misc/timer-vs-loading.html
 LayoutTests/http/tests/plugins/get-url.html
 LayoutTests/http/tests/plugins/interrupted-get-url.html
 LayoutTests/http/tests/plugins/npapi-response-headers.html
 LayoutTests/http/tests/plugins/post-url-file.html

 LayoutTests/http/tests/security/frameNavigation/xss-DENIED-plugin-navigation.html
 LayoutTests/plugins/destroy-stream-twice.html
 LayoutTests/plugins/embed-inside-object.html
 LayoutTests/plugins/geturl-replace-query.html
 LayoutTests/plugins/npruntime.html

 LayoutTests/plugins/return-error-from-new-stream-doesnt-invoke-destroy-stream.html
 MAC RELEASE:
 LayoutTests/http/tests/cache/subresource-expiration.html
 MAC DEBUG:
 LayoutTests/fast/css/large-list-of-rules-crash.html
 LayoutTests/fast/frames/frame-limit.html




 





[chromium-dev] revising the output from run_webkit_tests

2009-10-23 Thread Dirk Pranke

If you've never run run_webkit_tests to run the layout test
regression, or don't care about it, you can stop reading ...

If you have run it, and you're like me, you've probably wondered a lot
about the output ... questions like:

1) what do the numbers printed at the beginning of the test mean?
2) what do all of these test failed messages mean, and are they bad?
3) what do the numbers printed at the end of the test mean?
4) why are the numbers at the end different from the numbers at the beginning?
5) did my regression run cleanly, or not?

You may have also wondered a couple of other things:
6) What do we expect this test to do?
7) Where is the baseline for this test?
8) What is the baseline search path for this test?

Having just spent a week trying (again), to reconcile the numbers I'm
getting on the LTTF dashboard with what we print out in the test, I'm
thinking about drastically revising the output from the script,
roughly as follows:

* print the information needed to reproduce the test and look at the results
* print the expected results in summary form (roughly the expanded
version of the first table in the dashboard - # of tests by
(wontfix/fix/defer x pass/fail/are flaky)).
* don't print out failure text to the screen during the run
* print out any *unexpected* results at the end (like we do today)

The goal would be that if all of your tests pass, you get less than a
small screenful of output from running the tests.

In addition, we would record a full log of (test,expectation,result)
to the results directory (and this would also be available onscreen
with --verbose)

Lastly, I'll add a flag to re-run the tests that just failed, so it's
easy to test if the failures were flaky.

Then I'll rip out as much of the set logic in test_expectations.py as
we can possibly get away with, so that no one has to spend the week I
just did again. I'll probably replace it with much of the logic I use
to generate the dashboard, which is much more flexible in terms of
extracting different types of queries and numbers.
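(As a sketch of that bookkeeping, under assumed data shapes rather than the real dashboard code: given the proposed full log of (test, expectation, result) entries, both the summary counts and the unexpected-results list fall out of a single pass.)

```python
from collections import Counter

def summarize(run_log):
    """run_log: iterable of (test, expected, actual) triples, where
    'expected' is the set of acceptable outcomes, e.g. {'PASS'} or
    {'FAIL', 'PASS'} for a flaky test. Returns per-outcome counts and
    the list of unexpected results to print at the end of the run."""
    counts = Counter()
    unexpected = []
    for test, expected, actual in run_log:
        counts[actual] += 1
        if actual not in expected:
            unexpected.append((test, actual))
    return counts, unexpected
```

The counts answer "what do the numbers mean" directly from the log, and `unexpected` is exactly the short list a clean run would leave empty.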

I think the net result will be the same level of information that we
get today, just in much more meaningful form.

Thoughts? Comments? Is anyone particularly wedded to the existing
output, or worried about losing a particular piece of info?

-- Dirk

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: revising the output from run_webkit_tests

2009-10-24 Thread Dirk Pranke

Sure. I was floating the idea first before doing any work, but I'll
just grab an existing test run and hack it up for comparison ...

-- Dirk

On Fri, Oct 23, 2009 at 3:51 PM, Ojan Vafai o...@chromium.org wrote:
 Can you give example outputs for the common cases? It would be easier to
 discuss those.

 On Fri, Oct 23, 2009 at 3:43 PM, Dirk Pranke dpra...@chromium.org wrote:

 If you've never run run_webkit_tests to run the layout test
 regression, or don't care about it, you can stop reading ...

 If you have run it, and you're like me, you've probably wondered a lot
 about the output ... questions like:

 1) what do the numbers printed at the beginning of the test mean?
 2) what do all of these test failed messages mean, and are they bad?
 3) what do the numbers printed at the end of the test mean?
 4) why are the numbers at the end different from the numbers at the
 beginning?
 5) did my regression run cleanly, or not?

 You may have also wondered a couple of other things:
 6) What do we expect this test to do?
 7) Where is the baseline for this test?
 8) What is the baseline search path for this test?

 Having just spent a week trying (again), to reconcile the numbers I'm
 getting on the LTTF dashboard with what we print out in the test, I'm
 thinking about drastically revising the output from the script,
 roughly as follows:

 * print the information needed to reproduce the test and look at the
 results
 * print the expected results in summary form (roughly the expanded
 version of the first table in the dashboard - # of tests by
 (wontfix/fix/defer x pass/fail/are flaky).
 * don't print out failure text to the screen during the run
 * print out any *unexpected* results at the end (like we do today)

 The goal would be that if all of your tests pass, you get less than a
 small screenful of output from running the tests.

 In addition, we would record a full log of (test,expectation,result)
 to the results directory (and this would also be available onscreen
 with --verbose)

 Lastly, I'll add a flag to re-run the tests that just failed, so it's
 easy to test if the failures were flaky.

 Then I'll rip out as much of the set logic in test_expectations.py as
 we can possibly get away with, so that no one has to spend the week I
 just did again. I'll probably replace it with much of the logic I use
 to generate the dashboard, which is much more flexible in terms of
 extracting different types of queries and numbers.

 I think the net result will be the same level of information that we
 get today, just in much more meaningful form.

 Thoughts? Comments? Is anyone particularly wedded to the existing
 output, or worried about losing a particular piece of info?

 -- Dirk






[chromium-dev] Layout tests can now be run on Win 7

2009-11-03 Thread Dirk Pranke

Hi all,

I have just checked in a new set of baselines that should allow you to
run the layout tests on Win 7 as well as Vista and XP.

For those of you playing along at home, this means that if you have a
baseline that is Windows 7-specific, or is the same across 7, Vista,
and XP, check it into
src/webkit/data/platform/chromium-win. If you have a baseline that is
specific to Vista or is the same on Vista and XP, check it into
src/webkit/data/platform/chromium-win-vista, and if you have an
XP-specific baseline, check it into
src/webkit/data/platform/chromium-win-xp.

99.9% of your baselines will be generic, so you can probably just
check them into chromium-win and let me fix diffs when they crop up
down the road.
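The directory layout described above implies a most-specific-first search order per platform. A hedged sketch of that fallback resolution (the function name and exact ordering are assumptions drawn from this message, not the harness's real code):

```python
def baseline_search_path(platform):
    """Return the ordered list of directories to search for a baseline.

    Most-specific directory first; the generic chromium-win directory
    is the final fallback for all Windows variants.
    """
    base = "src/webkit/data/platform/"
    fallbacks = {
        "win-xp": ["chromium-win-xp", "chromium-win-vista", "chromium-win"],
        "win-vista": ["chromium-win-vista", "chromium-win"],
        "win-7": ["chromium-win"],
    }
    return [base + d for d in fallbacks[platform]]
```

Under this scheme an XP-only baseline shadows the Vista and generic copies, while Win 7 sees only the generic directory.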

Also, if you have a test that only fails on Win 7, you can specify
WIN-7 in test_expectations.txt. But I wouldn't expect any of those.

Cheers, and any questions or problems on this should go to me,

-- Dirk




[chromium-dev] Re: revising the output from run_webkit_tests

2009-11-03 Thread Dirk Pranke

Anyone who wants to follow along on this, I've filed
http://code.google.com/p/chromium/issues/detail?id=26659 to track it.

-- Dirk

On Sat, Oct 24, 2009 at 5:01 PM, Dirk Pranke dpra...@chromium.org wrote:
 Sure. I was floating the idea first before doing any work, but I'll
 just grab an existing text run and hack it up for comparison ...

 -- Dirk

 On Fri, Oct 23, 2009 at 3:51 PM, Ojan Vafai o...@chromium.org wrote:
 Can you give example outputs for the common cases? It would be easier to
 discuss those.

 On Fri, Oct 23, 2009 at 3:43 PM, Dirk Pranke dpra...@chromium.org wrote:

 If you've never run run_webkit_tests to run the layout test
 regression, or don't care about it, you can stop reading ...

 If you have run it, and you're like me, you've probably wondered a lot
 about the output ... questions like:

 1) what do the numbers printed at the beginning of the test mean?
 2) what do all of these test failed messages mean, and are they bad?
 3) what do the numbers printed at the end of the test mean?
 4) why are the numbers at the end different from the numbers at the
 beginning?
 5) did my regression run cleanly, or not?

 You may have also wondered a couple of other things:
 6) What do we expect this test to do?
 7) Where is the baseline for this test?
 8) What is the baseline search path for this test?

 Having just spent a week trying (again), to reconcile the numbers I'm
 getting on the LTTF dashboard with what we print out in the test, I'm
 thinking about drastically revising the output from the script,
 roughly as follows:

 * print the information needed to reproduce the test and look at the
 results
 * print the expected results in summary form (roughly the expanded
 version of the first table in the dashboard - # of tests by
 (wontfix/fix/defer x pass/fail/are flaky).
 * don't print out failure text to the screen during the run
 * print out any *unexpected* results at the end (like we do today)

 The goal would be that if all of your tests pass, you get less than a
 small screenful of output from running the tests.

 In addition, we would record a full log of (test,expectation,result)
 to the results directory (and this would also be available onscreen
 with --verbose)

 Lastly, I'll add a flag to re-run the tests that just failed, so it's
 easy to test if the failures were flaky.

 Then I'll rip out as much of the set logic in test_expectations.py as
 we can possibly get away with, so that no one has to spend the week I
 just did again. I'll probably replace it with much of the logic I use
 to generate the dashboard, which is much more flexible in terms of
 extracting different types of queries and numbers.

 I think the net result will be the same level of information that we
 get today, just in much more meaningful form.

 Thoughts? Comments? Is anyone particularly wedded to the existing
 output, or worried about losing a particular piece of info?

 -- Dirk







[chromium-dev] Re: Lean Chromium checkout (WAS: Large commit - update your .gclient files to avoid)

2009-11-05 Thread Dirk Pranke

+1. I also wonder if it might be useful to have a names file/service
for configs so I don't have to remember the full URL when doing a
gclient config ...

-- Dirk

On Thu, Nov 5, 2009 at 12:50 PM, Ben Goodger (Google) b...@chromium.org wrote:

 +1. This would be fab. There are so many test executables now it's not
 practical to run them all (unless we have a script... which is sort of
 what the trybot is like you say).

 I like the idea of having full/lean configs. That way you don't need
 to remember to set up the right .gclient when you set up a new
 machine.

 -Ben

 On Thu, Nov 5, 2009 at 12:38 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, Nov 5, 2009 at 12:33 PM, Antoine Labour pi...@google.com wrote:


 On Thu, Nov 5, 2009 at 12:44 PM, Ben Goodger (Google) b...@chromium.org
 wrote:

 it'd be nice to have a gclient config lean or something like that.

 It'd be nice for it to be the default in fact.

 I know we've avoided this in the past because we wanted everyone to run
 tests before committing.  But realistically, I think we all use the try bots
 to run tests and only run them locally for triaging a failure.  Thus it
 probably does make sense to not include hundreds if not thousands of megs of
 test files and such for the default checkout.  Do others agree?
 If so, then we may need to move some of the bulky test data into DEPS so
 that they can be turned off in gclient.  An example is
 webkit/data/layout_tests which has platform specific test expectations.
 I think this would make a lot of people on slow internet connections happy.
  :-)

 





[chromium-dev] Re: test_shell performance is bad compared to Chromium

2009-11-05 Thread Dirk Pranke

test_shell is used mostly for non-interactive testing, so we haven't
given a lot of concern to its performance AFAIK. I'm not
even sure how long of a lifespan it'll have since we aim to
merge/replace it with WebKit's DumpRenderTree at some point soon.

Is there some reason you're not just using Chromium in full screen mode?

-- Dirk

On Thu, Nov 5, 2009 at 1:18 PM, Alexander Teinum atei...@gmail.com wrote:

 For a personal project (well, an OS -- check out www.brevityos.org if
 you're interested), I need something like test_shell in fullscreen
 mode. The UI is basically an HTML-file with an iframe for every
 document. CSS-classes are used to describe what application is active,
 what documents are active etc.

 The problem is that for my project, test_shell performs bad compared
 to Chromium. I have compiled with mode set to release, but it's still
 noticeably slower.

 I've watched Darin Fisher and Brett Wilson's presentations about the
 Chromium architecture on YouTube. If I've got it right, then
 test_shell is below the layer that implements multi-processes. Brett
 says that test_shell is based on WebKit glue.

 What needs to be done to make test_shell perform as good as Chromium?
 I'm not suggesting that test_shell needs to be changed. I'll probably
 do this in a separate directory under chromium-dir/src, or as a Linux
 port of Chromium Embedded Framework, if Marshall wants CEF to be
 multi-processed.

 --
 Best regards,

 Alexander Teinum

 





[chromium-dev] Re: test_shell performance is bad compared to Chromium

2009-11-05 Thread Dirk Pranke

On Thu, Nov 5, 2009 at 1:59 PM, Marshall Greenblatt
magreenbl...@gmail.com wrote:

 On Thu, Nov 5, 2009 at 4:32 PM, Dirk Pranke dpra...@chromium.org wrote:

 test_shell being a test shell used mostly for non-interactive testing,
 we haven't given a lot of concern to its perfomance AFAIK. I'm not
 even sure how long of a lifespan it'll have since we aim to
 merge/replace it with WebKit's DumpRenderTree at some point soon.

 So is the plan now for test_shell to go away completely?  #3 under *Next
 steps:* in this email seemed to suggest that it would be up-streamed:

 http://groups.google.com/group/chromium-dev/browse_thread/thread/5352c2facb46f309

 Wouldn't merging/replacing test_shell with DRT eliminate the ability to test
 the Chromium WebKit API in a simplified environment?


Good question, and I didn't actually know the answer, so that provoked
an interesting but short discussion between Ojan and Dimitri and
myself. At the moment we're leaning to keeping test_shell and
DumpRenderTree both. The latter would be the driver for the layout
test harness (as it is upstream), and test_shell would get all of the
layout test code ripped out of it and become more like an actual shell
that can be used to embed webkit for interactive work (and upstreamed,
as you say). The exact functionality and distinctions between the two
(and the justification of the existence of both) probably still needs
some edges smoothed.

-- Dirk




[chromium-dev] Re: The future for test shell (WAS: test_shell performance is bad compared to Chromium)

2009-11-05 Thread Dirk Pranke

Yeah, we would have to work out a way of handling these sorts of features.

-- Dirk

On Thu, Nov 5, 2009 at 3:01 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, Nov 5, 2009 at 2:46 PM, Dirk Pranke dpra...@chromium.org wrote:

 On Thu, Nov 5, 2009 at 1:59 PM, Marshall Greenblatt
 magreenbl...@gmail.com wrote:
 
  On Thu, Nov 5, 2009 at 4:32 PM, Dirk Pranke dpra...@chromium.org
  wrote:
 
  test_shell being a test shell used mostly for non-interactive testing,
  we haven't given a lot of concern to its perfomance AFAIK. I'm not
  even sure how long of a lifespan it'll have since we aim to
  merge/replace it with WebKit's DumpRenderTree at some point soon.
 
  So is the plan now for test_shell to go away completely?  #3 under *Next
  steps:* in this email seemed to suggest that it would be up-streamed:
 
 
  http://groups.google.com/group/chromium-dev/browse_thread/thread/5352c2facb46f309
 
  Wouldn't merging/replacing test_shell with DRT eliminate the ability to
  test
  the Chromium WebKit API in a simplified environment?
 

 Good question, and I didn't actually know the answer, so that provoked
 an interesting but short discussion between Ojan and Dimitri and
 myself. At the moment we're leaning to keeping test_shell and
 DumpRenderTree both. The latter would be the driver for the layout
 test harness (as it is upstream), and test_shell would get all of the
 layout test code ripped out of it and become more like an actual shell
 that can be used to embed webkit for interactive work (and upstreamed,
 as you say). The exact functionality and distinctions between the two
 (and the justification of the existence of both) probably still needs
 some edges smoothed.

 Features like AppCache and WebDatabase depend on code that will probably
 never be upstreamed to WebKit's repo. So either we need to always run at
 least some tests under test shell or we'll need to be content with not
 running these layout tests.
 Note that we have a somewhat hacky way of running layout tests in the UI
 test framework that might suffice for this stuff.  Right now, I run
 LocalStorage tests in both test_shell and the ui_test framework.




[chromium-dev] moderators for the list?

2009-11-10 Thread Dirk Pranke
Hi all,

Who are the moderators for this list? There was someone (djodoin) who
has joined as a new member and apparently is waiting for his message
to be sent, but I'm wondering if all of the likely moderators are on
vacation or otherwise out ...

-- Dirk



Re: [chromium-dev] [GTTF] running tests that fail most often early in the queue

2009-11-12 Thread Dirk Pranke
+1. Great idea!

-- Dirk

On Thu, Nov 12, 2009 at 1:52 AM, Paweł Hajdan Jr.
phajdan...@chromium.org wrote:
 I was just looking at the buildbot cycle stats
 at http://build.chromium.org/buildbot/waterfall/stats and realized that on
 many bots the most frequently failing tests are browser_tests and ui_tests.
 Then I checked how early they are run by each bot (the earlier we know about
 the failure, the earlier we can react). So, for example Chromium XP runs a
 lot of slow page cycler tests before browser_tests, and then another bunch
 of page cycler tests. page_cycler tests don't fail so frequently, and when
 they fail, it's frequently some evil flakiness. When browser_tests do fail
 however, it may indicate something more serious.
 A similar thing is with XP Tests: we're running mini_installer_tests (which
 take about 2 minutes), and then some other things which rarely fail, then UI
 tests (which fail frequently), and browser_tests at the end!
 I know that some of these cases are just flaky failures. But by knowing
 earlier about a failure, we'd have more time to decide what to do with it.




Re: [chromium-dev] Getting 'This is a read-only checkout' error from gcl

2009-11-13 Thread Dirk Pranke
I started getting this on my mac at home, and haven't taken the time
to track it down yet. Is it possible your environment got switched
from the svn:// checkout to an https:// read-only checkout?

-- Dirk

On Fri, Nov 13, 2009 at 10:41 AM, Jens Alfke s...@chromium.org wrote:
 A few days ago I started getting an error when using gcl to create
 changelists in my WebKit tree:

 $ pwd
 /Volumes/Yttrium/src/third_party/WebKit
 $ gcl change foo
 This is a read-only checkout.  Retry in a read-write checkout or use --force to override.

 I don't get the error if I use gcl in the top-level Chromium directory.

 I recently upgraded my Mac's svn from the stock 1.4.4 to 1.6.6; but
 this doesn't explain why gcl would work in one repo but not in another.

 Any idea? AFAIK I need to use gcl to be able to submit WebKit patches
 to the try bots...

 —Jens





Re: [chromium-dev] More sheriffs?

2009-11-13 Thread Dirk Pranke
Having just come off sheriffing four days in the past two weeks ...

On Fri, Nov 13, 2009 at 12:31 PM, Peter Kasting pkast...@google.com wrote:
 At lunch today, a few of us discussed the idea of moving from two sheriffs
 to four.
 There are several reasons we contemplated such a change:
 * The team is large enough that on the current schedule, you go months
 between sheriffing, which is so long that you forget things like what tools
 help you do what.

This is perhaps true, but I think it's more an issue that people don't
run more of the tests on their own machines (or, alternatively, are
asked to sheriff for areas of the system they never touch).

 * Sheriffing is a heavy burden, and getting moreso with more team members.
 * Either the two sheriffs are in different time zones, in which case you
 have effectively one sheriff on duty who has to do everything (bad due to
 point above), or they're not, in which case a chunk of the day is not
 covered at all.

I think two sheriffs in US/Pacific during US/Pacific work hours is
plenty. I can't speak to how much an issue the lack of sheriffs are to
people outside that window.

 * New sheriffs could really use a mentor sheriff with them, which is
 pretty difficult to schedule.

Last week was actually my first time, and I didn't think it was a big
deal, although I did ask a few people a few questions.

I was pretty much full time on keeping the tree green and cleaning up
flaky tests. Given that I'm otherwise full time on LTTF, this wasn't
much of a change. I think it's unrealistic to expect to do anything
real on a project while sheriffing, because you can't context-switch
that fast to do a good job on either (at least, I can't).

I also think the bots would've been green most of the time except that
someone has clearly been ignoring the memory tests for a long time. If
a bot fails for a couple of days straight, it's beyond a sheriff to try
and fix it - I think someone needs to get assigned that problem
specifically.

So, I'd probably leave things mostly the way they are unless there's a
desire to have better sheriffing outside of the MTV hours. I fully
support always having two sheriffs during MTV hours.

-- Dirk



[chromium-dev] FYI ... you might find some issues missing in issue_tracker

2009-11-18 Thread Dirk Pranke
Hi all,

Just a heads-up ... we've discovered a bug in Issue Tracker that has
caused a few of our issues (along with issues from other projects)
to be partially deleted from the database. The team is aware of the
problem and working on a solution, but it may not be fully patched
until after Thanksgiving because they're in the midst of a release.

So far I've only found two issues missing, so you probably won't
notice this unless you're exceptionally anal, like I am :)

-- Dirk



Re: [chromium-dev] FYI ... you might find some issues missing in issue_tracker

2009-11-19 Thread Dirk Pranke
I suspect this is an unrelated issue (but I've seen this one, too).

-- Dirk

On Thu, Nov 19, 2009 at 8:49 AM, Paweł Hajdan Jr.
phajdan...@chromium.org wrote:
 Not sure if that's related, but bugdroid started to behave strangely. I see
 new comments for commits from before a week.

 On Wed, Nov 18, 2009 at 20:27, Dirk Pranke dpra...@chromium.org wrote:

 Hi all,

 Just a heads' up ... we've discovered a bug in Issue Tracker that has
 caused a few of our issues (along with a issues from other projects)
 to be partially deleted from the database. The team is aware of the
 problem and working on a solution, but it may not be fully patched
 until after Thanksgiving because they're in the midst of a release.

 So far I've only found two issues missing, so you probably won't
 notice this unless you're exceptionally anal, like I am :)

 -- Dirk






Re: [chromium-dev] workflow for cross-platform development

2009-11-22 Thread Dirk Pranke
On Sat, Nov 21, 2009 at 10:52 AM, Chris Bentzel cbent...@google.com wrote:
 How do most people do cross-platform validation prior to submitting code?
 Do you mostly rely on the try-bots, or do you also patch the diffs to your
 different dev environments and build and test locally?

I build on the different environments and test locally when I am doing
something that I know
is very platform specific (like tweaking code that changes layout test
baselines). I think I am unusual in this regard.

Otherwise I tend to use the trybots.

 If you do the patching, do you tend to do a gcl upload and grab the diffs
 from there, or do you copy the diffs from machine to machine prior to the
 upload? If you do an initial gcl upload, do you skip the trybots until you
 validate that it works on all platforms to reduce load on the trybots?

It depends. Sometimes I will use rietveld (gcl upload) as a way to
checkpoint progress, in which case I will skip the trybots
until I'm ready (usually. sometimes I forget). I usually compute the
diffs myself and move them between machines. I have
found rietveld's diffs to be a bit unreliable :(

 Have there been any thoughts about adding gcl patch and unpatch commands
 which will grab the file diffs as well as duplicate the CL metadata in
 src/.svn/gcl_info?

Not as far as I know, but that would be really cool. Feel free to add a patch :)

-- Dirk



Re: [chromium-dev] Core Text

2009-11-23 Thread Dirk Pranke
As an aside, have we looked at using DirectWrite() on Windows?

-- Dirk

On Mon, Nov 23, 2009 at 12:58 PM, Jeremy Moskovich jer...@chromium.org wrote:
 Re http://crbug.com/27195  https://bugs.webkit.org/show_bug.cgi?id=31802 :

 Dan Bernstein says that Core Text on Leopard has performance issues vs ATSUI
 so I'm going to look into switching APIs at runtime rather than compile
 time.

 So we'd use ATSUI for < 10.6 and Core Text for >= 10.6.

 Best regards,
 Jeremy





[chromium-dev] Re: Something smells weird on the buildbot

2009-12-03 Thread Dirk Pranke
Our messages crossed in the ether ...

On Thu, Dec 3, 2009 at 10:07 AM, Ojan Vafai o...@google.com wrote:
 +chromium-dev as others who look at the waterfall might also be confused.
 On Thu, Dec 3, 2009 at 8:50 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Sending out random people, because it's early :)

 There's a couple of things I see on the bot this morning:

 1) There's a crashing test on all bots -- and the tree is still green!

 http://src.chromium.org/viewvc/chrome/trunk/src/webkit/tools/layout_tests/flakiness_dashboard.html#tests=LayoutTests/plugins/embed-attributes-setting.html


 The test is consistently crashing when run with all the other tests, but
 passing when we retry it in isolation. Note that the test is listed as an
 unexpected flaky test on the waterfall. This is one of the downsides of
 retrying failing tests. We can't distinguish flakiness from this case. We
 just need to careful to not ignore unexpected flakiness on the waterfall.
 Note that the dashboard only shows the result from the first run. Including
 the retry results from the bots seems like more trouble than it's worth.


Agreed. However, why aren't the webkit bots orange in the main waterfall?


 2) WebKit (dbg) shows insane fixable numbers -- 6561 fixable?

 This is a bug from Dirk's commit yesterday. Dirk, PrepareListsAndPrintOutput
 creates the ResultSummary object with the full test list, then shards the
 test list, then runs the tests. So the value used for number of tests run is
 the total number of tests, not the sharded total number of tests.

Yup. I'll fix this when I get in (~11:30ish).

-- Dirk



[chromium-dev] Re: Something smells weird on the buildbot

2009-12-03 Thread Dirk Pranke
On Thu, Dec 3, 2009 at 10:25 AM, Ojan Vafai o...@chromium.org wrote:
 On Thu, Dec 3, 2009 at 10:21 AM, Dimitri
 Glazkov dglaz...@google.com wrote:

 How about we turn red for unexpected crashiness?

 Makes sense to me. We can just not retry tests that unexpectedly crash.

I'll make this change if we have a consensus. Do we?

-- Dirk



Re: [chromium-dev] Reflecting SSL state in a world with SharedWorkers and cross-window sharing/x-domain messaging

2009-12-09 Thread Dirk Pranke
Isn't the shared worker tied to the SOP? If so, then it seems to me
that you would want the secure status to change just as if the parent
window had done the request directly.

If I think I'm on a secure page, I certainly don't want my app doing
insecure activities behind my back.

I almost feel the same about PostMessage(), but since that is
explicitly designed to cross domains, it may be different.

-- Dirk

On Wed, Dec 9, 2009 at 3:53 PM, Adam Barth aba...@chromium.org wrote:
 In principle, the correct thing to do is keep track of the mixed
 content state of the shared worker and infect whichever windows
 interact with the worker.  However, I suspect this is one of those
 cases where the perfect is the enemy of the good.  For the time being,
 I'm fine with having the SharedWorker not trigger the mixed content
 indicator.  These sorts of attacks are much less severe than an HTTPS
 page executing an HTTP script.  In the latter case, an active network
 attacker can completely control the HTTPS page.  In the case of a
 SharedWorker, the attacker doesn't get nearly as much leverage.

 Adam


 On Wed, Dec 9, 2009 at 1:47 PM, Drew Wilson atwil...@chromium.org wrote:
 I'm trying to get my head around how our HTTPS status display (lock icon)
 should work now that we support cross-window/cross-domain communication and
 SharedWorkers.
 Currently, if an HTTPS page loads a worker and that worker subsequently does
 a non-HTTPS network operation, the parent window's UI updates to reflect the
 new non-secure state, just as if the page itself had performed a non-secure
 network op. With SharedWorkers, you may have a number of parent windows - it
 seems likely that all of those windows should update their state.
 However, SharedWorkers can accumulate new parents over time. Should a
 SharedWorker that has made an insecure connection in the past be permanently
 tainted (meaning that if a page connects to a tainted worker, it should
 immediately lose its own secure status, regardless of whether that worker
 ever makes another insecure connection)? Do we have any mechanisms currently
 in place that would facilitate implementing this behavior?
 Similarly, we can now exchange messages between windows (even across
 domains) using window.postMessage() - if my window posts/receives a message
 with an insecure window, does that affect my window's HTTPS status? Does
 this somehow cascade to other windows I may have exchanged messages with in
 the past? This situation seems directly analogous to SharedWorkers, since
 essentially a SharedWorker is just another execution context that multiple
 windows can communicate with via postMessage().
 Finally, now that multiple pages share things like local storage, it seems
 like loading an insecure resource in one page could impact every page in the
 same origin (if my secure page is storing stuff in localstorage, but some
 other page in the same origin is compromised from loading an insecure
 resource, then it seems like my secure storage is compromised).
 Anyhow, people have been thinking about these issues far longer than I have
 - have we come to any conclusions about how our lock icon should work in
 these various situations?
 -atw




[chromium-dev] revised output for run_webkit_tests

2009-12-10 Thread Dirk Pranke
Hi all,

If you never run the webkit layout tests, you can stop reading.

Otherwise, earlier today I checked in a patch that should make the
output much less verbose in the normal case. From the CL:

First, a number of log messages have had their levels changed (mostly to
make them quieter).

Second, the script outputs a meter that shows progress through the
test run: a one-line summary of where it currently is
(e.g. parsing expectations, gathering files). During the actual test
execution, the meter displays "%d tests completed as expected, %d didn't,
%d remain". The meter uses carriage returns but no linefeeds, so the output
is overwritten as it progresses. The meter is disabled if --verbose is
specified, to avoid unnecessary confusion.
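
The carriage-return trick can be sketched in a few lines of Python
(update_meter is a hypothetical helper for illustration, not the actual
script code):

```python
import sys

def update_meter(completed, unexpected, remaining, out=sys.stdout):
    # '\r' returns the cursor to column 0, so each call overwrites the
    # previous line in place; no '\n' is written until the run finishes.
    line = ("%d tests completed as expected, %d didn't, %d remain"
            % (completed, unexpected, remaining))
    # Pad so a shorter update fully overwrites a longer previous one.
    out.write('\r' + line.ljust(78))
    out.flush()
```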

Third, I removed the --find-baselines option. I think I was the only one
using it, and --sources is good enough (but I added the baseline for
the checksum as well as the .png when using --sources).

Fourth, there is a new --log option that can be used to provide finer
granularity of logging. It accepts a comma-separated list of options, like:
--log 'actual,expected,timing':

  actual: the actual test results (# of failures by type and timeline)
  config: the test settings (results dir, platform, etc.)
  expected: the results we expected by type and timeline
  timing: test timing results (slow files, total execution, etc.)

All of this information is logged at the logging.info level (if the
appropriate option is enabled).
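
A minimal sketch of how such a comma-separated --log value could be
split into enabled categories (parse_log_option is a made-up name for
illustration, not the script's real code):

```python
# Categories mirror the options described above.
LOG_VALUES = ('actual', 'config', 'expected', 'timing')

def parse_log_option(value):
    """Return the set of enabled log categories, rejecting unknown names."""
    enabled = set(part.strip() for part in value.split(',') if part.strip())
    unknown = enabled - set(LOG_VALUES)
    if unknown:
        raise ValueError('unknown --log values: %s' %
                         ', '.join(sorted(unknown)))
    return enabled
```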

Using the --verbose switch will cause all of the options to be logged, as well
as the normal verbose output.  In addition, the verbose output will disable
the meter (as mentioned above). Note that the actual results will be logged
to stdout, not stderr, for compatibility with the buildbot log parser.

Finally, the list of unexpected results (if any) will be logged to stdout,
along with a one-line summary of the test run.

The net result is that when run with no command line options (and when no
tests fail), only one line of output will be produced.

Feedback / problems / questions to me.

Pam, sorry for making all of your examples in your tech talk
immediately out of date :)

-- Dirk



Re: [chromium-dev] revised output for run_webkit_tests

2009-12-10 Thread Dirk Pranke
Yes, I did consider that. The fatal flaw in that plan is that the
webkit test script is single-threaded and runs through the tests in
order. Ours doesn't, and so we can't easily guarantee the same sort of
output they have. Eric and I will probably work through this as we
upstream the code. I'm actually hoping to get them to adopt my output,
but we'll see.

-- Dirk

On Thu, Dec 10, 2009 at 7:45 PM, David Levin le...@chromium.org wrote:
 Have you considered making the output closer to that of WebKit's
 run-webkit-tests?
 It seems that would ease the hopeful transition to this version upstream.
 dave
 On Thu, Dec 10, 2009 at 7:23 PM, Dirk Pranke dpra...@chromium.org wrote:

 Hi all,

 If you never run the webkit layout tests, you can stop reading.

 Otherwise, earlier today I checked in a patch that should make the
 output much less verbose in the normal case. From the CL:

 First, a number of log messages have had their levels changed (mostly to
 make them quieter).

 Second, the script outputs a meter that shows progress through the
 test run, which is a one line summary of where it's at current
 (e.g. parsing expectations, gathering files. During the actual test
 execution, the meter displays %d tests completed as expected, %d didn't,
 %d remain. The meter uses carriage returns but no linefeeds, so the
 output
 is overwritten as it progresses. The meter is disabled if --verbose is
 specified, to avoid unnecessary confusion.

 Third, I removed the --find-baselines option. I think I was the only one
 using it, and --sources is good enough (but added the baseline for
 the checksum as well as the .png when using --sources).

 Fourth, there is a new --log option that can be used to provide finer
 granularity of logging. It accepts a comma-separated list of options,
 like:
 --log 'actual,expected,timing':

  actual: the actual test results (# of failures by type and timeline)
  config: the test settings (results dir, platform, etc.)
  expected: the results we expected by type and timeline
  timing: test timing results (slow files, total execution, etc.)

 All of this information is logged at the logging.info level (if the
 appropriate option is enabled).

 Using the --verbose switch will cause all of options to be logged, as well
 as the normal verbose output.  In addition, the verbose output will
 disable
 the meter (as mentioned above). Note that the actual results will be
 logged
 to stdout, not stderr, for compatibility with the buildbot log parser.

 Finally, the list of unexpected results (if any) will be logged to stdout,
 along with a one-line summary of the test run.

 The net result is that when run with no command line options (and when no
 tests fail), only one line of output will be produced.

 Feedback / problems / questions to me.

 Pam, sorry for making all of your examples in your tech talk
 immediately out of date :)

 -- Dirk



Re: [chromium-dev] revised output for run_webkit_tests

2009-12-11 Thread Dirk Pranke
On Thu, Dec 10, 2009 at 11:28 PM, David Levin le...@chromium.org wrote:


 On Thu, Dec 10, 2009 at 10:57 PM, Dirk Pranke dpra...@chromium.org wrote:

 We could do this, but we'd have to add logic to track when directories
 were done, and arbitrarily delay printing results about other
 directories (hence delaying and serializing results). This might end
 up causing weirdly irregular bursts of output.

 The irregular bursts of output isn't that bad. (I had a version of
 run-webkit-test that did this.  Unfortunately, perl is not a fun language
 for me at least, and I have to admit that the perl code I had would have
 been hard to maintain/fragile.)

 Worst case, since we
 intentionally run the http tests first, and they're the long pole in
 the run, this might delay printing all the other directories until
 near the end.

 Not a big deal either. My version did this as well. (I started this behavior
 in my webkit version and talked to Ojan about doing it.)


Well, I can certainly try to hack up a version of the script that
generates the output you're looking for to see how it works.

 I'm not sure what the real benefit of this would be.


 The benefit is working in a community and understanding how they do things
 and adapting to that as opposed to trying to push something very different
 on them.

Sure. Of course, there's not necessarily a reason to leave things the
same just because that's the way it's always been done, especially
when trying to keep things the same imposes costs. Sometimes different
is better.


 (A) Have you looked at the new output yet?

 (B) Is getting output by directory really that useful?

 I understood your description. Having run the webkit version, I much prefer
 it due to knowing when certain directories are done and knowing what test(s)
 failed in those directories as the test goes along (even in the parallel
 version where the failures may be slightly delayed).
 The output by directory also adapts better to the buildbot output instead of
 the huge test-by-test list that chromium buildbots have (which takes a while
 to download when you click the link for stdio).

There's no arguing that the --verbose output is in fact extremely
verbose. But, that is because there is a lot of information there.
Personally, I find the webkit output a bad compromise between
terseness and verbosity - it's too much text for interactive use where
you expect things to pass, and too little if you actually want to
debug what happened. In particular I think this would be very true in
a multithreaded scenario, since you would lose any grasp of what was
actually happening in parallel.

The current implementation tells you that tests have failed as it
goes, but not which tests (of course, the webkit script doesn't tell
you either, apart from which directory the failure might be in). That
would be easy to add if there is demand.

 dave

 -- Dirk

 On Thu, Dec 10, 2009 at 10:10 PM, David Levin le...@chromium.org wrote:
  Actually, you can have a similar output even with the multi-threading.
  You can display the results for one only directory (even though multiple
  directories are being processed at the same time) and when that
  directory is
  done, display the results for the next directory until it is done,
  etc. The
  ordering of the directories may be different but the output is very
  similar
  to what they have now.
  The effect is quite satisfying and clear.
  dave
  On Thu, Dec 10, 2009 at 10:04 PM, Dirk Pranke dpra...@chromium.org
  wrote:
 
  Yes, I did consider that. The fatal flaw in that plan is that the
  webkit test script is single-threaded and runs through the tests in
  order. Ours doesn't, and so we can't easily guarantee the same sort of
  output they have. Eric and I will probably work through this as we
  upstream the code. I'm actually hoping to get them to adopt my output,
  but we'll see.
 
  -- Dirk
 
  On Thu, Dec 10, 2009 at 7:45 PM, David Levin le...@chromium.org
  wrote:
   Have you considered making the output closer to that of WebKit's
   run-webkit-tests?
   It seems that would ease the hopeful transition to this version
   upstream.
   dave
   On Thu, Dec 10, 2009 at 7:23 PM, Dirk Pranke dpra...@chromium.org
   wrote:
  
   Hi all,
  
   If you never run the webkit layout tests, you can stop reading.
  
   Otherwise, earlier today I checked in a patch that should make the
   output much less verbose in the normal case. From the CL:
  
   First, a number of log messages have had their levels changed
   (mostly
   to
   make them quieter).
  
   Second, the script outputs a meter that shows progress through the
   test run, which is a one line summary of where it's at current
   (e.g. parsing expectations, gathering files. During the actual
   test
   execution, the meter displays %d tests completed as expected, %d
   didn't,
   %d remain. The meter uses carriage returns but no linefeeds, so the
   output
   is overwritten as it progresses. The meter is disabled

Re: [chromium-dev] Extensions and the Mac

2009-12-11 Thread Dirk Pranke
If I'm running on Windows, I know to ignore the latter. That's a
pretty big difference.

-- Dirk

On Fri, Dec 11, 2009 at 7:39 AM, Avi Drissman a...@chromium.org wrote:
 What the difference between:

 ★ this extension doesn't work at all wh

 and

 ★ As mentioned, this extension is incompatible with my Linux box. Bad
 show. Bad show.

 Avi

 On Fri, Dec 11, 2009 at 10:29 AM, Mike Pinkerton pinker...@google.com
 wrote:

 One viewpoint I haven't seen mentioned on this thread is from that of
 the extension developer. Suppose they write, from their perspective, a
 perfectly good extension that uses binary components. After being
 around for a few weeks, they notice they have a 2-star rating and a
 lot of angry comments saying this extension doesn't work at all
 wh

 That doesn't really seem fair to the extension writer. People are
 complaining because they haven't been informed and we've not put a
 mechanism in place to inform them, and they take it out on the
 extension in terms of a really bad rating.

 On Fri, Dec 11, 2009 at 6:29 AM, PhistucK phist...@chromium.org wrote:
  I believe the most elegant and quick (seemingly) solution is to provide
  the
  extension developers a field (in the extension gallery, not in the
  extension
  itself) that will include the platform and the version.
  Going farther, you can add a check if the platform and the version (or
  even
  let the developer enter the search string) exist in the user agent or
  anywhere else you can think of and show a warning next to the install
  button.
  And an automatic quick solution can be to go over the manifest (which
  you
  already do to search for NPAPI to add it to the approval queue) and see
  if
  there is a DLL, SO or whatever Macintosh is using in them. If there is a
  DLL, add a Compatible with the Windows platform and so on, or the
  opposite, if it does not contain, then you surely know - Not compatible
  with the Macintosh or Linux platforms.
  ☆PhistucK
 
 
  On Fri, Dec 11, 2009 at 03:54, Aaron Boodman a...@google.com wrote:
 
  Yes, extensions that include NPAPI are a very small minority. Last
  time I checked there were something like 5. It is a way out for people
  who already have binary code that they would like to reuse, or who
  need to talk to the platform.
 
  I don't see what the big deal is about a few extensions only
  supporting a particular platform. As long as it is clear to users
  (you're right, we need to do this), I think this is ok.
 
  - a
 



 --
 Mike Pinkerton
 Mac Weenie
 pinker...@google.com



Re: [chromium-dev] revised output for run_webkit_tests

2009-12-11 Thread Dirk Pranke
On Fri, Dec 11, 2009 at 11:36 AM, Ojan Vafai o...@google.com wrote:
 I thought we had agreed on printing out any unexpected failures in
 real-time, no?
 Also, I do think it would be worthwhile to print each directory as it
 finishes. We're getting to the point where we shard all the big directories,
 so the largest shard takes 90 seconds (this is true on the mac release bot
 now!). So printing directories as they finish would actually give you decent
 insight into what tests have been run.

No, I don't think we had necessarily agreed to this, although we did
talk about it.

At any rate, I'm working on a webkit-style-output patch that should
get both of these things, and we can see which we like better.

-- Dirk



[chromium-dev] The plan for upstreaming all of our layout tests infrastructure

2009-12-15 Thread Dirk Pranke
Hi all,

A number of us have drawn up a plan for getting the rest of our layout
tests infrastructure upstreamed to webkit.org. You can read about it
here:

http://www.chromium.org/developers/design-documents/upstreaminglayouttests

In addition to making us resemble other webkit ports by doing this
(and definitely easing both webkit gardening and our ability to keep
our webkit port from breaking), we also hope to bring added features
to the webkit community.

We're tentatively hoping to get this all done by the end of Q1 2010.

Comments to me, Eric Seidel, Dimitry Glazkov, or the list :)

-- Dirk



Re: [chromium-dev] The plan for upstreaming all of our layout tests infrastructure

2009-12-15 Thread Dirk Pranke
Right. I didn't actually mean to imply that the tasks had to run in
the order specified. I've added a note to that effect, but if I find a
few minutes I'll add a proper Gantt chart ;)

-- Dirk

On Tue, Dec 15, 2009 at 5:29 PM, Ojan Vafai o...@google.com wrote:
 (3) forks run_webkit_tests.py upstream.
 (4) Make it possible to run the tests with TestShell while
 run_webkit_tests.py is upstream.
 (5) Change the chromium bots to use the upstream run_webkit_tests.py

 That all seems fine. But then we have to leave the forked copy in there
 until the last step (13) is done of making the other tools use the upstream
 copy. We should minimize as much as possible, the amount of time that we
 have these scripts forked. Can we move (13) to be (6)? Then we can delete
 the downstream run_webkit_tests.py.
 On Tue, Dec 15, 2009 at 5:18 PM, Dirk Pranke dpra...@chromium.org wrote:

 Hi all,

 A number of us have drawn up a plan for getting the rest of our layout
 tests infrastructure upstreamed to webkit.org. You can read about it
 here:

 http://www.chromium.org/developers/design-documents/upstreaminglayouttests

 In addition to making us resemble other webkit ports by doing this
 (and definitely easing both webkit gardening and our ability to keep
 our webkit port from breaking), we also hope to bring added features
 to the webkit community.

 We're tentatively hoping to get this all done by the end of Q1 2010.

 Comments to me, Eric Seidel, Dimitry Glazkov, or the list :)

 -- Dirk



[chromium-dev] all chrome/ LayoutTests have been upstreamed or removed

2009-12-18 Thread Dirk Pranke
I have either deleted or submitted patches to upstream all of the
remaining tests under chrome/ . There are a few that appear to be
platform-specific, but most weren't. Assuming they clear the review
process over the weekend, I intend to remove the chrome/ dir on
Monday and submit patches to run_webkit_tests to only handle tests
under LayoutTests/.

If you happen to be gardening in the meantime, just mark the tests as
SKIP and I will update the expectations and baselines as necessary.

-- Dirk



[chromium-dev] on not displaying the home page in a new tab or window

2010-01-11 Thread Dirk Pranke
As a side thread to the Core Principles / home button thread that
just went around, I have the following question:

Is it by design that if I click on a new tab or a new window, and I
have my preference set to open this page on the home page rather
than use the new tab page, we still show the new tab page?

I.e., we only show the home page when the browser is launched or when
I click on the home button?

If it is (and I can understand the logic that says that clicking on
new tab should display the new tab page), I have a feeling that this
might be really confusing to users:

http://code.google.com/p/chromium/issues/detail?id=25711
http://code.google.com/p/chromium/issues/detail?id=29637
http://code.google.com/p/chromium/issues/detail?id=28940

All look like examples of this. Personally I find it counterintuitive
as well, probably because I don't think any other browser does this.

There seem to be extensions (e.g.,
https://chrome.google.com/extensions/detail/jbnkijekempmdlleaimfelifcejbkmcd
, thanks Nico) that override this behavior, but that's not a
particularly discoverable solution for new users. (I'm not sure what
the answer is to fixing that particular problem).

Comments? Thoughts?

-- Dirk

[chromium-dev] changes to specifying test names (and test_expectations.txt) for Layout tests

2010-01-13 Thread Dirk Pranke
Hi all,

If you never run layout tests or modify test_expectations.txt, you can
stop reading.

As part of the process to upstream our layout test infrastructure, I
have just checked in a change that changes our test infrastructure to
conform to WebKit's in how we specify test names.

Specifically, instead of saying LayoutTests/fast/html/foo.html, you
now just say fast/html/foo.html. This affects specifying tests on
the command line to run_webkit_tests, and also specifying test names
in test_expectations.txt .

In the near future, we will also be moving all of the baselines from
src/webkit/data/layout_tests/platform/chromium-{mac,win,linux}/LayoutTests/*
to platform/chromium-{mac,win,linux}/*, again to match WebKit's
structure.
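
For scripts or notes that still use the old-style names, the conversion
is just a prefix strip (to_webkit_test_name is a hypothetical helper
shown for illustration, not part of the checked-in change):

```python
def to_webkit_test_name(test_name):
    """Strip the old 'LayoutTests/' prefix, leaving new-style names as-is."""
    prefix = 'LayoutTests/'
    if test_name.startswith(prefix):
        return test_name[len(prefix):]
    return test_name
```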

Two notes:

1) I believe the rebaselining tool is working correctly, but I'm not
100% sure. If you have any problems, let me know.

2) I may have just corrupted the data sets used by the flakiness
dashboard. I will be checking this with Ojan and (hopefully) fixing it
later this evening if I did.

Any questions or problems to me. Thanks!

-- Dirk

Re: [chromium-dev] WebKit Layout Tests page seems obsolete

2010-01-16 Thread Dirk Pranke
Hi,

That's something that changed a couple days ago in revision 36214. If
your build is older than that, you do need to put the Layout/ in
front.

-- Dirk

On Sat, Jan 16, 2010 at 12:27 AM, MORITA Hajime morr...@gmail.com wrote:
 Hi folks,
 I've tried to run a subset of webkit layout test using run_webkit_tests.sh,
 and found that the instruction at
 http://www.chromium.org/developers/testing/webkit-layout-tests
 seems obsolete.

 The doc says :
To run only some of the tests, specify their directories or filenames as 
arguments to run_webkit_tests.sh relative to the layout test directory 
(/trunk/third_party/WebKit/LayoutTests). For example, to run the fast form 
tests, use

$ ./run_webkit_tests.sh fast/forms

Or you could use

$ ./run_webkit_tests.sh fast/fo\*

 But at now we need such like:

 $ ./run_webkit_tests.sh LayoutTests/fast/forms

 It looks due to support for tests under chromium tree, but I'm not sure.
 Anyway, updating the document would be helpful.

 Thanks in advance.

 --
 morita


Re: [chromium-dev] Re: opening local files with chrome from command line, relative paths

2010-01-18 Thread Dirk Pranke
I'm not sure if we reached a conclusion on this, but I don't like a
couple of aspects of the Firefox logic (assuming I understood it
correctly).

The way I would look at it, the browser always has a current context
that indicates the current base to use for relative URIs. In the case
where you are launching the browser for the command line, the current
base = $CWD (and, implicitly, file://).

So, you then get the following algorithm:

1) if there is a ':' in the URI, you split the URI into scheme and
scheme-specific part.

2) If there is a scheme:

  2.1) If the scheme is a recognized/supported one, dispatch the URL
as you would normally.

  2.2) If scheme matches [a-zA-Z] and you are on Windows, treat as an
absolute local file URL

  2.3) Else, treat this as a syntax error in the URI

3) If there is no scheme:

  3.1) If the URI starts with a /, treat it as a full path relative
to the current context (e.g., current scheme, host, and port). If your
current context is a local filesystem, then treat it as a file://
scheme

  3.2) If the URI starts with a \, you're on Windows, and the
context is a local filesystem mount point, proceed as above, prepending
the current drive

  3.3) If the URI doesn't start with a / or a \, then, optionally,
check to see if the URI resolves to a valid hostname. This catches the
chrome.exe www.google.com use case

  3.4) If the URI doesn't resolve to a valid hostname, then interpret
it as a relative URL
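
The steps above can be sketched in Python (illustrative only:
classify_argument and KNOWN_SCHEMES are made-up names, and Chromium's
real fixup code is far more thorough; note also that the DNS probe in
step 3.3 can block on a slow network):

```python
import os
import re
import socket

KNOWN_SCHEMES = ('http', 'https', 'file', 'ftp', 'about', 'chrome')

def classify_argument(arg, cwd=None, is_windows=(os.name == 'nt')):
    """Return a (kind, value) pair describing how the argument is treated."""
    cwd = cwd or os.getcwd()
    # Steps 1-2: split off a scheme if there is a ':'.
    if ':' in arg:
        scheme, _ = arg.split(':', 1)
        if scheme.lower() in KNOWN_SCHEMES:
            return ('url', arg)                  # 2.1: dispatch normally
        if is_windows and re.match(r'^[a-zA-Z]$', scheme):
            return ('file', arg)                 # 2.2: drive-letter path
        return ('error', arg)                    # 2.3: bad scheme
    # Step 3: no scheme present.
    if arg.startswith('/'):
        return ('file', arg)                     # 3.1: absolute path
    if is_windows and arg.startswith('\\'):
        return ('file', arg)                     # 3.2: current-drive path
    try:
        socket.gethostbyname(arg)                # 3.3: resolves as a host?
        return ('url', 'http://' + arg)
    except (socket.error, UnicodeError):
        pass
    return ('file', os.path.join(cwd, arg))      # 3.4: relative local file
```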

I think this is mostly the same as the FF algorithm, except for 3.3. I
agree that trying the local then remote logic is probably going to
lead to weird and/or unintended consequences. I could be convinced to
omit 3.3, but I don't see any real risks here. The worst case is that
you'd have to specify ./www.google.com to open a relative local file
called www.google.com, but that's a pretty obscure corner case.

I wouldn't add the -url switch, since it is actually misleadingly
named (relative URLs are URLs).

-- Dirk

On Mon, Jan 11, 2010 at 2:23 PM, Benjamin Smedberg bsmedb...@gmail.com wrote:
 For what it's worth, the way Firefox solves this is:

 * Check if the file is an absolute file path
 ** on Windows, X:\... or \\...
 ** on Posix, /...
 * Otherwise, it's a URL relative to the current working directory
 ** So index.html resolves using the URL machinery to
 file:///c:/cwd/index.html
 ** while http://www.google.com resolves to itself

 This doesn't deal with the case firefox.exe www.google.com (which would try
 to resolve as a file), but we decided not to care about this case. We do
 have the explicit firefox.exe -url www.google.com which will perform URI
 fixup to guess the correct URL.

 --BDS



Re: [chromium-dev] Re: opening local files with chrome from command line, relative paths

2010-01-18 Thread Dirk Pranke
On Mon, Jan 18, 2010 at 7:16 PM, Peter Kasting pkast...@google.com wrote:
 On Mon, Jan 18, 2010 at 4:54 PM, Dirk Pranke dpra...@chromium.org wrote:

 So, you then get the following algorithm:

 1) if there is a ':' in the URI, you split the URI into scheme and
 scheme-specific part.

 2) If there is a scheme:

  2.1) If the scheme is a recognized/supported one, dispatch the URL
 as you would normally.

  2.2) If scheme matches [a-zA-Z] and you are on Windows, treat as an
 absolute local file URL

  2.3) Else, treat this as a syntax error in the URI

 3) If there is no scheme:

  3.1) If the URI starts with a /, treat it as a full path relative
 to the current context (e.g., current scheme, host and port. If your
 current context is a local filesystem, then treat it as a file://
 scheme

  3.2) If the URI starts with a \, you're on Windows, and the
 context is a local file system point, repeat as above, prepending with
 the current drive

  3.3) If the URI doesn't start with a / or a \, then, optionally,
 check to see if the URI resolves to a valid hostname. This catches the
 chrome.exe www.google.com use case

  3.4) If the URI doesn't resolve to a valid hostname, then interpret
 it as a relative URL

 I'd pretty strongly like to not specify steps like these.  They duplicate
 code we already have for interpreting user input, except with less fidelity.
  We have quite sophisticated heuristics for how to figure out the correct
 scheme, parse drive letters, UNC paths, etc.  We don't need to write another
 set.
 I also don't really care about trying to fix up www.google.com; if it
 falls out of the existing code, fine, but I wouldn't bother spending time on
 it.  I'm definitely opposed to doing anything like try DNS resolution on X
 and fall back to Y since it makes the results unpredictable based on your
 network.  What if the user specifies a filename that's also an intranet
 host?  What if the user says www.google.com but the network is currently
 down -- does the browser show a couldn't open local file www.google.com
 error?  etc.
 It's enough to simply say run the input through fixup, supplying the CWD as
 the relative file path base.  We have code that can basically take it from
 there.

This sounds fine, although I'd be curious to see where the logic in
fixup differs from the above. I certainly agree that we don't want an
additional code path.

I would perhaps amend resolve to a valid hostname to looks like a
hostname, although I don't know what heuristics we use for that (if
any other than passing it to name resolution).

 PK