[chromium-dev] Re: Tab Thumbnails and Aero Peek (of Windows 7)

2009-10-23 Thread Brett Wilson

2009/10/23 Hironori Bono (坊野 博典) hb...@google.com:
 Hi Brett,

 Thank you so much for noticing this. I'm integrating the
 ThumbnailGenerator class into my prototype now. :)
 By the way, I'm wondering whether there is a function that changes
 the width and the height of a thumbnail image generated by the
 ThumbnailGenerator class at run time. If not, I would like to hear
 whether there is a plan to add it.
 When Windows 7 shows tab thumbnails, it automatically calculates the
 width and the height of a thumbnail image so that it can show all
 thumbnails in one line. (When we open too many tabs and Windows 7
 cannot show their thumbnails, it shows a list of tab titles.) In
 brief, we cannot assume the width or the height of a thumbnail as
 constant values on Windows 7.
 So, it may be better for the ThumbnailGenerator class to have an
 interface that changes the width and the height of its thumbnails. (I
 have added code that resizes the thumbnails retrieved from the
 ThumbnailGenerator object and sends the resized thumbnails to Windows
 7, so I don't mind it at all.)
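
For reference, the Windows 7 side of what Hironori describes (handing
DWM a bitmap at whatever size it asks for) looks roughly like the sketch
below. This assumes the iconic-thumbnail API from dwmapi.h; ResizeToFit()
is a hypothetical helper, not Chromium code, and the HIWORD/LOWORD
decoding of the requested size should be checked against the
WM_DWMSENDICONICTHUMBNAIL documentation.

// Sketch only: supplying DWM with a tab thumbnail at the size it
// requests. Assumes the dwmapi.h iconic-thumbnail API (Windows 7+).

#include <windows.h>
#include <dwmapi.h>  // link against dwmapi.lib

// Hypothetical helper (not shown): scales |source| to fit within
// max_width x max_height, preserving the aspect ratio.
HBITMAP ResizeToFit(HBITMAP source, int max_width, int max_height);

// Tell DWM that we will supply static bitmaps for this window's
// thumbnail instead of letting it snapshot the window contents.
void EnableIconicThumbnails(HWND hwnd) {
  BOOL enable = TRUE;
  DwmSetWindowAttribute(hwnd, DWMWA_FORCE_ICONIC_REPRESENTATION,
                        &enable, sizeof(enable));
  DwmSetWindowAttribute(hwnd, DWMWA_HAS_ICONIC_BITMAP,
                        &enable, sizeof(enable));
}

// Called from the window procedure on WM_DWMSENDICONICTHUMBNAIL:
// DWM passes the maximum size it can display, so the bitmap has to be
// scaled per request rather than cached at one fixed size.
LRESULT OnDwmSendIconicThumbnail(HWND hwnd, LPARAM lparam,
                                 HBITMAP source_thumbnail) {
  int max_width = HIWORD(lparam);
  int max_height = LOWORD(lparam);
  HBITMAP scaled = ResizeToFit(source_thumbnail, max_width, max_height);
  DwmSetIconicThumbnail(hwnd, scaled, 0);
  DeleteObject(scaled);  // DWM copies the bitmap; we own this handle
  return 0;
}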

There isn't a way to do this. The reason is that the thumbnail
generator precalculates thumbnails in advance for many pages (for
example, when we're discarding the backing store). This means we
basically have to know, when we start Chrome, what the max thumbnail
size will be. It uses power-of-two downsampling to be as fast as possible,
so the actual thumbnails aren't guaranteed to be any particular size.
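
In other words, the generator just keeps halving the bitmap until it
fits under the cap, so the result lands on whatever power-of-two
fraction fits first. A stand-alone sketch of the idea (an illustration,
not the actual ThumbnailGenerator code):

// Sketch of power-of-two downsampling: repeatedly halve the image by
// averaging 2x2 blocks until both dimensions fit within the target.
// Because each step halves, the output is whatever power-of-two
// fraction first fits -- not the exact requested size. (An odd trailing
// row/column is dropped for brevity.)
#include <cstdint>
#include <vector>

struct Image {
  int width = 0;
  int height = 0;
  std::vector<uint32_t> pixels;  // ARGB, row-major, width * height.
};

Image HalveImage(const Image& src) {
  Image dst;
  dst.width = src.width / 2;
  dst.height = src.height / 2;
  dst.pixels.resize(static_cast<size_t>(dst.width) * dst.height);
  for (int y = 0; y < dst.height; ++y) {
    for (int x = 0; x < dst.width; ++x) {
      // Average the four source pixels, one 8-bit channel at a time.
      uint32_t sum[4] = {0, 0, 0, 0};
      for (int dy = 0; dy < 2; ++dy) {
        for (int dx = 0; dx < 2; ++dx) {
          uint32_t p = src.pixels[(2 * y + dy) * src.width + (2 * x + dx)];
          for (int c = 0; c < 4; ++c)
            sum[c] += (p >> (8 * c)) & 0xFF;
        }
      }
      uint32_t out = 0;
      for (int c = 0; c < 4; ++c)
        out |= ((sum[c] / 4) & 0xFF) << (8 * c);
      dst.pixels[y * dst.width + x] = out;
    }
  }
  return dst;
}

Image DownsamplePowerOfTwo(Image img, int max_width, int max_height) {
  while ((img.width > max_width || img.height > max_height) &&
         img.width >= 2 && img.height >= 2) {
    img = HalveImage(img);
  }
  return img;
}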

I would just use the default size for now. If it doesn't look OK, we
can add a future enhancement to request the size, so the ones it
generates on-demand (the ones where there is a backing store) can have
a larger size if needed.

Brett




[chromium-dev] Re: Tab Thumbnails and Aero Peek (of Windows 7)

2009-10-23 Thread Mike Pinkerton

All the screenshots show this with a single window. What happens if
you have multiple windows open? Does it only show the selected window?

As an alternate suggestion, we have a mode in Camino 2 called
"tabspose" which is like OS X Exposé, but it shows the (large) thumbnails
of all open tabs overlaying the content area of the browser. It allows
much better visualization of the tabs because you're not limited to
small squares but can use as much of the window as is available.

This does involve interacting with the window already, so it doesn't
work until you have the window in the foreground, but I wanted to
bring it up as an idea.

See http://www.flickr.com/photos/spaunsglo/987375693/ for an early
screenshot (it currently looks more polished, but same idea).


-- 
Mike Pinkerton
Mac Weenie
pinker...@google.com




[chromium-dev] Chrome Layout Tests Task Force progress update

2009-10-23 Thread Jeffrey Chang
*Failing Test Count*

The number of failing WebKit layout tests (WinXP) has been reduced to *622*.
(We had ~800 when we started keeping track.)


*LTTF Dashboard*

dpranke@ has put together a nice dashboard
(http://chromiumlttf.appspot.com/) showing some important metrics, along
with corresponding issue tracker links. These include:

   - flaky/fail/skip test counts
   - tests with invalid bug ids
   - non-LayoutTests-labeled bugs
   - other good info

Check it out at: http://chromiumlttf.appspot.com/.
(Click on each line to expand.)

*Recent Work*

http://tinyurl.com/lttfstatus has the details, but some of the things being
worked on include:

   - getClientRects
   - accessibilityController
   - CSS2.1 test suites
   - drag-related tests
   - EventSendingController
   - setTimeout
   - PlainTextController
   - canvas
   - Skia
   - paste-xml
   - type=search input



-- Jeff, on behalf of the LTTF




[chromium-dev] Flaky layout tests and WebKit Linux (dbg)(3)

2009-10-23 Thread Andrew Scherkus
I've been trying to get the media layout tests passing consistently, but
WebKit Linux (dbg)(3) takes absurdly longer to run tests than the other
bots, and I don't know why.
For example:
http://src.chromium.org/viewvc/chrome/trunk/src/webkit/tools/layout_tests/flakiness_dashboard.html#tests=video-played

To keep the tree green (and collect data), I've marked all media layout
tests on Linux debug as pass/fail/timeout.  My hope is that if the bot were
less bogged down, it would lead to faster build times (GTTF) and less flaky
results/timeouts (LTTF).

Any ideas?
Andrew




[chromium-dev] Re: Flaky layout tests and WebKit Linux (dbg)(3)

2009-10-23 Thread Andrew Scherkus
I've never witnessed these tests taking an extra 10-20 seconds on my local
machine, no.

I don't doubt that some of the tests might be flaky themselves, but that
machine does run tests slower.  Take a look at the SVG tests, for example:
http://src.chromium.org/viewvc/chrome/trunk/src/webkit/tools/layout_tests/flakiness_dashboard.html#tests=LayoutTests%2Fsvg

Are there other tricks I can do on my local machine to simulate running on
the bots?  I usually try to test for these things by maxing out my CPU and
then running layout tests, but even then they run smoothly.

Andrew

On Fri, Oct 23, 2009 at 12:02 PM, Nicolas Sylvain nsylv...@chromium.org wrote:

 This machine is supposed to be fast.

 Are you saying that this flakiness never happens on your machine?

 Are you sure the bot is really to blame here?

 Nicolas





[chromium-dev] How to use PauseRequest

2009-10-23 Thread Paweł Hajdan Jr.
I'm going to use PauseRequest for privacy blacklists. It seems that I should
create a new ResourceHandler, and resource handlers seem to wrap other
resource handlers. Then I'd have to add code to use the new ResourceHandler
in ResourceDispatcherHost.
I'd need to write a ResourceHandler which would pause any requests until the
BlacklistManager has loaded its blacklist (I can get a notification for
that; that's the easy part). After receiving the notification, I'd un-pause
all pending requests. What's a good way to write such a ResourceHandler?
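
A minimal sketch of that wrapping pattern, using a simplified stand-in
interface; the real Chromium ResourceHandler has a larger API with
different signatures, and BlacklistPauseHandler is a hypothetical name:

// Sketch only: a handler that wraps another handler and holds back
// "start" calls until the blacklist is ready. The ResourceHandler
// interface shown here is a simplified stand-in, not Chromium's class.
#include <memory>
#include <vector>

class ResourceHandler {  // simplified stand-in
 public:
  virtual ~ResourceHandler() {}
  virtual bool OnWillStart(int request_id) = 0;
};

class BlacklistPauseHandler : public ResourceHandler {
 public:
  explicit BlacklistPauseHandler(std::unique_ptr<ResourceHandler> next)
      : next_(std::move(next)) {}

  bool OnWillStart(int request_id) override {
    if (!blacklist_loaded_) {
      // Blacklist not ready yet: remember the request and report
      // "deferred" instead of forwarding it down the chain.
      deferred_.push_back(request_id);
      return false;  // caller treats false as "paused"
    }
    return next_->OnWillStart(request_id);
  }

  // Hooked up to the BlacklistManager-loaded notification.
  void OnBlacklistLoaded() {
    blacklist_loaded_ = true;
    std::vector<int> pending;
    pending.swap(deferred_);
    for (int id : pending)
      next_->OnWillStart(id);  // un-pause everything we held back
  }

 private:
  std::unique_ptr<ResourceHandler> next_;
  bool blacklist_loaded_ = false;
  std::vector<int> deferred_;
};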




[chromium-dev] Re: How to use PauseRequest

2009-10-23 Thread John Abd-El-Malek
Check out BufferedResourceHandler; it pauses requests until plugins are
loaded (it needs to know which MIME types are available).






[chromium-dev] Do we care about HttpBrowserCapabilities (ASP.Net)

2009-10-23 Thread cpu

ASP.NET pages have a class named HttpBrowserCapabilities that
returns what the name implies. If you go to the URL below using Chrome,
you'll see an echo (in column 3) of what it thinks of your browser:

http://www.on-the-matrix.com/webtools/HttpBrowserCapabilities.aspx

Is there anything you see there that worries you? I see some strange
things.

Do we care about this? "Care" as in crafting a test to detect
regressions?




[chromium-dev] Re: Flaky layout tests and WebKit Linux (dbg)(3)

2009-10-23 Thread Andrew Scherkus
On Fri, Oct 23, 2009 at 12:28 PM, Nicolas Sylvain nsylv...@chromium.org wrote:



 This machine is one of the oldest Linux machines we have in the lab. I'll
 recreate it, make sure it runs fast, and see if it helps.


Thanks Nicolas.

I'm wondering if it's slow disk access... know of any simple commands I can
use to simulate disk thrashing?
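
For what it's worth, one crude way to fake it is a small loop that keeps
writing and syncing random blocks in a scratch file while the tests run.
A sketch, assuming a Linux bot with POSIX I/O; the 512 MB size and /tmp
path are arbitrary:

// Sketch: crude disk-thrash generator for a Linux box. Writes random
// 4 KB blocks at random offsets in a scratch file and fsyncs each one,
// forcing the writes to actually hit the disk while tests run alongside.
#include <fcntl.h>
#include <unistd.h>
#include <cstdlib>
#include <cstring>

int main() {
  const off_t kFileSize = 512LL * 1024 * 1024;  // 512 MB scratch file
  const size_t kBlock = 4096;
  const long kBlocks = kFileSize / kBlock;
  int fd = open("/tmp/thrash.dat", O_RDWR | O_CREAT, 0600);
  if (fd < 0)
    return 1;
  if (ftruncate(fd, kFileSize) != 0)
    return 1;
  char block[kBlock];
  memset(block, 0xAB, sizeof(block));
  for (;;) {  // run until killed
    off_t offset = static_cast<off_t>(rand() % kBlocks) * kBlock;
    if (pwrite(fd, block, sizeof(block), offset) < 0)
      break;
    fsync(fd);  // flush each block so the disk keeps seeking
  }
  close(fd);
  return 0;
}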








[chromium-dev] Re: Do we care about HttpBrowserCapabilities (ASP.Net)

2009-10-23 Thread cpu

My unsubstantiated gut feeling is that sites might be using it without
knowing it. In ASP.NET you deal with higher-level objects, say datagrids,
trees, and pagers; you don't care what HTML they generate.

This fear stems from seeing that the class sports a factory-like method,
CreateHtmlTextWriter(). The documentation for HtmlTextWriter says:

The HtmlTextWriter class is used to render HTML 4.0 to desktop
browsers. The HtmlTextWriter is also the base class for all markup
writers in the System.Web.UI namespace, including the ChtmlTextWriter,
Html32TextWriter, and XhtmlTextWriter classes. These classes are used
to write the elements, attributes, and style and layout information
for different types of markup. In addition, these classes are used by
the page and control adapter classes that are associated with each
markup language.

In most circumstances, ASP.NET automatically uses the appropriate
writer for the requesting device. However, if you create a custom text
writer or if you want to specify a particular writer to render a page
for a specific device, you must map the writer to the page in the
controlAdapters section of the application .browser file.


On Oct 23, 2:06 pm, Peter Kasting pkast...@chromium.org wrote:
 On Fri, Oct 23, 2009 at 2:04 PM, cpu c...@chromium.org wrote:
  Do we care about this? "Care" as in crafting a test to detect
  regressions?

 How much we care is probably directly proportional to how much real web
 developers use this.

 PK



[chromium-dev] Re: How to use PauseRequest

2009-10-23 Thread Ricardo Vargas
SafeBrowsingResourceHandler may be even closer to what you want.







[chromium-dev] revising the output from run_webkit_tests

2009-10-23 Thread Dirk Pranke

If you've never run run_webkit_tests to run the layout test
regression, or don't care about it, you can stop reading ...

If you have run it, and you're like me, you've probably wondered a lot
about the output ... questions like:

1) what do the numbers printed at the beginning of the test mean?
2) what do all of these test failed messages mean, and are they bad?
3) what do the numbers printed at the end of the test mean?
4) why are the numbers at the end different from the numbers at the beginning?
5) did my regression run cleanly, or not?

You may have also wondered a couple of other things:
6) What do we expect this test to do?
7) Where is the baseline for this test?
8) What is the baseline search path for this test?

Having just spent a week trying (again) to reconcile the numbers I'm
getting on the LTTF dashboard with what we print out in the test, I'm
thinking about drastically revising the output from the script,
roughly as follows:

* print the information needed to reproduce the test and look at the results
* print the expected results in summary form (roughly the expanded
version of the first table in the dashboard: # of tests by
wontfix/fix/defer x pass/fail/flaky)
* don't print out failure text to the screen during the run
* print out any *unexpected* results at the end (like we do today)

The goal would be that if all of your tests pass, you get less than a
small screenful of output from running the tests.

In addition, we would record a full log of (test, expectation, result)
to the results directory (and this would also be available onscreen
with --verbose).

Lastly, I'll add a flag to re-run the tests that just failed, so it's
easy to test if the failures were flaky.

Then I'll rip out as much of the set logic in test_expectations.py as
we can possibly get away with, so that no one has to spend the week I
just did again. I'll probably replace it with much of the logic I use
to generate the dashboard, which is much more flexible in terms of
extracting different types of queries and numbers.

I think the net result will be the same level of information that we
get today, just in much more meaningful form.

Thoughts? Comments? Is anyone particularly wedded to the existing
output, or worried about losing a particular piece of info?

-- Dirk




[chromium-dev] Re: revising the output from run_webkit_tests

2009-10-23 Thread Ojan Vafai
Can you give example outputs for the common cases? It would be easier to
discuss those.






[chromium-dev] Re: revising the output from run_webkit_tests

2009-10-23 Thread Nicolas Sylvain
On Fri, Oct 23, 2009 at 3:43 PM, Dirk Pranke dpra...@chromium.org wrote:

 Lastly, I'll add a flag to re-run the tests that just failed, so it's
 easy to test if the failures were flaky.

This would be nice for the buildbots. We would also need to add a new
section in the results for "Unexpected Flaky Tests" (failed then passed).

Nicolas



