Re: [webkit-dev] Buildbot Performance

2010-10-19 Thread Eric Seidel
That does not sound expected or desired.  Could you point me to which
Chromium builders are responsible for so much data?

I suspect this is an artifact of new-run-webkit-tests or how the
Chromium builders are set up.

On Tue, Oct 19, 2010 at 9:12 AM, William Siegrist wsiegr...@apple.com wrote:
 Right now, /results/ is served from the new storage and is receiving test 
 results data since a day or two ago. For anything older, you will get 
 redirected to /old-results/ which is on the old storage. This probably breaks 
 your code if you are trying to load /results/ and walk backwards in 
 revisions. We should probably look at adding some sort of map to the 
 /json/builders/ data instead.

 On a side note, Chromium test results account for 75% of the 700GB of result 
 data, SnowLeopard is 11%, then everyone else. I assume Chromium generating so 
 much more data than everyone else is expected and desired?

 -Bill
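Bill's suggestion of a map in the /json/builders/ data could look roughly like the sketch below. The URL shape (/json/builders/&lt;name&gt;/builds/&lt;n&gt;) and the "sourceStamp" field follow buildbot's JSON status API of that era, but treat both as assumptions rather than a confirmed interface:

```python
import json
import urllib.parse
import urllib.request

# Sketch only: resolve a revision to a build via buildbot's JSON status
# API instead of scraping /results/ directory listings. URL layout and
# field names are assumptions based on buildbot's JSON API.
BASE = "http://build.webkit.org/json/builders"

def revision_of(build):
    """Pull the SVN revision out of one build's JSON status dict."""
    return int(build["sourceStamp"]["revision"])

def fetch_build(builder, number):
    url = "%s/%s/builds/%d" % (BASE, urllib.parse.quote(builder), number)
    return json.load(urllib.request.urlopen(url))

def build_for_revision(builder, revision, latest_build):
    """Walk backwards from latest_build to the first build at or before
    the given revision (None if it has scrolled off the server)."""
    for number in range(latest_build, -1, -1):
        if revision_of(fetch_build(builder, number)) <= revision:
            return number
    return None
```

This trades one HTTP request per build for parsing HTML listings; a single JSON endpoint returning the whole map would be cheaper still.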




 On Oct 18, 2010, at 5:04 PM, Eric Seidel wrote:

 The most frequent consumer of the historical data is webkit-patch,
 which uses it to map from revisions to builds:
 http://trac.webkit.org/browser/trunk/WebKitTools/Scripts/webkitpy/common/net/buildbot.py#L109

 It's used when we're walking back through revisions trying to find
 when the build broke, or when the user passes us a revision and
 expects us to know build information about it.

 It's possible we could move off that map with some re-design.
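The map Eric describes boils down to parsing each builder's /results/ listing, which has one directory per build named with the revision and build number, e.g. "r70000 (12345)". That naming is an assumption here; the real parser lives in the buildbot.py file linked above:

```python
import re

# Toy version of the build_to_revision_map walk: parse results directory
# names of the (assumed) form "r<revision> (<build number>)".
_RESULTS_DIR = re.compile(r"^r(?P<revision>\d+) \((?P<build>\d+)\)$")

def revision_and_build(dirname):
    """Map one results directory name to (revision, build_number),
    or None for entries that are not build directories."""
    match = _RESULTS_DIR.match(dirname)
    if not match:
        return None
    return int(match.group("revision")), int(match.group("build"))

def build_to_revision_map(dirnames):
    """Invert a listing into {build_number: revision}."""
    mapping = {}
    for name in dirnames:
        parsed = revision_and_build(name)
        if parsed:
            mapping[parsed[1]] = parsed[0]
    return mapping
```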


 One thing which would *hugely* speed up webkit-patch failure-reason
 (and sheriff-bot, and other commands which use the
 build_to_revision_map) is if we could make the results/ pages
 paginated.  :)


 It would be nice to keep all the build data forever, even if everything
 past some date lives on a slower server.

 -eric



___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Buildbot Performance

2010-10-19 Thread Ryosuke Niwa
On Tue, Oct 19, 2010 at 9:20 AM, Ryosuke Niwa ryosuke.n...@gmail.com wrote:

 Isn't this due to the fact that Chromium bots run pixel tests and others
 don't?

 - Ryosuke




Re: [webkit-dev] Buildbot Performance

2010-10-19 Thread Dimitri Glazkov
I would venture to guess that's probably it.

:DG

On Tue, Oct 19, 2010 at 9:21 AM, Ryosuke Niwa rn...@webkit.org wrote:



Re: [webkit-dev] Buildbot Performance

2010-10-19 Thread William Siegrist
Linux, Mac, and Win Release Tests. Note the size of the ZIP files at the bottom 
of these pages:

http://build.webkit.org/old-results/Chromium%20Linux%20Release%20%28Tests%29/

versus:

http://build.webkit.org/old-results/SnowLeopard%20Intel%20Leaks/


If it's useful data, I'll store it for you. I have plenty of disk space. It's 
just that when the directory listing gets so large, accessing and iterating it 
gets slower, and the larger disk array is not our fastest media.

-Bill
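Bill's per-builder breakdown (Chromium 75%, SnowLeopard 11%) can be reproduced with a short walk over the results tree; paths here are hypothetical, point it at the real results root:

```python
import os

def bytes_under(path):
    """Total size of all files under a directory tree."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(path):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished mid-walk; ignore it
    return total

def usage_by_builder(results_root):
    """Return [(builder_name, bytes)] sorted biggest-first."""
    usage = []
    for entry in os.listdir(results_root):
        full = os.path.join(results_root, entry)
        if os.path.isdir(full):
            usage.append((entry, bytes_under(full)))
    return sorted(usage, key=lambda pair: pair[1], reverse=True)
```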



On Oct 19, 2010, at 9:17 AM, Eric Seidel wrote:



Re: [webkit-dev] Buildbot Performance

2010-10-19 Thread Ojan Vafai
It's the pixel tests combined with the fact that Chromium runs some tests 
that are expected to fail. As the number of tests Chromium fails (and runs at 
all) shrinks, the storage growth will shrink as well.

Do we store results for tests that are marked WONTFIX? We probably
shouldn't. IMO we shouldn't even run them.

Ojan

On Tue, Oct 19, 2010 at 9:46 AM, Dimitri Glazkov dglaz...@chromium.orgwrote:


Re: [webkit-dev] Buildbot Performance

2010-10-16 Thread William Siegrist
On Oct 14, 2010, at 10:13 AM, William Siegrist wrote:



Most of build.webkit.org is now running on the newer/faster storage. However, 
the results data[1] is hundreds of gigabytes, going back 6 months, and the new 
storage is not big enough. Does anyone have any opinion on how much data to 
keep in results? Does anyone ever look back more than a month or two? For now, 
the results will still come up slowly, but hopefully the rest of buildbot is 
a little more responsive. We're still planning to move all of webkit.org to 
better hardware soon, but we hit some delays in that process.

[1] http://build.webkit.org/results/

Thanks
-Bill
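One possible answer to Bill's retention question is a pruning pass that drops build results older than some cutoff. This is a sketch of a policy nobody has agreed on, not an actual tool; the dry_run default guards against accidents:

```python
import os
import shutil
import time

def prune_old_results(results_root, max_age_days, dry_run=True):
    """List (and, with dry_run=False, delete) build result directories
    whose mtime is older than the cutoff. Layout assumed:
    results_root/<builder>/<build dir>."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for builder in os.listdir(results_root):
        builder_dir = os.path.join(results_root, builder)
        if not os.path.isdir(builder_dir):
            continue
        for build in os.listdir(builder_dir):
            build_dir = os.path.join(builder_dir, build)
            if os.path.getmtime(build_dir) < cutoff:
                removed.append(build_dir)
                if not dry_run:
                    shutil.rmtree(build_dir)
    return removed
```

Run with dry_run=True first to see what a given cutoff would discard before committing to it.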



[webkit-dev] Buildbot Performance

2010-10-14 Thread William Siegrist
I am in the process of moving buildbot onto faster storage which should help 
with performance. However, during the move, performance will be even worse due 
to the extra i/o. There will be a downtime period in the next few days to do 
the final switchover, but I won't know when that will be until the preliminary 
copying is done. I am trying not to kill the master completely, but there have 
been some slave disconnects due to the load already this morning. I'll let 
everyone know when the downtime will be once I know. 

Thanks,
-Bill


Re: [webkit-dev] Buildbot Performance

2010-10-14 Thread William Siegrist
On Oct 14, 2010, at 9:27 AM, William Siegrist wrote:


The copying of data will take days at the rate we're going, and the server is 
exhibiting some strange memory paging in the process. I am going to reboot the 
server and try copying with the buildbot master down. The master will be down 
for about 15 minutes; if I can't get the copy done in that time I will 
schedule a longer downtime at a better time. Sorry for the churn.

-Bill


