[chromium-dev] Re: Throttling URLRequestFileJobs rate?

2009-06-12 Thread Dan Kegel

On Thu, Jun 11, 2009 at 10:24 PM, Stuart Morgan
stuartmor...@chromium.org wrote:
 Also, 256 is a pretty low limit.

 Dialing it up a few notches (say to 1024) to improve the performance
 of a better overall solution certainly isn't an issue.

Don't you need root for that?
- Dan




[chromium-dev] Re: Throttling URLRequestFileJobs rate?

2009-06-12 Thread Stuart Morgan

On Fri, Jun 12, 2009 at 6:59 AM, Dan Kegel daniel.r.ke...@gmail.com wrote:
 Don't you need root for that?

Changing the soft limit doesn't require root, and there's apparently
no hard limit at all on file descriptors (at least as of Leopard; I
don't think that was always true).
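For reference, raising the soft limit from inside the process is just a
getrlimit/setrlimit pair. A minimal sketch (the helper name is only for
illustration, and the exact cap we'd pick is a separate question):

#include <sys/resource.h>
#include <algorithm>
#if defined(__APPLE__)
#include <limits.h>  // OPEN_MAX
#endif

// Sketch only: raise the soft RLIMIT_NOFILE toward |desired|, capped at
// the hard limit. Raising the *hard* limit is what would need root.
bool RaiseSoftFDLimit(rlim_t desired) {
  struct rlimit limits;
  if (getrlimit(RLIMIT_NOFILE, &limits) != 0)
    return false;
  rlim_t new_soft = desired;
  if (limits.rlim_max != RLIM_INFINITY)
    new_soft = std::min(new_soft, limits.rlim_max);
#if defined(__APPLE__)
  // On the Mac, setrlimit rejects soft values above OPEN_MAX.
  new_soft = std::min(new_soft, static_cast<rlim_t>(OPEN_MAX));
#endif
  limits.rlim_cur = new_soft;
  return setrlimit(RLIMIT_NOFILE, &limits) == 0;
}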

-Stuart




[chromium-dev] Re: Throttling URLRequestFileJobs rate?

2009-06-11 Thread Scott Hess

In my experience, taking code which assumes a low number of file
descriptors and just ramping up the file descriptor limits to
accommodate a particular case doesn't work out well.  You end up
finding out that there are three or four other edge cases that cause
problems: things like O(N^2) code paths, or other places where people
assume there will only be 2 to 4 file descriptors.  If we were some
other kind of program, like a database server, we'd darn well better
make it work.  But we aren't that kind of program.

-scott


On Thu, Jun 11, 2009 at 5:23 PM, Jeremy Orlow jor...@google.com wrote:
 On Thu, Jun 11, 2009 at 5:13 PM, Stuart Morgan stuartmor...@chromium.org
 wrote:

 I'm working on getting the Intl2 test set from the page cycler up and
 running on the Mac, which currently crashes very quickly. It turns out
 that one of the test pages has hundreds of images on it, and we
 simultaneously make hundreds of URLRequestFileJobs to load them. Each
 of those uses a SharedMemory for communication, each of which requires
 a file descriptor. This test page generates enough requests at once
 that we blow out the file descriptor limit (which defaults to 256 on
 the Mac) and fall apart.

 It's tempting to say that we should just
  a) bump up the limit, and
  b) make failure to create a SharedMemory non-fatal
 At least some degree of b) is probably a good idea, but it's not
 entirely clear that we *want* all the layers involved to silently
 accept failure. Even if we do, local pages with more images than
 whatever limit we set in a) won't load correctly, and making that
 limit too high can get ugly.

 A seemingly better option would be to limit the number of simultaneous
 URLRequestFileJobs we will allow.

 Personally, this seems like the only sane way to do it.  Even if you bump the
 limits, you will hit pretty major slowdowns in most OSes (the last time I saw
 anyone try).


 I assume we have plumbing in place
 to deal with limiting the number of simultaneous URLRequestJobs we
 make per server; is it flexible enough that it could be extended to
 handle file URLs as well? If so, is there any reason that would be a
 bad idea? (And can someone point me to the relevant code?)

 -Stuart




 





[chromium-dev] Re: Throttling URLRequestFileJobs rate?

2009-06-11 Thread Aaron Boodman

On Thu, Jun 11, 2009 at 5:13 PM, Stuart Morgan stuartmor...@chromium.org wrote:

 I'm working on getting the Intl2 test set from the page cycler up and
 running on the Mac, which currently crashes very quickly. It turns out
 that one of the test pages has hundreds of images on it, and we
 simultaneously make hundreds of URLRequestFileJobs to load them. Each
 of those uses a SharedMemory for communication, each of which requires
 a file descriptor. This test page generates enough requests at once
 that we blow out the file descriptor limit (which defaults to 256 on
 the Mac) and fall apart.

 It's tempting to say that we should just
  a) bump up the limit, and
  b) make failure to create a SharedMemory non-fatal
 At least some degree of b) is probably a good idea, but it's not
 entirely clear that we *want* all the layers involved to silently
 accept failure. Even if we do, local pages with more images than
 whatever limit we set in a) won't load correctly, and making that
 limit too high can get ugly.

FYI, the extension system uses URLRequestFileJob extensively. So I
don't think any solution that could lead to silent failures is
acceptable. Rate-limiting seems better.
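
Something along these lines, say (purely illustrative, hypothetical names,
not the existing plumbing; just to show the shape of a cap on in-flight
file jobs, assuming it's only ever touched from the IO thread):

#include <cstddef>
#include <functional>
#include <queue>

class FileJobThrottle {
 public:
  explicit FileJobThrottle(size_t max_in_flight)
      : max_in_flight_(max_in_flight), in_flight_(0) {}

  // A URLRequestFileJob asks to start; if we're at the cap, queue it.
  void StartOrQueue(std::function<void()> start_job) {
    if (in_flight_ < max_in_flight_) {
      ++in_flight_;
      start_job();
    } else {
      pending_.push(start_job);
    }
  }

  // A job finished and released its SharedMemory (and file descriptor),
  // so a queued job can take the freed slot.
  void JobDone() {
    if (!pending_.empty()) {
      std::function<void()> next = pending_.front();
      pending_.pop();
      next();
    } else {
      --in_flight_;
    }
  }

 private:
  const size_t max_in_flight_;
  size_t in_flight_;
  std::queue<std::function<void()> > pending_;
};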

- a




[chromium-dev] Re: Throttling URLRequestFileJobs rate?

2009-06-11 Thread Darin Fisher
On Thu, Jun 11, 2009 at 5:13 PM, Stuart Morgan stuartmor...@chromium.org wrote:


 I'm working on getting the Intl2 test set from the page cycler up and
 running on the Mac, which currently crashes very quickly. It turns out
 that one of the test pages has hundreds of images on it, and we
 simultaneously make hundreds of URLRequestFileJobs to load them. Each
 of those uses a SharedMemory for communication, each of which requires
 a file descriptor. This test page generates enough requests at once
 that we blow out the file descriptor limit (which defaults to 256 on
 the Mac) and fall apart.

 It's tempting to say that we should just
  a) bump up the limit, and
  b) make failure to create a SharedMemory non-fatal
 At least some degree of b) is probably a good idea, but it's not
 entirely clear that we *want* all the layers involved to silently
 accept failure. Even if we do, local pages with more images than
 whatever limit we set in a) won't load correctly, and making that
 limit too high can get ugly.

 A seemingly better option would be to limit the number of simultaneous
 URLRequestFileJobs we will allow. I assume we have plumbing in place
 to deal with limiting the number of simultaneous URLRequestJobs we
 make per server; is it flexible enough that it could be extended to
 handle file URLs as well? If so, is there any reason that would be a
 bad idea? (And can someone point me to the relevant code?)

 -Stuart

 


Hmm... we have a couple limiters already:

1-  FileStream uses a thread pool to read files asynchronously to the
caller.  That thread pool is limited in size.

2-  ResourceDispatcherHost limits the number of data payloads it will send
to a renderer at any given time.  It looks for ACKs from the renderer, and
if it does not get them fast enough, then it backs off.

It seems like this issue, since it is about the shared memory used for
streaming resources to a renderer, is not particular to file://.  It could
happen with http:// as well assuming we had a fast enough network or a janky
enough local system, right?
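
For #2, the mechanism is roughly this shape (illustrative sketch only, not
the actual ResourceDispatcherHost code; the names here are made up):

#include <cstddef>

class PayloadBackoff {
 public:
  explicit PayloadBackoff(size_t max_unacked)
      : max_unacked_(max_unacked), unacked_(0) {}

  // Ask before shipping another shared-memory payload to the renderer.
  bool CanSendPayload() const { return unacked_ < max_unacked_; }

  void OnPayloadSent() { ++unacked_; }

  // The renderer ACKed a payload, so its buffer (and file descriptor)
  // can be reclaimed and sending can resume.
  void OnPayloadAcked() {
    if (unacked_ > 0)
      --unacked_;
  }

 private:
  const size_t max_unacked_;
  size_t unacked_;
};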

-Darin




[chromium-dev] Re: Throttling URLRequestFileJobs rate?

2009-06-11 Thread Darin Fisher
On Thu, Jun 11, 2009 at 7:45 PM, Darin Fisher da...@chromium.org wrote:

 On Thu, Jun 11, 2009 at 5:13 PM, Stuart Morgan 
 stuartmor...@chromium.org wrote:


 I'm working on getting the Intl2 test set from the page cycler up and
 running on the Mac, which currently crashes very quickly. It turns out
 that one of the test pages has hundreds of images on it, and we
 simultaneously make hundreds of URLRequestFileJobs to load them. Each
 of those uses a SharedMemory for communication, each of which requires
 a file descriptor. This test page generates enough requests at once
 that we blow out the file descriptor limit (which defaults to 256 on
 the Mac) and fall apart.

 It's tempting to say that we should just
  a) bump up the limit, and
  b) make failure to create a SharedMemory non-fatal
 At least some degree of b) is probably a good idea, but it's not
 entirely clear that we *want* all the layers involved to silently
 accept failure. Even if we do, local pages with more images than
 whatever limit we set in a) won't load correctly, and making that
 limit too high can get ugly.

 A seemingly better option would be to limit the number of simultaneous
 URLRequestFileJobs we will allow. I assume we have plumbing in place
 to deal with limiting the number of simultaneous URLRequestJobs we
 make per server; is it flexible enough that it could be extended to
 handle file URLs as well? If so, is there any reason that would be a
 bad idea? (And can someone point me to the relevant code?)

 -Stuart

 


 Hmm... we have a couple limiters already:

 1-  FileStream uses a thread pool to read files asynchronously to the
 caller.  That thread pool is limited in size.

 2-  ResourceDispatcherHost limits the number of data payloads it will send
 to a renderer at any given time.  It looks for ACKs from the renderer, and
 if it does not get them fast enough, then it backs off.

 It seems like this issue, since it is about the shared memory used for
 streaming resources to a renderer, is not particular to file://.  It could
 happen with http:// as well assuming we had a fast enough network or a
 janky enough local system, right?

 -Darin



I meant to add:  so tweaking #2 would probably work here.

-Darin




[chromium-dev] Re: Throttling URLRequestFileJobs rate?

2009-06-11 Thread Stuart Morgan

On Thu, Jun 11, 2009 at 7:45 PM, Darin Fisher da...@chromium.org wrote:
 It seems like this issue, since it is about the shared memory used for
 streaming resources to a renderer, is not particular to file://.  It could
 happen with http:// as well assuming we had a fast enough network or a janky
 enough local system, right?

I was assuming that in most cases the max-connections-per-server limit
would tend to prevent us from getting to that point, but it's probably
possible given the right circumstances.

-Stuart




[chromium-dev] Re: Throttling URLRequestFileJobs rate?

2009-06-11 Thread Stuart Morgan

On Thu, Jun 11, 2009 at 8:03 PM, Michael Nordman micha...@google.com wrote:
 Sounds like the underlying issue is not the number of requests (or
 type of request), but the number of SharedMemory instances in use on
 behalf of request handling at any one time.

True; I'll take a look at how else SharedMemory is used, and where I
can introduce delays/blocking without causing problems for handling of
other in-flight requests.

 Also, 256 is a pretty low limit.

Dialing it up a few notches (say to 1024) to improve the performance
of a better overall solution certainly isn't an issue.

-Stuart
