In my experience, taking code that assumes a small number of file
descriptors and simply raising the file descriptor limits to
accommodate one particular case doesn't work out well.  You end up
discovering three or four other edge cases that cause problems:
things like O(N^2) code paths, or other places where people assumed
there would only be 2 to 4 file descriptors.  If we were some other
kind of program, say a database server, we'd darn well better make
it work.  But we aren't that kind of program.
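
For what it's worth, the "just ramp it up" half is only a few lines
of setrlimit(2); a minimal sketch is below (the 1024 is an arbitrary
number I made up).  Everything above is about what breaks afterwards.

    #include <algorithm>
    #include <sys/resource.h>

    // Raise the soft fd limit toward the hard limit; an unprivileged
    // process can't go past rlim_max.
    void RaiseFdLimit() {
      struct rlimit lim;
      if (getrlimit(RLIMIT_NOFILE, &lim) != 0)
        return;
      lim.rlim_cur = std::min<rlim_t>(1024, lim.rlim_max);
      setrlimit(RLIMIT_NOFILE, &lim);
    }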

-scott


On Thu, Jun 11, 2009 at 5:23 PM, Jeremy Orlow <jor...@google.com> wrote:
> On Thu, Jun 11, 2009 at 5:13 PM, Stuart Morgan <stuartmor...@chromium.org>
> wrote:
>>
>> I'm working on getting the Intl2 test set from the page cycler up and
>> running on the Mac; it currently crashes very quickly. It turns out
>> that one of the test pages has hundreds of images on it, and we
>> simultaneously make hundreds of URLRequestFileJobs to load them. Each
>> of those uses a SharedMemory for communication, each of which requires
>> a file descriptor. This test page generates enough requests at once
>> that we blow out the file descriptor limit (which defaults to 256 on
>> the Mac) and fall apart.
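>>
>> (The 256 default is easy to confirm; here's a tiny standalone check,
>> not Chromium code, that prints the soft limit:)
>>
>>   #include <stdio.h>
>>   #include <sys/resource.h>
>>
>>   int main() {
>>     struct rlimit lim;
>>     if (getrlimit(RLIMIT_NOFILE, &lim) == 0)
>>       printf("soft fd limit: %llu\n",
>>              (unsigned long long)lim.rlim_cur);
>>     return 0;
>>   }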
>>
>> It's tempting to say that we should just
>>  a) bump up the limit, and
>>  b) make failure to create a SharedMemory non-fatal
>> At least some degree of b) is probably a good idea, but it's not
>> entirely clear that we *want* all the layers involved to silently
>> accept failure. Even if we do, local pages with more images than
>> whatever limit we set in a) won't load correctly, and making that
>> limit too high can get ugly.
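>>
>> For b), the minimal version is checking the Create() return instead
>> of assuming it succeeds.  Something like this sketch (the error code
>> and surrounding calls are illustrative, not the real
>> URLRequestFileJob code):
>>
>>   base::SharedMemory shared_memory;
>>   if (!shared_memory.Create(L"", false, false, buffer_size)) {
>>     // Probably out of file descriptors; fail this one request
>>     // cleanly instead of crashing the process.
>>     NotifyDone(URLRequestStatus(URLRequestStatus::FAILED,
>>                                 net::ERR_INSUFFICIENT_RESOURCES));
>>     return;
>>   }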
>>
>> A seemingly better option would be to limit the number of simultaneous
>> URLRequestFileJobs we will allow.
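>>
>> Roughly what I have in mind (names are made up, not existing code):
>> a small cap on in-flight jobs, with anything over the cap queued
>> until a running job finishes.
>>
>>   // Hypothetical sketch; assumes all calls happen on one thread
>>   // (e.g. the IO thread), so no locking.
>>   class FileJobLimiter {
>>    public:
>>     explicit FileJobLimiter(size_t max_in_flight)
>>         : max_in_flight_(max_in_flight), in_flight_(0) {}
>>
>>     // Returns true if the job may start now; otherwise the caller
>>     // queues it and retries when OnJobDone() runs.
>>     bool TryStartJob() {
>>       if (in_flight_ >= max_in_flight_)
>>         return false;
>>       ++in_flight_;
>>       return true;
>>     }
>>
>>     void OnJobDone() { --in_flight_; }
>>
>>    private:
>>     const size_t max_in_flight_;
>>     size_t in_flight_;
>>   };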
>
> Personally, this seems like the only sane way to do it.  Even if you bump
> the limits, you'll hit pretty major slowdowns on most OSes (at least the
> last time I saw anyone try).
>
>>
>> I assume we have plumbing in place
>> to deal with limiting the number of simultaneous URLRequestJobs we
>> make per server; is it flexible enough that it could be extended to
>> handle file URLs as well? If so, is there any reason that would be a
>> bad idea? (And can someone point me to the relevant code?)
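>>
>> (If that plumbing keys requests by a per-server group name, I'd
>> guess the extension is just giving all file URLs one shared key,
>> along these lines; whether the real code is actually structured
>> that way is what I'm asking.)
>>
>>   // Hypothetical grouping function; assumes the limiter buckets
>>   // requests by a string key.
>>   std::string GroupKeyForURL(const GURL& url) {
>>     if (url.SchemeIsFile())
>>       return "file://";  // one shared bucket for all local files
>>     return url.host();   // per-server behavior (assumed)
>>   }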
>>
>> -Stuart
