On Thu, Jun 11, 2009 at 5:13 PM, Stuart Morgan <[email protected]> wrote:
>
> I'm working on getting the Intl2 test set from the page cycler up and
> running on the Mac; it currently crashes very quickly. It turns out
> that one of the test pages has hundreds of images on it, and we
> simultaneously make hundreds of URLRequestFileJobs to load them. Each
> of those uses a SharedMemory for communication, each of which requires
> a file descriptor. This test page generates enough requests at once
> that we blow out the file descriptor limit (which defaults to 256 on
> the Mac) and fall apart.
>
> It's tempting to say that we should just
>  a) bump up the limit, and
>  b) make failure to create a SharedMemory non-fatal
> At least some degree of b) is probably a good idea, but it's not
> entirely clear that we *want* all the layers involved to silently
> accept failure. Even if we do, local pages with more images than
> whatever limit we set in a) won't load correctly, and making that
> limit too high can get ugly.

FYI, the extension system uses URLRequestFileJob extensively. So I
don't think any solution that could lead to silent failures is
acceptable. Rate-limiting seems better.
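Rate-limiting could look something like the following sketch (Python and all names illustrative, not Chromium's actual code): cap how many file jobs may hold a SharedMemory descriptor at once, and make the rest wait for a free slot instead of failing.

```python
import threading

# Hypothetical cap on simultaneous file jobs; jobs past the cap
# block for a slot rather than exhausting file descriptors.
MAX_CONCURRENT_FILE_JOBS = 64
_slots = threading.BoundedSemaphore(MAX_CONCURRENT_FILE_JOBS)

def run_file_job(load):
    """Run one file-load job, waiting for a descriptor slot first."""
    with _slots:  # at most MAX_CONCURRENT_FILE_JOBS inside at once
        return load()

# Simulate a page issuing hundreds of requests at once: every job
# completes, none fails, regardless of the descriptor limit.
results = []
threads = [
    threading.Thread(target=lambda i=i: results.append(run_file_job(lambda: i)))
    for i in range(200)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The key property is that exceeding the cap degrades to queueing, never to a silent load failure.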

- a

Chromium Developers mailing list: [email protected] 
View archives, change email options, or unsubscribe: 
    http://groups.google.com/group/chromium-dev
