On Mon, Mar 1, 2010 at 4:48 PM, varname <[email protected]> wrote:
> Luis EG Ontanon wrote:
>>> Don't know if it's the only way, but changing the limit to 10MB fixed it
>>> for my situation.
>>
>> It might have worked around it until an 11 MB request overflows it again.
>
> sure. That's why I wrote "for my situation". I 'never' expect to have to
> allocate more than 10MB at a time, but that was probably the reasoning
> of the developer that implemented the check in the first place.
>
>
>> What should be done IMHO is to g_malloc()ate the block directly if
>> it happens to be bigger than the limit, instead of failing (and of
>> course that block would need to be freed as the ep memory gets renewed).
>
> I can't comment on that. One thought though: what if a really large
> block needs to be allocated (a 100 MB reassembled HTTP download, for
> instance)? Might that not be too hard on the machine running Wireshark?

Nope, I think it won't...
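In rough outline, the fallback idea is this (just a sketch of the approach, not the attached patch; the names here are illustrative, and plain malloc()/free() stand in for g_malloc()/g_free() and the real ep pool):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch: requests larger than the per-block limit are
 * handed to the system allocator directly and remembered on a list,
 * so they can be freed when the packet-scoped (ep) memory is renewed. */

#define EP_BLOCK_LIMIT (10 * 1024 * 1024)  /* 10 MB per-block limit */

typedef struct large_block {
    void *ptr;
    struct large_block *next;
} large_block_t;

static large_block_t *large_blocks = NULL;

/* Stand-in for carving a small block out of the ep pool. */
static void *pool_alloc(size_t size) {
    return malloc(size);
}

void *ep_alloc_sketch(size_t size) {
    if (size > EP_BLOCK_LIMIT) {
        /* Too big for a pool chunk: allocate directly and track it
         * so it is released together with the rest of the ep memory. */
        large_block_t *lb = malloc(sizeof *lb);
        if (lb == NULL)
            return NULL;
        lb->ptr = malloc(size);
        if (lb->ptr == NULL) {
            free(lb);
            return NULL;
        }
        lb->next = large_blocks;
        large_blocks = lb;
        return lb->ptr;
    }
    return pool_alloc(size);
}

/* Called when the ep memory is renewed (e.g. after each packet):
 * release every oversized block that bypassed the pool. */
void ep_free_large_sketch(void) {
    while (large_blocks != NULL) {
        large_block_t *lb = large_blocks;
        large_blocks = lb->next;
        free(lb->ptr);
        free(lb);
    }
}
```

So an 11 MB request no longer trips the limit check; it just costs one ordinary heap allocation, which is freed again as soon as the ep scope rolls over.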

Could you revert your changes and try the attached patch against your captures?

Thanks,
\L

-- 
This information is top security. When you have read it, destroy yourself.
-- Marshall McLuhan

Attachment: large_alloc.patch
Description: Binary data

___________________________________________________________________________
Sent via:    Wireshark-dev mailing list <[email protected]>
Archives:    http://www.wireshark.org/lists/wireshark-dev
Unsubscribe: https://wireshark.org/mailman/options/wireshark-dev
             mailto:[email protected]?subject=unsubscribe
