[ 
https://issues.apache.org/jira/browse/THRIFT-3175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14568732#comment-14568732
 ] 

ASF GitHub Bot commented on THRIFT-3175:
----------------------------------------

GitHub user dvirsky opened a pull request:

    https://github.com/apache/thrift/pull/511

    Limit lists to 10,000 items in fastbinary decoding to avoid crashing …

    …servers from huge allocations on junk/malicious input
    
    https://issues.apache.org/jira/browse/THRIFT-3175

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/EverythingMe/thrift THRIFT-3175

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/thrift/pull/511.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #511
    
----
commit 988defdbdc12d2f087c0a87f8f6205b41f8588aa
Author: Dvir Volk <[email protected]>
Date:   2015-06-02T07:55:43Z

    Limit lists to 10,000 items in fastbinary decoding to avoid crashing
servers from huge allocations on junk/malicious input

----


> fastbinary.c python deserialize can cause huge allocations from garbage
> -----------------------------------------------------------------------
>
>                 Key: THRIFT-3175
>                 URL: https://issues.apache.org/jira/browse/THRIFT-3175
>             Project: Thrift
>          Issue Type: Bug
>          Components: Python - Library
>            Reporter: Dvir Volk
>
> In the fastbinary python deserializer, allocating a list is done like so:
> {code}
>     len = readI32(input);
>     if (!check_ssize_t_32(len)) {
>       return NULL;
>     }
>     ret = PyList_New(len);
> {code}
> The only validation of len is that it's under INT_MAX. I've encountered a 
> situation where, upon receiving garbage input, len was read as something 
> like 1 billion; the library treated this as valid input, allocated 
> gigabytes of RAM, and caused the server to crash. 
> The quick fix I made was to limit list sizes to a sane value of a few 
> thousand, which more than suits my personal needs. 
> But IMO this should be dealt with properly. One way that comes to mind is 
> to avoid pre-allocating the entire list in advance when it's very large, 
> and instead to grow it in smaller steps while reading the input. 
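
The incremental-growth idea suggested above can be sketched in plain Python (this is an illustrative helper, not the Thrift library's actual API; the i32 item type and the `read_list_incrementally` name are assumptions for the example):

```python
import io
import struct

def read_list_incrementally(stream):
    """Read a length-prefixed list of i32s without trusting the declared length.

    Instead of pre-allocating `declared_len` slots up front, append items one
    at a time; a garbage length then fails as soon as the input runs out,
    before any huge allocation happens.
    """
    (declared_len,) = struct.unpack('!i', stream.read(4))
    if declared_len < 0:
        raise ValueError("negative list length")
    result = []
    for _ in range(declared_len):
        item_bytes = stream.read(4)  # each item is a big-endian i32 here
        if len(item_bytes) < 4:
            raise ValueError("input exhausted before declared list length")
        result.append(struct.unpack('!i', item_bytes)[0])
    return result
```

With this shape, a forged length of one billion raises ValueError after a handful of reads instead of allocating gigabytes up front, while well-formed input decodes normally.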



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
