[ https://issues.apache.org/jira/browse/AVRO-1111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13451827#comment-13451827 ]

Mike Percy commented on AVRO-1111:
----------------------------------

Phil, thank you very much for the ideas! I agree with trying harder to count 
the actual received bytes (a deep count). Controlling the maximum size on the 
server, without requiring the client to provide it, looks more future-proof to me.
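
To make the idea concrete, here is a rough sketch (all names and limits below
are my assumptions for illustration, not actual Avro or patch code) of a
decoder that bounds the declared buffer count and deep-counts the bytes
actually received against a server-configured ceiling:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: enforce a server-side ceiling on both the declared
// buffer count and the deep-counted payload bytes, without trusting (or
// requiring) any client-provided size. All names and limits are assumed.
class BoundedPackDecoder {
  private static final int MAX_BUFFER_COUNT = 1024;     // assumed limit
  private static final long MAX_PACK_BYTES = 64L << 20; // assumed 64 MB

  private long deepCount;            // bytes actually received so far
  private List<ByteBuffer> buffers;

  void startPack(int declaredListSize) throws IOException {
    // Validate the header before allocating anything based on it.
    if (declaredListSize < 0 || declaredListSize > MAX_BUFFER_COUNT) {
      throw new IOException("Declared list size " + declaredListSize
          + " exceeds server limit " + MAX_BUFFER_COUNT);
    }
    deepCount = 0;
    buffers = new ArrayList<ByteBuffer>(declaredListSize);
  }

  void addBuffer(ByteBuffer buf) throws IOException {
    // Deep-count the bytes actually received rather than trusting any
    // size the client declared up front.
    deepCount += buf.remaining();
    if (deepCount > MAX_PACK_BYTES) {
      throw new IOException("Request exceeds server limit of "
          + MAX_PACK_BYTES + " bytes");
    }
    buffers.add(buf);
  }
}

The point is that the server enforces its own ceiling: clients do not have to
declare anything, and a malformed header fails before any large allocation.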
                
> Malformed data can cause OutOfMemoryError in Avro IPC
> -----------------------------------------------------
>
>                 Key: AVRO-1111
>                 URL: https://issues.apache.org/jira/browse/AVRO-1111
>             Project: Avro
>          Issue Type: Bug
>          Components: java
>    Affects Versions: 1.6.3
>            Reporter: Hari Shreedharan
>         Attachments: AVRO-1111-1.patch
>
>
> If the data that comes in through the Netty channel buffer is not framed 
> correctly or is not valid Avro data, the incoming data can cause 
> arbitrarily large ArrayLists to be created, causing an OutOfMemoryError. 
> The relevant code (org.apache.avro.ipc.NettyTransportCodec):
> private boolean decodePackHeader(ChannelHandlerContext ctx, Channel channel,
>     ChannelBuffer buffer) throws Exception {
>   if (buffer.readableBytes() < 8) {
>     return false;
>   }
>   int serial = buffer.readInt();
>   listSize = buffer.readInt();
>   dataPack = new NettyDataPack(serial, new ArrayList<ByteBuffer>(listSize));
>   return true;
> }
> If the buffer does not have valid Avro data, the listSize variable can have 
> arbitrary values, causing massive ArrayLists to be created, leading to 
> OutOfMemoryErrors.
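
For comparison, a minimal hardening of the quoted method might validate
listSize before using it as an allocation size (sketch only; it reuses the
listSize and dataPack fields from the snippet above, and MAX_LIST_SIZE is an
assumed server-side constant, not something from the attached patch):

// Hardened sketch of the quoted method: never size an ArrayList directly
// from an untrusted header field. MAX_LIST_SIZE is an assumed bound.
private boolean decodePackHeader(ChannelHandlerContext ctx, Channel channel,
    ChannelBuffer buffer) throws Exception {
  if (buffer.readableBytes() < 8) {
    return false;
  }
  int serial = buffer.readInt();
  listSize = buffer.readInt();
  // Reject negative or implausibly large counts instead of allocating.
  if (listSize < 0 || listSize > MAX_LIST_SIZE) {
    throw new IOException("Malformed data pack header: list size " + listSize);
  }
  dataPack = new NettyDataPack(serial, new ArrayList<ByteBuffer>(listSize));
  return true;
}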

