[ https://issues.apache.org/jira/browse/CASSANDRA-5981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778704#comment-13778704 ]

Daniel Norberg commented on CASSANDRA-5981:
-------------------------------------------

Right, that's annoying.

I'd be tempted to simply close the connection immediately. Reading and 
discarding that huge frame doesn't seem very attractive: it could burn a lot of 
bandwidth doing nothing useful. IMO it's better to prioritize well-behaved 
clients and let the offending client reconnect.

If you still want to keep the connection open and fail the request nicely, I'd 
probably go for implementing a custom frame decoder.
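A custom decoder along those lines could inspect the length field in the frame header and reject the frame before any body bytes are buffered. A minimal, standalone sketch of that check, assuming the native protocol v1 header layout (1 byte version, 1 byte flags, 1 byte stream, 1 byte opcode, 4-byte big-endian body length); `FrameLengthGuard` and its methods are illustrative names, not Cassandra code:

```java
import java.nio.ByteBuffer;

// Illustrative sketch only, not Cassandra's actual decoder: peek at the
// 8-byte native-protocol v1 header and decide whether the frame is too
// long, without ever reading its body into memory.
public class FrameLengthGuard {
    // Mirrors the hard-coded limit in Frame$Decoder: 256 MB.
    static final long MAX_FRAME_LENGTH = 256L * 1024 * 1024;

    // Extract the body length from the last 4 bytes of the header
    // (big-endian, treated as unsigned).
    static long bodyLength(byte[] header) {
        return ByteBuffer.wrap(header, 4, 4).getInt() & 0xFFFFFFFFL;
    }

    // True if the frame should be rejected (connection closed or request
    // failed) without buffering the body.
    static boolean tooLong(byte[] header) {
        return bodyLength(header) > MAX_FRAME_LENGTH;
    }

    public static void main(String[] args) {
        // A header whose body length matches the one in the reported
        // stack trace: 292413714 bytes.
        byte[] header = {0x01, 0x00, 0x00, 0x07, 0x11, 0x6D, (byte) 0xE1, 0x12};
        System.out.println(bodyLength(header) + " tooLong=" + tooLong(header));
        // prints "292413714 tooLong=true"
    }
}
```

With the length known up front, the decoder can either close the channel immediately or consume-and-discard the body while writing an error response, which is the trade-off discussed above.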


                
> Netty frame length exception when storing data to Cassandra using binary 
> protocol
> ---------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-5981
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5981
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Linux, Java 7
>            Reporter: Justin Sweeney
>            Assignee: Sylvain Lebresne
>            Priority: Minor
>             Fix For: 1.2.11
>
>         Attachments: 0001-Correctly-catch-frame-too-long-exceptions.txt, 
> 0002-Allow-to-configure-the-max-frame-length.txt
>
>
> Using Cassandra 1.2.8, I am running into an issue where when I send a large 
> amount of data using the binary protocol, I get the following netty exception 
> in the Cassandra log file:
> {quote}
> ERROR 09:08:35,845 Unexpected exception during request
> org.jboss.netty.handler.codec.frame.TooLongFrameException: Adjusted frame 
> length exceeds 268435456: 292413714 - discarded
>         at 
> org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:441)
>         at 
> org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:412)
>         at 
> org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:372)
>         at org.apache.cassandra.transport.Frame$Decoder.decode(Frame.java:181)
>         at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:422)
>         at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
>         at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
>         at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
>         at 
> org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
>         at 
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:472)
>         at 
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:333)
>         at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:722)
> {quote}
> I am using the DataStax driver and CQL to execute insert queries. The 
> failing query uses atomic batching to execute a large number of statements 
> (~55).
> Looking into the code a bit, I saw that in the 
> org.apache.cassandra.transport.Frame$Decoder class, MAX_FRAME_LENGTH is 
> hard-coded to 256 MB.
> Is this something that should be configurable, or is this a hard limit that 
> will prevent batch statements of this size from executing for some reason?
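The numbers in the quoted stack trace line up with that hard-coded limit: 256 MB is exactly 268435456 bytes, and the rejected frame advertised 292413714 bytes (roughly 279 MB). A quick arithmetic check (illustrative only):

```java
public class LimitCheck {
    public static void main(String[] args) {
        long max = 256L * 1024 * 1024;   // hard-coded MAX_FRAME_LENGTH
        long frame = 292_413_714L;       // adjusted frame length from the log
        System.out.println(max);         // prints 268435456, as in the exception
        System.out.println(frame > max); // prints true: the frame is rejected
    }
}
```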

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
