[
https://issues.apache.org/jira/browse/CASSANDRA-7695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14095316#comment-14095316
]
Norman Maurer commented on CASSANDRA-7695:
------------------------------------------
Hey guys, I finally found the root cause of the problem and fixed it in Netty.
That said, I think the same problem can also occur when using non-unsafe
ByteBufs (if you are lucky enough). The problem was easier to reproduce on OS X
because triggering it requires a particular series of incomplete / complete
writes, and that interleaving happens more readily with OS X's stock network
configuration.
The issue and fix can be found here:
https://github.com/netty/netty/issues/2761
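Not the Netty code or the actual fix (see the linked issue for that); just a minimal,
hypothetical sketch, with made-up class and method names, of the incomplete-write
pitfall being described:
{code:java}
// Hypothetical sketch only -- not the Netty code or the actual fix.
// It illustrates why a series of incomplete (short) socket writes can
// corrupt the stream if the retry logic loses track of its progress.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class PartialWriteSketch {

    // Correct handling: SocketChannel.write() may transfer fewer bytes
    // than remaining(). The ByteBuffer's position already records how
    // much was consumed, so retrying with the same buffer continues
    // from the right offset.
    static void writeFully(SocketChannel ch, ByteBuffer buf) throws IOException {
        while (buf.hasRemaining()) {
            ch.write(buf);
        }
    }

    // Buggy pattern: rewinding (or rebuilding) the buffer before each
    // retry discards the progress of the previous incomplete write, so
    // bytes that were already sent go out again and the peer sees a
    // stream whose contents are shifted relative to what was intended.
    static void writeBuggy(SocketChannel ch, byte[] payload) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap(payload);
        int written = 0;
        while (written < payload.length) {
            buf.rewind();                 // BUG: forgets earlier progress
            written += ch.write(buf);
        }
    }
}
{code}
Whether a write completes in one go depends on socket buffer sizes and timing, which
is presumably why the OS X defaults make the incomplete/complete interleaving easier
to hit.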
> Inserting the same row in parallel causes bad data to be returned to the
> client
> -------------------------------------------------------------------------------
>
> Key: CASSANDRA-7695
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7695
> Project: Cassandra
> Issue Type: Bug
> Environment: Linux 3.12.21, JVM 1.7u60
> Cassandra server 2.1.0 RC 5
> Cassandra datastax client version 2.1.0RC1
> Reporter: Johan Bjork
> Assignee: T Jake Luciani
> Priority: Blocker
> Labels: qa-resolved
> Fix For: 2.1.0
>
> Attachments: 7695-workaround.txt, PutFailureRepro.java,
> bad-data-tid43-get, bad-data-tid43-put
>
>
> Running the attached test program against a Cassandra 2.1 server results in
> scrambled data being returned by the SELECT statement. Running it against the
> latest stable release works fine.
> Attached:
> * Program that reproduces the failure
> * Example output files from the mentioned test program showing the scrambled output.
> Failure mode:
> The value returned by 'get' is scrambled: the size is correct, but some bytes
> have shifted locations in the returned buffer.
> Cluster info:
> For the test we set up a single Cassandra node using the stock configuration
> file.
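The attached PutFailureRepro.java is not included in this digest. As a rough
stand-in, the sketch below shows the kind of workload the description refers to:
several threads inserting the same row with a large blob and reading it back,
checking that the bytes survive the round trip. Keyspace, table, payload size and
thread counts are all invented, and the calls assume the DataStax Java driver 2.x
API listed in the environment field.
{code:java}
// Hypothetical sketch only -- not the attached PutFailureRepro.java.
// Names, sizes and thread counts are invented; the driver calls assume
// the DataStax Java driver 2.x API.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelPutSketch {
    public static void main(String[] args) throws Exception {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        final Session session = cluster.connect();
        session.execute("CREATE KEYSPACE IF NOT EXISTS repro WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE IF NOT EXISTS repro.blobs (k text PRIMARY KEY, v blob)");

        final PreparedStatement put = session.prepare(
                "INSERT INTO repro.blobs (k, v) VALUES (?, ?)");
        final PreparedStatement get = session.prepare(
                "SELECT v FROM repro.blobs WHERE k = ?");

        // A payload large enough that a single frame is likely to span
        // several socket writes, per the incomplete-write theory above.
        final byte[] payload = new byte[64 * 1024];
        new Random(42).nextBytes(payload);

        ExecutorService pool = Executors.newFixedThreadPool(16);
        for (int t = 0; t < 16; t++) {
            pool.submit(new Runnable() {
                public void run() {
                    for (int i = 0; i < 1000; i++) {
                        // Every thread writes the same row, then reads it back.
                        session.execute(put.bind("same-row", ByteBuffer.wrap(payload)));
                        Row row = session.execute(get.bind("same-row")).one();
                        ByteBuffer v = row.getBytes("v");
                        byte[] back = new byte[v.remaining()];
                        v.get(back);
                        if (!Arrays.equals(payload, back)) {
                            System.err.println("Corrupted read: right length, wrong bytes");
                        }
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        cluster.close();
    }
}
{code}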