[
https://issues.apache.org/jira/browse/CASSANDRA-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Or Sher updated CASSANDRA-5724:
-------------------------------
Component/s: Core
Description:
When doing a rolling upgrade from 1.0.* or 1.1.* to 1.2.*, some slice or range
slice queries executed against a 1.2.* node fail with a timeout exception:
[default@orTestKS] list orTestCF;
Using default limit of 100
Using default column limit of 100
null
TimedOutException()
    at org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12932)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
    at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:734)
    at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:718)
    at org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1489)
    at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:273)
    at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:210)
    at org.apache.cassandra.cli.CliMain.main(CliMain.java:337)
It seems this issue is caused by a parameter new in 1.2.*:
internode_compression, which is set to "all" by default.
Setting this parameter to "none" appears to solve the problem.
I think the question is whether Cassandra should somehow support nodes with
different configurations for this parameter.
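As a sketch of the reported workaround (the comments below are my reading of the 1.2 defaults, not part of the original report), the setting lives in cassandra.yaml on the upgraded 1.2.* nodes:

```yaml
# cassandra.yaml on the 1.2.* nodes
# internode_compression controls compression of traffic between nodes.
# Valid values in 1.2: all | dc | none ("dc" compresses only
# cross-datacenter traffic).
# Pre-1.2 nodes cannot read compressed internode traffic, so the 1.2
# default of "all" can cause the timeouts above during a rolling upgrade.
internode_compression: none
```

The change takes effect after the node is restarted; once every node in the cluster is on 1.2.*, the setting can presumably be reverted to "all".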
Affects Version/s: 1.2.0
Summary: Timeouts for slice/rangeslice queries while some nodes
versions are lower than 1.2 and some higher. (was: Cassandra upgrade )
> Timeouts for slice/rangeslice queries while some nodes versions are lower
> than 1.2 and some higher.
> ---------------------------------------------------------------------------------------------------
>
> Key: CASSANDRA-5724
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5724
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Affects Versions: 1.2.0
> Reporter: Or Sher
>
> When doing a rolling upgrade from 1.0.* or 1.1.* to 1.2.*, some slice or range
> slice queries executed against a 1.2.* node fail with a timeout exception:
> [default@orTestKS] list orTestCF;
> Using default limit of 100
> Using default column limit of 100
> null
> TimedOutException()
>     at org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12932)
>     at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>     at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:734)
>     at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:718)
>     at org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1489)
>     at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:273)
>     at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:210)
>     at org.apache.cassandra.cli.CliMain.main(CliMain.java:337)
> It seems this issue is caused by a parameter new in 1.2.*:
> internode_compression, which is set to "all" by default.
> Setting this parameter to "none" appears to solve the problem.
> I think the question is whether Cassandra should somehow support nodes with
> different configurations for this parameter.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira